Hate speech detection is important because the spread of hate speech reinforces social discrimination against its target group, not only online but also in the real world. We propose Masked Rationale Prediction (MRP) to improve hate speech detection with respect to two important aspects: model bias and explainability. Understanding context is essential for hate speech detection, since hate speech cannot be identified based solely on the presence of specific words considered hateful. However, existing models are easily biased toward specific expressions and consequently produce incorrect detection results. Even when they predict correctly, their rationale is often not explained in a convincing manner. Thus, both bias and explainability should be considered when implementing a hate speech detection model. MRP is a task that predicts masked human rationales (snippets of a sentence that are the grounds for human judgment) by referring to the surrounding tokens combined with their unmasked rationales. The human rationales are randomly masked and fed into the model, each combined with its corresponding token. We pre-finetune a pre-trained model on MRP as an intermediate task and then finetune it on hate speech detection. Because the model acquires rationale-based reasoning through MRP, it performs hate speech detection robustly in terms of both bias and explainability. The proposed method achieves state-of-the-art performance on various metrics, demonstrating its effectiveness for hate speech detection.
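The masking step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the binary rationale encoding, the `MASK_ID` sentinel, the mask rate, and the `-1` ignore-target convention are all assumptions chosen for clarity.

```python
import random

# Hypothetical encoding: 0 = not a rationale token, 1 = rationale token,
# MASK_ID = rationale label hidden from the model (to be predicted by MRP).
MASK_ID = 2

def mask_rationales(rationales, mask_rate=0.5, seed=None):
    """Randomly mask a fraction of per-token human rationale labels.

    The MRP objective is then to predict the original labels at the
    masked positions, given the tokens combined with the remaining
    unmasked rationales.
    """
    rng = random.Random(seed)
    masked = list(rationales)
    targets = [-1] * len(rationales)  # -1 = position ignored by the loss
    for i in range(len(rationales)):
        if rng.random() < mask_rate:
            targets[i] = masked[i]    # ground truth the model must recover
            masked[i] = MASK_ID       # hide the label from the model
    return masked, targets

# Toy example: a human annotator highlighted the fourth token.
tokens = ["they", "are", "all", "<slur>", "people"]
rationales = [0, 0, 0, 1, 0]
masked, targets = mask_rationales(rationales, mask_rate=0.5, seed=0)
```

In a full pipeline, `masked` would be embedded and added to the token embeddings before the encoder, and the MRP head would be trained only on positions where `targets` is not `-1` (e.g. via an ignore-index in the loss).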