Supervised learning requires both input data and labels. Labeling, however, is expensive, and when it is automated there is no guarantee that every label is correct. Various methods have been proposed to address this noisy-label problem. Previous works reinforce the gradient direction of clean data while neutralizing the gradient direction of noisy labels. However, if the clean-data gradient is continuously strengthened, the model overfits the clean samples and generalization performance degrades. We instead refine the model's predictions so that the gradient direction of noisy data converges toward that of clean data, and we add a decay term that, through regularization, prevents convergence to the noisy labels. In this paper, we experimentally show that strengthening the clean-data gradient while neutralizing the noisy-label gradient causes overfitting to clean data, and that our proposed method prevents this overfitting. We also show that it improves performance over other state-of-the-art methods. In summary, we propose a regularization scheme for noisy-label environments that prevents overfitting to clean data, together with negative regularization (NR), which improves performance by strengthening noisy samples in the direction of their true labels.
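
To make the general idea concrete, the following is a minimal, hypothetical PyTorch sketch of a loss of this kind, not the paper's actual implementation: clean samples use ordinary cross-entropy, while samples flagged as noisy are pulled toward the model's own refined (softmax) predictions with a decayed weight. The function name noisy_label_loss, the noisy_mask argument, and the decay coefficient are illustrative assumptions introduced here.

import torch
import torch.nn.functional as F

def noisy_label_loss(logits, labels, noisy_mask, decay=0.1):
    # Assumes the batch contains both clean and suspected-noisy samples.
    # Standard cross-entropy on samples believed to be clean.
    clean_loss = F.cross_entropy(logits[~noisy_mask], labels[~noisy_mask])

    # For suspected-noisy samples, use the model's refined prediction
    # (detached softmax) as a soft target instead of the given label,
    # steering their gradients toward the clean-data direction.
    refined = F.softmax(logits[noisy_mask], dim=-1).detach()
    log_probs = F.log_softmax(logits[noisy_mask], dim=-1)
    refine_loss = -(refined * log_probs).sum(dim=-1).mean()

    # The decay coefficient down-weights the noisy-sample term so the
    # model does not re-converge to the corrupted labels.
    return clean_loss + decay * refine_loss

In this sketch the decay coefficient plays the role of the regularization strength described above; how the noisy samples are identified and how the predictions are refined are left open, since the abstract does not specify them.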