Negative Regularization

Subtitle
Prevent Overfitting to Clean Data in Noisy Dataset
Author(s)
노준호
Advisor
황원준
Department
Department of Artificial Intelligence, The Graduate School
Publisher
The Graduate School, Ajou University
Publication Year
2021-08
Language
eng
Keyword
Deep Learning; Learning with Noisy Labels
Alternative Abstract
Supervised learning requires both input data and labels. Labeling, however, is an expensive task, and when it is automated there is no guarantee that the labels are correct. Various methods have been proposed to address this noisy-label problem. Previous works reinforce the gradient direction of clean data while neutralizing the gradient direction of noisy labels. However, continuously strengthening the gradient for clean data causes overfitting, which reduces generalization performance. We refine the model's predictions so that the gradient direction of noisy data converges toward that of clean data, and we add a decay term to the regularization to prevent convergence to the noisy labels. In this paper, we show experimentally that methods which strengthen the clean-data gradient and neutralize the noisy-label gradient overfit to the clean data, and that our proposed method prevents this overfitting. We also show improved performance compared with other state-of-the-art methods. In summary, we propose negative regularization (NR), a regularization scheme for noisy-label environments that prevents overfitting to clean data and improves performance by strengthening the gradient in the direction of the true labels for noisy samples.
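The abstract describes refining the model's predictions with a decaying regularization so that noisy samples stop pulling the gradient toward their (incorrect) given labels. The thesis itself is not available here, so the following is only a minimal NumPy sketch of one plausible reading of that idea: targets are a mixture of the dataset label and the model's own prediction, with the weight on the given label decaying over epochs. The function names (`refined_targets`, `nr_loss`) and the exponential decay schedule are assumptions for illustration, not the author's actual formulation.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def refined_targets(labels_onehot, preds, epoch, decay=0.9):
    # Decaying trust in the given (possibly noisy) labels: early epochs
    # follow the dataset labels, later epochs lean on the model's own
    # predictions, so the gradient for noisy samples can drift toward
    # the clean-data direction instead of the wrong label.
    w = decay ** epoch
    return w * labels_onehot + (1.0 - w) * preds

def nr_loss(logits, labels_onehot, epoch, decay=0.9):
    # Cross-entropy against the refined (regularized) targets.
    p = softmax(logits)
    t = refined_targets(labels_onehot, p, epoch, decay)
    return -(t * np.log(p + 1e-12)).sum(axis=1).mean()

# Toy usage: 4 samples, 3 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
y = np.eye(3)[[0, 1, 2, 0]]
loss_early = nr_loss(logits, y, epoch=0)    # full trust in given labels
loss_late = nr_loss(logits, y, epoch=50)    # mostly trusts predictions
```

At `epoch=0` this reduces to standard cross-entropy; as training proceeds the given labels lose weight, which is one simple way to realize the "decay to prevent convergence to the noisy label" behavior the abstract mentions.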
URI
https://dspace.ajou.ac.kr/handle/2018.oak/20417
Appears in Collections:
Graduate School of Ajou University > Department of Artificial Intelligence > 3. Theses(Master)
