As research on deep learning models for real-life applied tasks is actively carried out, there have been efforts to train these models more accurately and with greater sophistication. To this end, well-refined, label-annotated data is being produced in various domains. The main drawback of such models is that the data must always be labeled correctly for supervised learning. However, a model applicable to real-life settings should operate well even when few or no such labels are available. Unsupervised learning is a family of methods that learn without labels; among them, self-supervised learning uses a mechanism that gives the model a hint about the input data.
In this thesis, we develop a methodology to learn representations from structure information via self-supervised learning. Specifically, we apply it to two different tasks. First, we apply the methodology to improve sentiment analysis models using a graph-based ranking mechanism; second, we apply it to confidence-based multi-class anomaly detection with deep clustering. For the first task, we propose the GRAB vector (GRAph-Based vector), which consists of vectorized keyword-based morphemes or summaries extracted by a graph-based ranking mechanism, and which represents the structure information of the data. We then apply the GRAB vector to sentiment analysis, one of the tasks in NLP (Natural Language Processing), and propose a more accurate and robust model, GRAB-BERT (GRAB vector-BERT model). To analyze the effect of the GRAB vector, we compare the performance of recurrent- and parallel-based models with and without the GRAB vector on both English and Korean text samples. Our results demonstrate that applying the GRAB vector to these models improves sentiment analysis performance. For the second task, we propose a novel anomaly detection method using self-labeling, where self-labels are assigned by clustering the data based on its structure information. Using these self-labels, we enable multi-class anomaly detection via confidence-based anomaly detection. Even with only basic neural network classifier architectures, our method outperforms the baseline models in the suggested scenarios and in multi-class anomaly detection.
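The abstract does not fix the particular graph-based ranking algorithm behind the GRAB vector; a TextRank-style PageRank over a word co-occurrence graph is the canonical instance of such a mechanism. A minimal sketch under that assumption, using whitespace tokens (the function name `grab_keywords`, the window size, and the damping factor are illustrative choices, not the thesis's actual settings):

```python
from collections import defaultdict

def grab_keywords(tokens, window=2, damping=0.85, iters=50, top_k=3):
    """Rank tokens by PageRank over a co-occurrence graph (TextRank-style)."""
    # Build an undirected co-occurrence graph: words within `window`
    # positions of each other share an edge.
    neighbors = defaultdict(set)
    for i, w in enumerate(tokens):
        for v in tokens[i + 1:i + window]:
            if v != w:
                neighbors[w].add(v)
                neighbors[v].add(w)
    nodes = list(neighbors)
    score = {w: 1.0 / len(nodes) for w in nodes}
    # Power iteration of the PageRank update.
    for _ in range(iters):
        score = {
            w: (1 - damping) + damping * sum(
                score[v] / len(neighbors[v]) for v in neighbors[w])
            for w in nodes
        }
    return sorted(nodes, key=score.get, reverse=True)[:top_k]

tokens = ("the movie was great the acting was great "
          "the plot was dull").split()
print(grab_keywords(tokens))
```

On real text, one would rank morphemes produced by a morphological analyzer (especially for Korean) and embed the top-ranked items to obtain the GRAB vector.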
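The second task's pipeline can likewise be caricatured without a neural network: cluster the unlabeled data, treat cluster ids as self-labels, and flag an input as anomalous when its top class confidence falls below a threshold. The sketch below is a minimal stand-in under stated assumptions: a tiny k-means provides the self-labels, and a softmax over negative distances to the cluster prototypes plays the role of the trained classifier's class confidence (the threshold and all names are illustrative, not the thesis's method):

```python
import math
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    return tuple(sum(xs) / len(pts) for xs in zip(*pts))

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means; the resulting cluster ids serve as self-labels."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in points:
            buckets[min(range(k), key=lambda j: dist2(p, centroids[j]))].append(p)
        # Keep the old centroid if a cluster empties out.
        centroids = [mean(b) if b else centroids[j] for j, b in enumerate(buckets)]
    return centroids

def class_confidences(p, centroids):
    """Softmax over negative squared distances: a stand-in for the
    class probabilities a trained classifier would output."""
    logits = [-dist2(p, c) for c in centroids]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def detect(p, centroids, threshold=0.9):
    """Confidence-based detection: low top confidence means anomaly."""
    probs = class_confidences(p, centroids)
    top = max(probs)
    return "anomaly" if top < threshold else "class %d" % probs.index(top)

# Two synthetic "normal" clusters; no ground-truth labels are used.
rng = random.Random(1)
data = ([(rng.gauss(0, 0.5), rng.gauss(0, 0.5)) for _ in range(50)]
        + [(rng.gauss(8, 0.5), rng.gauss(8, 0.5)) for _ in range(50)])
centroids = kmeans(data, k=2)
print(detect((0.0, 0.0), centroids))  # near a cluster: assigned a class
print(detect((4.0, 4.0), centroids))  # between clusters: low confidence
```

Because the detector outputs a class id when confident and "anomaly" otherwise, the same scheme extends to multi-class anomaly detection: one threshold per self-labeled class, rather than a single one-vs-rest score.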
Overall, we demonstrate the effectiveness of learning representations from structure information on two tasks, and we suggest a direction for self-supervised training mechanisms that allow deep learning models to be trained without labels.