Autoencoder for Stable Embedding

DC Field: Value
dc.contributor.advisor: 신현정
dc.contributor.author: 김재현
dc.date.accessioned: 2022-11-29T03:01:17Z
dc.date.available: 2022-11-29T03:01:17Z
dc.date.issued: 2022-08
dc.identifier.other: 32290
dc.identifier.uri: https://dspace.ajou.ac.kr/handle/2018.oak/20973
dc.description: Master's thesis -- The Graduate School, Ajou University: Department of Artificial Intelligence, 2022. 8
dc.description.tableofcontents:
1. Introduction
2. Fundamentals
   2.1 Autoencoder
3. Proposed Method: Autoencoder for Stable Embedding
   3.1 Autoencoder for Stable Embedding
   3.2 Manifold Variance Matrix and Score
4. Experiments
   4.1 Data Setups
   4.2 Experimental Setups
   4.3 Experiment Results
5. Conclusion
References
dc.language.iso: kor
dc.publisher: The Graduate School, Ajou University
dc.rights: Ajou University theses are protected by copyright.
dc.title: Autoencoder for Stable Embedding
dc.type: Thesis
dc.contributor.affiliation: The Graduate School, Ajou University
dc.contributor.alternativeName: Jaehyun Kim
dc.contributor.department: Department of Artificial Intelligence, The Graduate School
dc.date.awarded: 2022. 8
dc.description.degree: Master
dc.identifier.localId: 1254206
dc.identifier.uci: I804:41038-000000032290
dc.identifier.url: https://dcoll.ajou.ac.kr/dcollection/common/orgView/000000032290
dc.subject.keyword: autoencoder
dc.subject.keyword: dimensionality reduction
dc.subject.keyword: embedding
dc.subject.keyword: feature extraction
dc.subject.keyword: neural network
dc.description.alternativeAbstract: Autoencoders are widely used machine learning models for dimensionality reduction and feature extraction. However, they have several limitations. First, the embedding space produced by the model varies from one training run to the next, even under identical conditions. Second, this variability of the embedding space also causes deviations in performance. Third, the distance relationships of the input data are lost in the embedding space, so global information about the data is discarded and the usability of the extracted features suffers; equivalently, the manifold of the original data is lost in the embedding space. This study therefore proposes a model that addresses these limitations by adding a new loss term, defined as the difference between the pairwise similarity of the input data and the pairwise similarity of the bottleneck representation. This term is called the similarity loss, and the proposed model is trained with a total loss equal to the sum of the reconstruction loss used in a standard autoencoder and the similarity loss. Experiments confirmed that the proposed model with the similarity loss exhibited up to about 49.58% lower embedding-space variability than a standard autoencoder and preserved the pairwise distances of the original data in the bottleneck. In addition, the deviation in performance was reduced by up to about 88.05%, and the reconstruction loss by about 23.79%. When the feature sets extracted by the proposed model and by a standard autoencoder were each fed to a classification model, the features from the proposed model yielded about 15.31% higher classification performance. Therefore, the proposed model is confirmed to resolve the embedding-space variability, performance deviation, and loss of the original data's distance relationships that appear in existing autoencoders, and to improve performance and extract more useful features.
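The abstract's loss construction can be sketched in code. The thesis does not specify in the abstract which similarity measure or weighting it uses, so the following is a minimal NumPy sketch assuming squared Euclidean pairwise distances as the similarity matrix and a mean-squared combination; the weight `lam` is a hypothetical balancing hyperparameter, not taken from the source.

```python
import numpy as np

def pairwise_sq_dists(X):
    # Squared Euclidean distance between every pair of rows of X.
    sq = np.sum(X ** 2, axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * X @ X.T

def similarity_loss(X, Z):
    # Mean squared difference between the pairwise-distance matrix of the
    # input batch X and that of the bottleneck embeddings Z: this penalizes
    # embeddings that distort the distance relationships of the input.
    return np.mean((pairwise_sq_dists(X) - pairwise_sq_dists(Z)) ** 2)

def total_loss(X, X_hat, Z, lam=1.0):
    # Total loss = reconstruction loss of a standard autoencoder plus the
    # similarity loss, weighted by the (assumed) hyperparameter lam.
    recon = np.mean((X - X_hat) ** 2)
    return recon + lam * similarity_loss(X, Z)
```

In an actual training loop these terms would be computed per mini-batch on the encoder's bottleneck output `Z` and the decoder's reconstruction `X_hat`, with gradients flowing through both terms.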