Unsupervised Text Style Transfer through Style Embedding
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 손경아 | - |
dc.contributor.author | 김희진 | - |
dc.date.accessioned | 2022-11-29T02:32:32Z | - |
dc.date.available | 2022-11-29T02:32:32Z | - |
dc.date.issued | 2021-02 | - |
dc.identifier.other | 30570 | - |
dc.identifier.uri | https://dspace.ajou.ac.kr/handle/2018.oak/20074 | - |
dc.description | Thesis (Master's) -- The Graduate School, Ajou University: Department of Artificial Intelligence, 2021. 2 | - |
dc.description.tableofcontents | I. Introduction 1 II. Related Works 3 A. Token Embedding 3 B. Transformer 3 C. Image translation 4 D. Text style transfer 4 III. Methodology 7 A. Overview 7 B. Style module 8 C. Sentence generation 9 D. Additional settings 10 IV. Experiment 12 A. Implementation 12 B. Datasets 12 V. Results 14 A. Quantitative evaluation protocol 14 B. Comparison results 15 C. Model analysis results 17 VI. Conclusion 23 References 24 | - |
dc.language.iso | eng | - |
dc.publisher | The Graduate School, Ajou University | - |
dc.rights | Theses of Ajou University are protected by copyright. | - |
dc.title | Unsupervised Text Style Transfer through Style Embedding | - |
dc.type | Thesis | - |
dc.contributor.affiliation | The Graduate School, Ajou University | - |
dc.contributor.alternativeName | Heejin Kim | - |
dc.contributor.department | Department of Artificial Intelligence, The Graduate School | - |
dc.date.awarded | 2021. 2 | - |
dc.description.degree | Master | - |
dc.identifier.localId | 1203462 | - |
dc.identifier.uci | I804:41038-000000030570 | - |
dc.identifier.url | http://dcoll.ajou.ac.kr:9080/dcollection/common/orgView/000000030570 | - |
dc.subject.keyword | Deep learning | - |
dc.subject.keyword | Natural language processing | - |
dc.subject.keyword | Text generation | - |
dc.subject.keyword | Text style transfer | - |
dc.description.alternativeAbstract | The unsupervised text style transfer problem is to generate a natural sentence that reflects a newly given style while preserving the content of the input sentence. Text style transfer has previously been solved with supervised methods using parallel datasets (Jhamtani, 2017). However, parallel sentences are scarce for most style domains, so it is unclear which parts of a sentence should be preserved and which should be changed when transferring between domains, and content is easily lost when the model focuses on changing style. The primary approach to this problem is disentanglement of content and style (Shen, 2017; Hu, 2017; Fu, 2018; John, 2019), which changes only the style information in order to keep the content. Other approaches do not separate style and content; instead, a style classifier is used to change a sentence's style. However, both approaches generate only a single output, so neither can adjust the strength of the style. Moreover, models in previous approaches typically perform two tasks, sentence reconstruction and style control, which complicates the overall architecture. We use a Transformer-based autoencoder for sentence generation, while the style embedding is learned directly in a separate style module. This separation allows each module to concentrate on its own task, and we can control the style strength of the generated sentence by adjusting the style embedding. Our approach therefore both allows style-strength control and simplifies the model architecture. In addition, experimental results show that our approach excels in both style transfer performance and content preservation. | - |
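The abstract's style-strength control can be pictured with a minimal sketch. All names here are hypothetical (the thesis does not publish this code): the idea is only that the decoder's input is the encoder's content representation combined with a style embedding scaled by a strength factor, so strength 0 reproduces the content and larger values push further toward the target style.

```python
# Hypothetical sketch of style-strength control via a scaled style
# embedding, as described in the abstract. Plain Python lists stand in
# for real tensors; this is an illustration, not the thesis's model.

def apply_style(content_repr, style_embedding, strength=1.0):
    """Add a scaled style embedding to every token vector of the
    content representation produced by the encoder."""
    return [
        [c + strength * s for c, s in zip(token_vec, style_embedding)]
        for token_vec in content_repr
    ]

# Toy example: two token vectors of dimension 3.
content = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
style = [1.0, -1.0, 0.0]

# strength = 0 leaves the content representation unchanged;
# a larger strength shifts it further toward the target style.
neutral = apply_style(content, style, strength=0.0)
styled = apply_style(content, style, strength=0.5)
```

In the actual model the shifted representation would be fed to the Transformer decoder, so varying `strength` at inference time yields outputs with different degrees of the target style from a single trained model.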
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.