Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 이기천 (Lee, Kichun) | - |
dc.date.accessioned | 2022-12-09T00:33:46Z | - |
dc.date.available | 2022-12-09T00:33:46Z | - |
dc.date.issued | 2022-08 | - |
dc.identifier.citation | APPLIED SCIENCES-BASEL, v. 12, no. 16, article no. 7968, pp. 1-16 | en_US |
dc.identifier.issn | 2076-3417 | en_US |
dc.identifier.uri | https://www.mdpi.com/2076-3417/12/16/7968 | en_US |
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/178075 | - |
dc.description.abstract | In natural language processing (NLP), the Transformer is widely used and has reached state-of-the-art performance in numerous NLP tasks such as language modeling, summarization, and classification. Moreover, the variational autoencoder (VAE) is an efficient generative model for representation learning, combining deep learning with statistical inference over encoded representations. However, using a VAE in natural language processing often brings practical difficulties such as posterior collapse, also known as Kullback–Leibler (KL) vanishing. To mitigate this problem, while taking advantage of the parallelization of language data processing, we propose a new language representation model that integrates two seemingly different deep learning models: a Transformer coupled with a variational autoencoder. We compare the proposed model with previous work, such as a VAE connected with a recurrent neural network (RNN). Our experiments with four real-life datasets show that implementation with KL annealing mitigates posterior collapse. The results also show that the proposed Transformer model outperforms RNN-based models in reconstruction and representation learning, and that the encoded representations of the proposed model are more informative than those of the other tested models. | en_US |
dc.description.sponsorship | This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2020R1F1A1076278). This work was also supported by the 'Human Resources Program in Energy Technology' of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), with financial resources granted by the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20204010600090). | en_US |
dc.language | en | en_US |
dc.publisher | MDPI | en_US |
dc.source | 91152_이기천.pdf | - |
dc.subject | natural language processing | en_US |
dc.subject | transformer | en_US |
dc.subject | variational autoencoder | en_US |
dc.subject | text mining | en_US |
dc.title | Informative Language Encoding by Variational Autoencoders Using Transformer | en_US |
dc.type | Article | en_US |
dc.relation.no | 16 | - |
dc.relation.volume | 12 | - |
dc.identifier.doi | 10.3390/app12167968 | en_US |
dc.relation.page | 1-16 | - |
dc.relation.journal | APPLIED SCIENCES-BASEL | - |
dc.contributor.googleauthor | Ok, Changwon | - |
dc.contributor.googleauthor | Lee, Geonseok | - |
dc.contributor.googleauthor | Lee, Kichun | - |
dc.sector.campus | S | - |
dc.sector.daehak | 공과대학 (College of Engineering) | - |
dc.sector.department | 산업공학과 (Department of Industrial Engineering) | - |
dc.identifier.pid | skylee | - |
dc.identifier.orcid | https://orcid.org/0000-0002-5184-7151 | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.