
Full metadata record

DC Field: Value (Language)

dc.contributor.author: 이기천
dc.date.accessioned: 2022-12-09T00:33:46Z
dc.date.available: 2022-12-09T00:33:46Z
dc.date.issued: 2022-08
dc.identifier.citation: APPLIED SCIENCES-BASEL, v. 12, no. 16, article no. 7968, pp. 1-16 (en_US)
dc.identifier.issn: 2076-3417 (en_US)
dc.identifier.uri: https://www.mdpi.com/2076-3417/12/16/7968 (en_US)
dc.identifier.uri: https://repository.hanyang.ac.kr/handle/20.500.11754/178075
dc.description.abstract: In natural language processing (NLP), the Transformer is widely used and has reached state-of-the-art performance in numerous NLP tasks such as language modeling, summarization, and classification. Moreover, the variational autoencoder (VAE) is an efficient generative model for representation learning, combining deep learning with statistical inference over encoded representations. However, using a VAE in natural language processing often brings practical difficulties such as posterior collapse, also known as Kullback–Leibler (KL) vanishing. To mitigate this problem, while taking advantage of the parallelization of language data processing, we propose a new language representation model that integrates two seemingly different deep learning models: a Transformer coupled with a variational autoencoder. We compare the proposed model with previous work, such as a VAE connected to a recurrent neural network (RNN). Our experiments with four real-life datasets show that implementation with KL annealing mitigates posterior collapse. The results also show that the proposed Transformer model outperforms RNN-based models in reconstruction and representation learning, and that the encoded representations of the proposed model are more informative than those of the other tested models. (en_US)
dc.description.sponsorship: This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2020R1F1A1076278). This work was also supported by the 'Human Resources Program in Energy Technology' of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), with financial resources granted by the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20204010600090). (en_US)
dc.language: en (en_US)
dc.publisher: MDPI (en_US)
dc.source: 91152_이기천.pdf
dc.subject: natural language processing (en_US)
dc.subject: transformer (en_US)
dc.subject: variational autoencoder (en_US)
dc.subject: text mining (en_US)
dc.title: Informative Language Encoding by Variational Autoencoders Using Transformer (en_US)
dc.type: Article (en_US)
dc.relation.no: 16
dc.relation.volume: 12
dc.identifier.doi: 10.3390/app12167968 (en_US)
dc.relation.page: 1-16
dc.relation.journal: APPLIED SCIENCES-BASEL
dc.contributor.googleauthor: Ok, Changwon
dc.contributor.googleauthor: Lee, Geonseok
dc.contributor.googleauthor: Lee, Kichun
dc.sector.campus: S
dc.sector.daehak: College of Engineering (공과대학)
dc.sector.department: Department of Industrial Engineering (산업공학과)
dc.identifier.pid: skylee
dc.identifier.orcid: https://orcid.org/0000-0002-5184-7151
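Note: the abstract above credits KL annealing with mitigating posterior collapse in the Transformer-VAE. The record contains no code, so the following is only a minimal sketch of a VAE-style training loss with a linear KL annealing schedule, written in PyTorch; the function names, the schedule length, and the loss layout are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (not the authors' code) of a sequence-VAE loss with linear KL annealing.
    # Assumed names: kl_annealing_weight, vae_loss; assumed schedule length: 10,000 steps.
    import torch
    import torch.nn.functional as F

    def kl_annealing_weight(step: int, annealing_steps: int = 10_000) -> float:
        # Linearly ramp the KL weight from 0 to 1 over the first `annealing_steps` updates,
        # so the KL term does not dominate early training (one way to fight posterior collapse).
        return min(1.0, step / annealing_steps)

    def vae_loss(decoder_logits: torch.Tensor,   # (batch, seq_len, vocab)
                 target_ids: torch.Tensor,       # (batch, seq_len)
                 mu: torch.Tensor,               # (batch, latent_dim)
                 logvar: torch.Tensor,           # (batch, latent_dim)
                 step: int) -> torch.Tensor:
        # Reconstruction term: token-level cross-entropy over the decoder outputs.
        recon = F.cross_entropy(
            decoder_logits.reshape(-1, decoder_logits.size(-1)),
            target_ids.reshape(-1),
            reduction="mean",
        )
        # KL divergence between the approximate posterior N(mu, sigma^2) and the prior N(0, I).
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        # Annealed objective: reconstruction plus a gradually increasing KL penalty.
        return recon + kl_annealing_weight(step) * kl

In use, `mu` and `logvar` would come from the Transformer encoder's pooled output and `decoder_logits` from the Transformer decoder conditioned on the sampled latent; those wiring details are assumptions for illustration only.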

