
Full metadata record

DC Field: Value [Language]
dc.contributor.author: 정우환
dc.date.accessioned: 2024-01-09T03:32:36Z
dc.date.available: 2024-01-09T03:32:36Z
dc.date.issued: 2023-12-10
dc.identifier.citation: Findings of the Association for Computational Linguistics [en_US]
dc.identifier.uri: https://arxiv.org/abs/2310.13312 [en_US]
dc.identifier.uri: https://repository.hanyang.ac.kr/handle/20.500.11754/188097
dc.description.abstract: Over the past few years, various domain-specific pretrained language models (PLMs) have been proposed and have outperformed general-domain PLMs in specialized areas such as the biomedical, scientific, and clinical domains. In addition, financial PLMs have been studied because of the high economic impact of financial data analysis. However, we found that financial PLMs were not pretrained on sufficiently diverse financial data. This lack of diverse training data leads to subpar generalization performance, resulting in general-purpose PLMs, including BERT, often outperforming financial PLMs on many downstream tasks. To address this issue, we collected a broad range of financial corpora and trained the Financial Language Model (FiLM) on these diverse datasets. Our experimental results confirm that FiLM outperforms not only existing financial PLMs but also general-domain PLMs. Furthermore, we provide empirical evidence that this improvement can be achieved even for unseen corpus groups. [en_US]
dc.description.sponsorship: This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2023-00261068, Development of Lightweight Multimodal Anti-Phishing Models and Split-Learning Techniques for Privacy-Preserving Anti-Phishing), (No. RS-2022-00155885, Artificial Intelligence Convergence Innovation Human Resources Development (Hanyang University ERICA)), and (2018-0-00192, the National Program for Excellence in SW). This work was also supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2022R1G1A1013549). Finally, we thank the reviewers for their detailed feedback, which helped to improve the quality of this paper. [en_US]
dc.language: en_US [en_US]
dc.publisher: Association for Computational Linguistics [en_US]
dc.relation.ispartofseries: EMNLP 2023; 2101-2112
dc.subject: Computation and Language (cs.CL) [en_US]
dc.title: Exploring the Impact of Corpus Diversity on Financial Pretrained Language Models [en_US]
dc.type: Article [en_US]
dc.identifier.doi: 10.18653/v1/2023.findings-emnlp.138 [en_US]
dc.relation.page: 2101-2112
dc.contributor.googleauthor: Choe, Jaeyoung
dc.contributor.googleauthor: Noh, Keonwoong
dc.contributor.googleauthor: Kim, Nayeon
dc.contributor.googleauthor: Ahn, Seyun
dc.contributor.googleauthor: Jung, Woohwan
dc.relation.code: 20230059
dc.sector.campus: E
dc.sector.daehak: COLLEGE OF COMPUTING[E]
dc.sector.department: DEPARTMENT OF ARTIFICIAL INTELLIGENCE
dc.identifier.pid: whjung
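
The abstract above describes domain-adaptive pretraining: continuing masked-language-model training of a general PLM on a diverse mix of financial corpora. The following is a minimal sketch of that kind of procedure using the Hugging Face Transformers library; the base model (bert-base-uncased), the corpus file names, and all hyperparameters are illustrative assumptions and are not taken from the paper, whose actual FiLM training setup may differ.

# Minimal sketch of continued masked-language-model pretraining on a mix of
# financial corpora (in the spirit of FiLM). File names, model choice, and
# hyperparameters below are illustrative assumptions, not the paper's setup.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Hypothetical plain-text files, one per financial corpus group
# (e.g. news, corporate filings, analyst reports).
corpus_files = ["news.txt", "filings.txt", "reports.txt"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Load and tokenize the combined corpora.
raw = load_dataset("text", data_files={"train": corpus_files})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard BERT-style masking: 15% of tokens are masked for prediction.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="film-sketch",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    learning_rate=5e-5,
)

Trainer(
    model=model,
    args=args,
    train_dataset=train_set,
    data_collator=collator,
).train()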
Appears in Collections:
ETC[S] > Research Information (연구정보)
Files in This Item:
There are no files associated with this item.