Exploring the Impact of Corpus Diversity on Financial Pretrained Language Models

Title
Exploring the Impact of Corpus Diversity on Financial Pretrained Language Models
Author
정우환 (Woohwan Jung)
Keywords
Computation and Language (cs.CL)
Issue Date
2023-12-10
Publisher
Association for Computational Linguistics
Citation
Findings of the Association for Computational Linguistics: EMNLP 2023
Abstract
Over the past few years, various domain-specific pretrained language models (PLMs) have been proposed and have outperformed general-domain PLMs in specialized areas such as the biomedical, scientific, and clinical domains. Financial PLMs have also been studied because of the high economic impact of financial data analysis. However, we found that existing financial PLMs were not pretrained on sufficiently diverse financial data. This lack of diverse training data leads to subpar generalization performance, so general-purpose PLMs, including BERT, often outperform financial PLMs on many downstream tasks. To address this issue, we collected a broad range of financial corpora and trained the Financial Language Model (FiLM) on these diverse datasets. Our experimental results confirm that FiLM outperforms not only existing financial PLMs but also general-domain PLMs. Furthermore, we provide empirical evidence that this improvement can be achieved even on corpus groups unseen during pretraining.
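The abstract describes pretraining on diverse financial corpora and then comparing models on downstream financial tasks. The sketch below is a minimal illustration of such a downstream fine-tuning step, assuming the Hugging Face transformers and datasets libraries; the checkpoint name, the two-sentence toy dataset, and its labels are placeholders for illustration and are not taken from the paper or this record.

# Minimal fine-tuning sketch (assumes the Hugging Face `transformers` and `datasets`
# packages). The checkpoint below is a generic placeholder; released FiLM weights,
# if published, would be substituted here.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "bert-base-uncased"  # placeholder, not the FiLM checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Toy financial sentence-classification data (labels are invented for illustration;
# real evaluations would use an established financial benchmark instead).
train = Dataset.from_dict({
    "text": ["Quarterly revenue beat analyst estimates.",
             "The firm warned of weaker guidance ahead."],
    "label": [1, 0],
})

def tokenize(batch):
    # Convert raw sentences into fixed-length input IDs and attention masks.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

train = train.map(tokenize, batched=True)

args = TrainingArguments(output_dir="film-finetune", num_train_epochs=1,
                         per_device_train_batch_size=2, logging_steps=1)
Trainer(model=model, args=args, train_dataset=train).train()

In this setup, comparing a general-domain PLM with a financial PLM amounts to changing the checkpoint string, which mirrors the kind of comparison described in the abstract.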
URI
https://arxiv.org/abs/2310.13312
https://repository.hanyang.ac.kr/handle/20.500.11754/188097
DOI
10.18653/v1/2023.findings-emnlp.138
Appears in Collections:
ETC[S] > 연구정보 (Research Information)
Files in This Item:
There are no files associated with this item.