Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 최용석 | - |
dc.contributor.author | 설지우 | - |
dc.date.accessioned | 2024-03-01T07:38:33Z | - |
dc.date.available | 2024-03-01T07:38:33Z | - |
dc.date.issued | 2024. 2 | - |
dc.identifier.uri | http://hanyang.dcollection.net/common/orgView/200000719540 | en_US |
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/188384 | - |
dc.description.abstract | An important problem of the sequence-to-sequence neural models widely used in abstractive summarization is exposure bias. To alleviate this problem, re-ranking systems have been applied in recent years. Despite some performance improvements, this approach remains underexplored. Previous work has mostly specified the rank through the ROUGE score and aligned candidate summaries, but there can be quite a large gap between the lexical overlap metric and semantic similarity. In this paper, we propose a novel training method in which a re-ranker balances the lexical and semantic quality. We further newly define false positives (semantic mistakes) in ranking and present a strategy to reduce their influence. Experiments on the CNN/DailyMail and XSum datasets show that our method can estimate the meaning of summaries without seriously degrading the lexical aspect. More specifically, it achieves an 89.67 BERTScore on the CNN/DailyMail dataset, reaching new state-of-the-art performance. | - |
dc.publisher | 한양대학교 대학원 (Hanyang University Graduate School) | - |
dc.title | Balancing Lexical and Semantic Quality in Abstractive Summarization | - |
dc.type | Theses | - |
dc.contributor.googleauthor | 설지우 | - |
dc.contributor.alternativeauthor | Jeewoo Sul | - |
dc.sector.campus | S | - |
dc.sector.daehak | 대학원 (Graduate School) | - |
dc.sector.department | 컴퓨터·소프트웨어학과 (Department of Computer Science and Software) | - |
dc.description.degree | Master | - |
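The abstract describes re-ranking candidate summaries so that lexical overlap (ROUGE) and semantic similarity (BERTScore) are balanced rather than ranking by ROUGE alone. The thesis's actual training objective is not reproduced here; as a minimal illustrative sketch only, the blending idea can be written as a convex combination of the two scores. The unigram-overlap F1 below is a stand-in for ROUGE, the semantic scores are assumed to be precomputed by a model such as BERTScore, and the weight `alpha` is a hypothetical parameter, not one from the thesis.

```python
# Illustrative sketch (not the thesis's method): rank candidate summaries
# by a weighted blend of a lexical score and a semantic score.
from collections import Counter


def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between candidate and reference (a ROUGE-1 stand-in)."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


def rerank(candidates, reference, semantic_scores, alpha=0.5):
    """Order candidates by alpha * lexical + (1 - alpha) * semantic, best first.

    semantic_scores: precomputed per-candidate similarities (e.g., BERTScore).
    """
    scored = [
        (alpha * rouge1_f1(c, reference) + (1 - alpha) * s, c)
        for c, s in zip(candidates, semantic_scores)
    ]
    return [c for _, c in sorted(scored, reverse=True)]
```

With `alpha` near 1 the ranking reduces to the ROUGE-only ordering the abstract criticizes; lowering `alpha` lets semantically faithful but lexically divergent candidates rise.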
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.