Alfear, Noof, Kazakov, Dimitar Lubomirov orcid.org/0000-0002-0637-8106 and Al-Khalifa, Hend (2024) Meta-Evaluation of Sentence Simplification Metrics. In: Proceedings of LREC-COLING 2024: The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation. Joint International Conference on Computational Linguistics, Language Resources and Evaluation, 20-25 May 2024, ITA
Abstract
Automatic Text Simplification (ATS) is one of the major Natural Language Processing (NLP) tasks; it aims to help people understand text that is above their reading abilities and comprehension. ATS models reconstruct text into a simpler form through deletion, substitution, addition or splitting, while preserving the original meaning and maintaining correct grammar. Simplified sentences are usually evaluated either by human experts, based on three main factors: simplicity, adequacy and fluency, or by calculating automatic evaluation metrics. In this paper, we conduct a meta-evaluation of reference-based automatic metrics for English sentence simplification using a high-quality, human-annotated dataset, NEWSELA-LIKERT. We study the behavior of several evaluation metrics at the sentence level across four different sentence simplification models. All the models were trained on the NEWSELA-AUTO dataset. The correlation between the metrics' scores and human judgements was analyzed, and the results were used to recommend the most appropriate metrics for this task.
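The abstract does not specify which correlation coefficient is used; as a minimal sketch (assuming a Pearson correlation, with invented metric scores and human ratings purely for illustration), a sentence-level comparison between an automatic metric and human judgements could be computed as follows:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: one automatic metric score per simplified sentence,
# paired with the mean human Likert rating for the same sentence.
metric_scores = [0.42, 0.55, 0.31, 0.78, 0.66]
human_ratings = [2.8, 3.4, 2.1, 4.5, 3.9]
print(f"r = {pearson(metric_scores, human_ratings):.3f}")
```

A high coefficient would indicate that the metric ranks system outputs similarly to human annotators; a meta-evaluation like the paper's repeats this comparison across metrics and models and recommends the metrics with the strongest agreement.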
Metadata
| Item Type: | Proceedings Paper |
|---|---|
| Authors/Creators: | Alfear, Noof; Kazakov, Dimitar Lubomirov; Al-Khalifa, Hend |
| Dates: | 2024 |
| Institution: | The University of York |
| Academic Units: | The University of York > Faculty of Sciences (York) > Computer Science (York) |
| Depositing User: | Pure (York) |
| Date Deposited: | 04 Apr 2024 08:20 |
| Last Modified: | 02 Apr 2025 23:34 |
| Status: | Published |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:211130 |