
Li, Y., Zhang, G., Yang, B. et al. (4 more authors) (Submitted: 2022) HERB: Measuring hierarchical regional bias in pre-trained language models. [Preprint - arXiv] (Submitted)
Abstract
Fairness has become a trending topic in natural language processing (NLP), addressing biases targeting certain social groups such as genders and religions. However, regional bias in language models (LMs), a long-standing global discrimination problem, remains unexplored. This paper bridges the gap by analysing the regional bias learned by the pre-trained language models that are broadly used in NLP tasks. In addition to verifying the existence of regional bias in LMs, we find that the biases on regional groups can be strongly influenced by the geographical clustering of the groups. We accordingly propose a HiErarchical Regional Bias evaluation method (HERB) that utilises the information from the sub-region clusters to quantify the bias in pre-trained LMs. Experiments show that our hierarchical metric can effectively evaluate the regional bias with respect to comprehensive topics and measure the potential regional bias that can be propagated to downstream tasks. Our code is available at https://github.com/Bernard-Yang/HERB.
Metadata
| Item Type: | Preprint |
|---|---|
| Authors/Creators: | Li, Y., Zhang, G., Yang, B. et al. (4 more authors) |
| Copyright, Publisher and Additional Information: | © 2024 The Author(s). For reuse permissions, please contact the Author(s). |
| Dates: | Submitted: 2022 |
| Institution: | The University of Sheffield |
| Academic Units: | The University of Sheffield > Faculty of Engineering (Sheffield) > Department of Computer Science (Sheffield) |
| Depositing User: | Symplectic Sheffield |
| Date Deposited: | 05 Jun 2024 15:56 |
| Last Modified: | 05 Jun 2024 15:56 |
| Status: | Submitted |
| Identification Number: | 10.48550/arXiv.2211.02882 |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:213156 |
Available Versions of this Item
- HERB: Measuring hierarchical regional bias in pre-trained language models. (deposited 05 Jun 2024 15:56) [Currently Displayed]