A comparative study of using pre-trained language models for toxic comment classification

Zhao, Z., Zhang, Z. and Hopfgartner, F. orcid.org/0000-0003-0380-6088 (2021) A comparative study of using pre-trained language models for toxic comment classification. In: Leskovec, J., Grobelnik, M., Najork, M., Tang, J. and Zia, L., (eds.) Companion Proceedings of the Web Conference 2021 (WWW ’21 Companion). SocialNLP 2021: The 9th International Workshop on Natural Language Processing for Social Media, 19 Apr 2021, Virtual conference. ACM Digital Library, pp. 500-507. ISBN 9781450383134

Metadata

Copyright, Publisher and Additional Information: © 2021 IW3C2 (International World Wide Web Conference Committee), published under Creative Commons CC-BY 4.0 License (http://creativecommons.org/licenses/by/4.0).
Keywords: toxic comment; hate speech; neural networks; language model; fine-tuning; pre-training; BERT; RoBERTa; XLM
Dates:
  • Accepted: 16 April 2021
  • Published (online): 19 April 2021
  • Published: April 2021
Institution: The University of Sheffield
Academic Units: The University of Sheffield > Faculty of Social Sciences (Sheffield) > Information School (Sheffield)
Depositing User: Symplectic Sheffield
Date Deposited: 20 May 2021 07:15
Last Modified: 09 Jun 2021 12:59
Status: Published
Publisher: ACM Digital Library
Refereed: Yes
Identification Number: https://doi.org/10.1145/3442442.3452313