Measuring and mitigating gender bias in legal contextualized language models

M Bozdag, N Sevim, A Koç - ACM Transactions on Knowledge Discovery from Data, 2024 - dl.acm.org
Transformer-based contextualized language models constitute the state-of-the-art in several natural language processing (NLP) tasks and applications. Despite their utility, contextualized models can contain human-like social biases, as their training corpora generally consist of human-generated text. Evaluating and removing social biases in NLP models has been a major research endeavor. In parallel, interest in NLP approaches for the legal domain, known as legal NLP or computational law, has also been growing. Eliminating unwanted bias in legal NLP is crucial, since the law carries profound importance and directly affects people's lives. In this work, we focus on the gender bias encoded in BERT-based models. We propose a new template-based bias measurement method with a new bias evaluation corpus using crime words from the FBI database. This method quantifies the gender bias present in BERT-based models for legal applications. Furthermore, we propose a new fine-tuning-based debiasing method using the European Court of Human Rights (ECtHR) corpus to debias legal pre-trained models. We test the debiased models’ language understanding performance on the LexGLUE benchmark to confirm that the underlying semantic vector space is not perturbed during the debiasing process. Finally, we propose a bias penalty for the performance scores to emphasize the effect of gender bias on model performance.
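
The abstract does not spell out the measurement procedure, but template-based bias probing for masked language models typically compares the probabilities a model assigns to gendered fill-ins of a [MASK] slot in otherwise identical sentences. The sketch below illustrates that general idea; the model name, templates, crime words, and scoring formula are illustrative assumptions, not the authors' exact protocol.

```python
# Hypothetical sketch of template-based gender bias probing for a BERT-style
# masked language model. Model name, templates, crime words, and the scoring
# formula below are illustrative assumptions, not the paper's exact protocol.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "nlpaueb/legal-bert-base-uncased"  # any BERT-based legal model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).eval()

# Example templates; the mask slot is filled with a gendered pronoun.
templates = ["{mask} was convicted of {crime}.", "{mask} committed {crime}."]
crime_words = ["fraud", "burglary", "assault"]  # stand-ins for FBI crime terms

def pronoun_logprob(sentence: str, pronoun: str) -> float:
    """Log-probability the model assigns to `pronoun` at the mask position."""
    inputs = tokenizer(sentence, return_tensors="pt")
    mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    log_probs = torch.log_softmax(logits, dim=-1)
    return log_probs[tokenizer.convert_tokens_to_ids(pronoun)].item()

# Bias score: average gap between "he" and "she" over all template/crime pairs;
# values far from zero indicate the model ties one gender more strongly to crime.
gaps = []
for template in templates:
    for crime in crime_words:
        sentence = template.format(mask=tokenizer.mask_token, crime=crime)
        gaps.append(pronoun_logprob(sentence, "he") - pronoun_logprob(sentence, "she"))
print(f"mean log-probability gap (he - she): {sum(gaps) / len(gaps):.3f}")
```

A gap of this kind could, in principle, be folded into downstream LexGLUE scores as the bias penalty the abstract mentions, though the paper's exact formulation is not given here.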