May 3, 2021 · In this paper, we study how a state-of-the-art recurrent neural language model behaves when trained on data that under-represents females.
Results show that language models inherit higher bias when trained on unbalanced data using pre-trained embeddings, in comparison with using embeddings ...
Non-contextual word embedding models have been shown to inherit human-like stereotypical biases of gender, race and religion from the training corpora. To ...
Gender bias strongly impacts natural language processing applications. Word embeddings have clearly been proven both to keep ...
This work presents a comprehensive approach to reduce bias in word embedding vectors and evaluate the impact on various Natural Language Processing (NLP) tasks.
In summary, our findings suggest that debiasing methods can be effective in reducing gender bias in language models, but the degree of this effectiveness ...
Jan 9, 2024 · Gender bias occurs when a gender-neutral term, like 'doctor,' is more likely to be linked with the gender-related term 'man' than with 'woman'.
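The association described above is typically quantified by comparing cosine similarities between word vectors. A minimal sketch, using toy 3-dimensional vectors (the words and values are illustrative, not from any real embedding model):

```python
import math

def cosine(u, v):
    # cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy vectors chosen so 'doctor' leans toward 'man' (illustrative only)
emb = {
    "man":    [1.0, 0.0, 0.2],
    "woman":  [0.0, 1.0, 0.2],
    "doctor": [0.8, 0.3, 0.5],
}

# A positive score means 'doctor' sits closer to 'man' than to 'woman'
bias = cosine(emb["doctor"], emb["man"]) - cosine(emb["doctor"], emb["woman"])
print(round(bias, 3))
```

With real pre-trained embeddings, the same similarity difference over many occupation words is a common proxy for the bias these snippets discuss.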
In this paper, we propose a novel use of soft computing to mitigate gender bias in word embedding models.
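The soft-computing method itself is not detailed in the snippet. For context, a minimal sketch of the classic projection-based neutralization (a different, widely used mitigation: removing a vector's component along an estimated gender direction; all vectors below are toy values, not from the paper):

```python
def neutralize(v, direction):
    # Remove v's component along `direction` (the estimated gender axis),
    # leaving a vector orthogonal to that axis.
    d2 = sum(d * d for d in direction)
    scale = sum(a * d for a, d in zip(v, direction)) / d2
    return [a - scale * d for a, d in zip(v, direction)]

# Gender direction approximated as man - woman (toy vectors, illustrative only)
man, woman = [1.0, 0.0, 0.2], [0.0, 1.0, 0.2]
g = [m - w for m, w in zip(man, woman)]

doctor = [0.8, 0.3, 0.5]
doctor_neutral = neutralize(doctor, g)

# After neutralization, 'doctor' has no component along the gender axis,
# so its dot products with 'man' and 'woman' are equal.
dot = lambda u, v: sum(a * b for a, b in zip(u, v))
print(dot(doctor_neutral, g))
```

Because `man` and `woman` differ only along `g` here, equal dot products also mean equal cosine similarities, i.e. the toy 'doctor' vector no longer prefers either gendered term.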