Detecting and Reducing Gender Bias in Spanish Texts Generated with ChatGPT and Mistral Chatbots: The Lovelace Project
DOI: https://doi.org/10.33422/womensconf.v3i1.466

Keywords: artificial intelligence, gender bias, inclusive language, ChatGPT, Mistral

Abstract
Current Artificial Intelligence (AI) systems can effortlessly and instantaneously generate text, images, songs, and videos. This capability points toward a future in which a significant portion of available information will be partially or wholly generated by AI. In this context, it is crucial to ensure that AI-generated texts and images do not perpetuate or exacerbate existing gender biases. We examined the behavior of two common AI chatbots, ChatGPT and Mistral, when generating text in Spanish, both in terms of language inclusiveness and perpetuation of traditional male/female roles. Our analysis revealed that both tools demonstrated relatively low gender bias in terms of reinforcing traditional gender roles but exhibited higher gender bias concerning language inclusiveness, at least in Spanish. Additionally, although ChatGPT showed lower overall gender bias than Mistral, Mistral provided users with more control to modify its behavior through prompt modifiers. In conclusion, while both AIs exhibit some degree of gender bias in their responses, this bias is significantly lower than the gender bias present in their human-authored source materials.
License
Copyright (c) 2024 Irene Carrillo, Cesar Fernandez, M. Asuncion Vicente, Mercedes Guilabert, Alicia Sánchez, Eva Gil, Almudena Arroyo, María Calderón, M. Concepción Carratalá, Adriana López, Angela Coves, Elisa Chilet, Sergio Valero, Carolina Senabre

This work is licensed under a Creative Commons Attribution 4.0 International License.