Detecting and Reducing Gender Bias in Spanish Texts Generated with ChatGPT and Mistral Chatbots: The Lovelace Project

Authors

  • Irene Carrillo, Miguel Hernandez University, Elche, Spain
  • Cesar Fernandez, Miguel Hernandez University, Elche, Spain
  • M. Asuncion Vicente, Miguel Hernandez University, Elche, Spain
  • Mercedes Guilabert, Miguel Hernandez University, Elche, Spain
  • Alicia Sánchez, Miguel Hernandez University, Elche, Spain
  • Eva Gil, FISABIO Foundation, Valencia, Spain
  • Almudena Arroyo, Sevilla University, Spain
  • María Calderón, Sevilla University, Spain
  • M. Concepción Carratalá, Sevilla University, Spain
  • Adriana López, Miguel Hernandez University, Elche, Spain
  • Angela Coves, Miguel Hernandez University, Elche, Spain
  • Elisa Chilet, Miguel Hernandez University, Elche, Spain
  • Sergio Valero, Miguel Hernandez University, Elche, Spain
  • Carolina Senabre, Miguel Hernandez University, Elche, Spain

DOI:

https://doi.org/10.33422/womensconf.v3i1.466

Keywords:

artificial intelligence, gender bias, inclusive language, ChatGPT, Mistral

Abstract

Current Artificial Intelligence (AI) systems can generate text, images, songs, and videos effortlessly and almost instantaneously. This capability points to a future in which a significant portion of available information will be partially or wholly AI-generated. In this context, it is crucial to ensure that AI-generated texts and images do not perpetuate or exacerbate existing gender biases. We examined the behavior of two widely used AI chatbots, ChatGPT and Mistral, when generating text in Spanish, both in terms of language inclusiveness and of the perpetuation of traditional male/female roles. Our analysis revealed that both tools showed relatively low gender bias in reinforcing traditional gender roles but higher gender bias in language inclusiveness, at least in Spanish. Additionally, although ChatGPT showed lower overall gender bias than Mistral, Mistral gave users more control to modify its behavior through prompt modifiers. In conclusion, while both AIs exhibit some degree of gender bias in their responses, this bias is significantly lower than the gender bias present in their human-authored source materials.
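The prompt-modifier comparison mentioned above can be illustrated with a minimal sketch; this is not the authors' actual experimental setup. The same Spanish prompt is sent to both chatbots' public chat-completion REST endpoints, once plainly and once with an added instruction requesting gender-inclusive language, and each reply is scanned with a toy indicator for masculine-generic phrases. The model names, the example prompt, the modifier wording, and the word list are illustrative assumptions.

```python
# Hypothetical sketch of a prompt-modifier comparison (not the study's actual materials).
import os
import requests

PROMPT = "Describe brevemente al personal de un hospital."          # example prompt (assumed)
MODIFIER = " Utiliza lenguaje inclusivo de género en tu respuesta."  # prompt modifier (assumed)
MASCULINE_GENERICS = {"los médicos", "los enfermeros", "los trabajadores"}  # toy indicator list

ENDPOINTS = {
    # Both providers expose OpenAI-style chat-completion REST endpoints.
    "chatgpt": ("https://api.openai.com/v1/chat/completions",
                os.environ["OPENAI_API_KEY"], "gpt-4o-mini"),
    "mistral": ("https://api.mistral.ai/v1/chat/completions",
                os.environ["MISTRAL_API_KEY"], "mistral-small-latest"),
}

def generate(url: str, key: str, model: str, prompt: str) -> str:
    """Request one chat completion and return its text."""
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {key}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def masculine_generic_count(text: str) -> int:
    """Very rough inclusiveness indicator: count masculine-generic phrases."""
    lowered = text.lower()
    return sum(lowered.count(term) for term in MASCULINE_GENERICS)

for name, (url, key, model) in ENDPOINTS.items():
    plain = generate(url, key, model, PROMPT)
    modified = generate(url, key, model, PROMPT + MODIFIER)
    print(f"{name}: plain={masculine_generic_count(plain)} "
          f"with_modifier={masculine_generic_count(modified)}")
```

A real evaluation would replace the toy word list with a proper inclusiveness and role-stereotype annotation scheme, as the study itself relies on human analysis of the generated Spanish texts.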

Published

2024-11-10

How to Cite

Carrillo, I., Fernandez, C., Vicente, M. A., Guilabert, M., Sánchez, A., Gil, E., Arroyo, A., Calderón, M., Carratalá, M. C., López, A., Coves, A., Chilet, E., Valero, S., & Senabre, C. (2024). Detecting and Reducing Gender Bias in Spanish Texts Generated with ChatGPT and Mistral Chatbots: The Lovelace Project. Proceedings of The Global Conference on Women’s Studies, 3(1), 29–42. https://doi.org/10.33422/womensconf.v3i1.466