Interdisciplinary Strategies to Mitigate Gender Bias in AI-Generated Imagery

Authors

  • Maria Asunción Vicente, Miguel Hernandez University, Elche, Spain
  • César Fernández, Miguel Hernandez University, Elche, Spain

DOI:

https://doi.org/10.33422/womensconf.v4i1.1390

Keywords:

ethics, fairness, images, machine learning, technology

Abstract

This work reports preliminary findings from Project LENA, an interdisciplinary investigation of gender bias in images produced by artificial intelligence (AI) platforms. Employing a mixed-methods framework that combines experimental prompt engineering, quantitative-qualitative visual analysis, and critical evaluation, we examine how textual prompts, model architectures (e.g., DALL·E, Stable Diffusion, Grok), and training datasets collectively shape stereotyped gender representations. Initial results show that inclusive prompting, diversified training corpora, and heightened user awareness substantially reduce biased outputs, yielding more equitable imagery. Building on these insights, we propose practical guidelines for the ethical deployment of generative AI, with special attention to educational settings. Project LENA underscores the importance of critical digital literacy, algorithmic transparency, and an intersectional feminist perspective in fostering an inclusive digital culture that empowers educators and learners to engage reflectively and transformatively with emerging visual-generation technologies. We conclude by outlining future research that integrates human-centered design principles with policy frameworks.

Published

2026-02-07

How to Cite

Vicente, M. A., & Fernández, C. (2026). Interdisciplinary Strategies to Mitigate Gender Bias in AI-Generated Imagery. Proceedings of The Global Conference on Women’s Studies, 4(1), 71–94. https://doi.org/10.33422/womensconf.v4i1.1390