Interdisciplinary Strategies to Mitigate Gender Bias in AI-Generated Imagery
DOI: https://doi.org/10.33422/womensconf.v4i1.1390
Keywords: ethics, fairness, images, machine learning, technology
Abstract
This work reports preliminary findings from Project LENA, an interdisciplinary investigation of gender in images produced by artificial intelligence (AI) platforms. Employing a mixed-methods framework that combines experimental prompt engineering, quantitative-qualitative visual analysis, and critical evaluation, we examine how textual prompts, model architectures (e.g. DALL·E, Stable Diffusion, Grok), and training datasets collectively shape stereotyped gender representations. Initial results show that inclusive prompting, diversified training corpora, and heightened user awareness substantially reduce biased outputs, yielding more equitable imagery. Building on these insights, we propose practical guidelines for the ethical deployment of generative AI, with special attention to educational settings. Project LENA underscores the importance of critical digital literacy, algorithmic transparency, and an intersectional feminist perspective in fostering an inclusive digital culture that empowers educators and learners to engage reflectively and transformatively with emerging visual-generation technologies. We conclude by outlining future research that integrates human-centered design principles with policy frameworks.
License
Copyright (c) 2025 Maria Asunción Vicente, César Fernández

This work is licensed under a Creative Commons Attribution 4.0 International License.