A Performative Approach for Rethinking the Question of Gender Bias in AI

Proceedings of The 6th Global Conference on Women’s Studies

Year: 2024

DOI:

Gabriele Nino and Francesca Alessandra Lisi

 

ABSTRACT:

The aim of this contribution is to show how gender discrimination is produced in Artificial Intelligence (AI) systems. Many scholars have shown how AI potentially undermines the rights of women and LGBT+ people, as it is capable of amplifying forms of discrimination that are already widespread in society. In the context of AI ethics, the concept of “fairness” indicates precisely the capacity of an algorithm to generate results dealing with sensitive categories such as gender, ethnicity, religion, sexual orientation, and disability without producing forms of discrimination and prejudice. We explore the three main approaches that have been proposed to investigate the emergence of biases in AI systems: technical investigation, counterfactual reasoning, and constructivist methodology. This analysis reveals the need to consider, within the ethical evaluation of AI systems, the socio-political dimension in which they are developed. It is at the conjuncture between the social and the purely technical spheres that it becomes possible to understand how gender biases are encoded in AI systems. To investigate this conjuncture, we apply the theory of gender performativity as theorized by Judith Butler and Karen Barad, showing how AI operates in the social fabric by materializing determinate and specific patriarchal configurations of gender. Finally, we argue for the need to redefine the notion of fairness in order to better understand how gender dimensions are treated in machine learning (ML) systems. To this end, we explore the notion of gender-positioning.

 

Keywords: Algorithmic Discrimination; Gender Bias; Performative Theory; AI Ethics; Fairness in AI