Proceedings of The World Conference on Gender Equality
Violence Against Women in the Age of Artificial Intelligence
A study conducted in 2019 found that non-consensual pornographic content constitutes 96% of all Deepfakes, and that pornographic Deepfakes target women without exception (Ajder et al., 2019, pp. 1-2). Although a recent study has found that men are rarely depicted in non-consensual pornographic Deepfakes (Kugler & Pace, 2021, p. 613, fn. 4), the fact remains that women are more vulnerable in practice and that Deepfake technologies have become a new tool of violence against women (Öhman, 2020, pp. 133-134; Harwell, 2018). Considering that the vast majority of Deepfakes constitute a severe interference with the privacy and personality rights of women, who are often unable to cope with this level of harm, especially in countries where victim blaming is common (Collins, 2019), policymakers must focus on non-consensual pornographic Deepfakes. Accordingly, this conference paper aims (i) to reflect on the key discussions in the legal literature and the approaches of several countries; (ii) to discuss whether it is necessary or possible for countries to act shoulder-to-shoulder to create a single regulatory response rather than offering piecemeal solutions across different jurisdictions; and (iii) to set out what we regard as the ideal content of a potential regulation dedicated to non-consensual pornographic Deepfakes. We will seek to promote the ethical use of Deepfakes while striking a delicate balance that does not undermine fundamental rights and freedoms.
Keywords: algorithms, artificial intelligence, Deepfakes, discrimination, privacy