The Bullying Game: Sexism Based Toxic Language Analysis on Online Games Chat Logs by Text Mining

Proceedings of The 13th International Conference on Modern Research in Management, Economics and Accounting

Year: 2021



Aslı Ekiciler, İmran Ahioğlu, Nihan Yıldırım, İpek İlkkaracan Ajas, and Tolga Kaya

 

ABSTRACT: 

As a unique type of social network, the online gaming industry is a fast-growing, rapidly changing, and male-dominated field that attracts users from diverse backgrounds. Because the community is dominated by male users, game developers, players, and investors, non-inclusiveness and gender inequality remain salient problems. In online gaming communities, most women players report toxic and offensive language or verbal abuse directed against them. Symbolic interactionists and feminists hold that words matter, since terms that dehumanize others can make it easier to harm them. Observing and reporting the toxic behavior, sexism, and harassment that occur is therefore critical for preventing cyberbullying and for helping gender diversity and equality grow in the online gaming industry. However, research on this topic is still scarce, apart from a few milestone works. With the aim of contributing to the theory and practice of sexist toxic language detection in the online gaming community, we focus on the analysis and automatic detection of toxic comments in the context of online gamers' communities. As an analytical system proposal for revealing sexist toxic language on online gaming platforms, we adapted QCA using the MAXQDA tool. We also applied a Naïve Bayes classifier for text mining to classify whether chat log content is sexist and toxic, refined the text mining model with a Laplace estimator, and re-tested the model's accuracy. Data visualization techniques further revealed the most toxic words used against women in online gaming communities. The QCA and text mining applications produced similar results, and the study also showed that the Naïve Bayes classifier's accuracy rate did not change with the Laplace estimator. The findings are expected to raise awareness of gender-based toxic language usage, and applying the proposed mining model can inspire similar research and immediate practical solutions for easing the moderation of these communities and cleansing them of gender-based discrimination and sexist bullying.
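To illustrate the text-mining step described above, the sketch below shows, in a minimal and hypothetical form (not the authors' code, data, or exact pipeline), how a Naïve Bayes classifier can be trained on bag-of-words features from chat-log lines and re-tested with and without a Laplace estimator, here expressed through scikit-learn's MultinomialNB alpha parameter. The example messages and labels are invented for demonstration only.

```python
# Minimal sketch: Naive Bayes classification of chat-log lines as
# sexist/toxic (1) vs. neutral (0), comparing accuracy with and
# without Laplace (add-one) smoothing. All data here is hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

# Hypothetical labeled chat logs.
texts = [
    "go back to the kitchen",       # toxic (invented example)
    "nice shot, well played",       # neutral
    "girls cant play this game",    # toxic (invented example)
    "gg everyone, good round",      # neutral
]
labels = [1, 0, 1, 0]

# Bag-of-words features, a common representation for Naive Bayes text mining.
vectorizer = CountVectorizer(lowercase=True)
X = vectorizer.fit_transform(texts)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.5, random_state=42, stratify=labels
)

# alpha=1.0 applies the Laplace estimator; alpha near 0 effectively disables smoothing.
for alpha in (1e-10, 1.0):
    model = MultinomialNB(alpha=alpha)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"alpha={alpha}: accuracy={acc:.2f}")
```

On a realistically sized chat-log corpus, comparing the two accuracy figures in this way mirrors the abstract's observation that the Laplace estimator may leave the classifier's accuracy rate unchanged.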

Keywords: Verbal Harassment, Multiplayer Online Games, Chat Logs, Toxic Language, Sexism, Gender Bias, Text Mining, Naïve Bayes Classifier, Women Players.