Toxicity Detection in Online Games
Toxicity detection in online multiplayer games like League of Legends is challenging because of the real-time detection requirement and the multiple modes of communication within a game. This post elaborates on the work that CAH affiliates Adrian Lim, Lynnette Ng and Michael Miller Yoder will present at the ACM Ethical Games Conference in January 2024 (http://ethicalgamesconference.org/).
Online multiplayer games like League of Legends, Counter-Strike and Skribbl.io create experiences through user-to-user interactions. While player interactions enhance the gameplay experience, they also provide a platform for toxic behavior to manifest, potentially ruining the experience for other players and harming the success of the game. In this post we describe four modes in which toxicity can manifest within an online game: text, image, audio, and behavioral; and the challenges in detecting toxicity within each mode.
Text is the foundation of player interaction in games, as many games offer an in-game chat function. Much of the current research on toxicity detection in text uses machine learning algorithms to detect racism, sexism, hate speech and similar content. In games, however, standard toxic-text detection needs refining to account for an evolving gamer lexicon and deliberate misspellings that evade detection, especially in fast-paced environments like multiplayer team shooters.
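One common first step against evasive misspellings is to normalize a message before it reaches a classifier or lexicon lookup. The sketch below is a minimal illustration, assuming a tiny placeholder lexicon and a hand-picked substitution table; a production system would pair this with a learned classifier rather than keyword matching alone.

```python
import re

# Undo common character substitutions ("leetspeak") used to evade filters.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                               "5": "s", "7": "t", "@": "a", "$": "s"})
# Placeholder terms for illustration only, not a real toxic lexicon.
TOXIC_LEXICON = {"noob", "trash"}

def normalize(token: str) -> str:
    """Lowercase, undo leetspeak substitutions, collapse repeated letters."""
    token = token.lower().translate(SUBSTITUTIONS)
    return re.sub(r"(.)\1{2,}", r"\1", token)  # "traaash" -> "trash"

def flag_toxic_tokens(message: str) -> list[str]:
    """Return the tokens whose normalized form appears in the lexicon."""
    return [t for t in message.split() if normalize(t) in TOXIC_LEXICON]
```

The normalization step matters because the lexicon itself cannot enumerate every obfuscated spelling; it also has to be refreshed continually as gamer slang evolves.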
In games like Skribbl.io or Drawful, where users draw content to communicate with other players, the relevant mode for toxicity detection is the image. Identifying toxic content within these images involves aligning visual image features with textual analysis to understand the context of the drawing. This poses a challenge beyond simply identifying harmful images: freehand drawings are fluid, and producing a structured image with a mouse or touchscreen is difficult, so a greater degree of image interpretation is required.
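One way to frame the alignment of visual features with textual concepts is embedding similarity: embed the drawing and compare it against embeddings of known toxic concepts. The sketch below is purely illustrative; the toy 3-dimensional concept vectors and the concept names are made-up placeholders, and a real system would obtain embeddings from a trained vision-language encoder (e.g. a CLIP-style model) instead.

```python
import math

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical pre-computed concept embeddings (toy 3-d vectors for illustration).
TOXIC_CONCEPTS = {"hate_symbol": [0.9, 0.1, 0.0], "slur_text": [0.1, 0.9, 0.0]}

def flag_drawing(drawing_embedding: list[float], threshold: float = 0.8) -> list[str]:
    """Return the toxic concepts the drawing embedding is most similar to."""
    return [name for name, vec in TOXIC_CONCEPTS.items()
            if cosine(drawing_embedding, vec) >= threshold]
```

The threshold is the hard part in practice: shaky freehand drawings sit far from clean reference images in embedding space, which is exactly the interpretation gap described above.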
Audio is culture- and region-sensitive. There is a long line of research on speech-to-text for English, but the field is far less studied for other languages, such as Chinese dialects. With gaming slang, it is possible to be toxic without explicitly using profanity, so audio filters must continually evolve with ever-changing online slang.
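A common audio-moderation pipeline is therefore transcription followed by slang-aware text filtering. The sketch below assumes this two-stage design; `transcribe` is a stub standing in for a real speech-to-text model, and the slang map and phrase list are tiny hypothetical examples that would need constant updating.

```python
# Hypothetical slang expansions and toxic phrases, for illustration only.
SLANG_MAP = {"kys": "kill yourself"}
TOXIC_PHRASES = {"kill yourself"}

def transcribe(audio_clip: bytes) -> str:
    """Placeholder for a real ASR model (ideally one per language/dialect)."""
    raise NotImplementedError("plug in a speech-to-text model here")

def is_toxic_transcript(transcript: str) -> bool:
    """Expand known slang, then check the result for toxic phrases."""
    words = [SLANG_MAP.get(w, w) for w in transcript.lower().split()]
    return any(phrase in " ".join(words) for phrase in TOXIC_PHRASES)
```

Note how the slang expansion step catches toxicity that contains no explicit profanity at all, which is the failure mode a naive profanity filter misses.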
The last mode of toxicity is behavioral toxicity. Examples include intentional feeding, where a player deliberately gets killed by the opposing team, and negative attitude, where players deliberately disrupt the game with the intention of losing or surrendering. Detecting these behavioral toxicities is difficult to fully automate because it requires an intricate understanding of the gameplay, of how players typically interact with one another, and of what counts as an anomaly, which can vary from game to game or season to season.
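A simple statistical baseline for one such behavior, intentional feeding, is to treat an unusually high death rate as an outlier relative to the rest of the match. This is a minimal sketch under assumed inputs (per-player deaths per minute) and an assumed z-score threshold; real detectors would also weigh positioning, item builds, and chat context, and the threshold would need retuning per game and per season.

```python
import statistics

def feeding_suspects(deaths_per_min: dict[str, float],
                     z_threshold: float = 1.5) -> list[str]:
    """Return players whose death rate is an outlier above the match mean.

    The 1.5 z-score cutoff is an illustrative assumption, not a tuned value.
    """
    rates = list(deaths_per_min.values())
    mean, stdev = statistics.mean(rates), statistics.pstdev(rates)
    if stdev == 0:  # everyone has the same rate; nothing stands out
        return []
    return [p for p, r in deaths_per_min.items() if (r - mean) / stdev > z_threshold]
```

The brittleness of this heuristic is the point: a legitimately struggling player can look identical to a deliberate feeder in the statistics, which is why the paragraph above argues that behavioral toxicity resists full automation.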
With this understanding of the four modes of toxicity, we hope to open research directions and conversations on real-time multimodal toxicity detection within online games.