The game industry has long been troubled by malicious activity carried out with game bots. Bots disturb other players and destroy the in-game ecosystem. For these reasons, game companies have put great effort into detecting bots among player characters using learning-based methods. One problem with these detection methods, however, is that they do not provide rational explanations for their decisions. To resolve this problem, in this work we investigate the explainability of game bot detection. We develop an XAI model using a dataset from the Korean MMORPG AION, which contains game logs of both human players and game bots. Several classification models are applied to the dataset and then analyzed with interpretable models. This yields explanations of the bots' behavior, and the truthfulness of these explanations is evaluated. In addition, interpretability helps minimize false detections, which impose unfair restrictions on human players.
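The abstract does not name the specific classifiers, features, or interpretability methods used on the AION logs. As a purely illustrative sketch of the general idea — explaining a bot detector's decisions with a model-agnostic technique — the example below applies permutation importance to a hypothetical threshold classifier over invented log features (`actions_per_minute`, `repetition_ratio`); none of these names or numbers come from the paper.

```python
import random

random.seed(0)

# Hypothetical synthetic log features: we assume bots act faster and more
# repetitively than humans. This is illustrative data, not the AION dataset.
def make_player(is_bot):
    apm = random.gauss(220 if is_bot else 90, 20)       # actions per minute
    repeat = random.gauss(0.9 if is_bot else 0.4, 0.1)  # action repetition ratio
    return [apm, repeat], is_bot

data = [make_player(i % 2 == 0) for i in range(400)]
X = [features for features, _ in data]
y = [label for _, label in data]

# Stand-in bot detector: a simple hand-written threshold rule.
def classify(features):
    apm, repeat = features
    return apm > 150 or repeat > 0.7

def accuracy(X, y):
    return sum(classify(x) == t for x, t in zip(X, y)) / len(y)

base = accuracy(X, y)

# Permutation importance: shuffle one feature column at a time and measure
# the drop in accuracy. A large drop means the detector relies on that
# feature, which serves as a (coarse) explanation of its decisions.
for j, name in enumerate(["actions_per_minute", "repetition_ratio"]):
    shuffled = [row[:] for row in X]
    col = [row[j] for row in shuffled]
    random.shuffle(col)
    for row, v in zip(shuffled, col):
        row[j] = v
    print(f"{name}: importance = {base - accuracy(shuffled, y):.3f}")
```

In practice one would replace the threshold rule with the trained classification models and use established explanation tools, but the shuffle-and-measure loop captures the core of how model-agnostic explanations are obtained.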
|Title of host publication||ICAART 2021 - Proceedings of the 13th International Conference on Agents and Artificial Intelligence|
|Editors||Ana Paula Rocha, Luc Steels, Jaap van den Herik|
|Number of pages||8|
|Publication status||Published - 2021|
|Event||13th International Conference on Agents and Artificial Intelligence, ICAART 2021 - Virtual, Online|
Duration: 2021 Feb 4 → 2021 Feb 6
|Bibliographical note||Publisher Copyright: © 2021 by SCITEPRESS - Science and Technology Publications, Lda.|
Keywords
- Explainable artificial intelligence
- Game bot detection
ASJC Scopus subject areas
- Artificial Intelligence