Journal article

XAI for intrusion detection system: comparing explanations based on global and local scope

Abstract

An Intrusion Detection System (IDS) is a device or software application in the field of cybersecurity that has become an essential tool for providing a secure environment in computer networks. Machine learning based IDSs offer a self-learning solution and provide better performance than traditional IDSs. Because the predictive performance of an IDS is judged on conflicting criteria, the underlying algorithms are becoming more complex and hence less transparent. Explainable Artificial Intelligence (XAI) is a set of frameworks that help to develop interpretable and inclusive machine learning models. In this paper, we apply Permutation Importance, SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and Contextual Importance and Utility (CIU), covering both the global and local scope of explanation, to IDSs built on Random Forest, eXtreme Gradient Boosting and Light Gradient Boosting machine learning models, and compare the explanations in terms of accuracy, consistency and stability. This comparison can help cybersecurity personnel gain a better understanding of predictions of cyber-attacks in network traffic. A case study focusing on DoS attack variants shows some useful insights into the impact of features on prediction performance.
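For readers unfamiliar with the workflow described above, the following minimal sketch illustrates the general idea of combining a global explanation (Permutation Importance) with a local one (SHAP) for a tree-based classifier. It is not the paper's pipeline: the synthetic dataset stands in for real network-traffic features, and the model and parameter choices are assumptions for illustration only.

```python
# Minimal sketch: global (Permutation Importance) and local (SHAP) explanations
# for a Random Forest classifier acting as a stand-in "IDS".
# The data here is synthetic, not the paper's intrusion-detection dataset.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder features standing in for traffic attributes (flow duration, packet counts, ...).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Global scope: rank features by the drop in accuracy when each one is shuffled.
pi = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print("Permutation importance (mean):", np.round(pi.importances_mean, 3))

# Local scope: SHAP attributes a single prediction to individual features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test[:1])
# Depending on the shap version this is a list (one array per class) or a single
# array; either way it gives per-feature attributions for this one sample.
print("SHAP values for one sample:", shap_values)
```

In the same spirit, LIME and CIU would each be applied to individual test samples, and the resulting explanations compared across models on criteria such as accuracy, consistency and stability, as the abstract describes.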

Authors

Hariharan S; Rejimol Robinson RR; Prasad RR; Thomas C; Balakrishnan N

Journal

Journal of Computer Virology and Hacking Techniques, Vol. 19, No. 2, pp. 217–239

Publisher

Springer Nature

Publication Date

June 1, 2023

DOI

10.1007/s11416-022-00441-2

ISSN

1772-9890
