
Irreconcilable differences? Investigating consensus of post-hoc XAI for ML-NIDS via decomposition

  • Explainable Artificial Intelligence (XAI) is essential for the acceptance of machine learning (ML) models, especially in critical domains like network security. Administrators need interpretable explanations to validate decisions, yet existing XAI methods often suffer from low consensus, where different techniques yield conflicting explanations. A key factor contributing to this issue is the presence of correlated features, which allows multiple equivalent but divergent explanations. While decorrelation techniques, such as Principal Component Analysis (PCA), can mitigate this, they often reduce interpretability by abstracting original features into complex combinations. This work investigates whether feature decorrelation via decomposition techniques can improve consensus among post-hoc XAI methods in the context of ML-based network intrusion detection (ML-NIDS). Using both NIDS and synthetic data, we analyze the effect of decorrelation across different models and preprocessing. We find that decorrelation can significantly improve consensus, but its effectiveness is highly dependent on the underlying model, preprocessing, and dataset characteristics. We also explore sparsity-inducing variants of PCA to partially recover interpretability, though results vary depending on the level of sparsity enforced.
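To give a feel for the effect studied here, the sketch below contrasts the agreement of two post-hoc attribution methods on correlated synthetic features with and without PCA decorrelation, and shows where a sparsity-inducing variant slots in. It is a minimal illustration, not the authors' pipeline: the synthetic dataset, the random forest model, the choice of permutation importance and drop-column importance as the two explainers, and the Spearman rank correlation used as a stand-in consensus score are all assumptions made for this example.

# Minimal sketch (assumptions: scikit-learn synthetic data, a random forest,
# permutation importance vs. drop-column importance as the two post-hoc
# explainers, and Spearman rank correlation as a stand-in consensus score).
import numpy as np
from scipy.stats import spearmanr
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA, SparsePCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

def drop_column_importance(model, X_tr, y_tr, X_te, y_te):
    """Accuracy drop when a feature is removed and the model is retrained."""
    base = model.score(X_te, y_te)
    drops = []
    for j in range(X_tr.shape[1]):
        m = clone(model).fit(np.delete(X_tr, j, axis=1), y_tr)
        drops.append(base - m.score(np.delete(X_te, j, axis=1), y_te))
    return np.asarray(drops)

def consensus(model, X_tr, y_tr, X_te, y_te):
    """Spearman rank correlation between the two attribution rankings."""
    perm = permutation_importance(model, X_te, y_te, n_repeats=10,
                                  random_state=0).importances_mean
    drop = drop_column_importance(model, X_tr, y_tr, X_te, y_te)
    rho, _ = spearmanr(perm, drop)
    return rho

# Synthetic data with redundant (hence correlated) features, loosely mimicking
# correlated flow statistics in NIDS datasets.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=4,
                           n_redundant=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0)

# Raw, correlated feature space.
print("consensus (raw):",
      consensus(clone(rf).fit(X_tr, y_tr), X_tr, y_tr, X_te, y_te))

# PCA decorrelates the features, but each component is a dense mix of the
# original ones, which is what hurts interpretability.
pca = PCA().fit(X_tr)
Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)
print("consensus (PCA):",
      consensus(clone(rf).fit(Z_tr, y_tr), Z_tr, y_tr, Z_te, y_te))

# Sparsity-inducing variant: each component mixes fewer original features,
# partially recovering interpretability at the cost of residual correlation.
sp = SparsePCA(n_components=X.shape[1], alpha=1.0, random_state=0).fit(X_tr)
S_tr, S_te = sp.transform(X_tr), sp.transform(X_te)
print("consensus (SparsePCA):",
      consensus(clone(rf).fit(S_tr, y_tr), S_tr, y_tr, S_te, y_te))

Whether and by how much the agreement improves in such a setup depends on the model, the preprocessing, and the data, which is exactly the sensitivity the abstract reports.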

Metadata
Author:Katharina Dietz, Johannes Schleicher, Stefan Geißler, Michael Seufert, Tobias Hoßfeld
Frontdoor URL:https://opus.bibliothek.uni-augsburg.de/opus4/125882
Parent Title (English):AI and Sustainability in the Future of Network and Service Management: 21st International Conference on Network and Service Management (CNSM), September 2025, Bologna, Italy
Type:Conference Proceeding
Language:English
Date of Publication (online):2025/10/17
Year of first Publication:2025
Publishing Institution:Universität Augsburg
Release Date:2025/10/17
Institutes:Fakultät für Angewandte Informatik
Fakultät für Angewandte Informatik / Institut für Informatik
Fakultät für Angewandte Informatik / Institut für Informatik / Lehrstuhl für Vernetzte Systeme und Kommunikationsnetze
Latest Publications (not yet published in print)