
Alterfactual explanations - the relevance of irrelevance for explaining AI systems

  • Explanation mechanisms from the field of Counterfactual Thinking are a widely-used paradigm for Explainable Artificial Intelligence (XAI), as they follow a natural way of reasoning that humans are familiar with. However, all common approaches from this field are based on communicating information about features or characteristics that are especially important for an AI's decision. We argue that in order to fully understand a decision, not only knowledge about relevant features is needed, but that awareness of irrelevant information also contributes substantially to the creation of a user's mental model of an AI system. Therefore, we introduce a new way of explaining AI systems. Our approach, which we call Alterfactual Explanations, is based on showing an alternative reality where irrelevant features of an AI's input are altered. By doing so, the user directly sees which characteristics of the input data can change arbitrarily without influencing the AI's decision.
We evaluate our approach in an extensive user study, revealing that it significantly contributes to the participants' understanding of an AI. We show that alterfactual explanations convey an understanding of different aspects of the AI's reasoning than established counterfactual explanation methods do.
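The core idea — altering only irrelevant input features and observing that the decision is unchanged — can be illustrated with a minimal, hypothetical sketch (this is not the authors' implementation; the toy linear model, its weights, and the example inputs are invented for illustration):

```python
# Hypothetical toy classifier: the second input feature has zero weight,
# i.e. it is irrelevant to the decision.
WEIGHTS = [1.5, 0.0]  # feature index 1 is irrelevant (weight 0)

def predict(x):
    """Binary decision from a linear score."""
    score = sum(w * v for w, v in zip(WEIGHTS, x))
    return int(score > 0)

original = [2.0, 0.3]

# Alterfactual: change only the irrelevant feature, by an arbitrary amount.
alterfactual = [original[0], -5.0]

# The decision stays the same, showing the user that this feature
# can vary freely without influencing the model's output.
print(predict(original) == predict(alterfactual))  # True
```

In contrast, a counterfactual explanation would alter a *relevant* feature (here, the first one) just enough to flip the decision; the alterfactual instead communicates which parts of the input the model ignores.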

Metadata
Author:Silvan Mertes, Christina Karle, Tobias Huber, Katharina Weitz, Ruben Schlagowski, Elisabeth André
URN:urn:nbn:de:bvb:384-opus4-990615
Frontdoor URL:https://opus.bibliothek.uni-augsburg.de/opus4/99061
URL:https://sites.google.com/view/xai2022
Parent Title (English):IJCAI 2022 - Workshop on Explainable Artificial Intelligence (XAI), Saturday, 23 July, 2022
Publisher:International Joint Conferences on Artificial Intelligence
Editor:Rosina O. Weber, Ofra Amir, Tim Miller
Type:Conference Proceeding
Language:English
Date of Publication (online):2022/11/03
Year of first Publication:2022
Publishing Institution:Universität Augsburg
Release Date:2022/11/03
First Page:5
Last Page:11
DOI:https://doi.org/10.48550/arXiv.2207.09374
Institutes:Fakultät für Angewandte Informatik
Fakultät für Angewandte Informatik / Institut für Informatik
Fakultät für Angewandte Informatik / Institut für Informatik / Lehrstuhl für Menschzentrierte Künstliche Intelligenz
Dewey Decimal Classification:0 Computer science, information, general works / 00 Computer science, knowledge, systems / 004 Data processing; computer science
Licence:CC-BY-SA 4.0: Creative Commons: Attribution - ShareAlike