Heider, Michael
Automated prediction systems based on machine learning (ML) are employed in practical applications with increasing frequency, and stakeholders demand explanations of their decisions. ML algorithms that learn accurate sets of rules, such as learning classifier systems (LCSs), produce transparent and human-readable models by design. However, whether such models can be used effectively, both for predictions and for analyses, strongly depends on the optimal placement and selection of rules (in ML this task is known as model selection). In this article, we broaden a previous analysis of a variety of techniques for efficiently placing good rules within the search space based on their local prediction errors as well as their generality. This investigation is done within a specific pre-existing LCS, named SupRB, where the placement of rules and the selection of good subsets of rules are strictly separated, in contrast to other LCSs where these tasks sometimes blend. We compare two baselines, random search and an evolution strategy (ES), with six novelty search variants: three novelty/fitness weighting variants and, for each of those, two differing approaches to the use of the archiving mechanism. We find that random search is not sufficient and that sensible placement criteria, i.e., error and generality, are indeed needed. However, we cannot confirm that the more complicated-to-explain novelty search variants provide better results than the ES, which allows a good balance between low error and low complexity in the resulting models.
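To make the comparison described above more concrete, the following is a minimal, hypothetical Python sketch of novelty/fitness-weighted rule scoring with an archive, in the spirit of the rule placement task the abstract describes. It is not the SupRB implementation; all names (Rule, fitness, novelty, weighted_score) and the specific error, generality, and distance measures are illustrative assumptions.

```python
# Minimal sketch (not the SupRB implementation): score candidate interval rules
# by a weighted combination of fitness (low local error, high generality) and
# novelty (distance of the rule's match vector to entries in an archive).
from dataclasses import dataclass
import numpy as np


@dataclass
class Rule:
    lower: np.ndarray  # lower interval bound per input dimension (assumed encoding)
    upper: np.ndarray  # upper interval bound per input dimension

    def matches(self, X: np.ndarray) -> np.ndarray:
        """Boolean match vector over the training inputs (used as the rule's behaviour)."""
        return np.all((X >= self.lower) & (X <= self.upper), axis=1)


def fitness(rule: Rule, X: np.ndarray, y: np.ndarray) -> float:
    """Reward low local prediction error and high generality (fraction of matched examples)."""
    m = rule.matches(X)
    if not m.any():
        return 0.0
    error = np.mean((y[m] - y[m].mean()) ** 2)  # local error of a constant local model
    generality = m.mean()                       # share of training examples matched
    return generality / (1.0 + error)


def novelty(rule: Rule, archive: list, X: np.ndarray, k: int = 5) -> float:
    """Mean Hamming distance of the rule's match vector to its k nearest archive entries."""
    if not archive:
        return 1.0
    m = rule.matches(X)
    dists = sorted(np.mean(m != a) for a in archive)
    return float(np.mean(dists[:k]))


def weighted_score(rule: Rule, archive: list, X, y, w: float = 0.5) -> float:
    """Blend novelty and fitness; w=1 is pure novelty search, w=0 pure fitness-based search."""
    return w * novelty(rule, archive, X) + (1.0 - w) * fitness(rule, X, y)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 2))
    y = X[:, 0] ** 2 + 0.1 * rng.normal(size=200)
    archive = []
    best = None
    for _ in range(100):
        c = np.sort(rng.uniform(-1, 1, size=(2, 2)), axis=0)  # random interval rule
        rule = Rule(lower=c[0], upper=c[1])
        score = weighted_score(rule, archive, X, y, w=0.5)
        archive.append(rule.matches(X))  # naive "add every candidate" archive policy
        if best is None or score > best[0]:
            best = (score, rule)
    print("best score:", round(best[0], 3))
```

The archive policy and the weight w are exactly the kind of design choices the abstract's six novelty search variants differ in; the actual criteria used in the article may differ from this sketch.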
In sociotechnical settings, human operators are increasingly assisted by decision support systems. By employing such systems, important properties of sociotechnical systems, such as self-adaptation and self-optimization, are expected to improve further. To be accepted by operators and to engage efficiently with them, decision support systems need to be able to provide explanations regarding the reasoning behind specific decisions. In this article, we propose the use of learning classifier systems (LCSs), a family of rule-based machine learning methods, to facilitate transparent decision-making and highlight techniques to improve it. Furthermore, we present a novel approach to assessing application-specific explainability needs for the design of LCS models. For this, we propose an application-independent template of seven questions. We demonstrate the approach’s use in an interview-based case study for a manufacturing scenario. We find that the answers received do yield useful insights for a well-designed LCS model, as well as requirements that must be met for stakeholders to engage actively with an intelligent agent.