Disentangling the model selection tasks for improved explainability in a rule-based machine learning system

With the increasing capabilities of machine learning (ML) and other artificial intelligence (AI) methods comes a growing interest from many fields of application to employ these methods to increase the automation of work tasks and to improve the efficiency and effectiveness of operations. However, these systems will only see effective use if they are trusted by those responsible for the task itself. False predictions and flawed decisions can have detrimental effects, for example, on human life in medical applications or on financial interests in industry. It is therefore reasonable for stakeholders of these systems to want to understand the reasoning of AI systems. Methods that can make this insight available to stakeholders have increasingly been summarized under the term explainable AI (XAI). While approaches exist for making black-box models explainable, the use of inherently explainable models can be more straightforward and promising. One family of algorithms producing inherently explainable models are Learning Classifier Systems (LCSs). Despite their name, LCSs are a general rule-based ML (RBML) method, and representatives have been proposed for all major ML tasks. To classify LCSs based on their mode of operation, this work introduces a new system that is more precise than the current state of the art and is based on descriptive ML terminology. While most researchers in the past have focused primarily on LCSs’ algorithmic aspects, this work adopts a distinct perspective by approaching them through the lens of optimization. It discusses LCSs with regard to the typical tasks involved in creating an ML model, which specific elements have to be optimized, and how this is typically done. Critically, the task of model selection is usually performed by some metaheuristic component and involves the subtasks of deciding how many rules to use and where to place them. This work also proposes a template to assess use-case-specific explainability requirements based on multiple stakeholders’ inputs and extensively demonstrates its usage in a real-world manufacturing setting. There, stakeholders indeed request XAI models over black-box approaches and, according to their answers, LCSs should be a good fit. Additionally, the results lay out what LCS models in that application should look like, which is, however, not achievable with the major state-of-the-art LCSs.
Therefore, a new LCS, called the Supervised Rule-based Learning System (SupRB), is introduced in this work. It is simpler than previous LCSs, has clearer optimization objectives, and produces models that can fulfil the stakeholders’ requirements. In extensive testing on real-world data, SupRB demonstrates its capability to produce small yet accurate models that outperform those of well-established methods. This work also investigates numerous possible extensions for each component of SupRB, with a special focus on its optimizers, and presents the findings of the multiple studies in a comprehensive manner based on descriptive statistics, visualizations of results, and rigorous statistical testing. Finally, various paths for future research on and application of SupRB are laid out, which can advance the field of XAI considerably.
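
The model-selection decomposition described in the abstract, deciding how many rules the final model comprises and where in the input space each rule matches, can be illustrated with a small sketch. The following Python snippet is a minimal, hypothetical illustration only, not the SupRB implementation from the thesis: the rule representation, the mutation-based selection loop, and the complexity weight of 0.01 are all illustrative assumptions. Candidate rules are first placed at random intervals (the placement subtask); a simple search over a bitmask then keeps a subset of them, trading prediction error against the number of rules (the selection subtask).

    # Hypothetical sketch of rule placement and rule-subset selection.
    # Not the thesis's SupRB code; all names and constants are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 1-D regression task.
    X = rng.uniform(0.0, 1.0, size=200)
    y = np.sin(2.0 * np.pi * X) + rng.normal(0.0, 0.1, size=200)

    class Rule:
        """An interval-matched rule with a constant local model."""
        def __init__(self, lower, upper):
            self.lower, self.upper = lower, upper
            self.prediction = 0.0

        def matches(self, x):
            return (x >= self.lower) & (x <= self.upper)

        def fit(self, X, y):
            m = self.matches(X)
            if m.any():
                self.prediction = y[m].mean()
            return self

    def predict(rules, X):
        # Mixing: average the predictions of all matching rules, default 0.
        out = np.zeros_like(X)
        for i, x in enumerate(X):
            local = [r.prediction for r in rules if r.matches(x)]
            out[i] = np.mean(local) if local else 0.0
        return out

    # Placement subtask: generate candidate rules at random intervals.
    candidates = [Rule(lo, lo + rng.uniform(0.05, 0.3)).fit(X, y)
                  for lo in rng.uniform(0.0, 1.0, size=50)]

    # Selection subtask: keep a subset via bit-flip mutations on a mask,
    # minimizing error plus a penalty per rule (fewer rules = simpler model).
    def fitness(mask):
        subset = [r for r, keep in zip(candidates, mask) if keep]
        if not subset:
            return np.inf
        mse = np.mean((predict(subset, X) - y) ** 2)
        return mse + 0.01 * len(subset)

    mask = rng.random(len(candidates)) < 0.2
    for _ in range(300):
        child = mask ^ (rng.random(len(candidates)) < 0.05)
        if fitness(child) <= fitness(mask):
            mask = child

    chosen = [r for r, keep in zip(candidates, mask) if keep]
    print(len(chosen), "rules kept; MSE =",
          round(float(np.mean((predict(chosen, X) - y) ** 2)), 3))

The explicit complexity penalty in the fitness function is what keeps the selected rule set small, and thus readable, which mirrors the explainability requirements the abstract attributes to the stakeholders.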

Metadata
Author:Michael Heider
URN:urn:nbn:de:bvb:384-opus4-1205045
Frontdoor URL:https://opus.bibliothek.uni-augsburg.de/opus4/120504
Advisor:Jörg Hähner
Type:Doctoral Thesis
Language:English
Year of first Publication:2025
Publishing Institution:Universität Augsburg
Granting Institution:Universität Augsburg, Fakultät für Angewandte Informatik
Date of final exam:2025/02/13
Release Date:2025/04/15
Tag:evolutionary computation; explainable AI; learning classifier systems; rule-based learning; machine learning
Number of pages:206
Institutes:Fakultät für Angewandte Informatik
Fakultät für Angewandte Informatik / Institut für Informatik
Fakultät für Angewandte Informatik / Institut für Informatik / Lehrstuhl für Organic Computing
Dewey Decimal Classification:0 Computer science, information & general works / 00 Computer science, knowledge & systems / 004 Data processing; computer science
Licence (German):Deutsches Urheberrecht mit Print on Demand (German copyright law with print on demand)