How to design an LCS to create explainable AI models for real-world applications
With the ever-increasing capabilities of modern AI systems comes a rapidly growing interest among non-technical stakeholders in employing "AI" to improve their existing systems and workflows. This interest is especially pronounced in industrial settings such as manufacturing, where the use of AI has historically been limited by various challenges surrounding the gathering of data. However, concerted efforts to automate machinery have increased the amount of usable data, which, paired with the wish of some stakeholders to automate through "AI", opens up new applications of AI. Nevertheless, many stakeholders, especially those who will interact with the system on a daily basis, will not sufficiently trust AI models, which hinders their adoption. This issue can be alleviated by using explainable models that build trust through their transparency rather than solely through statistical evaluation.
In this extended abstract, past work on how to determine the specific requirements that various stakeholder groups place on model structure is reintroduced, and one result from a real-world case study is discussed. Additionally, an approach to designing a Learning Classifier System (LCS) that delivers such models is highlighted.
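Since the abstract argues that trust comes from the transparency of the model structure itself, a small illustration may help readers unfamiliar with LCS. The sketch below shows a generic rule-based model of the kind an LCS evolves; it is not the system from the paper, and the interval-condition/linear-submodel rule shape as well as all names (`Rule`, `RuleSetModel`) are assumptions made purely for illustration.

```python
# Illustrative sketch only: a minimal rule-based model of the kind an LCS
# evolves. NOT the system from the paper; rule shape (interval conditions
# with linear local models, common in modern LCS variants) and all names
# here are assumptions made for illustration.
from dataclasses import dataclass
import numpy as np


@dataclass
class Rule:
    lower: np.ndarray   # lower bounds of the interval condition, one per feature
    upper: np.ndarray   # upper bounds of the interval condition
    coef: np.ndarray    # coefficients of the rule's local linear model
    intercept: float    # intercept of the local linear model

    def matches(self, x: np.ndarray) -> bool:
        # A rule is responsible for an input iff it lies inside its hyperrectangle.
        return bool(np.all((self.lower <= x) & (x <= self.upper)))

    def predict(self, x: np.ndarray) -> float:
        return float(self.coef @ x + self.intercept)

    def __str__(self) -> str:
        cond = " AND ".join(
            f"{lo:.2f} <= x{i} <= {hi:.2f}"
            for i, (lo, hi) in enumerate(zip(self.lower, self.upper))
        )
        return f"IF {cond} THEN y = {self.coef} . x + {self.intercept:.2f}"


class RuleSetModel:
    """Predicts by combining the outputs of all matching rules."""

    def __init__(self, rules: list[Rule]):
        self.rules = rules

    def predict(self, x: np.ndarray) -> float:
        outputs = [r.predict(x) for r in self.rules if r.matches(x)]
        # A real LCS would weight matching rules, e.g. by fitness;
        # a plain mean keeps the sketch short.
        return float(np.mean(outputs)) if outputs else 0.0

    def explain(self, x: np.ndarray) -> list[str]:
        # The explanation is simply the set of rules responsible for x.
        return [str(r) for r in self.rules if r.matches(x)]


if __name__ == "__main__":
    rules = [
        Rule(np.array([0.0, 0.0]), np.array([0.5, 1.0]), np.array([2.0, 0.0]), 0.1),
        Rule(np.array([0.4, 0.0]), np.array([1.0, 1.0]), np.array([0.5, 1.5]), -0.2),
    ]
    model = RuleSetModel(rules)
    x = np.array([0.45, 0.7])
    print(model.predict(x))
    for line in model.explain(x):
        print(line)
```

Under these assumptions, every prediction can be traced back to the handful of human-readable IF-THEN rules that matched the input, which is the kind of structural transparency the abstract refers to.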


| Author: | Michael Heider |
|---|---|
| URN: | urn:nbn:de:bvb:384-opus4-1259489 |
| Frontdoor URL: | https://opus.bibliothek.uni-augsburg.de/opus4/125948 |
| ISBN: | 979-8-4007-1464-1 |
| Parent Title (English): | GECCO '25 Companion: proceedings of the Genetic and Evolutionary Computation Conference Companion, 14-18 July 2025, Malaga, Spain |
| Publisher: | Association for Computing Machinery (ACM) |
| Place of publication: | New York, NY |
| Editor: | Gabriela Ochoa, Bogdan Filipič |
| Type: | Conference Proceeding |
| Language: | English |
| Year of first Publication: | 2025 |
| Publishing Institution: | Universität Augsburg |
| Release Date: | 2025/10/23 |
| First Page: | 2258 |
| Last Page: | 2260 |
| DOI: | https://doi.org/10.1145/3712255.3734305 |
| Institutes: | Fakultät für Angewandte Informatik |
| | Fakultät für Angewandte Informatik / Institut für Informatik |
| | Fakultät für Angewandte Informatik / Institut für Informatik / Lehrstuhl für Organic Computing |
| Dewey Decimal Classification: | 0 Computer science, information & general works / 00 Computer science, knowledge & systems / 004 Data processing; computer science |
| Licence: | CC BY-SA 4.0: Creative Commons Attribution - ShareAlike |



