
Heider, Michael

Refine

Has Fulltext

  • yes (11)
  • no (3)

Author

  • Stegherr, Helena (14)

Year of publication

  • 2024 (1)
  • 2023 (7)
  • 2022 (5)
  • 2021 (1)

Document Type

  • Article (7)
  • Conference Proceeding (4)
  • Part of a Book (2)
  • Report (1)

Language

  • English (14)

Keywords

  • Artificial Intelligence (2)
  • Computer Science Applications (2)
  • Computational Theory and Mathematics (1)
  • Computer Graphics and Computer-Aided Design (1)
  • Computer Networks and Communications (1)
  • General Computer Science (1)
  • Software (1)
  • evolutionary algorithms (1)
  • metaheuristics (1)
  • optimization (1)

Institute

  • Fakultät für Angewandte Informatik (14)
  • Institut für Informatik (14)
  • Lehrstuhl für Organic Computing (14)
  • Sustainability Goals (1)
  • Goal 15 - Life on Land (1)

14 search hits

GRAHF: a hyper-heuristic framework for evolving heterogeneous island model topologies (2024)
Wurth, Jonathan ; Stegherr, Helena ; Heider, Michael ; Hähner, Jörg
Fast, flexible, and fearless: a rust framework for the modular construction of metaheuristics (2023)
Wurth, Jonathan ; Stegherr, Helena ; Heider, Michael ; Luley, Leopold ; Hähner, Jörg
We present MAHF, a Rust framework for the modular construction and subsequent evaluation of evolutionary algorithms and other metaheuristics, including non-population-based and constructive approaches. We achieve high modularity and flexibility by splitting algorithms into components with a uniform interface that communicate through a shared blackboard. Nevertheless, MAHF aims to be easy to use and to adapt to the specific purposes of different practitioners. To this end, this paper first gives a general description of MAHF's design and then illustrates its application with a variety of use cases, ranging from simple extensions of the set of implemented components and the subsequent construction of algorithms not yet present in the framework to hybridization approaches, which are often difficult to realize in specialized software frameworks. With these comprehensive examples, we aim to encourage others to utilize MAHF for their needs, evaluate its effectiveness, and improve upon its application.
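The component-and-blackboard design described in the abstract can be illustrated with a minimal sketch. Note that MAHF itself is written in Rust; the names below (`Component`, `State`, `InitRandom`, `EvaluateSphere`, `SelectBest`) are hypothetical illustrations of the pattern, not MAHF's actual API.

```python
import random

class State(dict):
    """Shared blackboard: components read and write named entries."""

class Component:
    """Uniform interface: every algorithm building block implements execute()."""
    def execute(self, state: State) -> None:
        raise NotImplementedError

class InitRandom(Component):
    def __init__(self, size, dim):
        self.size, self.dim = size, dim
    def execute(self, state):
        state["population"] = [[random.uniform(-5, 5) for _ in range(self.dim)]
                               for _ in range(self.size)]

class EvaluateSphere(Component):
    def execute(self, state):
        state["fitness"] = [sum(x * x for x in ind) for ind in state["population"]]

class SelectBest(Component):
    def execute(self, state):
        # Store the (fitness, individual) pair with the lowest fitness.
        state["best"] = min(zip(state["fitness"], state["population"]))

def run(components, state=None):
    """Compose a (toy) metaheuristic by running components in sequence."""
    state = state or State()
    for c in components:
        c.execute(state)
    return state

state = run([InitRandom(size=10, dim=3), EvaluateSphere(), SelectBest()])
```

Because components only interact through the blackboard, they can be swapped or recombined freely, which is the kind of modularity the framework is built around.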
Assisting convergence behaviour characterisation with unsupervised clustering (2023)
Stegherr, Helena ; Heider, Michael ; Hähner, Jörg
Analysing the behaviour of metaheuristics comprehensively and thereby enhancing explainability requires large empirical studies. However, the amount of data gathered in such experiments is often too large to be examined and evaluated visually. This necessitates establishing more efficient analysis procedures, but care has to be taken so that these do not obscure important information. This paper examines the suitability of clustering methods to assist in the characterisation of the behaviour of metaheuristics. The convergence behaviour is used as an example, as its empirical analysis often requires looking at convergence curve plots, which is extremely tedious for large algorithmic datasets. We used the well-known K-Means clustering method and examined the results for different numbers of clusters. Furthermore, we evaluated the clusters with respect to the characteristics they utilise and compared those with the characteristics applied when a researcher inspects convergence curve plots. We found that clustering is a suitable technique to assist in the analysis of convergence behaviour, as the clusters strongly correspond to the grouping that would be done by a researcher, though the procedure still requires background knowledge to determine an adequate number of clusters. Overall, this enables us to inspect only a few curves per cluster instead of all individual curves.
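The core idea of grouping convergence curves with K-Means can be sketched as follows. The synthetic curves, Euclidean distance, and deterministic initialisation are simplifying assumptions for illustration, not the study's actual data or implementation.

```python
import math

def curve(rate, steps=20):
    """Synthetic convergence curve: exponentially decaying best fitness."""
    return [math.exp(-rate * t) for t in range(steps)]

# Five fast-converging and five slow-converging runs.
curves = [curve(0.5 + 0.02 * i) for i in range(5)] + \
         [curve(0.05 + 0.005 * i) for i in range(5)]

def kmeans(data, k, iters=20):
    # Deterministic initialisation: spread the centres over the dataset.
    centers = [list(data[i * len(data) // k]) for i in range(k)]
    labels = [0] * len(data)
    for _ in range(iters):
        # Assign each curve to its nearest centre (squared Euclidean distance).
        labels = [min(range(k),
                      key=lambda j: sum((a - b) ** 2
                                        for a, b in zip(x, centers[j])))
                  for x in data]
        # Recompute each centre as the mean of its assigned curves.
        for j in range(k):
            members = [x for x, l in zip(data, labels) if l == j]
            if members:
                centers[j] = [sum(v) / len(members) for v in zip(*members)]
    return labels

labels = kmeans(curves, k=2)
# The fast and slow runs end up in separate clusters, so a researcher only
# needs to inspect one representative curve per cluster.
```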
Discovering rules for rule-based machine learning with the help of novelty search (2023)
Heider, Michael ; Stegherr, Helena ; Pätzel, David ; Sraj, Roman ; Wurth, Jonathan ; Volger, Benedikt ; Hähner, Jörg
Automated prediction systems based on machine learning (ML) are employed in practical applications with increasing frequency and stakeholders demand explanations of their decisions. ML algorithms that learn accurate sets of rules, such as learning classifier systems (LCSs), produce transparent and human-readable models by design. However, whether such models can be effectively used, both for predictions and analyses, strongly relies on the optimal placement and selection of rules (in ML this task is known as model selection). In this article, we broaden a previous analysis of a variety of techniques to efficiently place good rules within the search space based on their local prediction errors as well as their generality. This investigation is done within a specific pre-existing LCS, named SupRB, where the placement of rules and the selection of good subsets of rules are strictly separated—in contrast to other LCSs where these tasks sometimes blend. We compare two baselines, random search and a (1,λ)-evolution strategy ((1,λ)-ES), with six novelty search variants: three novelty-/fitness-weighting variants and, for each of those, two differing approaches to the usage of the archiving mechanism. We find that random search is not sufficient and sensible criteria, i.e., error and generality, are indeed needed. However, we cannot confirm that the more complicated-to-explain novelty search variants would provide better results than the (1,λ)-ES, which allows a good balance between low error and low complexity in the resulting models.
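The novelty-search mechanism compared in the abstract can be sketched minimally: a candidate's novelty is its mean distance to the nearest neighbours in an archive of previously seen behaviours, and sufficiently novel candidates are archived. The one-dimensional behaviour descriptors and the threshold below are illustrative assumptions, not SupRB's implementation.

```python
def novelty(candidate, archive, k=3):
    """Novelty score: mean distance to the k nearest archived behaviours."""
    if not archive:
        return float("inf")  # the first candidate is maximally novel
    dists = sorted(abs(candidate - a) for a in archive)
    nearest = dists[:k]
    return sum(nearest) / len(nearest)

# Candidates described by a 1-D "behaviour" (e.g. where a rule matches).
population = [0.1, 0.11, 0.5, 0.9]
archive = []
for behaviour in population:
    # Archive only behaviours that are sufficiently novel.
    if novelty(behaviour, archive) > 0.05:
        archive.append(behaviour)
# archive now holds the distinct behaviours: 0.1, 0.5, 0.9
```

Searching for novelty rather than raw fitness rewards rules that cover unexplored parts of the input space, which is why it is a candidate for the rule-placement task.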
Assessing model requirements for explainable AI: a template and exemplary case study (2023)
Heider, Michael ; Stegherr, Helena ; Nordsieck, Richard ; Hähner, Jörg
In sociotechnical settings, human operators are increasingly assisted by decision support systems. By employing such systems, important properties of sociotechnical systems, such as self-adaptation and self-optimization, are expected to improve further. To be accepted by and engage efficiently with operators, decision support systems need to be able to provide explanations regarding the reasoning behind specific decisions. In this article, we propose the use of learning classifier systems (LCSs), a family of rule-based machine learning methods, to facilitate and highlight techniques to improve transparent decision-making. Furthermore, we present a novel approach to assessing application-specific explainability needs for the design of LCS models. For this, we propose an application-independent template of seven questions. We demonstrate the approach’s use in an interview-based case study for a manufacturing scenario. We find that the answers received do yield useful insights for a well-designed LCS model and requirements for stakeholders to engage actively with an intelligent agent.
SupRB in the context of rule-based machine learning methods: a comparative study (2023)
Heider, Michael ; Stegherr, Helena ; Sraj, Roman ; Pätzel, David ; Wurth, Jonathan ; Hähner, Jörg
Approaches for rule discovery in a learning classifier system (2022)
Heider, Michael ; Stegherr, Helena ; Pätzel, David ; Sraj, Roman ; Wurth, Jonathan ; Volger, Benedikt ; Hähner, Jörg
To meet the increasing demand for explanations of decisions made by automated prediction systems, machine learning (ML) techniques that produce inherently transparent models are directly suited. Learning Classifier Systems (LCSs), a family of rule-based learners, produce transparent models by design. However, the usefulness of such models, both for predictions and analyses, heavily depends on the placement and selection of rules (combined constituting the ML task of model selection). In this paper, we investigate a variety of techniques to efficiently place good rules within the search space based on their local prediction errors as well as their generality. This investigation is done within a specific LCS, named SupRB, where the placement of rules and the selection of good subsets of rules are strictly separated, in contrast to other LCSs where these tasks sometimes blend. We compare a Random Search, a (1,λ)-ES, and three Novelty Search variants. We find that there is a definitive need to guide the search based on some sensible criteria, i.e. error and generality, rather than just placing rules randomly and selecting the better-performing ones, but we also find that the Novelty Search variants do not beat the easier-to-understand (1,λ)-ES.
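A (1,λ)-ES of the kind used as a baseline above can be sketched in a few lines. The sphere objective, fixed offspring count, and multiplicative step-size decay are illustrative simplifications, not the paper's actual setup.

```python
import random

random.seed(1)

def sphere(x):
    """Toy objective to minimise (stands in for a rule's local error)."""
    return sum(v * v for v in x)

def one_comma_lambda_es(f, dim=3, lam=10, sigma=0.3, generations=100):
    """(1,λ)-ES: each generation the single parent is replaced by the best
    of its λ mutated offspring (comma selection: the parent never survives)."""
    parent = [random.uniform(-5, 5) for _ in range(dim)]
    for _ in range(generations):
        offspring = [[v + random.gauss(0, sigma) for v in parent]
                     for _ in range(lam)]
        parent = min(offspring, key=f)  # comma selection
        sigma *= 0.97  # simple multiplicative step-size decay (an assumption)
    return parent

best = one_comma_lambda_es(sphere)
```

Its appeal as a baseline is exactly what the abstract notes: the mechanism is easy to understand, yet it reliably trades off low error against low complexity in this kind of search.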
Investigating the impact of independent rule fitnesses in a learning classifier system (2022)
Heider, Michael ; Stegherr, Helena ; Wurth, Jonathan ; Sraj, Roman ; Hähner, Jörg
Achieving at least some level of explainability requires complex analyses for many machine learning systems, such as common black-box models. We recently proposed a new rule-based learning system, SupRB, to construct compact, interpretable and transparent models by utilizing separate optimizers for the model selection tasks concerning rule discovery and rule set composition. This allows users to specifically tailor their model structure to fulfil use-case-specific explainability requirements. From an optimization perspective, this allows us to define clearer goals, and we find that, in contrast to many state-of-the-art systems, this allows us to keep rule fitnesses independent. In this paper, we thoroughly investigate this system's performance on a set of regression problems and compare it against XCSF, a prominent rule-based learning system. We find SupRB's overall results comparable to XCSF's, while SupRB allows easier control of model structure and shows a substantially smaller sensitivity to random seeds and data splits. This increased control can aid in subsequently providing explanations for both the training and the final structure of the model.
A metaheuristic perspective on learning classifier systems (2023)
Heider, Michael ; Pätzel, David ; Stegherr, Helena ; Hähner, Jörg
A framework for modular construction and evaluation of metaheuristics (2023)
Stegherr, Helena ; Luley, Leopold ; Wurth, Jonathan ; Heider, Michael ; Hähner, Jörg
This paper presents MAHF, a software framework for the highly flexible construction of metaheuristics from individual components and the subsequent evaluation of these algorithms. In particular, MAHF is developed specifically for the experimental analysis of the algorithmic behaviour during the optimization process, with a focus on the influences of the algorithm's components. Furthermore, uncommon and incompletely examined operators or frameworks of "novel" metaheuristics are also included, so that their usefulness can be assessed. In the following, we elaborate on MAHF's structure, its general goals, and its application possibilities. Concerning MAHF's component structure, we provide examples of its usage and extension to ensure that it can be reused by others.
Comparing different metaheuristics for model selection in a supervised learning classifier system (2022)
Wurth, Jonathan ; Heider, Michael ; Stegherr, Helena ; Sraj, Roman ; Hähner, Jörg
Separating rule discovery and global solution composition in a learning classifier system (2022)
Heider, Michael ; Stegherr, Helena ; Wurth, Jonathan ; Sraj, Roman ; Hähner, Jörg
Design of large-scale metaheuristic component studies (2021)
Stegherr, Helena ; Heider, Michael ; Luley, Leopold ; Hähner, Jörg
Classifying metaheuristics: towards a unified multi-level classification system (2022)
Stegherr, Helena ; Heider, Michael ; Hähner, Jörg