Self-adaptive systems are a new class of reactive software systems that has recently attracted considerable attention. Because such systems adjust their behavior at runtime to meet given goals, drawing on previously gained insights, they appear to be artificially intelligent and capable of learning. Autonomous vehicles, robots, and adaptive production plants are just a few instances whose practical application promises large efficiency gains for industry. Despite the advances made in this area, one blocker for practical application in critical fields remains: how can a system be adequately tested when its runtime behavior is unknown in advance?
As this thesis elaborates, traditional test strategies for reactive systems are no longer feasible. Building on Harel and Pnueli's notion of a development process for reactive systems, an extension for self-adaptivity is proposed, and particular challenges and requirements for testing self-adaptive systems are derived. It turns out that test strategies for self-adaptive systems should themselves be adaptive. A number of experiments is reported in which machine learning approaches were applied to this problem. Based on several case studies, including a Smart Vacuum Cleaner, a Smart Energy Grid, and a Self-Organizing Production Cell, these experiments illustrate different aspects of, and possible approaches to, systematically testing self-adaptive systems. Requirements and an outlook on future work are given.
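To give a flavor of what an adaptive test strategy can mean in practice, the following is a minimal illustrative sketch (not the thesis's actual method): an epsilon-greedy selector that steers test effort toward inputs that have exposed failures before, so the strategy adapts alongside the system under test. The input names and the `sut` stub are hypothetical.

```python
import random

class AdaptiveTester:
    """Epsilon-greedy test-input selector: inputs with a higher observed
    failure rate are exercised more often; a small exploration rate keeps
    probing the remaining inputs as the system under test adapts."""

    def __init__(self, test_inputs, eps=0.1, seed=0):
        self.rng = random.Random(seed)
        self.eps = eps
        self.stats = {t: [0, 0] for t in test_inputs}  # input -> [runs, failures]

    def pick(self):
        if self.rng.random() < self.eps:
            return self.rng.choice(list(self.stats))  # explore
        # exploit: prefer the input with the highest observed failure rate
        return max(self.stats, key=lambda t: self.stats[t][1] / max(1, self.stats[t][0]))

    def record(self, test_input, failed):
        counts = self.stats[test_input]
        counts[0] += 1
        counts[1] += int(failed)


# Hypothetical system under test that fails only on the "obstacle" scenario.
def sut(test_input):
    return test_input == "obstacle"  # True means a failure was observed

tester = AdaptiveTester(["clean", "obstacle", "dock"])
for _ in range(200):
    t = tester.pick()
    tester.record(t, failed=sut(t))
```

After a few hundred rounds the selector concentrates its budget on the failure-revealing scenario while still occasionally revisiting the others.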
Industrie 4.0 introduces decentralized, self-organizing, and self-learning systems for production control. At the same time, new machine learning algorithms are becoming increasingly powerful and are solving real-world problems. We apply Google DeepMind's Deep Q Network (DQN) agent algorithm for Reinforcement Learning (RL) to production scheduling in order to realize the Industrie 4.0 vision for production control. In an RL environment, cooperative DQN agents, which utilize deep neural networks, are trained with user-defined objectives to optimize scheduling. We validate our system with a small factory simulation that models an abstracted frontend-of-line semiconductor production facility.
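The core idea of RL-based scheduling can be sketched with a toy example. The following is a minimal stand-in using tabular Q-learning instead of a DQN (no neural network, so it stays self-contained); the environment, job set, and reward are invented for illustration. The agent repeatedly dispatches jobs on a single machine and is rewarded with the negative completion time of each dispatched job, so minimizing total completion time; it should discover the shortest-processing-time order.

```python
import random
from collections import defaultdict

# Toy scheduling environment: three jobs with fixed processing times,
# dispatched one by one on a single machine.
JOBS = {"A": 5, "B": 1, "C": 3}  # job -> processing time

def run_episode(q, eps, alpha=0.1, gamma=1.0):
    """One training episode of tabular Q-learning over dispatch decisions."""
    remaining, t = frozenset(JOBS), 0
    while remaining:
        actions = sorted(remaining)
        if random.random() < eps:
            a = random.choice(actions)  # explore
        else:
            a = max(actions, key=lambda j: q[(remaining, j)])  # exploit
        t += JOBS[a]
        reward = -t  # negative completion time of the dispatched job
        nxt = remaining - {a}
        best_next = max((q[(nxt, j)] for j in nxt), default=0.0)
        # Standard Q-learning update
        q[(remaining, a)] += alpha * (reward + gamma * best_next - q[(remaining, a)])
        remaining = nxt

def train(episodes=3000, seed=0):
    random.seed(seed)
    q = defaultdict(float)
    for i in range(episodes):
        run_episode(q, eps=max(0.05, 1.0 - i / episodes))  # decaying exploration
    return q

def greedy_schedule(q):
    """Dispatch order chosen greedily from the learned Q-values."""
    remaining, order = frozenset(JOBS), []
    while remaining:
        a = max(sorted(remaining), key=lambda j: q[(remaining, j)])
        order.append(a)
        remaining -= {a}
    return order
```

A DQN replaces the Q-table with a neural network so the approach scales to state spaces far too large to enumerate, which is what makes it applicable to a realistic fab simulation.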