Towards self-explaining assistance systems in tomorrow's factories
With the advent of highly digitized manufacturing environments, more options for automation become available. While operators previously adjusted settings manually (and directly in hardware), it is increasingly possible to make those adjustments semi-remotely at control panels. At the same time, machines are becoming more complex and more interdependent, increasing the demands on the operator. To alleviate this, digital assistance agents based on artificial intelligence methods could be employed. However, such agents may face mistrust if they are not sufficiently transparent about how their predictions and forecasts are made. In this paper, a doctoral project focusing on inherently explainable machine learning methods is motivated, building on previous results regarding explainability requirements in real-world industrial settings.



