Explainable cooperative machine learning - an approach towards comprehensible machine learning for non-verbal behaviour analysis
The influence of artificial intelligence systems on our society is steadily increasing. While in the beginning such systems were mainly the subject of research, they are now regularly employed and integrated into a variety of use cases, some of which are sensitive in nature, such as analysing patients' affective states during psychotherapy based on non-verbal behaviour.
This broad application of AI systems was made possible by substantial advances in model performance over the last years. However, this increase in performance came alongside a significant increase in complexity, to the point where the resulting models exceed human understanding. This led to the development of dedicated methods to regain insight into the decision processes of complex machine learning models; in fact, an entire research field known as Explainable Artificial Intelligence has emerged. While those methods are capable of increasing the interpretability of machine learning models, they often require expertise in machine learning themselves. Nowadays, however, it is not only machine learning experts who are confronted with such systems, but also people who are proficient in their own domain, e.g. non-verbal behaviour analysis, yet lack knowledge in machine learning. Therefore, this thesis focuses on the question of how comprehensibility when dealing with machine learning models can also be provided for domain experts without machine learning expertise. Two approaches have been identified to realise this goal.
The first approach to achieving comprehensibility for such users is to provide access to explainable AI methods through easy-to-apply interfaces when they are confronted with black-box models. In the first part of this thesis, we present an explainable AI extension for NOVA, an existing non-verbal behaviour analysis and annotation tool commonly used by behaviour analysts, which provides access to machine learning models that support users during the analysis and annotation of non-verbal behaviour. The extension offers 14 different explainable AI methods encapsulated in dedicated user interfaces, together with detailed information about the various methods and how to apply them. Additionally, we introduce the explainable cooperative machine learning workflow, which combines active learning and semi-supervised learning with explainable AI to empower the human in the loop during the collaboration between humans and machine learning models. The proposed workflow gives users access to additional information about a model's learnt knowledge during the active learning step through the interfaces introduced by the explainable AI extension, ultimately helping users build a correct mental model of the machine learning model and potentially increasing trust in the long term. The applicability and helpfulness of the proposed extension are evaluated in two user studies, and a minimal sketch of one such cooperative learning round is given below.
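To make the described workflow concrete, the following Python sketch shows one round of explainable cooperative machine learning under simplifying assumptions: generic scikit-learn components stand in for the actual NOVA implementation, the toy data replaces real non-verbal behaviour features, and impurity-based feature importances serve as an illustrative placeholder for the 14 explainable AI methods offered by the extension.

    # Minimal sketch of an explainable cooperative machine learning loop.
    # All component choices are illustrative assumptions, not the NOVA code.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Toy data standing in for extracted non-verbal behaviour features.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    labelled = np.zeros(len(X), dtype=bool)
    labelled[:50] = True  # small initial seed of human annotations

    model = RandomForestClassifier(random_state=0)

    for round_ in range(3):
        # 1. Train on the currently labelled portion of the session.
        model.fit(X[labelled], y[labelled])

        # 2. Semi-supervised step: predict labels for the unlabelled pool.
        pool_idx = np.flatnonzero(~labelled)
        proba = model.predict_proba(X[pool_idx])

        # 3. Active learning step: select the least confident samples and
        #    hand them back to the human annotator for review.
        uncertainty = 1.0 - proba.max(axis=1)
        query = pool_idx[np.argsort(uncertainty)[-10:]]

        # 4. Explainable AI step: show which features drive the model, so the
        #    annotator can build a mental model of its learnt knowledge.
        importances = model.feature_importances_
        print(f"round {round_}: top features", np.argsort(importances)[::-1][:3])

        # In NOVA the annotator would inspect the explanations and correct the
        # proposed labels; here we simply accept the ground truth for the queries.
        labelled[query] = True

In the actual workflow, step 4 would open the dedicated explanation interfaces, and the labels accepted in the final step would come from the human annotator rather than from the ground truth.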
Even though the latest advances in machine learning rely almost exclusively on deep-learning architectures, alternative models, such as Decision Trees and Bayesian Networks, are available that provide interpretability inherently. The second part of this thesis investigates the applicability of such models, compares them to state-of-the-art deep-learning architectures, and presents a guideline for their application. In addition, a hybrid approach is introduced that combines the interpretable structure of a Bayesian Network with the predictive capabilities of deep-learning models; a simplified sketch of this idea follows.
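The sketch below illustrates the general idea of such a hybrid under assumed, simplified conditions: a two-edge graph (features -> concept -> label) keeps the interpretable structure of a Bayesian Network, while the conditional distribution of the concept given the raw features is estimated by a neural network. The variable names, the graph, and the use of an MLP are hypothetical and do not reproduce the exact architecture developed in the thesis.

    # Hypothetical hybrid sketch: interpretable graph structure with one
    # neural conditional distribution. Illustrative assumption only.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Toy data: 2 observable features, a binary "concept", a binary label.
    X = rng.normal(size=(400, 2))
    concept = (X[:, 0] + X[:, 1] > 0).astype(int)
    label = np.where(rng.random(400) < 0.9, concept, 1 - concept)  # noisy copy

    # Neural conditional P(concept | features): flexible, not interpretable.
    p_concept_given_x = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                      random_state=0).fit(X, concept)

    # Tabular conditional P(label | concept): fully interpretable.
    cpt = np.array([[np.mean(label[concept == c] == l) for l in (0, 1)]
                    for c in (0, 1)])  # rows: concept, columns: label

    def predict_label_proba(x):
        """Chain the conditionals along the graph: sum_c P(l | c) P(c | x)."""
        pc = p_concept_given_x.predict_proba(x)  # shape (n, 2)
        return pc @ cpt                          # shape (n, 2)

    print(predict_label_proba(X[:3]))

The design intent mirrors the hybrid approach described above: the graph and the conditional probability table remain inspectable by a domain expert, while the perception-level mapping from raw features to concepts is delegated to a learned model.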

