In the face and heart of data scarcity in Industry 5.0: exploring applicability of facial and physiological AI models for operator well-being in human-robot collaboration

Over the past decade, research has focused on integrating collaborative robots, or cobots, into assembly lines. The envisioned future industrial workplaces involve close collaboration between human workers and cobots. With the advent of Industry 5.0, human-centered approaches to facilitate human-robot collaboration (HRC) have gained significant traction. These approaches go beyond ensuring physical safety, emphasizing the mental health and well-being of industrial workers. To achieve this goal, cobots have to be equipped with capabilities to detect worker states in real time. Despite various investigations into user states related to well-being in different domains, the manifestations of these states in industrial settings are relatively unexplored. Hence, a critical gap exists in our understanding of whether machine learning models developed for other contexts are applicable to industrial HRC. Many aspects of existing datasets pose challenges to the applicability of machine learning models in industrial settings. On the one hand, datasets for well-being-related states (e.g., pain, distraction) are typically small and lack variation in recording conditions, raising concerns about whether models trained on these datasets learn generic or dataset-specific features. On the other hand, although states like stress are well researched, few public datasets involve HRC tasks. This limitation is exacerbated by the lack of long-term studies involving industrial HRC tasks, which restricts our understanding of worker states (e.g., boredom, flow) that emerge over extended periods of familiar and repetitive tasks. These limitations of existing datasets form the motivation for the work presented in this thesis. This thesis explores applicability through multiple lenses: transferability (leveraging features from a related task), generalizability (ensuring models perform well on multiple datasets), replicability (testing approaches on various datasets and recording conditions), reproducibility (recreating industrial HRC experiences), and versatility (utilizing features/models for multiple tasks). The investigations of this thesis are presented in two parts. The first part addresses transferability, generalizability, and replicability by utilizing transfer learning techniques to train various models and assess them using explainable AI methods and cross-dataset evaluations.
The second part addresses reproducibility and versatility by analyzing user studies in simulated industrial HRC scenarios, with durations ranging from half an hour to several days. The results of this thesis not only demonstrate approaches for developing models applicable to industrial HRC settings but also identify potential avenues for improvement. These findings form the foundation for developing models that enhance human-robot collaboration in industrial environments by focusing on both efficiency and worker well-being.

Metadata
Author:Pooja Prajod
URN:urn:nbn:de:bvb:384-opus4-1169507
Frontdoor URL:https://opus.bibliothek.uni-augsburg.de/opus4/116950
Advisor:Elisabeth André
Type:Doctoral Thesis
Language:English
Date of Publication (online):2024/12/16
Year of first Publication:2024
Publishing Institution:Universität Augsburg
Granting Institution:Universität Augsburg, Fakultät für Angewandte Informatik
Date of final exam:2024/11/21
Release Date:2024/12/16
GND-Keyword:Robot; Human-machine system; Cooperation; Reinforcement learning <Artificial intelligence>; Explainable Artificial Intelligence
Page Number:322
Institutes:Fakultät für Angewandte Informatik
Fakultät für Angewandte Informatik / Institut für Informatik
Fakultät für Angewandte Informatik / Institut für Informatik / Lehrstuhl für Menschzentrierte Künstliche Intelligenz
Dewey Decimal Classification:0 Computer science, information & general works / 00 Computer science, knowledge & systems / 004 Data processing; computer science
Licence (German):Deutsches Urheberrecht mit Print on Demand (German copyright with print on demand)