User-centred proactive dialogue modelling for trustworthy conversational assistants

The current wave of artificial intelligence and technological advancements has brought intelligent assistants into our daily lives. Such personal assistants help us with simple tasks, like providing news or weather information, smart home control, or entertainment, using natural language. Despite their appraisal as intelligent entities, personal or conversational assistants are in general still stuck in the role of butlers and reactive bystanders that act upon commands. To enhance the cooperation capability of these systems and unfold their technical competencies to the fullest, the integration of proactivity has become an emerging research topic in this area. Proactivity implies that technical systems, such as conversational assistants, possess the ability to detect a user’s need for assistance and to initiate appropriate actions accordingly. Even though related work shows the potential benefits of proactive behaviour with regard to human-machine cooperation, the acceptance of proactive technology is still low due to an expectation gap between system behaviour and user requirements. To close this gap, we propose to equip proactive systems with proactive dialogue management in order to include the user in the system’s decision processes and to negotiate appropriate actions. However, how to computationally model timely and relevant proactive dialogue without giving the user the perception of being controlled or invading their privacy is an open question. Inappropriate proactive behaviour may have devastating effects on the cooperation and lead to diminished trust in the system, which may compromise the acceptance of this technology. Therefore, this work aims at providing accepted and trustworthy proactive assistance by developing socially and task-effective dialogue models, with the overall goal of improving the cooperation between humans and machines. For this, three major contributions are provided. As the first contribution, we present a proactive dialogue model for human-machine cooperation. This concept builds upon two exploratory pilot studies that observe how users perceive state-of-the-art approaches, in order to infer user and system requirements for proactive dialogue in cooperative contexts. Based on the outcome of the initial studies, we conduct a requirement analysis and provide a taxonomy of proactive dialogue for cooperation. Here, we introduce proactive dialogue act types which represent different autonomy levels of proactive dialogue behaviour.
Proactive dialogue in general is considered the initiation of supporting dialogues to facilitate task execution. In addition, we propose a cognitive system architecture with the goal of implementing proactive dialogue in a technical system using methods of artificial intelligence and human-computer interaction. As a second contribution, we present the design and evaluation of four user-centred proactive dialogue strategies based on the developed proactive dialogue model. Here, the goal is to provide an understanding of the effects of proactive dialogue design on cooperation, not only from a usability point of view but also from a social, user-centred perspective that includes a system’s trustworthiness. For this, we develop and implement several conversational assistance prototypes, both low- and high-fidelity, that are capable of proactive dialogue. In laboratory and more realistic user studies, we shed light on the effects of proactive dialogue on a system’s usability as well as on human-computer trust, depending on task context, user characteristics, and user state. These experiments allow us to synthesise guidelines for the implementation of user-centred proactive dialogue management in cooperative conversational assistants. As a third and last contribution, we use the gained understanding of the social impact of proactive dialogue to implement user-centred proactive dialogue models, with the goal of achieving trusted and task-effective conversational assistants that improve cooperation. In this regard, we provide findings concerning user expectations of user-adaptive proactive dialogue and the feasibility of utilising a trust measure for dialogue adaptation. To enable statistically driven adaptation methods, a proactive dialogue data corpus is collected and annotated with several features including trust. Based on the provided data, we advance the state of the art in computationally modelling trust during conversational cooperation and present approaches for real-time prediction of trust during dialogue. Evaluation of the trust predictors demonstrates the utility of our approach, achieving reasonable recall and accuracy. Trust prediction is then integrated into a conversational assistant to realise trust-adaptive proactive dialogue management. For dialogue management, we develop and implement both a rule-based and a reinforcement learning approach. The high trustworthiness and usability of trust-adaptive proactive dialogue management are demonstrated in a user simulator study, for which a new socially aware user simulator has been developed. In summary, we provide the first user-centred approach for integrating the concept of proactivity into human-computer dialogue. Here, we enhance the social awareness of artificially intelligent systems by equipping them with the ability to reason about their own trustworthiness during cooperation and to adapt their proactive dialogue behaviour accordingly. Finally, this enables machines to provide more human-like and natural decision-making for appropriately assisting humans in complex task environments. This forms an important step on the way from mere conversational assistants to personal advisors.
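
To make the ideas of autonomy-graded proactive dialogue act types and trust-adaptive proactive dialogue management more concrete, the following is a minimal, hypothetical Python sketch. The act type names, the DialogueState fields, the trust thresholds, and the select_proactive_act function are illustrative assumptions made for this sketch and are not taken from the dissertation itself.

from dataclasses import dataclass
from enum import Enum


class ProactiveActType(Enum):
    # Illustrative proactive dialogue act types, ordered by increasing system autonomy.
    NONE = 0          # purely reactive: act only on explicit user request
    NOTIFICATION = 1  # point out relevant information, leave the decision to the user
    SUGGESTION = 2    # propose a concrete action and ask for confirmation
    INTERVENTION = 3  # carry out the action autonomously and report the result


@dataclass
class DialogueState:
    # Minimal dialogue state for the hypothetical trust-adaptive policy.
    predicted_trust: float  # output of a real-time trust predictor, scaled to [0, 1]
    task_urgency: float     # estimated need for assistance, scaled to [0, 1]


def select_proactive_act(state: DialogueState) -> ProactiveActType:
    # Rule-based mapping from predicted trust (and urgency) to an autonomy level:
    # higher predicted trust licenses more autonomous proactive behaviour, while
    # low trust keeps the system reactive to avoid the perception of being controlled.
    if state.predicted_trust < 0.3:
        return ProactiveActType.NONE
    if state.predicted_trust < 0.6:
        return ProactiveActType.NOTIFICATION
    if state.predicted_trust < 0.85 or state.task_urgency < 0.5:
        return ProactiveActType.SUGGESTION
    return ProactiveActType.INTERVENTION


if __name__ == "__main__":
    # Example: moderate trust and high urgency -> the assistant offers a suggestion.
    print(select_proactive_act(DialogueState(predicted_trust=0.7, task_urgency=0.9)))

In the reinforcement learning variant mentioned in the abstract, such hand-crafted thresholds would be replaced by a learned policy that maps the dialogue state, including the predicted trust, to a proactive dialogue act.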

Metadata
Author: Matthias Kraus
Frontdoor URL: https://opus.bibliothek.uni-augsburg.de/opus4/101301
URL: https://hdl.handle.net/10481/77693
ISBN: 978-84-1117-573-9
Publisher: Universidad de Granada
Place of publication: Granada
Type: Book
Language: English
Year of first publication: 2022
Release date: 2023/01/30
Number of pages: 287
Note:
Dissertation, Universidad de Granada / Universität Ulm, 2022
Institutes:Fakultät für Angewandte Informatik
Fakultät für Angewandte Informatik / Institut für Informatik
Fakultät für Angewandte Informatik / Institut für Informatik / Lehrstuhl für Menschzentrierte Künstliche Intelligenz
Dewey Decimal Classification: 0 Computer science, information & general works / 00 Computer science, knowledge & systems / 004 Data processing; computer science
Licence: CC BY-NC-ND 4.0: Creative Commons: Attribution - NonCommercial - NoDerivatives (with print on demand)