System of robot learning from multi-modal demonstration and natural language instruction

  • Collaborative robots are set to play an important role in the future of the manufacturing industry. They need to be able to work outside of safety fencing and perform new tasks to individual customer specifications. The need for frequent robot re-programming is a major challenge for small and medium-sized companies. Learning from demonstration is a promising approach that aims to enable robots to acquire new task knowledge from their end users, consisting of a sequence of actions, the associated skills, and the context in which the task is executed. Current systems offer limited support for integrating semantics and environmental changes. This paper introduces a system that combines several modalities as demonstration interfaces, including natural language instruction, visual observation and hand-guiding, enabling the robot to learn a task comprising a goal concept, a plan and basic actions, with consideration for the current environment state. The task thus learned can then be generalized to similar tasks involving different initial and goal states.
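
As a rough illustration of the task representation described in the abstract (a goal concept, a demonstrated plan, and basic actions, generalized to different initial and goal states), the following Python sketch shows one possible reading. The class names, fields, and the naive forward-chaining planner are illustrative assumptions for this page, not the authors' implementation.

    from dataclasses import dataclass, field

    # Hypothetical, simplified task representation: a goal concept as symbolic
    # predicates, a demonstrated plan as an ordered list of actions, and basic
    # actions grounded by preconditions and effects. All names are assumptions.

    @dataclass
    class Action:
        name: str                                      # e.g. "pick", "place"
        params: dict = field(default_factory=dict)     # e.g. {"object": "bolt_1"}
        preconditions: set = field(default_factory=set)
        effects: set = field(default_factory=set)

    @dataclass
    class Task:
        goal_concept: set        # predicates that must hold, e.g. {"on(bolt_1, fixture)"}
        plan: list               # demonstrated sequence of Action instances
        environment_state: set   # predicates observed at demonstration time

    def generalize(task: Task, new_state: set, new_goal: set) -> list:
        """Re-use the demonstrated actions from a different initial state:
        greedily apply any action whose preconditions hold until the new goal
        is satisfied. A stand-in for a real symbolic planner."""
        state, plan = set(new_state), []
        remaining = list(task.plan)
        while not new_goal <= state and remaining:
            applicable = next((a for a in remaining if a.preconditions <= state), None)
            if applicable is None:
                break                      # no learned action applies; give up
            state |= applicable.effects
            plan.append(applicable)
            remaining.remove(applicable)
        return plan if new_goal <= state else []

Under this reading, generalization amounts to re-selecting and re-ordering the learned actions so that they connect a new initial state to a new goal state, rather than replaying the demonstration verbatim.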

Metadata
Authors:Shuang Lu, Julia Berger, Johannes Schilp
URN:urn:nbn:de:bvb:384-opus4-1172983
Frontdoor URL:https://opus.bibliothek.uni-augsburg.de/opus4/117298
ISSN:2212-8271
Parent Title (English):Procedia CIRP
Publisher:Elsevier BV
Place of publication:Amsterdam
Type:Article
Language:English
Year of first Publication:2022
Publishing Institution:Universität Augsburg
Release Date:2024/12/04
Volume:107
First Page:914
Last Page:919
DOI:https://doi.org/10.1016/j.procir.2022.05.084
Institutes:Fakultät für Angewandte Informatik
Fakultät für Angewandte Informatik / Institut für Informatik
Fakultät für Angewandte Informatik / Institut für Informatik / Lehrstuhl für Ingenieurinformatik mit Schwerpunkt Produktionsinformatik
Sustainability Goals
Sustainability Goals / Goal 9 - Industry, Innovation and Infrastructure
Dewey Decimal Classification:0 Computer science, information & general works / 00 Computer science, knowledge & systems / 004 Data processing; computer science
Licence:CC-BY-NC-ND 4.0: Creative Commons: Attribution - NonCommercial - NoDerivatives (with Print on Demand)