SensorClouds - A framework for real-time processing of multi-modal sensor data for human-robot-collaboration
With the advent of Industry 4.0 and its goals of making production more flexible and products more individual, the need for robots that can collaborate with humans in manufacturing is growing. In healthcare, the aging populations of Western countries and the growing labor shortage increase the need for robotic assistants capable of relieving workers of menial and strenuous tasks. Both fields of application require robots to perceive their environment in order to interact safely with humans and perform their tasks correctly. This work presents SensorClouds, a modular framework for the real-time processing of multi-modal sensor data in applications involving human-robot collaboration.

The framework is competitive in performance with similar approaches, yet far more flexible: it is not limited to binary occupancy in its environment model but instead allows the dynamic specification of arbitrary modalities, enabling more complex sensor data processing and a more informed representation of the robot's surroundings. The architecture aids module developers in the creation of massively parallel algorithms by taking over the parallelization aspect and requiring only the implementation of processing kernels for single data points. Application developers can use these modules to quickly solve complex sensor fusion tasks. Module interoperability is guaranteed through the enforcement of data access contracts. This work also includes methods for reconstructing three-dimensional data from sensors that do not inherently provide it, so that this data can be included in the environment model alongside natively three-dimensional data.
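The abstract describes an architecture in which module developers supply only a processing kernel for a single data point, while the framework takes over parallelization and enforces what data each module may access. As a minimal sketch of that idea (not the actual SensorClouds API; every name, class, and parameter below is invented for illustration, and threads stand in for the GPU parallelism the thesis targets):

```python
# Hypothetical illustration of the per-point-kernel idea from the abstract.
# None of these names come from SensorClouds itself.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field

@dataclass
class Point:
    """A data point carrying arbitrary, dynamically specified modalities."""
    x: float
    y: float
    z: float
    modalities: dict = field(default_factory=dict)

def run_module(kernel, cloud, workers=4):
    """'Framework' side: apply a single-point kernel to every point in
    parallel, so the module author never writes parallelization code."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(kernel, cloud))

def mark_hot(point):
    """'Module' side: per-point logic only. Reads one modality and
    writes another, mimicking a simple sensor-fusion step."""
    point.modalities["hot"] = point.modalities.get("temperature", 0.0) > 30.0
    return point

cloud = [
    Point(0.0, 0.0, 0.0, {"temperature": 35.0}),
    Point(1.0, 1.0, 1.0, {"temperature": 20.0}),
]
result = run_module(mark_hot, cloud)
```

In this sketch the "data access contract" of the thesis would correspond to declaring which modality keys a kernel reads and writes, so that modules can be checked for interoperability before they run.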
Author: | Alexander Poeppel |
---|---|
URN: | urn:nbn:de:bvb:384-opus4-1146208 |
Frontdoor URL: | https://opus.bibliothek.uni-augsburg.de/opus4/114620 |
Advisor: | Wolfgang Reif |
Type: | Doctoral Thesis |
Language: | English |
Date of Publication (online): | 2024/08/09 |
Year of first Publication: | 2024 |
Publishing Institution: | Universität Augsburg |
Granting Institution: | Universität Augsburg, Fakultät für Angewandte Informatik |
Date of final exam: | 2023/10/23 |
Release Date: | 2024/08/09 |
Tag: | GPU acceleration; capacitive sensors; sensor fusion; software architectures; visual sensors |
GND-Keyword: | Virtual sensor; Software architecture; Hardware acceleration; Graphics processor; Sensor; Real-time processing |
First Page: | viii, 178 |
Institutes: | Fakultät für Angewandte Informatik |
Fakultät für Angewandte Informatik / Institut für Software & Systems Engineering |
Dewey Decimal Classification: | 0 Computer science, information & general works / 00 Computer science, knowledge & systems / 004 Data processing; computer science |