A comparison of model confidence metrics on visual manufacturing quality data

After ground-breaking achievements through the application of modern deep learning, there is a considerable push towards using machine learning systems for the visual inspection tasks that are part of most industrial manufacturing processes. But whilst there exist many successful proof-of-concept implementations, productive use remains problematic. Whilst a lack of interpretability is one concern, the constant presence of data drift is another. Changes in pre-materials or processes, degradation of sensors, and product redesigns impose constant change on statically trained machine learning models. To handle these kinds of changes, a measurement of system confidence is needed. Since raw model output probabilities are often insufficient in this regard, better solutions are required. In this work, we compare and contrast several pre-existing methods used to describe model confidence. In contrast to previous works, they are evaluated on a large set of real-world manufacturing data. We show that an approach based on auto-encoder reconstruction error proves to be the most promising in all scenarios tested.
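The following is a minimal sketch (not the paper's implementation) of the general idea behind an auto-encoder reconstruction-error confidence signal: an auto-encoder trained only on in-distribution inspection images reconstructs familiar inputs well, so a high per-image reconstruction error suggests drifted or otherwise unfamiliar data. The architecture, threshold choice, and all names below are illustrative assumptions, written in PyTorch.

```python
# Minimal sketch: auto-encoder reconstruction error as a confidence / drift signal.
# Model size, image size, and the 99%-quantile threshold are illustrative assumptions.
import torch
import torch.nn as nn


class ConvAutoEncoder(nn.Module):
    """Small convolutional auto-encoder for 64x64 grayscale inspection images."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),   # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def reconstruction_error(model, images):
    """Per-image mean squared reconstruction error; higher values mean less familiar inputs."""
    model.eval()
    with torch.no_grad():
        recon = model(images)
        return ((images - recon) ** 2).flatten(1).mean(dim=1)


if __name__ == "__main__":
    model = ConvAutoEncoder()                # in practice: trained on in-distribution images
    train_imgs = torch.rand(64, 1, 64, 64)   # placeholder for real training data
    new_imgs = torch.rand(8, 1, 64, 64)      # placeholder for new production data

    # Calibrate a threshold on training data, then flag new images exceeding it
    # as low-confidence / potential drift.
    threshold = reconstruction_error(model, train_imgs).quantile(0.99)
    flags = reconstruction_error(model, new_imgs) > threshold
    print("low-confidence samples:", flags.nonzero().flatten().tolist())
```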

Metadata
Author:Philipp Mascha
Frontdoor URL:https://opus.bibliothek.uni-augsburg.de/opus4/115068
ISBN:9789811978661
ISBN:9789811978678
ISSN:2367-3370
ISSN:2367-3389
Parent Title (English):Computer Vision and Machine Intelligence: proceedings of CVMI 2022
Publisher:Springer Nature
Place of publication:Singapore
Editor:Massimo Tistarelli, Shiv Ram Dubey, Satish Kumar Singh, Xiaoyi Jiang
Type:Conference Proceeding
Language:English
Year of first Publication:2023
Release Date:2024/09/02
First Page:165
Last Page:177
Series:Lecture Notes in Networks and Systems ; 586
DOI:https://doi.org/10.1007/978-981-19-7867-8_14
Institutes:Fakultät für Angewandte Informatik
Fakultät für Angewandte Informatik / Institut für Informatik
Sustainability Goals
Sustainability Goals / Goal 9 - Industry, Innovation and Infrastructure
Dewey Decimal Classification:0 Computer science, information & general works / 00 Computer science, knowledge & systems / 004 Data processing; computer science