Levels of explicability for medical artificial intelligence: what do we normatively need and what can we technically reach?

Definition of the problem: The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which levels of explicability are needed to obtain informed consent when utilizing medical AI?

Arguments: We proceed in five steps. First, we map the terms commonly associated with explicability as described in the ethics and computer science literature, i.e., disclosure, intelligibility, interpretability, and explainability. Second, we conduct a conceptual analysis of the ethical requirements for explicability when it comes to informed consent. Third, we distinguish hurdles for explicability in terms of epistemic and explanatory opacity. Fourth, this allows us to conclude which level of explicability physicians must reach and what patients can expect. In a final step, we show how the identified levels of explicability can technically be met from the perspective of computer science. Throughout our work, we take diagnostic AI systems in radiology as an example.

Conclusion: We determined four levels of explicability that need to be distinguished for ethically defensible informed consent processes and showed how developers of medical AI can technically meet these requirements.
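The abstract's final step points to technical means by which developers can meet explicability requirements, with diagnostic AI in radiology as the running example. Purely as an illustration of one widely used post-hoc explainability technique, and not a method proposed in the paper, the sketch below computes an occlusion-based saliency map for an arbitrary image classifier. The function name, the model_fn interface, the patch size, and the grayscale input are all assumptions for this example.

import numpy as np

def occlusion_saliency(model_fn, image, patch=16, baseline=0.0):
    """Occlusion-based saliency map: slide a masking patch across the
    image and record how much the classifier's score drops when each
    region is hidden. Large drops mark regions the model relies on."""
    h, w = image.shape
    base_score = model_fn(image)
    saliency = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            # Positive values: hiding this patch lowered the score.
            saliency[y:y + patch, x:x + patch] = base_score - model_fn(occluded)
    return saliency

# Hypothetical usage: `classify` returns a scalar probability of a
# finding for a grayscale scan (both names are illustrative).
# heatmap = occlusion_saliency(classify, scan, patch=32)

Saliency maps of this kind are one way a model's output can be made interpretable to a physician without changing the underlying model, which is one of the trade-offs the paper's conceptual analysis addresses.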

Metadata
Author: Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch, Cristian Timmermann
URN: urn:nbn:de:bvb:384-opus4-1034504
Frontdoor URL: https://opus.bibliothek.uni-augsburg.de/opus4/103450
ISSN: 0935-7335
ISSN: 1437-1618
Parent Title (German): Ethik in der Medizin
Publisher: Springer Science and Business Media LLC
Place of publication: Berlin
Type: Article
Language: English
Year of first publication: 2023
Publishing institution: Universität Augsburg
Release date: 2023/04/05
Tags: Health Policy; Philosophy; Health (social science); Issues, ethics and legal aspects
Volume: 35
First page: 173
Last page: 199
DOI: https://doi.org/10.1007/s00481-023-00761-x
Institutes: Medizinische Fakultät
Medizinische Fakultät / Professur für Ethik der Medizin
Dewey Decimal Classification: 6 Technology, Medicine, Applied Sciences / 61 Medicine and Health / 610 Medicine and Health
Licence: CC-BY 4.0: Creative Commons: Attribution (with Print on Demand)