Explainability for artificial intelligence in healthcare: a multidisciplinary perspective

Cited by: 783
Authors
Amann, Julia [1]
Blasimme, Alessandro [1]
Vayena, Effy [1]
Frey, Dietmar [2]
Madai, Vince I. [2,3]
Affiliations
[1] Swiss Fed Inst Technol, Hlth Eth & Policy Lab, Dept Hlth Sci & Technol, Hottingerstr 10, CH-8092 Zurich, Switzerland
[2] Charite Univ Med Berlin, Charite Lab Artificial Intelligence Med CLAIM, Berlin, Germany
[3] Birmingham City Univ, Sch Comp & Digital Technol, Fac Comp Engn & Built Environm, Birmingham, W Midlands, England
Funding
EU Horizon 2020
Keywords
Artificial intelligence; Machine learning; Explainability; Interpretability; Clinical decision support; DECISIONS;
DOI
10.1186/s12911-020-01332-6
Chinese Library Classification
R-058
Abstract
Background
Explainability is one of the most heavily debated topics in the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet explainability is not a purely technological issue; it also invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice.
Methods
Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the "Principles of Biomedical Ethics" by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI.
Results
Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. From the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health.
Conclusions
To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.
Pages: 9
References (44 in total)