Ethics and governance of trustworthy medical artificial intelligence

Cited by: 168
Authors
Zhang, Jie [1 ,2 ]
Zhang, Zong-ming [3 ]
Affiliations
[1] Nanjing Univ Chinese Med, Inst Literature Chinese Med, Nanjing 210023, Peoples R China
[2] Nantong Univ, Xinglin Coll, Nantong 226236, Peoples R China
[3] Nanjing Univ Chinese Med, Res Ctr Chinese Med Culture, Nanjing 210023, Peoples R China
Funding
National Social Science Fund of China;
Keywords
Artificial intelligence; Healthcare; Ethics; Governance; Regulation; Data; Algorithms; Responsibility attribution; HEALTH-CARE; BLACK-BOX; BIG DATA; MACHINE;
DOI
10.1186/s12911-023-02103-9
Chinese Library Classification
R-058
Abstract
Background: The growing application of artificial intelligence (AI) in healthcare has brought technological breakthroughs to traditional diagnosis and treatment, but it is accompanied by many risks and challenges. These adverse effects are also seen as ethical issues that affect trustworthiness in medical AI and need to be managed through identification, prognosis, and monitoring.

Methods: We adopted a multidisciplinary approach and summarized five factors that influence the trustworthiness of medical AI: data quality, algorithmic bias, opacity, safety and security, and responsibility attribution. We discussed these factors from the perspectives of technology, law, and healthcare stakeholders and institutions. Using the ethical framework of ethical values, ethical principles, and ethical norms, we propose corresponding ethical governance countermeasures for trustworthy medical AI from the ethical, legal, and regulatory aspects.

Results: Medical data are primarily unstructured and lack uniform, standardized annotation, and data quality directly affects the quality of medical AI algorithm models. Algorithmic bias can affect AI clinical predictions and exacerbate health disparities. The opacity of algorithms affects patients' and doctors' trust in medical AI, and algorithmic errors or security vulnerabilities can pose significant risks and harm to patients. The involvement of medical AI in clinical practice may threaten doctors' and patients' autonomy and dignity. When accidents occur with medical AI, responsibility attribution is unclear. All these factors affect people's trust in medical AI.

Conclusions: To make medical AI trustworthy, at the ethical level, the ethical value orientation of promoting human health should first and foremost be considered as the top-level design. At the legal level, current medical AI does not have moral status, and humans remain the duty bearers. At the regulatory level, we propose strengthening data quality management, improving algorithm transparency and traceability to reduce algorithmic bias, and regulating and reviewing the whole process of the AI industry to control risks. It is also necessary to encourage multiple parties to discuss and assess AI risks and social impacts, and to strengthen international cooperation and communication.
Pages: 15