Managing the tension between opposing effects of explainability of artificial intelligence: a contingency theory perspective

Cited by: 42
Author
Abedin, Babak [1]
Affiliation
[1] Macquarie Univ, Macquarie Business Sch, Sydney, NSW, Australia
Keywords
Contingency theory; Systematic literature review; Explainable artificial intelligence; Interpretable analytics; Mitigating strategies; Opposing effects; PARADOX; CONTEXT; SYSTEMS; INFORMATION; SELECTION; ETHICS
DOI
10.1108/INTR-05-2020-0300
Chinese Library Classification
F [Economics]
Discipline Code
02
Abstract
Purpose - Research into the interpretability and explainability of data analytics and artificial intelligence (AI) systems is on the rise. However, most recent studies either solely promote the benefits of explainability or criticize it for its counterproductive effects. This study addresses this polarized space: it aims to identify the opposing effects of AI explainability and the tensions between them, and to propose how to manage those tensions to optimize AI system performance and trustworthiness.
Design/methodology/approach - The author systematically reviews the literature and synthesizes it through a contingency theory lens to develop a framework for managing the opposing effects of AI explainability.
Findings - The author finds five opposing effects of explainability: comprehensibility, conduct, confidentiality, completeness and confidence in AI (the 5Cs). The author also proposes six perspectives on managing the tensions between the 5Cs: pragmatism in explanation, contextualization of the explanation, cohabitation of human agency and AI agency, metrics and standardization, regulatory and ethical principles, and other emerging solutions (i.e. AI enveloping, blockchain and AI fuzzy systems).
Research limitations/implications - As in other systematic literature reviews, the results are limited by the content of the selected papers.
Practical implications - The findings show how AI owners and developers can manage tensions between profitability, prediction accuracy and system performance via visibility, accountability and maintaining the "social goodness" of AI. The results guide practitioners in developing metrics and standards for AI explainability, with the context of AI operation as the focus.
Originality/value - This study addresses polarized beliefs amongst scholars and practitioners about the benefits of AI explainability versus its counterproductive effects. It posits that there is no single best way to maximize AI explainability; instead, the co-existence of its enabling and constraining effects must be managed.
Pages: 425-453
Page count: 29