Who needs explanation and when? Juggling explainable AI and user epistemic uncertainty

Times cited: 46
Authors
Jiang, Jinglu [1 ]
Kahai, Surinder [1 ]
Yang, Ming [2 ]
Affiliations
[1] Binghamton Univ, Sch Management, Binghamton, NY USA
[2] Cent Univ Finance & Econ, Sch Informat, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
AI explainability; AI advice acceptance; Medical AI; Human-AI interaction; Experiment; ARTIFICIAL-INTELLIGENCE; RECOMMENDATION AGENTS; HEALTH LITERACY; DECISION AIDS; BLACK-BOX; TRUST; CONFIDENCE; SATISFACTION; ACCEPTANCE; IMPACT;
DOI
10.1016/j.ijhcs.2022.102839
CLC number
TP3 [Computing technology, computer technology];
Discipline code
0812;
Abstract
In recent years, AI explainability (XAI) has received wide attention. Although XAI is expected to play a positive role in decision-making and advice acceptance, various opposing effects have also been found. These opposing effects highlight the critical role of context, especially human factors, in understanding XAI's impacts. This study investigates the effects of providing three types of post-hoc explanations (alternative advice, prediction confidence scores, and prediction rationale) on two context-specific user decision-making outcomes (AI advice acceptance and advice adoption). Our field experiment results show that users' epistemic uncertainty matters in understanding XAI's impacts. As users' epistemic uncertainty increases, only providing prediction rationale is beneficial, whereas providing alternative advice and showing prediction confidence scores may hinder users' advice acceptance. Our study contributes to the emerging literature on the human aspects of XAI by clarifying XAI's effects and showing that XAI may not always be desirable. It also contributes by highlighting the importance of considering user profiles when predicting XAI's impacts, designing XAI, and providing professional services with AI.
Pages: 17