In AI We Trust? Effects of Agency Locus and Transparency on Uncertainty Reduction in Human-AI Interaction

Citations: 108
Author
Liu, Bingjie [1]
Affiliation
[1] Calif State Univ Los Angeles, Los Angeles, CA 90032 USA
Keywords
Machine Learning; Agency Locus; Agency Attribution; Transparency; Uncertainty; Trust; INFORMATION; MOTIVATION; AUTOMATION;
DOI
10.1093/jcmc/zmab013
Chinese Library Classification
G2 [Information and Knowledge Dissemination];
Discipline Classification Code
05 ; 0503 ;
Abstract
Artificial intelligence (AI) is increasingly used to make decisions for humans. Unlike traditional AI that is programmed to follow human-made rules, machine-learning AI generates rules from data. These machine-generated rules are often unintelligible to humans. Will users feel more uncertainty about decisions governed by such rules? To what extent does rule transparency reduce uncertainty and increase users' trust? In a 2 × 3 × 2 between-subjects online experiment, 491 participants interacted with a website purported to be a decision-making AI system. Three factors of the AI system were manipulated: agency locus (human-made rules vs. machine-learned rules), transparency (no vs. placebic vs. real explanations), and task (detecting fake news vs. assessing personality). Results show that machine-learning AI triggered less social presence, which increased uncertainty and lowered trust. Transparency reduced uncertainty and enhanced trust, but the mechanisms of this effect differed between the two types of AI.
Lay Summary
Machine-learning AI systems are governed by system-generated rules derived from their analysis of large databases. These rules are not predetermined by humans, and they are often difficult for humans to interpret. In this research, I ask whether users trust the judgments of such systems driven by machine-made rules. The results show that, compared with a traditional system programmed to follow human-made rules, machine-learning AI was perceived as less humanlike. This led users to be more uncertain about the decisions produced by the machine-learning AI system, which in turn decreased their trust in the system and their intention to use it. Transparency about the rationales for its decisions alleviated users' uncertainty and enhanced their trust, provided that the rationales were meaningful and informative.
Pages: 384-402
Number of pages: 19