Causal Interpretations of Black-Box Models

Cited by: 284
Authors
Zhao, Qingyuan [1 ]
Hastie, Trevor [2 ]
Affiliations
[1] Univ Penn, Dept Stat, 400 Huntsman Hall,3730 Walnut St, Philadelphia, PA 19104 USA
[2] Stanford Univ, Dept Stat, Sequoia Hall,390 Serra Mall, Stanford, CA 94305 USA
Funding
US National Institutes of Health; US National Science Foundation;
Keywords
Back-door adjustment; Data visualization; Machine learning; Mediation analysis; Partial dependence plot; Inference; Regression;
DOI
10.1080/07350015.2019.1624293
CLC Classification Number
F [Economics];
Subject Classification Code
02;
Abstract
The fields of machine learning and causal inference have developed many concepts, tools, and theories that are potentially useful for each other. By exploring the possibility of extracting causal interpretations from black-box machine-trained models, we briefly review the languages and concepts in causal inference that may be of interest to machine learning researchers. We start with the curious observation that Friedman's partial dependence plot has exactly the same formula as Pearl's back-door adjustment, and we discuss three requirements for making causal interpretations: a model with good predictive performance, some domain knowledge in the form of a causal diagram, and suitable visualization tools. We provide several illustrative examples and, using visualization tools for black-box models, find some interesting and potentially causal relations.
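The formal coincidence noted in the abstract can be written out. In notation reconstructed here (the symbols f, x_S, X_C, and n are our shorthand, not quoted from the article), Friedman's partial dependence of a fitted model f on a feature subset x_S averages the complement features X_C over their empirical distribution,

\bar{f}_S(x_S) = \mathbb{E}_{X_C}[f(x_S, X_C)] \approx \frac{1}{n} \sum_{i=1}^{n} f(x_S, x_{i,C}),

while Pearl's back-door adjustment, for X_C a valid adjustment set, gives

\mathbb{E}[Y \mid \mathrm{do}(X_S = x_S)] = \mathbb{E}_{X_C}[\,\mathbb{E}(Y \mid x_S, X_C)\,].

The two expressions agree when f(x_S, x_C) is a good estimate of E(Y | x_S, x_C), which is why a well-predicting model plus a causal diagram can support a causal reading of its partial dependence plot. Below is a minimal sketch of the brute-force estimator, assuming a scikit-learn-style model exposing a predict method; the function and argument names are illustrative, not taken from the paper.

import numpy as np

def partial_dependence(model, X, feature_idx, grid):
    # Friedman's partial dependence, computed by brute force:
    # fix the target feature at each grid value and average the
    # model's predictions over the observed complement features.
    values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = v                    # set x_S = v for every row
        values.append(model.predict(X_mod).mean())   # average over X_C
    return np.array(values)

For example, partial_dependence(model, X, j, np.linspace(X[:, j].min(), X[:, j].max(), 50)) traces the curve over a 50-point grid for feature j.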
Pages: 272-281
Page count: 10