X-ToM: Explaining with Theory-of-Mind for Gaining Justified Human Trust

Abstract

Explaining the recommendations or decisions made by AI machines is key to gaining justified human trust in them. To this end, we take a novel look at building an explainable AI (XAI) framework in terms of increasing justified human trust in, and reliance on, the AI machine. Current XAI methods generate explanations as a single-shot response. In our work, we instead pose explanation as an iterative communication process, i.e., a dialog, between the machine and the human user. More concretely, our XAI framework generates a sequence of explanations in a dialog that takes into account three important aspects at each dialog turn: (a) the human's intention (or curiosity); (b) the human's understanding of the machine; and (c) the machine's understanding of the human user. To do this, we use Theory of Mind (ToM), which lets us explicitly model the human's intention, the machine's mind as inferred by the human, and the human's mind as inferred by the machine. These explicit mental representations in ToM are incorporated to learn an optimal explanation policy that takes into account the human's perception and beliefs. Furthermore, we show that ToM facilitates quantitatively measuring justified human trust in the machine by comparing all three mental representations. We present applications of the proposed approach to three visual recognition tasks, namely image classification, action recognition, and human body pose estimation. We argue that our ToM-based explanations are practical and more natural for both expert and non-expert users to understand the internal workings of complex machine learning models. To the best of our knowledge, this is the first work to derive explanations using ToM. Extensive human-study experiments verify our hypotheses, showing that the proposed explanations significantly outperform state-of-the-art XAI methods on all quantitative and qualitative XAI evaluation metrics, including human trust, reliance, and explanation satisfaction.
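As a rough illustration of how comparing mental representations could yield a quantitative trust score, the sketch below computes a hypothetical "justified trust" value: the fraction of the machine's correct decisions that the human's inferred model of the machine also predicts. The function name, inputs, and scoring rule are assumptions for illustration only, not the paper's actual definition or implementation.

```python
# Hypothetical sketch (not the paper's implementation): quantify "justified
# trust" as agreement between the machine's actual decisions and the human's
# inferred model of the machine, restricted to cases where the machine is
# actually correct.

from typing import List


def justified_trust(machine_preds: List[str],
                    human_inferred_preds: List[str],
                    ground_truth: List[str]) -> float:
    """Fraction of correct machine decisions that the human also predicts,
    i.e., trust that is justified by the machine being right."""
    justified = 0
    correct = 0
    for m, h, y in zip(machine_preds, human_inferred_preds, ground_truth):
        if m == y:          # machine is actually correct on this example
            correct += 1
            if h == m:      # human's model of the machine agrees
                justified += 1
    return justified / correct if correct else 0.0


# Example: three turns of a dialog about an image-classification task.
print(justified_trust(["cat", "dog", "cat"],
                      ["cat", "dog", "dog"],
                      ["cat", "dog", "cat"]))  # -> 0.666...
```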


Acknowledgements: DARPA XAI N66001-17-2-4029, ARO W911NF1810296, ONR MURI N00014-16-1-2007

Arjun Akula
PhD Candidate @ UCLA
