HRI Workshop on Explainable Robotic Systems

Held on March 5, 2018, in conjunction with the HRI 2018 conference in Chicago, IL, USA (March 5–8, 2018).

Problem statement

The call for Autonomous Intelligent Systems (AIS) to be transparent has grown loud and clear in recent years and is now a pressing funding and research agenda. Some forms of transparency, such as traceability and verification, matter most to software and hardware engineers; others, such as explainability or intelligibility, matter most to ordinary people. As artificial agents, and especially socially interactive robots, enter human society, the demand for these agents to be transparent and explainable grows rapidly. When a system can explain, for example, how it classified an input or arrived at a decision, users can better judge its accuracy and place better-calibrated trust in it.

More and more AI systems process vast amounts of information and make classifications or recommendations that humans use for financial, employment, medical, military, and political decisions. Even more critically, autonomous social robots, by definition, make decisions that reach beyond direct commands and perform actions with social and moral significance for humans. The demand for these social robots to be transparent and explainable is therefore particularly urgent. To make robots explainable, however, we must understand how people interpret the behavior of such systems and what expectations they hold of them.

Aim of the workshop

In this workshop, we will address the topics of transparency and explainability, for robots in particular, from both a cognitive science perspective and a computer science and robotics perspective. Cognitive science elucidates how people interpret robot behavior; computer science and robotics elucidate how the computational mechanisms underlying that behavior can be structured and communicated so as to be human-interpretable. Implementing and deploying explainable robotic systems may prevent potentially alarming confusion over why a robot behaves the way it does. Moreover, explainable robotic systems may help people better calibrate their expectations of a robot's capabilities and make them less prone to treating robots as almost-human.