Published in THRI

Communicating Missing Causal Information to Explain a Robot’s Past Behavior

Robots need to explain their behavior to gain trust. Existing research has focused on explaining a robot’s current behavior, yet it remains an open challenge to explain past actions in an environment that may have changed since the robot acted, where moved objects leave critical causal information missing. We conducted…

Published in AI-HRI 2022

Mixed-Reality Robot Behavior Replay: A System Implementation

As robots become increasingly complex, they must explain their behaviors to gain trust and acceptance. However, verbal explanation alone may not fully convey information about past behavior, especially about objects that are no longer present due to robots’ or humans’ actions. Humans often physically mimic past movements to accompany verbal…

Published in WYSD 2022

“Why Didn’t I Do It?” A Study Design to Evaluate Robot Explanations

The “detect screw” subtree with Assumption Checker nodes (prefixed C, green names) and Action nodes (prefixed A, white names). The root RetryUntilSuccessful node retries the subtree up to three times if a failure or assumption violation occurs. The ReactiveSequence node asynchronously runs all actions, including “lift torso”, “look at table”, and “detect screw”, while continuously checking the assumption “check updated head3DCamera”. An example of pre- and post-conditions can be seen before and after the “detect screw” action.
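The subtree described in the caption above can be sketched in a few lines of Python. The node names and the retry/reactive semantics come from the caption; the tick logic itself is a minimal illustrative simplification, not the authors’ actual system (which uses a full behavior-tree framework).

```python
# Minimal behavior-tree sketch of the "detect screw" subtree.
# Tick semantics are simplified for illustration only.

SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    """Leaf node: runs a callable and reports SUCCESS or FAILURE."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return SUCCESS if self.fn() else FAILURE

class ReactiveSequence:
    """Re-evaluates every child from the start on each tick, so the
    assumption checker is re-checked before the actions run."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class RetryUntilSuccessful:
    """Decorator: re-ticks its child up to max_attempts times."""
    def __init__(self, child, max_attempts=3):
        self.child, self.max_attempts = child, max_attempts
    def tick(self):
        for _ in range(self.max_attempts):
            if self.child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

# Wiring that mirrors the caption: the assumption checker guards three
# actions, and the whole subtree is retried up to three times on failure.
attempts = {"n": 0}
def detect_screw():
    attempts["n"] += 1
    return attempts["n"] >= 2          # fails once, succeeds on the retry

subtree = RetryUntilSuccessful(
    ReactiveSequence([
        Action("C: check updated head3DCamera", lambda: True),
        Action("A: lift torso", lambda: True),
        Action("A: look at table", lambda: True),
        Action("A: detect screw", detect_screw),
    ]),
    max_attempts=3,
)
print(subtree.tick())  # → SUCCESS: the retry decorator absorbs the first failure
```

In a real framework the ReactiveSequence would also interrupt running actions when the assumption check fails mid-execution; here a plain loop stands in for that asynchronous behavior.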

As robot systems become ubiquitous and take on more complex tasks, there is a pressing need for robots to explain their behaviors in order to gain trust and acceptance. In this paper, we discuss our plan for an online human-subjects study to evaluate our new system for explanation generation. Specifically, the…

Published in THRI, 2021

The Need for Verbal Robot Explanations and How People Would Like a Robot To Explain Itself


Although non-verbal cues such as arm movement and eye gaze can convey robot intention, they alone may not provide enough information for a human to fully understand a robot’s behavior. To better understand how to convey robot intention, we conducted an experiment (N = 366) investigating the need for robots to explain, and the…

Published in THRI, 2021

Building The Foundation of Robot Explanation Generation Using Behavior Trees


As autonomous robots continue to be deployed near people, they need to be able to explain their actions. In this paper, we focus on organizing and representing complex tasks in a way that makes them readily explainable. Many actions consist of sub-actions, each of which may have several sub-actions of its own, and the…

Published in ICRA 2020 WS

Reasons People Want Explanations After Unrecoverable Pre-Handover Failures

Codes for explanation reasoning. Shown are those that appeared more than 12 times.

Most research on human-robot handovers focuses on developing comfortable and efficient HRI; few studies have examined handover failures. If a failure occurs at the beginning of the interaction, it prevents the whole handover process and destroys trust. Here we analyze the underlying reasons why people want explanations in a handover scenario where a…

Published