News
- Sep 10, 2020
Our paper has been accepted to the AAAI Fall Symposium on Artificial Intelligence for Human-Robot Interaction (AI-HRI '20)!
Abstract
Existing research on non-verbal cues, e.g., eye gaze or arm movement, may not accurately present a robot’s internal states such as perception results and action intent.
Projecting these states directly onto the robot's operating environment has the advantage of being direct, accurate, and more salient, removing the need for humans to mentally infer the robot's intention.
However, robotics lacks mature tools for projection mapping, in contrast to established motion planning libraries such as MoveIt.
In this paper, we detail the implementation of projection mapping to enable researchers and practitioners to push the boundaries of interaction between robots and humans.
We also provide practical documentation and code for a sample projection mapping application in a manipulation setting on GitHub: github.com/uml-robotics/projection_mapping.
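As a rough illustration of the underlying idea, the sketch below maps a 3D point expressed in the robot's world frame to projector pixel coordinates using a standard pinhole projection model. It is not code from the linked repository; the intrinsic matrix `K`, the extrinsic transform `T_world_to_proj`, and the example object position are hypothetical placeholders that would normally come from a projector calibration procedure.

```python
import numpy as np

# Placeholder projector intrinsics (focal lengths and principal point, in pixels).
K = np.array([[1400.0,    0.0, 960.0],
              [   0.0, 1400.0, 540.0],
              [   0.0,    0.0,   1.0]])

# Placeholder extrinsics: world frame -> projector frame.
# Here the projector sits roughly 1.5 m from the world origin along its optical axis.
T_world_to_proj = np.array([[1.0, 0.0, 0.0, 0.0],
                            [0.0, 1.0, 0.0, 0.0],
                            [0.0, 0.0, 1.0, 1.5],
                            [0.0, 0.0, 0.0, 1.0]])

def project_point(p_world, K, T):
    """Map a 3D point in the world frame to projector pixel coordinates (u, v)."""
    p_h = np.append(p_world, 1.0)      # homogeneous coordinates
    p_proj = (T @ p_h)[:3]             # point expressed in the projector frame
    uv = K @ p_proj                    # pinhole projection
    return uv[:2] / uv[2]              # perspective divide

# Example: highlight a detected object on the tabletop (position is hypothetical).
object_position = np.array([0.6, -0.1, 0.02])  # meters, world frame
u, v = project_point(object_position, K, T_world_to_proj)
print(f"Draw marker at projector pixel ({u:.0f}, {v:.0f})")
```

In practice, the pixel returned by such a mapping is where a marker (for instance, an object outline or an intended grasp point) would be drawn in the projector's output image so that the projected light lands on the corresponding physical location.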