AI-HRI 2020 — 2020 AAAI Fall Symposium on Artificial Intelligence for Human-Robot Interaction (AI-HRI)

Projection Mapping Implementation: Enabling Direct Externalization of Perception Results and Action Intent to Improve Robot Explainability

Zhao Han, Alexander Wilkinson, Jenna Parrillo, Jordan Allspaw, and Holly A. Yanco

Tags: augmented reality (AR), tabletop projection mapping
  • Sep 10, 2020

    Our paper has been accepted to the AAAI Fall Symposium on Artificial Intelligence for Human-Robot Interaction (AI-HRI '20)!


Existing research on non-verbal cues, e.g., eye gaze or arm movement, may not accurately convey a robot's internal state, such as its perception results and action intent.

Projecting these states directly onto the robot's operating environment has the advantages of being direct, accurate, and more salient, eliminating the need for observers to mentally infer the robot's intention.

However, there is a lack of tools for projection mapping in robotics, compared to established motion planning libraries (e.g., MoveIt).

In this paper, we detail the implementation of projection mapping to enable researchers and practitioners to push the boundaries for better interaction between robots and humans.

We also provide practical documentation and code for a sample manipulation projection mapping on GitHub:


Hardware platform.
High-level diagram of our projection mapping implementation. With the projector lens calibrated, a virtual camera, placed in RViz at the same pose as the real-world projector, subscribes to the camera intrinsics so that it can render the objects visualized in the virtual RViz world and output that image to the projector, reflecting the perceived objects.
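The diagram above treats the projector as an inverse camera: a virtual camera sharing the projector's pose and intrinsics renders the scene, so anything drawn in the virtual world lands on the corresponding real-world surface. A minimal sketch of the underlying pinhole projection, with all function names and intrinsic values chosen here for illustration (they are not from the paper's code):

```python
def project_point(point_projector_frame, fx, fy, cx, cy):
    """Map a 3D point (x, y, z), expressed in the projector's optical
    frame in meters, to projector pixel coordinates (u, v) using the
    standard pinhole camera model."""
    x, y, z = point_projector_frame
    if z <= 0:
        raise ValueError("point is behind the projector")
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# With hypothetical intrinsics (fx = fy = 500, cx = 320, cy = 240),
# a point 1 m in front of the lens and 0.2 m to the right projects to
# pixel (420, 240).
print(project_point((0.2, 0.0, 1.0), 500, 500, 320, 240))
```

In a ROS setup like the one described, these intrinsics would come from the calibrated camera-info topic the virtual camera subscribes to, rather than being hard-coded.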


(Accompanying video.) The projection of perception results: the detected objects (white and green) and the object to be manipulated (green).
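The color coding in the video can be sketched as a simple image-composition step: each detected object's footprint is drawn in white, and the manipulation target is overdrawn in green. The rectangle footprints, pixel coordinates, and helper names below are hypothetical stand-ins for the actual object contours:

```python
# Illustrative sketch of composing a projector image from perception
# results; all coordinates and helpers are made up for this example.
WHITE, GREEN, BLACK = (255, 255, 255), (0, 255, 0), (0, 0, 0)

def blank_image(width, height):
    """A height x width buffer of RGB pixels, initially black
    (projecting black leaves the surface unlit)."""
    return [[BLACK] * width for _ in range(height)]

def fill_rect(img, x0, y0, x1, y1, color):
    """Fill an axis-aligned rectangle, standing in for a projected
    object footprint, in the projector image buffer."""
    for row in img[y0:y1]:
        row[x0:x1] = [color] * (x1 - x0)

img = blank_image(640, 480)
fill_rect(img, 50, 50, 150, 120, WHITE)    # a detected object
fill_rect(img, 300, 200, 380, 260, GREEN)  # the object to be manipulated
```

In the actual system the shapes would come from the perception pipeline's object contours rather than hand-placed rectangles.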
Whiteboard projection.