XRDS 2023

The Importance of Memory for Language-Capable Robots

Robots need to be able to communicate with people through natural language. But how should their memory systems be designed to facilitate this communication?

Tags: Cognitive robotics, Cognitive science, Natural language generation

As robots become more widely available to the public and play more prominent roles in people's day-to-day routines, those robots will…


SemDial 2023

Toward Open-World Human-Robot Interaction: What Types of Gestures Are Used in Task-Based Open-World Referential Communication?

Gestures play a critical role in human-human and human-robot interaction. In task-based contexts, deictic gestures like pointing are particularly important for directing attention to task-relevant entities. While most work on task-based human-human and human-robot dialogue focuses on closed-world domains, recent research has begun to consider open-world tasks, where task-relevant objects may not be known to interactants a priori. In open-world tasks, we argue that…


INLG 2023

Exploring the Naturalness of Cognitive Status Informed Referring Form Selection Models

Language-capable robots must be able to efficiently and naturally communicate about objects in the environment. A key part of communication is Referring Form Selection (RFS): the process of selecting a form like it, that, or the N to use when referring to an object. Recent cognitive status-informed computational RFS models have been evaluated in…


CogSci 2023

Evaluating Cognitive Status-Informed Referring Form Selection for Human-Robot Interactions

Robots must be able to communicate naturally and efficiently, e.g., using concise referring forms like it, that, and the ⟨N’⟩. Recently, researchers have begun developing Referring Form Selection (RFS) machine learning algorithms, but have evaluated them only offline, using traditional metrics like accuracy. In this work, we investigated how a cognitive status-informed RFS computational…


THRI 2023

Best of Both Worlds? Combining Different Forms of Mixed Reality Deictic Gestures

Mixed Reality provides a powerful medium for transparent and effective human-robot communication, especially for robots with significant physical limitations (e.g., those without arms). To enhance the nonverbal capabilities of armless robots, this paper presents two studies exploring two different categories of mixed-reality deictic gestures: a virtual arrow positioned over a target…


HRI 2023

Crossing Reality: Comparing Physical and Virtual Robot Deixis

We investigate referring behavior at the intersection of physical and AR worlds: physical/virtual (AR) arm × physical/virtual (AR) referent.

Augmented Reality (AR) technologies present an exciting new medium for human-robot interactions, enabling new opportunities for both implicit and explicit human-robot communication. For example, these technologies enable physically-limited robots to execute non-verbal interaction patterns such as deictic gestures despite lacking the physical morphology necessary to do so. However, a wealth of HRI research has…


IROS 2022

Givenness Hierarchy Informed Optimal Document Planning for Situated Human-Robot Interaction

Robots that use natural language in collaborative tasks must refer to objects in their environment. Recent work has shown the utility of the linguistic theory of the Givenness Hierarchy (GH) in generating appropriate referring forms. But before referring expression generation, collaborative robots must determine the content and structure of a sequence of utterances, a…


INLG 2022

Evaluating Referring Form Selection Models in Partially-Known Environments

This paper won the Best Long Paper Award at INLG 2022. For autonomous agents such as robots to effectively communicate with humans, they must be able to refer to different entities in situated contexts. In service of this goal, researchers have recently attempted to model the selection of referring forms on the basis of cognitive…


VAM-HRI 2022

Towards an Understanding of Physical vs Virtual Robot Appendage Design

Artist's rendering. One of the four conditions, AR→P: a physical robot with an AR virtual arm pointing to a physical referent. See Figure 1 for all four conditions.

Augmented Reality (AR) or Mixed Reality (MR) enables innovative interactions by overlaying virtual imagery over the physical world. For roboticists, this creates new opportunities to apply proven non-verbal interaction patterns, like gesture, to physically-limited robots. However, a wealth of HRI research has demonstrated that there are real benefits to physical embodiment (compared, e.g., to…


HRI 2022 LBR

A Task Design for Studying Referring Behaviors for Linguistic HRI

Two of four quadrants of the task environment to study referring behaviors for linguistic HRI. Two rule cards are placed on each table, with learners' cards on the left-hand side (solid box) and instructors' on the right-hand side (dashed box). Objects are intentionally placed at the intersections of a 3×3 grid to encourage use of different referring forms whose use varies according to distance. Instructors teach learners to construct buildings whose constituent blocks are distributed across the visible and non-visible quadrants.

In many domains, robots must be able to communicate with humans through natural language. One of the core capabilities needed for task-based natural language communication is the ability to refer to objects, people, and locations. Existing work on robot referring expression generation has focused nearly exclusively on generating definite descriptions for visible objects.…
