Diversity, equality, and inclusion (DEI) are critical factors that need to be considered when developing AI and robotic technologies for people. Neglecting them can exacerbate and perpetuate existing forms of discrimination and bias in society for years to come. Although concerns have already been voiced around the globe, there is an urgent need to take action within the human-robot interaction (HRI) community. This workshop contributes to filling the gap by providing a platform in which to share experiences and research insights on identifying, addressing, and integrating DEI considerations in HRI. Building on last year's edition, this year the workshop will further engage participants with the problem of sampling bias through hands-on co-design activities aimed at mitigating inequity and exclusion within the field of HRI.
∗All authors contributed equally to this work. A.T. and S.C. take first author responsibilities, the remaining authors follow in alphabetical order.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). HRI ’23 Companion, March 13–16, 2023, Stockholm, Sweden © 2023 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-9970-8/23/03.
- Human-centered computing → Collaborative and social computing; Accessibility design and evaluation methods;
- Social and professional topics → User characteristics.
Diversity, Inclusion, Equity, HRI, Accessibility, Sampling bias
ACM Reference Format:
Ana Tanevska, Shruti Chandra, Giulia Barbareschi, Amy Eguchi, Zhao Han, Raj Korpan, Anastasia K. Ostrowski, Giulia Perugia, Sindhu Ravindranath, Katie Seaborn, and Katie Winkle. 2023. Inclusive HRI II: Equity and Diversity in Design, Application, Methods, and Community. In Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’23 Companion), March 13–16, 2023, Stockholm, Sweden. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3568294.3579965
As artificial intelligence (AI) powered devices, including robots, become prevalent in everyday life, there is an urgent need to prevent the creation or perpetuation of stereotypes and biases around gender, age, race, and sexual orientation [6, 11], among other social characteristics at risk of bias and discrimination. Often, AI systems and the underlying datasets do not reflect the diversity that exists in human societies, exacerbating structural and systemic biases [17, 27]. For example, critical work has shown that the design of AI-based conversational voice assistants [3, 9], gender classifiers, hiring algorithms, and sex robots [2, 14] tends to be rooted in oppressive gender inequities historically present in society.
One major cause is the lack of diversity among the stakeholders responsible for developing such technologies [16, 24]. Thus, it is critical for the AI community, including those of us in the field of human-robot interaction (HRI), to address this lack of diversity, acknowledge its impact on technology development [5, 7], and seek solutions to ensure diversity and inclusion. Recent work has emphasized a need to incorporate more human-centred, equity-focused, holistic, and critical HRI approaches [10, 13, 19, 25]. Critical robotics approaches also stress that the involvement of stakeholders in HRI does not meet the ideal practices of user-centred design [1, 20] because it is often hard to identify stakeholders’ influence in the final design.
End-user participation is also fundamental to empirical research in HRI. Yet, critical scholarship on human subjects research has raised concerns about a troubling pattern in participant sampling. Participants have largely been drawn from nations and populations characterized as Western, Educated, Industrialized, Rich, and Democratic (WEIRD). Moreover, patterns of underrepresentation extend to other diversity factors: gender; race and ethnicity; age; sexuality and family configuration; disability; the body; and ideology and domain expertise. We recognize that those with less social power in terms of social identity and physical characteristics laden with social values are underrepresented and marginalized [12, 21, 26]. People at the intersections of these population slices experience greater disenfranchisement [18, 22]. Part of the workshop aims to highlight this issue and take action on rectifying it.
This workshop brings together researchers interested in expanding efforts to advance diversity, equity, and inclusion (DEI) in HRI. Besides foundational information about DEI in HRI, this workshop delves deeply into the DEI challenge of participant recruitment.
Part 1: Foundations and Provocations
To understand how robotic systems and embedded AI can replicate and amplify inequalities and injustice among underrepresented communities, the first edition sought to answer: What do diversity and inclusion mean in the context of HRI? This year, we aim to provide a forum to share experiences and research insights on identifying, addressing, and integrating DEI aspects in HRI, including robot design, applications, research methods, and the HRI community.
Part 2: Co-Constructing Future Praxis
Sampling bias is a community-level challenge. Part 2 will introduce hands-on co-design activities for immediate practical changes in HRI. We ask: How can we tackle sampling biases now and craft a foundation for all future work? We will engage diverse and like-minded researchers and practitioners who wish to make practical changes to the status quo right away. We aim to raise awareness and escalate the problem of sampling biases to advance fairness and rigour in HRI.
- build a community of researchers around DEI by strengthening existing connections and forging new ones
- raise awareness of how to avoid creating and perpetuating existing biases and stereotypes and highlight promising directions and approaches
- identify oversights and better understand the challenges related to study design (e.g., unrepresentative recruitment)
- co-develop recommendations for constructive strategies and best practices to include excluded groups
- propose reporting guidelines and co-construct a template on the choices for participant recruitment in HRI studies.
Our full-day hybrid workshop format will combine informative sessions, panel discussions, and interactive activities. Participants will: a) engage with invited guest speakers; b) present papers on a range of topics relevant to DEI in HRI; c) discuss how to advance DEI in the field of HRI; d) participate in activities to identify strategies and best practices; e) co-construct a template for ACM, IEEE, and HRI venues. The schedule (CET time) will be:
- 8-12:30: Part 1 – Foundations and Provocations
- 12:30-14: Lunch
- 14-18: Part 2 – Co-Constructing Future Praxis
Part 1 will facilitate discussions on how to make HRI more diverse, equal, and inclusive. Accepted papers will be presented in two blocks to accommodate different time zones. Three panel discussions are scheduled: two with invited speakers from diverse backgrounds, and one with students. To stimulate discussion, paper authors and the HRI community will be invited to submit questions and issues on DEI matters in HRI via our website before the workshop.
Part 2 consists of two activities. The first will be a card sort to co-develop DEI recommendations for HRI. Organisers will guide attendees to categorise the submitted ideas about challenges, needs, and recommendations on diversity factors into clusters. Attendees will then share and brainstorm strategies and best practices that address the needs and challenges. The second activity will involve co-constructing a DEI template built upon the results of the first activity, to be formalized in a follow-up paper (attendees will be invited for co-authorship or acknowledged).
We aim to bring together researchers and practitioners from diverse backgrounds, including computer science, engineering, ethics, psychology, gender studies, and more. We also target those interested in addressing the challenge of WEIRD research and those who have published on human subjects research in HRI and adjacent spaces. We will also invite participants from previous workshops who indicated an interest in future opportunities. Our goal is to maximize community engagement to further increase awareness of and action on DEI issues. We aim for about 20–50 participants, given the interactive nature of the activities. We will distribute calls for participation via mailing lists, social media, and professional networks. We will update last year’s website with workshop information, accepted papers, and community-building resources. Slack will be used to facilitate asynchronous Q&A, idea sharing, networking, and discussions on DEI matters.
Participants are encouraged to join both parts, or may choose one. We offer this dual format of participation because Part 2 has special requirements and outcomes, described below.
Those interested in Part 1 (general topics) are invited to submit an extended abstract (2 pages, excl. references) or a short paper (4 pages, excl. references). We welcome submissions on HRI and social robotics research focusing on accessibility, disability and ableism, LGBTQIA+ topics, intersectional feminism, neurodiversity, race, ethnicity, and/or religion. We also encourage submissions from researchers outside of the HRI community. Submissions will be made via EasyChair and will be peer-reviewed based on originality, relevance, technical knowledge, and clarity. Paper acceptance requires that at least one author register for and present at the workshop, virtually or in person. We will provide online access to the workshop proceedings on the website, with permission.
Those interested in Part 2 will need to fill out a survey to participate. Anonymous responses will be used as the basis for the activities. The survey will explain WEIRD and other diversity and representation factors and collect the following information: 1) brief descriptions of published studies wherein the WEIRD factors were accounted for; 2) participant details and paper citations; and 3) the challenges encountered in recruitment or participation. The primary outcomes will be: (a) a critical set of challenges and opportunities for recruitment; (b) an initial set of strategies and best practices for addressing these challenges and opportunities; and (c) a reporting template, with accompanying guidelines, that targets diverse recruitment and is grounded in current HRI reporting structures. We aim to submit these outcomes to the International Journal of Social Robotics, with contributing and consenting participants as co-authors.
The workshop is co-organized by a diverse team of researchers and practitioners in HRI and adjacent spaces:
- Ana Tanevska is a postdoctoral researcher at Uppsala University, Sweden. Their work focuses on trust and social cognition in HRI.
- Shruti Chandra is a postdoctoral fellow at the University of Waterloo, Canada. Her research focuses on socially assistive robots, intergenerational gameplay, and people's well-being.
- Giulia Barbareschi is a Research Fellow at the Keio School of Media Design, Japan. She works on technologies to empower people with disabilities living in different parts of the world.
- Amy Eguchi is an Associate Teaching Professor at UC San Diego, USA. Her research focuses on CS education and AI literacy through the use of robotics in K-12 classrooms.
- Zhao Han is a Postdoctoral Fellow at the Colorado School of Mines, USA. He focuses on explanation, language and AR in HRI.
- Raj Korpan is an Assistant Professor at Iona University, USA. He works on robot navigation, explainable AI, and cognitive models.
- Anastasia K. Ostrowski is a PhD candidate at MIT, USA. Her current research explores equitable design of robots and design education through Design Justice and human-centered approaches.
- Giulia Perugia is an Assistant Professor in HRI at Eindhoven University of Technology (TU/e). Her research lies at the intersection of Social Robotics, Social Psychology, and Inclusive HRI.
- Sindhu Ravindranath is an Assistant Professor at IFHE University and a research student of ICFAI, India. She works on communication theories, HRI, health communications, and qualitative analysis.
- Katie Seaborn is an Associate Professor at Tokyo Institute of Technology. She works on voice-based agents, inclusive design with older adults, and intersectionality in critical computing.
- Katie Winkle is an Assistant Professor in Social Robotics at Uppsala University, where she works on trustworthy HRI.
 Davide Cirillo, Silvina Catuara-Solarz, Czuee Morey, Emre Guney, Laia Subirats, Simona Mellino, Annalisa Gigante, Alfonso Valencia, María José Rementeria, Antonella Santuccione Chadha, et al. 2020. Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare. NPJ digital medicine 3, 1 (2020), 1–11.
 Maartje MA de Graaf, Giulia Perugia, Eduard Fosch-Villaronga, Angelica Lim, Frank Broz, Elaine Schaertl Short, and Mark Neerincx. 2022. Inclusive HRI: Equity and diversity in design, application, methods, and community. In 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 1247–1249.
 Eduard Fosch-Villaronga, Adam Poulsen, Roger Andre Søraa, and BHM Custers. 2021. A little bird told me your gender: Gender inferences in social media. Information Processing & Management 58, 3 (2021), 102541.
 Maria Luce Lupetti, Cristina Zaga, and Nazli Cila. 2021. Designerly ways of knowing in HRI: Broadening the scope of design-oriented HRI through the concept of intermediate-level knowledge. In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction. 389–398.
 Ihudiya Finda Ogbonnaya-Ogburu, Angela D.R. Smith, Alexandra To, and Kentaro Toyama. 2020. Critical race theory for HCI. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–16.
 Anastasia K Ostrowski, Raechel Walker, Madhurima Das, Maria Yang, Cynthia Breazeal, Hae Won Park, and Aditi Verma. 2022. Ethics, Equity, & Justice in Human-Robot Interaction: A Review and Future Directions. In 2022 31st IEEE International Conference on Robot & Human Interactive Communication.
 Giulia Perugia, Stefano Guidi, Margherita Bicchi, and Oronzo Parlangeli. 2022. The Shape of Our Bias: Perceived Age and Gender in the Humanoid Robots of the ABOT Database. In Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction. 110–119.
 Farzana Rahman and Elodie Billionniere. 2021. Re-entering computing through emerging technology: Current state and special issue introduction. ACM Transactions on Computing Education (TOCE) 21, 2 (2021), 1–5.
 Mehdi Roopaei, Justine Horst, Emilee Klaas, Gwen Foster, Tammy J Salmon-Stephens, and Jodean Grunow. 2021. Women in AI: Barriers and solutions. In 2021 IEEE World AI IoT Congress (AIIoT). IEEE, 0497–0503.
 Ari Schlesinger, W. Keith Edwards, and Rebecca E. Grinter. 2017. Intersectional HCI: Engaging identity through gender, race, and class. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 5412–5427.
 Katie Seaborn and Alexa Frank. 2022. What pronouns for Pepper? A critical review of gender/ing in research. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 239, 15 pages. https://doi.org/10.1145/3491102.3501996
 Katta Spiel, Os Keyes, and Pınar Barlas. 2019. Patching gender: Non-binary utopias in HCI. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland Uk) (CHI EA ’19). 1–11.
 Roger Andre Søraa and Eduard Fosch-Villaronga. 2020. Exoskeletons for all: The interplay between exoskeletons, inclusion, gender, and intersectionality. Paladyn, Journal of Behavioral Robotics 11, 1 (2020), 217–227.
 Katie Winkle, Gaspar Isaac Melsión, Donald McMillan, and Iolanda Leite. 2021. Boosting robot credibility and challenging gender norms in responding to abusive behaviour: A case for feminist robots. In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction. 29–37.
 Anon Ymous, Katta Spiel, Os Keyes, Rua M Williams, Judith Good, Eva Hornecker, and Cynthia L Bennett. 2020. “I am just terrified of my future”: Epistemic Violence in Disability Related Technology Research. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. 1–16.
 Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. arXiv preprint arXiv:1707.09457 (2017).