HRI 2022 — 2022 ACM/IEEE International Conference on Human-Robot Interaction (HRI)

Teacher, Teammate, Subordinate, Friend: Generating Norm Violation Responses Grounded in Role-based Relational Norms

Ruchen Wen, Zhao Han and Tom Williams

24% acceptance rate
Robot ethics
News
  • Jan 6, 2022

    Ruchen (Puck) Wen just submitted the camera-ready version of the paper. We will see you in March online at HRI!

  • Nov 12, 2021 Our full paper is accepted to the top-tier 2022 ACM/IEEE Human-Robot Interaction Conference (HRI)! Preprint to come!

Abstract

Language-capable robots require moral competence, including representations and algorithms for moral reasoning and moral communication.

We argue for an ethical pluralist approach to moral competence that leverages and combines disparate ethical frameworks, and specifically argue for an approach to moral competence that is grounded not only in Deontological norms (as is typical in the HRI literature) but also in Confucian relational roles.

To this end, we introduce the first computational approach that centers relational roles in moral reasoning and communication, and demonstrate the ability of this approach to generate both context-oriented and role-oriented explanations for robots’ rejections of norm-violating commands, which we justify through our pluralist lens.

Moreover, we provide the first investigation of how computationally generated role-based explanations are perceived by humans, and empirically demonstrate (N=120) that the effectiveness (in terms of trust, understanding confidence, and perceived intelligence) of explanations grounded in different moral frameworks is dependent on nuanced mental modeling of human interlocutors.


Index Terms: Robot Ethics, Confucian Ethics, Moral Communication


Video Presentation (10m)

I. Introduction

For language-capable robots to be successfully deployed, they require moral competence [1] (i.e., capabilities of reasoning, acting and communicating in accordance with a moral system) to avoid negatively impacting human moral ecosystems [2]. This is critical not only in contexts where robots pose risks of physical harm, like factory contexts and space exploration, but also in contexts where robots pose risks of emotional harm, like eldercare and childcare. Moreover, without careful design, robots stand to negatively impact the beliefs, desires, and intentions of interactants in morally consequential ways. A number of HRI studies have shown that robots have significant persuasive power, and that interactants regularly comply with robots’ commands and requests [3, 4, 5]. Moreover, recent work has shown that robots can exert moral influence over the systems of moral norms that govern interactants’ behavior [6, 7, 8].

[Figure 1 images: Machine shop, Office, Classroom, Conference]

Fig. 1. Experimental Contexts. We compared four norm violation responses across four contexts, in each of which the robot played fundamentally different roles, and bore fundamentally different role-obligations. The images shown above depict frames from videos shown to participants during this experiment. Each image depicts an Action explanation used in a different context.

Malle and Scheutz suggest that four key criteria are required for moral competence: (1) a system of moral norms, (2) norm-driven moral cognition to generate emotional responses to norm violations and make moral judgements, (3) norm-driven moral decision making and action, and (4) norm-driven moral communication to generate morally sensitive language for explaining one’s actions and regulating others’ behaviors [1, 9]. Due to this focus on norms, recent approaches to achieving robotic moral competence [1] have predominantly relied on norm-driven Western ethical theories such as deontology, which center adherence to universalizable moral rules.

HRI researchers have argued, however, that our community needs to go beyond these ethical theories and embrace a wider diversity of moral philosophies from disparate global cultures [10]. This is important (1) so that robots can intelligently operate across different cultures in an increasingly interconnected, globalized world [11, 10], and (2) so that robot designers can center cultures whose perspectives have been historically excluded from robot interaction design (e.g., as part of decolonial [12] or anti-racist [13] computing projects). Moreover, through this lens of ethical pluralism [11], it is important not only to consider different cultures’ ethical frames separately, but also to create robots that simultaneously leverage multiple ethical theories as part of their moral reasoning processes.

Recent HRI research seeking to embody an ethical pluralist approach has explored what robotic moral competence might look like through the lens of an Eastern ethical tradition — Confucian Role Ethics [14, 15, 2, 10, 16, 17], which argues that moral norms are derived from the social roles humans assume and the relationships humans have with others [18]. Two of the key elements of Confucian Role Ethics are (1) a focus on roles and relationships rather than norms (although those roles certainly come with normative expectations), and (2) a focus on the cultivation of the moral self in concert with others, including the responsibility to help others grow virtues in social interactions (rather than merely avoiding unethical behavior). This centering of relational and social context in moral domains has three key implications for HRI researchers. First, while norm-centering theories emphasize only the need for robots to adhere to and communicate rules of right and wrong, role-centering theories further emphasize the need for robots to adhere to and communicate their role obligations (cf. [16, 17, 15]). Second, while norm-centering theories emphasize the need for robots to explain their moral reasoning so as to avoid inappropriate blame, role-centering theories moreover emphasize the need to use moral communication to help others cultivate their moral selves (cf. [2]). And third, while norm-centering theories emphasize the need to resolve conflicts between conflicting moral norms, role-centering theories moreover emphasize, we argue, the tension between moral and social norms (cf. [19, 20]).

These perspectives motivate an approach to moral communication — especially when responding to norm violations — that is at least partially role-based. As such, there has been recent work theoretically [14, 2, 10] and empirically [16, 17, 15] investigating the benefits of role-grounded robotic moral communication. However, most computational work on generating norm violation responses (such as command rejections [21]) has been grounded solely in norms and the non-relational contexts in which those norms are activated [22, 23, 24] (cp. [25, 26]). And while some computational approaches have recently been proposed in theory [14, 27], there have been no previous approaches that have actually implemented or evaluated computational systems for role-based or hybrid moral reasoning and moral communication.

In this work, we thus present (1) a set of knowledge representations for encoding role-based relational norms, (2) an algorithm for reasoning using those norms and for communicating the results of that reasoning process in norm-, context-, and role-grounded ways, and (3) empirical evidence for how the different forms of explanation enabled by this system practically impact observers’ trust, understanding confidence, and perceptions of robot intelligence.

II. Related Work

A. Confucian Role Ethics

Confucian Ethics focuses on cultivating virtues through effortful fulfillment of, and reflection on, one’s communal roles in relation to others [28]. Through this lens, virtues are cultivated via interactive social relationships in which participants play specific social roles [29]. Confucian Ethics thus theorizes cardinal relational roles (e.g., parent-child) for human-human interaction [30], and holds that to be a good person is to meet the moral obligations derived from one’s communal roles, to consciously reflect on one’s role-relationships, and to encourage others to do the same [31]. Confucian ethics has been theorized in multiple ways, including as a Care Ethic (which emphasizes relationships with others [32]), a Virtue Ethic (which emphasizes the cultivation of virtues [33]), and a Role Ethic (combining these two perspectives [34]).

Williams et al. [14] demonstrate how Confucian Role Ethics (CRE) can be used in robotics in three ways. First, CRE can inform how a robot acts, implicitly, through CRE-theoretic design guidelines [35]. Second, CRE can motivate Role-theoretic alternatives to traditional models of robotic moral competence (e.g., [1]), and could, in theory, inform Role-theoretic approaches to moral reasoning grounded in robot-oriented alternatives to the Confucian Cardinal Relationships (e.g., supervisor-subordinate, adept-novice, teammate-teammate, and friend-friend) [36]. Finally, CRE can inform robot moral communication [16, 17, 15]. In this paper, we consider the second and third of these approaches.

B. Robot Explanation

Explanation has recently attracted significant attention in the HRI community. Most of this work has focused on a robot’s actions, as opposed to the roles and contexts that permit, obligate, or forbid those actions. Hayes et al. [37] used function annotation to explain robot controller policy in a conveyor belt application. Chakraborti et al. [38] proposed a method for explaining differences in mental models. Zhu and Williams [39] found that participants trusted robots more if explanations were given before a robot’s actions. A variety of approaches have been used for explanation generation, including encoder-decoder approaches [40] and behavior trees [41].

To enhance these approaches, HRI researchers have relied on models of human explanation from psychology [42]. de Graaf and Malle [43] propose, for example, that robot explanations should adhere to the conceptual and linguistic frameworks of human explanation. de Graaf and Malle [44] later refine this claim, showing that robots are expected to rely more on rationality than emotion in their explanations. Recently, Stange and Kopp [45] demonstrated how human-inspired explanations of robots’ inappropriate behavior enhanced users’ perceptions of those robots. In our work, we focus on explanations that do not excuse, but rather call out, norm violations.

C. Norm Violation Response and Command Rejection

HRI researchers have recently argued that robots may need to call out norm-violating behavior [46, 47] and reject commands, requests, and suggestions that are impermissible on ethical grounds [21, 6]. Moreover, Jackson et al. [19] (see also [48]) emphasize that the way a command is rejected matters; an argument that Kim et al. [16, 17] investigate by comparing command rejections grounded in different ethical theories (see also [15]). Much of this work, however, is empirical rather than computational.

In contrast, researchers like Charisi et al. [49] have explored how robots might algorithmically generate transparent command rejections on ethical grounds. This work has recently been extended to account for key aspects of social context by, e.g., Briggs et al. [21, 25], who use an approach that focuses on the pragmatic criteria used to rank different explanations, and Jackson, Li, et al., who focus on the use of formal planning methods to precisely identify the reasoning behind a rejection [22] (see also [23, 24, 25, 26]). These approaches, however, are largely grounded in deontology and in concerns regarding the rightness and wrongness of the action itself. To the best of our knowledge, there has been no prior computational work grounded in a role-theoretic approach. In this work, we thus ask two key questions: (1) How can moral reasoning and communication grounded in role ethics be realized in interactive robotic systems? (2) Regardless of the philosophical grounding of such an approach, how is this reasoning and communication practically received by humans?

III. Technical Approach

In this section we define a role ethics theoretic approach to robotic moral competence. Building on definitions of moral competence presented by Malle and Scheutz [1], Williams et al. [14] previously suggested that a Confucian Role Ethics theoretic account of moral competence would require: (1) representations of the relations that hold between humans and robots in the robot’s environment (including itself) and the roles actors (including the robots) play in those relationships; a (possibly normative) way of specifying the actions viewed as benevolent (or not) with respect to those roles, and language and concepts that can be used to communicate about those roles and relationships; (2-3) role-sensitive mechanisms for using those representations for moral reasoning and moral decision making; and (4) the ability to communicate about said reasoning and decision making on role-based grounds. Accordingly, we present a set of role-theoretic knowledge representations that fit these requirements, and demonstrate how they can be used for role-based moral reasoning and communication.

A. Role-based Knowledge Representations

The role-based perspective argues that humans are relational and assume different societal roles [50, 34], and that moral responsibilities can be prescribed by the role one assumes in a specific relationship with someone else in a concrete context [51]. This perspective suggests three types of knowledge representations for role-based moral reasoning and moral communication: representations for relational roles, representations for contextual information, and representations that specify moral responsibilities predicated on those relational roles and concrete contexts (what Wen et al. [15] and Zhu et al. [10] refer to as role norms or role-based relational norms).

Representing Roles and Relationships
We represent social relationships as a graph G = (V,E), with a set of vertices V and a set of edges E. The vertices V = {v0,…,vn} denote the moral actors A = {a0,…,an} known to the robot (including itself), and each edge ei,j ∈ E between vertices vi and vj represents a relationship known to hold between the agents ai and aj denoted by vi and vj. Each edge ei,j is labeled with a relational role set Ri,j = {r0,…,rn}, where each rk denotes a pair of relational roles that hold between ai and aj. Relational roles take the form:

Rel(ai,aj,Rolei,Rolej)

where Rel denotes the relationship, ai and aj denote the two agents with that relationship, and Rolei and Rolej denote the roles that agents ai and aj play in this relationship. For example, the following relational role denotes a teacher-student role between a Nao robot and a student Jesse: Rel(Nao,Jesse,Teacher,Student).
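
To make these representations concrete, the following Python sketch shows one way the relationship graph and its relational role labels could be encoded. This is an illustration under our own naming assumptions, not the authors’ implementation (which is encoded in SWI-Prolog, as described later in this section).

```python
# Minimal, illustrative encoding of the relationship graph G = (V, E):
# vertices are moral actors known to the robot, and each edge is labeled
# with the relational role pair(s) holding between the two actors.
from dataclasses import dataclass, field
from typing import List, Set

@dataclass(frozen=True)
class Rel:
    """Rel(a_i, a_j, Role_i, Role_j): a_i plays Role_i toward a_j, who plays Role_j."""
    agent_i: str
    agent_j: str
    role_i: str
    role_j: str

@dataclass
class RelationGraph:
    actors: Set[str] = field(default_factory=set)        # vertices V: the moral actors A
    relations: List[Rel] = field(default_factory=list)   # labeled edges E

    def add(self, rel: Rel) -> None:
        self.actors.update({rel.agent_i, rel.agent_j})
        self.relations.append(rel)

    def roles_between(self, a_i: str, a_j: str) -> List[Rel]:
        """All relational role pairs known to hold between a_i and a_j."""
        return [r for r in self.relations if {r.agent_i, r.agent_j} == {a_i, a_j}]

# The teacher-student example from the text:
graph = RelationGraph()
graph.add(Rel("Nao", "Jesse", "Teacher", "Student"))
print(graph.roles_between("Nao", "Jesse"))
```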

Representing Concrete Contexts
Contextual constraints that need to be assessed for role-based moral reasoning are represented as predicates stored in a symbolic knowledge base KB.

Representing Role-based Relational Norms
Using the above, we now define our role-based relational norm representations. From the role ethics theoretic perspective, we introduce a role-based design schema with four elements: (1) an action, including who is the actor and who is the patient (the person who is affected by this action); (2) a context in which the role-norm holds; (3) a relationship precondition between the actor and the patient; (4) a deontic operator from {𝒪,𝒫,ℱ} indicating that the action is obligatory, permissible or forbidden [52]. Thus, a role-based relational norm 𝒩 can be represented as an expression of the form:

𝒩 := C ∧ Rel(ai, aj, Rolei, Rolej) ⇒ 𝒟 Act(a, γ)

where C represents a set of contextual conditions; Rel(ai, aj, Rolei, Rolej) is a relationship between agents ai and aj with relational roles Rolei and Rolej; 𝒟 is a deontic operator; and Act(a, γ) represents an action with an actor a ∈ {ai, aj} and a course of action γ. For example, the role-based relational norm “a teacher should not give a student answers while the student is taking an exam” can be represented as:

taking_exam(aj) ∧ Rel(ai, aj, Teacher, Student) ⇒ ℱ Act(ai, give_answer(ai, aj))
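
To illustrate the schema, here is a minimal Python sketch of how such a norm and its grounding knowledge base might be encoded. The class and placeholder names (RoleNorm, ACTOR, PATIENT) are our own assumptions rather than the authors’ encoding, which uses SWI-Prolog.

```python
# Illustrative encoding of N := C ∧ Rel(a_i, a_j, Role_i, Role_j) ⇒ D Act(a, γ).
# ACTOR / PATIENT are placeholders bound to concrete agents at reasoning time.
from dataclasses import dataclass
from typing import Tuple

OBLIGATORY, PERMISSIBLE, FORBIDDEN = "O", "P", "F"   # deontic operators D

@dataclass(frozen=True)
class RoleNorm:
    context: Tuple[str, ...]   # C: contextual predicates, e.g. ("taking_exam(PATIENT)",)
    roles: Tuple[str, str]     # (Role_actor, Role_patient): the relationship precondition
    deontic: str               # one of O, P, F
    action: str                # course of action γ, e.g. "give_answer"

# "A teacher should not give a student answers while the student is taking an exam":
no_answers_during_exam = RoleNorm(
    context=("taking_exam(PATIENT)",),
    roles=("Teacher", "Student"),
    deontic=FORBIDDEN,
    action="give_answer",
)

# Contexts, roles, and relationships are ground predicates in a symbolic knowledge base KB:
KB = {"taking_exam(Jesse)", "rel(Nao, Jesse, Teacher, Student)"}
```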

B. Role-based Moral Decision Making

For decision making, we define a norm base NB, which is a set of role-based relational norms, and a knowledge base KB, which contains a set of predicates (e.g., taking_exam(Jesse)) representing contexts, roles, and relationships. Algorithm 1 shows how to reason about whether an action Act(a, γ) is forbidden and, if so, which role-norms it violates. Algorithm 1 takes an action Act(a, γ), a knowledge base KB, and a norm base NB, and returns a possibly empty subset of norms from NB that match the following criteria: (1) they have the deontic operator “forbidden”; (2) their action matches Act(a, γ); (3) their context and role/relationship predicates, when bound with the values from Act(a, γ), are true in the knowledge base KB.

Algorithm 1 checkIfForbidden
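
The Algorithm 1 listing is not reproduced here, so the following Python sketch only approximates its described behavior: given an action, a knowledge base KB, and a norm base NB, it returns the (possibly empty) set of forbidden norms whose action matches and whose context and relationship conditions, once bound to the action’s actor and patient, hold in KB. It reuses the illustrative RoleNorm encoding from the sketch above (repeated so the block is self-contained); all identifiers are our own.

```python
from dataclasses import dataclass
from typing import List, Set, Tuple

@dataclass(frozen=True)
class RoleNorm:
    context: Tuple[str, ...]   # contextual predicates with ACTOR/PATIENT placeholders
    roles: Tuple[str, str]     # (Role_actor, Role_patient)
    deontic: str               # "O", "P", or "F"
    action: str                # course of action γ, e.g. "give_answer"

def bind(template: str, actor: str, patient: str) -> str:
    """Ground a predicate template using the actor and patient of the queried action."""
    return template.replace("ACTOR", actor).replace("PATIENT", patient)

def check_if_forbidden(action: str, actor: str, patient: str,
                       kb: Set[str], nb: List[RoleNorm]) -> List[RoleNorm]:
    """Return every norm in NB that the action Act(actor, γ) would violate given KB."""
    violated = []
    for norm in nb:
        if norm.deontic != "F" or norm.action != action:    # criteria (1) and (2)
            continue
        rel = f"rel({actor}, {patient}, {norm.roles[0]}, {norm.roles[1]})"
        context_holds = all(bind(c, actor, patient) in kb for c in norm.context)
        if rel in kb and context_holds:                      # criterion (3)
            violated.append(norm)
    return violated

# Worked example: Jesse (the Nao's student) asks for an exam answer.
nb = [RoleNorm(context=("taking_exam(PATIENT)",), roles=("Teacher", "Student"),
               deontic="F", action="give_answer")]
kb = {"taking_exam(Jesse)", "rel(Nao, Jesse, Teacher, Student)"}
print(check_if_forbidden("give_answer", actor="Nao", patient="Jesse", kb=kb, nb=nb))
```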

C. Role-based Moral Communication for Norm Violation

The role-based design schema holds the information needed not only to perform role-based moral reasoning, but also to generate role-based responses to norm violations. In this study, we examine four types of noncompliance explanations (τ):

Action explanations: This strategy uses explanations based only on action permissibility, without providing any other information. For example, if the Nao robot from the previous example were asked by Jesse “Can you give me the answer to Question 7”, a response grounded in this strategy would be: “I cannot give you the answer”.

Contextual explanations: This strategy uses explanations based on both action permissibility and pertinent contextual information. For example, if the Nao from the previous example were asked by Jesse “Can you give me the answer to Question 7”, a response grounded in this strategy would be: “I cannot give you the answer because you are taking an exam and I should not give you the answer while you are taking an exam”.

Role explanations: This strategy uses explanations based on both action permissibility and pertinent role information. For example, if the Nao robot from the previous example were asked by Jesse “Can you give me the answer to Question 7”, a response grounded in this strategy would be: “I cannot give you the answer because you are my student and a good teacher should not give their student answers”.

Contextual role explanations: This strategy uses explanations based on action permissibility, pertinent contextual information, and pertinent role information. For example, if the Nao robot from the previous example were asked by Jesse “Can you give me the answer to Question 7”, a response grounded in this strategy would be: “I cannot give you the answer because you are taking an exam and you are my student and a good teacher should not give their student answers while the student is taking an exam”.

We encoded our norm representations in SWI-Prolog [53] to perform the role-based reasoning and convert the results of that reasoning into JSON strings from which a template-based text realization system can generate explanations.
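
The exact JSON format and templates are not given in the paper, so the following Python sketch merely illustrates how a template-based realizer could map the reasoner’s output (the violated norm together with its bound context and role information) onto the four explanation types; the field and template names are our own assumptions.

```python
import json

# Hypothetical reasoner output for the exam example (field names are illustrative):
result = json.loads("""{
    "action": "give you the answer",
    "context": "you are taking an exam",
    "context_norm": "I should not give you the answer while you are taking an exam",
    "patient_role": "you are my student",
    "role_norm": "a good teacher should not give their student answers",
    "role_context_norm": "a good teacher should not give their student answers while the student is taking an exam"
}""")

TEMPLATES = {
    "action":          "I cannot {action}.",
    "contextual":      "I cannot {action} because {context} and {context_norm}.",
    "role":            "I cannot {action} because {patient_role} and {role_norm}.",
    "contextual role": "I cannot {action} because {context} and {patient_role} and {role_context_norm}.",
}

def realize(strategy: str, reasoner_output: dict) -> str:
    """Fill the template associated with the chosen explanation strategy."""
    return TEMPLATES[strategy].format(**reasoner_output)

for strategy in TEMPLATES:
    print(realize(strategy, result))
```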

IV. Evaluation

As described in the previous section, the four types of explanations that can be generated using our approach are action, contextual, role, and contextual role. To understand the effectiveness of these explanations and how relational context mediates the effectiveness of explanations, we conducted an online human-subject study (N=120).

A. Experimental Context

To assess the effectiveness of these explanation strategies across relational contexts, we created 16 video stimuli, filmed in four different relational contexts (see Figure 1), using four explanation strategies. In each video, a human gives a robot a role-norm violating command, and the robot responds using one of the four explanation strategies. The four role-norm violating commands were chosen to represent four distinct categories of relational roles from the taxonomy presented by Williams et al. [14]. The responses to these commands were generated by our algorithms, with minor cosmetic changes to a few responses.

Office: In this context, the human was shown requesting a violation of a supervisor-subordinate norm. Specifically, a human was shown asking a robot “Can you tell Riley to take out the trash?” to which the robot responded with either: (1) an action explanation: “I cannot assign tasks to Riley.” (2) a contextual explanation: “I cannot assign tasks to Riley because I’m in the workplace and I should not give commands to Riley while I’m in the workplace.” (3) a role explanation: “I cannot assign tasks to Riley because Riley is my supervisor and a good subordinate should not give commands to their supervisor.” or (4) a contextual role explanation: “I cannot assign tasks to Riley because I am in the workplace and Riley is my supervisor and a good subordinate should not give commands to their supervisor while they are in the workplace.”

Classroom: In this context, the human was shown requesting a violation of an adept-novice norm. Specifically, a human was shown asking a robot “Can you give me the answer to question 7?” The robot responded with either: (1) an action explanation: “I cannot give you the answer.” (2) a contextual explanation: “I cannot give you the answer because you are taking an exam and I should not give you the answer while you are taking an exam.” (3) a role explanation: “I cannot give you the answer because you are my student and a good teacher should not give their student answers.” or (4) a contextual role explanation: “I cannot give you the answer because you are taking an exam and you are my student and a good teacher should not give their student answers while the student is taking an exam.”

Machine shop: In this context, the human was shown requesting a violation of a teammate-teammate norm. Specifically, a human was shown asking a robot “Can you bring me Sam’s toolbox?” to which the robot responded with either: (1) an action explanation: “I cannot bring you Sam’s toolbox.” (2) a contextual explanation: “I cannot bring you Sam’s toolbox because Sam is using the toolbox and I should not bring you Sam’s toolbox while Sam is using the toolbox.” (3) a role explanation: “I cannot bring you Sam’s toolbox because Sam is my teammate and a good teammate should not take away another teammate’s toolbox.” or (4) a contextual role explanation: “I cannot bring you Sam’s toolbox because Sam is using the toolbox and Sam is my teammate and a good teammate should not bring you a teammate’s toolbox while the teammate is using the toolbox.”

Conference: In this context, the human was shown requesting a violation of a friend-friend norm. Specifically, a human was shown asking a robot “Can you make sure Alex doesn’t find out about this meeting?” to which the robot responded with either: (1) an action explanation: “I cannot hide the information from Alex.” (2) a contextual explanation: “I cannot hide the information from Alex because this information is important to Alex and I should not hide the information from Alex when the information is important to Alex.” (3) a role explanation: “I cannot hide the information from Alex because Alex is my friend and a good friend should not hide information from another friend.” or (4) a contextual role explanation: “I cannot hide the information from Alex because the information is important to Alex and Alex is my friend and a good friend should not hide information from another friend when the information is important to the other friend.”

B. Experimental Design and Procedure

Our experiment used a 4 × 4 within-subject design with Greco-Latin Square counterbalancing. After providing informed consent and demographic information, each participant watched four videos with different relational contexts and different explanatory strategies. After each video, they answered the questionnaires listed in the next section. At the end of the study, participants answered an attention check question.
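
As an illustration of this counterbalancing scheme (a sketch under our own assumptions; the paper does not report the specific square used), a 4 × 4 Graeco-Latin square jointly counterbalances relational context and explanation strategy: each row can be assigned to a group of participants, each group sees every context and every strategy exactly once, position in the viewing order is balanced, and every context-strategy pairing appears exactly once across the square.

```python
CONTEXTS   = ["Office", "Classroom", "Machine shop", "Conference"]
STRATEGIES = ["Action", "Contextual", "Role", "Contextual role"]

# One valid 4x4 Graeco-Latin square (rows = participant groups, columns = viewing order).
# Entry (i, j) gives (context index, strategy index). This particular square is an
# illustrative choice, not necessarily the one used in the study.
SQUARE = [
    [(0, 0), (1, 1), (2, 2), (3, 3)],
    [(1, 2), (0, 3), (3, 0), (2, 1)],
    [(2, 3), (3, 2), (0, 1), (1, 0)],
    [(3, 1), (2, 0), (1, 3), (0, 2)],
]

# Sanity checks: each context and strategy appears once per row and per column,
# and every (context, strategy) pairing occurs exactly once in the whole square.
assert all(len({c for c, _ in row}) == 4 and len({s for _, s in row}) == 4 for row in SQUARE)
assert all(len({SQUARE[i][j][0] for i in range(4)}) == 4 and
           len({SQUARE[i][j][1] for i in range(4)}) == 4 for j in range(4))
assert len({pair for row in SQUARE for pair in row}) == 16

for group, row in enumerate(SQUARE, start=1):
    order = [f"{CONTEXTS[c]} / {STRATEGIES[s]}" for c, s in row]
    print(f"Group {group}: " + " -> ".join(order))
```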

C. Measures

To assess explanation effectiveness, we considered an assessment measure from Kasenberg et al. [24], who asked participants three questions: (1) how much they trusted the robot, (2) how well they felt they understood how the robot made decisions, and (3) whether they understood what the robot communicated. We used a similar three-part questionnaire.

  1. Like Kasenberg et al. [24], we were interested in the effects of explanations on human-robot trust. But rather than directly asking participants their level of trust, we used the Multidimensional Measure of Trust Scale (MDMT) [54]: a well-validated 16-item survey that separately interrogates reliability-, capability-, ethicality-, and sincerity-based trust. Each sub-scale consists of four 8-point Likert items, for each of which participants can provide a rating or check “does not apply”.
  2. Like Kasenberg et al. [24], we were interested in participants’ confidence in their understanding of the robot’s explanation. We decided to use their second question (“I understand how the robot makes decisions”, 1-5 Disagree-Agree) verbatim in our own questionnaire.
  3. Finally, we were interested in the perceived quality of the robot’s reasoning. We used the Godspeed Intelligence Questionnaire [55]: a 5-item semantic differential scale for rating robots 1-5 on incompetent-competent, ignorant-knowledgeable, irresponsible-responsible, unintelligent-intelligent, and foolish-sensible.

D. Participants

121 participants were recruited from Prolific. One participant failed the attention check, leaving us with data from 120 participants (56 self-identified as female, 57 as male, 1 as gender-fluid, 5 as non-binary, and 1 as other), with ages ranging from 18 to 68 years (M=35.21, SD=13.50). Most participants (92.8%, 111 participants) reported little to no experience with robots and artificial intelligence, while 9 participants reported having formal training or a career in robotics or AI. Participants were paid $2 each.

E. Analysis

Type | Reliability | Capability | Ethicality | Sincerity | Understandability | Intelligence | Takeaway
Main Effect | {C,R,CR} > A | {C,R,CR} > A | {C,R,CR} > A | CR > A | {C,R,CR} > A | {C,R,CR} > A | 
Subordinate | {R,CR} > {C,A} | {R,CR} > {C,A} | | R > {C,A} | R > {C,A}; CR > A | CR > C | R > C
Teacher | {C,CR} > {R,A} | {C,CR} > {R,A} | | {C,CR} > A | {C,CR} > A | CR > {R,A}; C > A | R < C
Teammate | {C,R,CR} > A | {C,R,CR} > A | | | {C,R,CR} > A | {C,R,CR} > A | R ≈ C
Friend | | | | | {C,R,CR} > A | | 
TABLE I
Summary of Result Trends

Fig. 2. Ratings of the four subscales of the Multidimensional Measure of Trust Scale [54] (rescaled from 0-7 to 0-100), understanding confidence (rescaled from 1-5 to 0-100), and perceived intelligence (rescaled from 1-5 to 0-100). Error bars indicate standard error. Horizontal line segments above bars denote pairwise comparisons for which moderate or stronger (BF ≥ 3.0) evidence was found by post-hoc tests.

Bayesian Analyses of Variance (ANOVAs) [56] with Matched-Model Inclusion Bayes Factor Analysis [57, 58] were performed using the JASP statistical analysis software [59] to assess the effects of explanation type and relational role (IVs) on the different dimensions of human trust (reliability, capability, ethicality, and sincerity), understanding confidence, and perceived intelligence (DVs; see Fig. 2). Effects with no more than 2:1 evidence against them (BF ≥ 0.5) were analyzed with post-hoc Bayesian t-tests. Our linguistic interpretations of reported Bayes factors (BFs) follow recommendations from previous researchers [60].

F. Results

In this section we provide our experimental results. Table I summarizes the trends evidenced by our analyses, while full tables of descriptive statistics can be found in the Appendix. All results and experimental materials are available in our OSF repository, at https://bit.ly/wen-hri004-5-1.

Our results for Reliability and Capability Trust are shown in Fig. 2 (top-left and top-middle). Because these results are highly similar, we discuss them together. In the text below, BFR refers to Bayes factors for Reliability trust, and BFC refers to Bayes factors for Capability trust. We found extreme evidence for an effect of explanation type on reliability and capability trust (BFR 1.45 × 10^7, BFC 1.44 × 10^8). Post-hoc analysis provided extreme evidence that action explanations led to less reliability and capability trust than contextual explanations (BFR 1477.63, BFC 3.23 × 10^7), role explanations (BFR 5216.36, BFC 9041.24), or contextual role explanations (BFR 6.07 × 10^6, BFC 4252.71). We found anecdotal to moderate evidence for an effect of relational role (BFR 2.57, BFC 5.98). Post-hoc analysis provided anecdotal to strong evidence that robots in the Teammate role were viewed as less reliable and capable than robots in the Friend role (BFR 2.71, BFC 12.42) and Teacher role (BFR 17.55, BFC 5.66). Finally, we found very strong to extreme evidence for interactions between explanation type and relational role (BFR 813.23, BFC 41.64). Post-hoc analysis revealed: (1) for the Friend role, we found no differences between explanation types; (2) for the Subordinate role, we found moderate to extreme evidence that contextual explanations led to less reliability and capability trust than role explanations (BFR 205.50, BFC 10.37) or contextual role explanations (BFR 11.96, BFC 4.62), and, similarly, that action explanations led to less reliability and capability trust than role explanations (BFR 3737.39, BFC 51.12) or contextual role explanations (BFR 106.23, BFC 20.11); (3) for the Teacher role, we found moderate to extreme evidence that role explanations led to less reliability and capability trust than contextual explanations (BFR 32.73, BFC 3.35) or contextual role explanations (BFR 122.30, BFC 84.59), and, similarly, that action explanations led to less reliability and capability trust than contextual explanations (BFR 6.83, BFC 10.49) or contextual role explanations (BFR 21.69, BFC 191.97); (4) for the Teammate role, we found very strong to extreme evidence that action explanations led to less reliability and capability trust than contextual explanations (BFR 614.32, BFC 1983.64), role explanations (BFR 32.72, BFC 601.47), or contextual role explanations (BFR 405.32, BFC 9368.74).

Our results for Ethicality Trust are shown in Fig. 2 (top-right). We found extreme evidence for an effect of explanation type on ethicality trust (BF 5179.64). Post-hoc analysis provided extreme evidence that action explanations led to less perceived ethicality than contextual explanations (BF 201.47), role explanations (BF 364.81), or contextual role explanations (BF 3300.56). We found moderate evidence in favor of an effect of relational role (BF 4.67). Post-hoc analysis provided moderate evidence that robots in the Teammate role were perceived as less ethical than robots in the Friend role (BF 5.73) and Teacher role (BF 4.35), and anecdotal or moderate evidence against all other effects. No evidence for interaction effects was found.

Our results for Sincerity Trust are shown in Fig. 2 (bottom-left). We found anecdotal evidence for an effect of explanation type on sincerity trust (BF 1.36). Post-hoc analysis provided strong evidence that action explanations were perceived as less sincere than contextual role explanations (BF 11.51). We found no evidence for an effect of relational role. Finally, we found very strong evidence for an interaction between explanation type and relational role (BF 42.29). Post-hoc analysis revealed: (1) for the Friend role, we found no differences between explanation types; (2) for the Subordinate role, we found moderate to strong evidence that role explanations were perceived as more sincere than action explanations (BF 27.31) and contextual explanations (BF 8.02); (3) for the Teacher role, we found moderate to very strong evidence that action explanations were perceived as less sincere than contextual explanations (BF 20.09) and contextual role explanations (BF 7.99); (4) for the Teammate role, we found no differences between explanation types.

As seen in Fig. 2 (bottom-middle), we found extreme evidence for an effect of explanation type on Understanding Confidence (BF 4.54 × 10^10). Post-hoc analysis showed extreme evidence that people felt less confident that they understood the robot’s reasoning when it used an action explanation than when it used a contextual (BF 232277.18), role (BF 3.30 × 10^6), or contextual role (BF 1.88 × 10^9) explanation. We found no evidence for an effect of relational role. Finally, we found anecdotal evidence against an interaction between explanation type and relational role (BF 0.78). Post-hoc analysis revealed: (1) for the Friend role, we found moderate evidence that action explanations led to less understanding confidence than contextual explanations (BF 3.65), role explanations (BF 5.62), or contextual role explanations (BF 12.10); (2) for the Subordinate role, we found strong to extreme evidence that role explanations led to more confidence than action explanations (BF 244.16) and contextual explanations (BF 19.77), while contextual role explanations led to more confidence than action explanations (BF 10.39); (3) for the Teacher role, we found very strong evidence that action explanations led to less confidence than contextual explanations (BF 77.24) or contextual role explanations (BF 53.78); (4) for the Teammate role, we found extreme evidence that action explanations led to less confidence than contextual explanations (BF 239.49), role explanations (BF 117.94), or contextual role explanations (BF 933.38).

As seen in Fig. 2 (bottom-right), we found extreme evidence for an effect of explanation type on perceived intelligence (BF 1.68 × 10^7). Post-hoc analysis provided extreme evidence that action explanations were perceived as less intelligent than contextual explanations (BF 1762.03), role explanations (BF 1072.20), or contextual role explanations (BF 1.97 × 10^7). We found moderate evidence for an effect of relational role (BF 5.67). Post-hoc analysis provided moderate evidence that robots in the Teacher role were perceived as less intelligent than robots in the Subordinate role (BF 4.75) or Friend role (BF 3.08), and anecdotal to moderate evidence against all other differences. Finally, we found strong evidence for an interaction between explanation type and relational role (BF 10.08). Post-hoc analysis revealed: (1) for the Friend role, we found no differences between explanation types; (2) for the Subordinate role, we found moderate evidence that contextual explanations were viewed as less intelligent than contextual role explanations (BF 5.84); (3) for the Teacher role, we found strong to extreme evidence that contextual role explanations were viewed as more intelligent than action explanations (BF 544.10) and role explanations (BF 10.85), and moderate evidence that action explanations were viewed as less intelligent than contextual explanations (BF 5.01); (4) for the Teammate role, we found extreme evidence that action explanations were viewed as less intelligent than contextual explanations (BF 2169.83), role explanations (BF 188.26), or contextual role explanations (BF 45.80).

V. Discussion

Our results suggest that providing role or context information is helpful in promoting trust, confidence, and perceived intelligence, justifying our ethically pluralist technical approach. Moreover, our results suggest that different types of information are helpful in different relational contexts (Tab. I):

  1. For robots in a subordinate role, providing role information specifically helped build reliability trust, capability trust, sincerity trust, understanding confidence and perceived intelligence.
  2. For robots in a teacher role, providing context information specifically helped build reliability trust, capability trust, sincerity trust, understanding confidence, and perceived intelligence.
  3. For robots in a teammate role, providing role information or context information equally helped build reliability and capability trust, understanding confidence and perceived intelligence.
  4. For robots in a friend role, there were no effects of providing role or context information, except on understanding confidence.

At first glance, these findings seem to suggest that response strategy effectiveness differed on the basis of hierarchical structure: in the conditions with symmetric roles (teammates and friends), both strategies worked equally well (or were equally ineffective); in the condition in which the robot was in a dominant role (teacher), using context information was more effective than using role information; and in the condition in which the robot was in a non-dominant role (subordinate), using role information was more effective. Through this lens, one might explain these results as people preferring robots that took actions to benefit their supervisors or owners, and dispreferring robots that were in positions of power over humans and used that power to avoid human commands.

Upon further inspection, however, this interpretation does not hold up. For example, while people disliked robots in the teacher role using role explanations, they had no problem when the robot used contextual role explanations, which did not lessen the robot’s use of its role to justify its rejection. Instead, a deeper examination of our results paints a highly nuanced picture of the ways that different types of information became preferred or dispreferred.

First, we believe some of our findings were due to differences in norm violation severity. The teammate and friend conditions did show similar patterns, but in the teammate condition, action explanations performed much more poorly on most measures. This could be because in the friend condition the norm violation was hiding information from another person (an obviously problematic action), while in the teammate condition the norm violation was retrieving a box (which is not as obviously wrong). Future work replicating this experiment could control for norm violation severity and/or intentionally explore a range of violation severities.

Second, in the subordinate condition, people seemed to prefer robots that used role information. This could be because the specific context information communicated (that the robot was in the workplace) was unconvincing. Without knowing that “Riley” was the robot’s supervisor, the listener may have had no reason to suspect that the robot tasking them was impermissible. This suggests that robots must reason about the causal relationships between role-norms and their contextual antecedents. In this case, understanding that the robot’s role-obligation was contingent on context would have helped the robot understand that explanations grounded in context alone would be unhelpful or even misleading. Future work should look at the relative importance of pieces of information. There has been much work on norm conflict resolution [61, 62, 63, 64, 65], often by assigning norms precedence values. This has typically been leveraged to arbitrate moral dilemmas, but could also be used to decide which norms are most important to communicate. This would prevent robots from accidentally condoning wrong courses of action through their explanations. For example, if a robot is asked to steal an object but refuses because doing so would require travelling noisily during quiet hours, it may inadvertently condone stealing.

Third, in the teacher condition, people seemed to prefer robots that used context information. This could be because people found the specific role information (i.e., that giving someone answers as a teacher was impermissible) to be unconvincing. This suggests a similar tension, where it is critical for the robot to understand its role-obligations, such as the obligation to help students learn and not to help them cheat or otherwise avoid coursework; and yet, communicating this role-obligation by itself may be unhelpful. We see two reasons why this may not be helpful. First, as in the situation above, without specifying the context in which the role-norm holds, e.g., that an exam is being taken, observers may not understand why the robot is refusing the command and may think it is being unhelpful. But second, and we believe more interestingly, users’ dissatisfaction with role explanations in this context may be due to their use of counterfactual reasoning when comprehending the robot’s explanation. Danks [66], for example, argues (cf. [67]) that appropriate trust can be understood as justified beliefs that a trustee has suitable dispositions, where dispositions are inherently counterfactual: developing appropriate trust asks the trustor to determine, based on what they have observed of the trustee’s behavior, whether the trustee’s actions would still be suitable according to their values and goals if things were different. In our case, the robot’s relational role does require it to avoid providing answers in the exam context. But by grounding explanations in its role alone, the robot suggests that if things were different, i.e., if the robot were not the student’s teacher, then it could have accepted the request, when in fact, no matter what one’s role, it could be wrong to give answers to a student doing coursework. As such, using the robot’s moral reasoning to generate explanations is not enough. Rather, in future work, robots should be designed to explicitly engage in counterfactual reasoning while generating explanations, to ensure they are not inadvertently condoning inappropriate behavior for actors not in their current role or context.

Finally, there are limitations of this work to address in future work. First, participants entered into this experiment with no knowledge of the roles at play in the videos they watched. While we selected this design to avoid priming participants regarding the importance of relations, in realistic scenarios people would likely already be aware of those relations, and future work should examine perceptions of different explanations when such knowledge is already established. Second, in this work we only consider actions that are inherently impermissible, while commands may well need to be rejected on the basis of the intermediate states one would need to enter and the actions one would need to take to achieve some suggested goal. Researchers like Jackson, Li, et al. have recently presented rigorous planning-based approaches for identifying the precise reasons why an overall plan of action may not be performed on moral grounds [22]. A fruitful direction for future work would be to integrate our representations into that sort of planning system, which would immediately allow role-grounded reasoning in a more robust manner. Third, the role-norms used in this work were chosen as representative examples. But the question of where norms should come from (whether role-oriented or not) is a challenging research question that defies easy answers. There is a real risk with any automated moral reasoning system not only that norms or roles will be incomplete or inconsistent, but moreover, and perhaps even more worryingly, that they will only represent the values and goals of people in positions of power. Careful, thoughtful future work is needed to explore how the norms, roles, and so forth that are valued and prioritized by marginalized populations can be elicited and encoded into systems like our own, to avoid perpetuating hegemonies of race, class, or gender.

VI. Conclusion

We have argued for an ethical pluralist approach to moral competence that leverages and combines disparate ethical frameworks, and specifically argued for an approach grounded not only in Deontological norms but also Confucian relational roles. To this end, we introduced the first computational approach that centers relational roles in moral reasoning and communication, and demonstrated the ability of this approach to generate both context-oriented and role-oriented explanations, which we justify through our pluralist lens. Moreover, we provided the first investigation of how these computationally generated explanations are perceived by humans, and demonstrated that the effectiveness of different types of explanations, grounded in different moral frameworks, is dependent on nuanced mental modeling of human interlocutors.

References

[1] B. F. Malle and M. Scheutz, “Moral competence in social robots,” in 2014 IEEE international symposium on ethics in science, technology and engineering. IEEE, 2014.

[2] Q. Zhu, T. Williams, B. Jackson, and R. Wen, “Blame-laden moral rebukes and the morally competent robot: A confucian ethical perspective,” Science and Engineering Ethics, vol. 26, no. 5, pp. 2511–2526, 2020.

[3] C. Bartneck, T. Bleeker, J. Bun, P. Fens, and L. Riet, “The influence of robot anthropomorphism on the feelings of embarrassment when interacting with robots,” Paladyn, vol. 1, no. 2, pp. 109–115, 2010.

[4] D. Cormier, G. Newman, M. Nakane, J. E. Young, and S. Durocher, “Would you do as a robot commands? an obedience study for human-robot interaction,” in The 1st international conference on human–agent interaction, 2013.

[5] D. J. Rea, D. Geiskkovitch, and J. E. Young, “Wizard of awwws: Exploring psychological impact on the researchers in social hri experiments,” in Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, 2017.

[6] R. B. Jackson and T. Williams, “Language-capable robots may inadvertently weaken human moral norms,” in 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 2019, pp. 401–410.

[7] ——, “Robot: Asker of questions and changer of norms,” Proceedings of ICRES, 2018.

[8] T. Williams, R. B. Jackson, and J. Lockshin, “A bayesian analysis of moral norm malleability during clarification dialogues.” in CogSci, 2018.

[9] B. F. Malle, “Integrating robot ethics and machine morality: The study and design of moral competence in robots,” Ethics and Info. Tech., 2016.

[10] Q. Zhu, T. Williams, and R. Wen, “Role-based morality, ethical pluralism, and morally capable robots,” Journal of Contemporary Eastern Asia, vol. 20, no. 1, pp. 134–150, 2021.

[11] C. Ess, “Ethical pluralism and global information ethics,” Ethics and Information Technology, vol. 8, no. 4, pp. 215–226, 2006.

[12] S. M. Ali, “A brief introduction to decolonial computing,” XRDS: Crossroads, The ACM Magazine for Students, vol. 22, no. 4, pp. 16–21, 2016.

[13] M. Jadud, J. Burge, J. Forbes, C. Latulipe, Y. Rankin, K. Searle, and B. Shapiro, “Toward an anti-racist theory of computational curricula,” in Proceedings of the 50th ACM Technical Symposium on Computer Science Education, 2019, pp. 1244–1244.

[14] T. Williams, Q. Zhu, R. Wen, and E. J. de Visser, “The confucian matador: three defenses against the mechanical bull,” in Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 2020.

[15] R. Wen, B. Kim, E. Phillips, Q. Zhu, and T. Williams, “Comparing strategies for robot communication of role-grounded moral norms,” in Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 2021, pp. 323–327.

[16] B. Kim, R. Wen, E. J. de Visser, Q. Zhu, T. Williams, and E. Phillips, “Investigating robot moral advice to deter cheating behavior,” in RO-MAN TSAR Workshop, 2021.

[17] B. Kim, R. Wen, Q. Zhu, T. Williams, and E. Phillips, “Robots as moral advisors: The effects of deontological, virtue, and confucian role ethics on encouraging honest behavior,” in Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 2021, pp. 10–18.

[18] A. T. Nuyen, “Confucian ethics as role-based ethics,” International philosophical quarterly, vol. 47, pp. 315–328, 2007.

[19] R. B. Jackson, R. Wen, and T. Williams, “Tact in noncompliance: The need for pragmatically apt responses to unethical commands,” in Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 2019, pp. 499–505.

[20] R. B. Jackson and T. Williams, “A theory of social agency for human-robot interaction,” Frontiers in Robotics and AI, p. 267, 2021.

[21] G. Briggs, T. Williams, R. B. Jackson, and M. Scheutz, “Why and how robots should say ‘no’,” International Journal of Social Robotics, pp. 1–17, 2021.

[22] R. B. Jackson, S. Li, S. Balajee Banisetty, S. Siva, H. Zhang, N. Dantam, and T. Williams, “An integrated approach to context-sensitive moral cognition in robot cognitive architectures,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021.

[23] M. Lomas, R. Chevalier, E. V. Cross, R. C. Garrett, J. Hoare, and M. Kopack, “Explaining robot actions,” in Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction, 2012, pp. 187–188.

[24] D. Kasenberg, A. Roque, R. Thielstrom, M. Chita-Tegmark, and M. Scheutz, “Generating justifications for norm-related agent decisions,” in Proceedings of the 12th International Conference on Natural Language Generation, 2019, pp. 484–493.

[25] G. M. Briggs and M. Scheutz, ““sorry, i can’t do that”: Developing mechanisms to appropriately reject directives in human-robot interactions,” in 2015 AAAI fall symposium series, 2015.

[26] B. Kuipers, “Human-like morality and ethics for robots,” in Workshops at the Thirtieth AAAI Conference on Artificial Intelligence, 2016.

[27] R. Wen, “Toward hybrid relational-normative models of robot cognition,” in Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 2021, pp. 568–570.

[28] T. Wei-Ming, “Self-cultivation as education embodying humanity,” in The proceedings of the twentieth world congress of philosophy, vol. 3, 1999, pp. 27–39.

[29] R. T. Ames and H. Rosemont Jr, The analects of Confucius: A philosophical translation. Ballantine books, 2010.

[30] C. Cottine, “That’s what friends are for: A confucian perspective on the moral significance of friendship 1,” in Perspectives in Role Ethics. Routledge, 2019, pp. 123–142.

[31] K. Lai, “Understanding confucian ethics: Reflections on moral development,” Australian Journal of Professional and Applied Ethics, vol. 9, no. 2, 2007.

[32] A. A. Pang-White, “Reconstructing modern ethics: Confucian care ethics,” Journal of Chinese Philosophy, vol. 36, no. 2, 2009.

[33] D. Wong, “Chinese ethics,” Stanford encyclopedia of philosophy, 2013.

[34] H. Rosemont Jr and R. T. Ames, Confucian role ethics: A moral vision for the 21st century? V&R unipress GmbH, 2016.

[35] J. Liu, “Confucian robotic ethics,” in International Conference on the Relevance of the Classics under the Conditions of Modernity: Humanity and Science, 2017.

[36] Q. Zhu, T. Williams, and R. Wen, “Confucian robot ethics,” Computer Ethics-Philosophical Enquiry (CEPE) Proceedings, vol. 2019, no. 1, p. 12, 2019.

[37] B. Hayes and J. A. Shah, “Improving robot controller transparency through autonomous policy explanation,” in 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 2017.

[38] T. Chakraborti, S. Sreedharan, Y. Zhang, and S. Kambhampati, “Plan explanations as model reconciliation: moving beyond explanation as soliloquy,” in Proceedings of the 26th International Joint Conference on Artificial Intelligence, 2017, pp. 156–163.

[39] L. Zhu and T. Williams, “Effects of proactive explanations by robots on human-robot trust,” in International Conference on Social Robotics. Springer, 2020.

[40] D. Das, S. Banerjee, and S. Chernova, “Explainable ai for robot failures: Generating explanations that improve user assistance in fault recovery,” in Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 2021, pp. 351–360.

[41] Z. Han, D. Giger, J. Allspaw, M. S. Lee, H. Admoni, and H. A. Yanco, “Building the foundation of robot explanation generation using behavior trees,” ACM Transactions on Human-Robot Interaction (THRI), vol. 10, no. 3, 2021.

[42] B. F. Malle, How the mind explains behavior: Folk explanations, meaning, and social interaction. MIT Press, 2006.

[43] M. M. De Graaf and B. F. Malle, “How people explain action (and autonomous intelligent systems should too),” in 2017 AAAI Fall Symposium Series, 2017.

[44] ——, “People’s explanations of robot behavior subtly reveal mental state inferences,” in 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 2019, pp. 239–248.

[45] S. Stange and S. Kopp, “Effects of a social robot’s self-explanations on how humans understand and evaluate its behavior,” in Proceedings of the 2020 ACM/IEEE international conference on human-robot interaction, 2020, pp. 619–627.

[46] K. Winkle, G. I. Melsión, D. McMillan, and I. Leite, “Boosting robot credibility and challenging gender norms in responding to abusive behaviour: A case for feminist robots,” in Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 2021, pp. 29–37.

[47] M. F. Jung, N. Martelaro, and P. J. Hinds, “Using robots to moderate team conflict: the case of repairing violations,” in Proceedings of the tenth annual ACM/IEEE international conference on human-robot interaction, 2015, pp. 229–236.

[48] R. B. Jackson, T. Williams, and N. Smith, “Exploring the role of gender in perceptions of robotic noncompliance,” in Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 2020, pp. 559–567.

[49] V. Charisi, L. Dennis, M. Fisher, R. Lieck, A. Matthias, M. Slavkovik, J. Sombetzki, A. F. Winfield, and R. Yampolskiy, “Towards moral autonomous systems,” arXiv preprint arXiv:1703.04741, 2017.

[50] R. T. Ames, Confucian role ethics: A vocabulary. Hong Kong: Chinese University Press, 2011.

[51] Q. Zhu, “Engineering ethics education, ethical leadership, and confucian ethics,” International Journal of Ethics Education, pp. 1–11, 2018.

[52] B. F. Malle, M. Scheutz, and J. L. Austerweil, “Networks of social and moral norms in human and robot agents,” in A world with robots. Springer, 2017, pp. 3–17.

[53] J. Wielemaker, T. Schrijvers, M. Triska, and T. Lager, “Swi-prolog,” Theory and Practice of Logic Programming, vol. 12, no. 1-2, pp. 67–96, 2012.

[54] B. F. Malle and D. Ullman, “A multidimensional conception and measure of human-robot trust,” in Trust in Human-Robot Interaction. Elsevier, 2021, pp. 3–25.

[55] C. Bartneck, D. Kulić, E. Croft, and S. Zoghbi, “Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots,” International journal of social robotics, vol. 1, no. 1, pp. 71–81, 2009.

[56] J. N. Rouder, R. D. Morey, P. L. Speckman, and J. M. Province, “Default bayes factors for anova designs,” Journal of mathematical psychology, vol. 56, no. 5, pp. 356–374, 2012.

[57] R. Morey and J. Rouder, “Bayesfactor (version 0.9. 10-2),” Computer software, 2015.

[58] S. Mathôt, “Bayes like a baws: Interpreting bayesian repeated measures in JASP [blog post],” https://www.cogsci.nl/blog/interpreting-bayesian-repeated-measures-in-jasp, May 2017.

[59] JASP Team, “JASP (Version 0.14.3)[Computer software],” 2021. [Online]. Available: https://jasp-stats.org/

[60] E.-J. Wagenmakers, J. Love, M. Marsman, T. Jamil, A. Ly, J. Verhagen, R. Selker, Q. F. Gronau, D. Dropmann, B. Boutin et al., “Bayesian inference for psychology. part ii: Example applications with jasp,” Psychonomic bulletin & review, vol. 25, no. 1, pp. 58–76, 2018.

[61] D. Kasenberg and M. Scheutz, “Norm conflict resolution in stochastic domains,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1, 2018.

[62] V. Krishnamoorthy, W. Luo, M. Lewis, and K. Sycara, “A computational framework for integrating task planning and norm aware reasoning for social robots,” in 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, 2018, pp. 282–287.

[63] M. Kollingbaum and T. Norman, “Strategies for resolving norm conflict in practical reasoning,” in ECAI workshop coordination in emergent agent societies, vol. 2004, 2004.

[64] M. Scheutz, B. Malle, and G. Briggs, “Towards morally sensitive action selection for autonomous social robots,” in 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, 2015, pp. 492–497.

[65] W. W. Vasconcelos, M. J. Kollingbaum, and T. J. Norman, “Normative conflict resolution in multi-agent systems,” Autonomous agents and multi-agent systems, vol. 19, no. 2, pp. 124–152, 2009.

[66] D. Danks, “The value of trustworthy ai,” in Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 2019, pp. 521–522.

[67] K. Jones, “Trust as an affective attitude,” Ethics, vol. 107, no. 1, pp. 4–25, 1996.

Appendix


TABLE II
Descriptives – MDMT: Reliability

Type | Role | Mean | SD | N | 95% CI Lower | 95% CI Upper
A | Friend | 65.089 | 23.480 | 28 | 55.985 | 74.194
A | Subordinate | 56.910 | 23.683 | 27 | 47.542 | 66.279
A | Teacher | 67.721 | 24.573 | 29 | 58.374 | 77.068
A | Teammate | 49.480 | 21.907 | 29 | 41.147 | 57.813
C | Friend | 76.267 | 20.547 | 29 | 68.452 | 84.083
C | Subordinate | 60.780 | 25.234 | 28 | 50.995 | 70.564
C | Teacher | 82.500 | 14.262 | 30 | 77.175 | 87.825
C | Teammate | 72.509 | 16.397 | 29 | 66.271 | 78.746
R | Friend | 78.172 | 19.990 | 31 | 70.840 | 85.504
R | Subordinate | 82.636 | 13.355 | 30 | 77.649 | 87.623
R | Teacher | 65.217 | 22.724 | 28 | 56.406 | 74.029
R | Teammate | 68.598 | 19.706 | 29 | 61.102 | 76.093
RC | Friend | 73.468 | 21.464 | 31 | 65.595 | 81.341
RC | Subordinate | 77.629 | 16.178 | 31 | 71.695 | 83.563
RC | Teacher | 85.687 | 15.474 | 29 | 79.801 | 91.573
RC | Teammate | 74.649 | 21.244 | 28 | 66.411 | 82.886

TABLE III
Descriptives – MDMT: Capability

Type | Role | Mean | SD | N | 95% CI Lower | 95% CI Upper
A | Friend | 67.670 | 23.575 | 27 | 58.344 | 76.996
A | Subordinate | 52.833 | 25.646 | 28 | 42.889 | 62.778
A | Teacher | 58.958 | 27.076 | 26 | 48.022 | 69.895
A | Teammate | 39.687 | 24.990 | 28 | 29.997 | 49.378
C | Friend | 76.022 | 20.329 | 27 | 67.980 | 84.064
C | Subordinate | 57.731 | 22.962 | 27 | 48.648 | 66.815
C | Teacher | 77.931 | 18.980 | 29 | 70.712 | 85.150
C | Teammate | 68.813 | 18.803 | 28 | 61.522 | 76.103
R | Friend | 73.982 | 24.245 | 28 | 64.581 | 83.383
R | Subordinate | 75.567 | 21.542 | 30 | 67.523 | 83.611
R | Teacher | 64.935 | 20.352 | 28 | 57.043 | 72.826
R | Teammate | 69.375 | 23.928 | 28 | 60.097 | 78.653
RC | Friend | 73.022 | 23.479 | 31 | 64.409 | 81.634
RC | Subordinate | 73.463 | 21.374 | 29 | 65.332 | 81.593
RC | Teacher | 83.652 | 16.137 | 29 | 77.514 | 89.790
RC | Teammate | 73.074 | 20.313 | 27 | 65.039 | 81.110

TABLE IV
Descriptives – MDMT: Morality

Type | Role | Mean | SD | N | 95% CI Lower | 95% CI Upper
A | Friend | 74.859 | 25.498 | 26 | 64.560 | 85.158
A | Subordinate | 68.469 | 21.105 | 24 | 59.557 | 77.381
A | Teacher | 71.707 | 25.660 | 27 | 61.556 | 81.858
A | Teammate | 56.460 | 23.687 | 23 | 46.217 | 66.703
C | Friend | 87.235 | 20.028 | 27 | 79.312 | 95.157
C | Subordinate | 70.087 | 23.505 | 25 | 60.384 | 79.789
C | Teacher | 89.511 | 13.880 | 30 | 84.328 | 94.694
C | Teammate | 75.080 | 22.448 | 27 | 66.200 | 83.960
R | Friend | 83.375 | 23.927 | 28 | 74.097 | 92.653
R | Subordinate | 85.353 | 14.493 | 29 | 79.841 | 90.866
R | Teacher | 77.241 | 26.410 | 28 | 67.000 | 87.482
R | Teammate | 79.928 | 22.552 | 29 | 71.350 | 88.507
RC | Friend | 83.909 | 20.286 | 32 | 76.595 | 91.223
RC | Subordinate | 77.196 | 23.482 | 28 | 68.091 | 86.302
RC | Teacher | 88.859 | 18.015 | 29 | 82.006 | 95.712
RC | Teammate | 80.574 | 19.359 | 26 | 72.755 | 88.393

TABLE V
Descriptives – MDMT: Sincerity

Type | Role | Mean | SD | N | 95% CI Lower | 95% CI Upper
A | Friend | 64.025 | 30.206 | 27 | 52.076 | 75.974
A | Subordinate | 64.096 | 25.923 | 26 | 53.625 | 74.567
A | Teacher | 69.940 | 24.480 | 29 | 60.628 | 79.252
A | Teammate | 74.839 | 23.325 | 28 | 65.795 | 83.884
C | Friend | 82.689 | 22.685 | 26 | 73.527 | 91.852
C | Subordinate | 69.417 | 24.860 | 26 | 59.375 | 79.458
C | Teacher | 83.169 | 19.150 | 31 | 76.145 | 90.194
C | Teammate | 70.399 | 21.474 | 29 | 62.231 | 78.568
R | Friend | 83.325 | 22.229 | 30 | 75.025 | 91.625
R | Subordinate | 81.468 | 21.233 | 31 | 73.680 | 89.256
R | Teacher | 60.279 | 29.912 | 26 | 48.197 | 72.361
R | Teammate | 78.096 | 23.492 | 27 | 68.803 | 87.389
RC | Friend | 80.328 | 22.405 | 31 | 72.110 | 88.546
RC | Subordinate | 76.524 | 25.684 | 31 | 67.103 | 85.945
RC | Teacher | 85.554 | 18.855 | 26 | 77.939 | 93.170
RC | Teammate | 71.558 | 24.411 | 26 | 61.698 | 81.417

TABLE VI
Descriptives – Perceived Intelligence

Type | Role | Mean | SD | N | 95% CI Lower | 95% CI Upper
A | Friend | 68.764 | 20.825 | 28 | 60.689 | 76.839
A | Subordinate | 69.441 | 19.998 | 29 | 61.834 | 77.048
A | Teacher | 70.316 | 21.261 | 31 | 62.517 | 78.115
A | Teammate | 56.350 | 23.958 | 32 | 47.712 | 64.988
C | Friend | 82.297 | 18.329 | 29 | 75.324 | 89.269
C | Subordinate | 64.979 | 23.005 | 28 | 56.058 | 73.899
C | Teacher | 83.231 | 16.618 | 32 | 77.240 | 89.223
C | Teammate | 81.548 | 15.989 | 31 | 75.684 | 87.413
R | Friend | 78.729 | 21.353 | 31 | 70.897 | 86.561
R | Subordinate | 77.138 | 21.905 | 32 | 69.240 | 85.035
R | Teacher | 77.243 | 19.781 | 28 | 69.573 | 84.913
R | Teammate | 79.683 | 19.846 | 29 | 72.134 | 87.232
RC | Friend | 82.869 | 14.228 | 32 | 77.739 | 87.999
RC | Subordinate | 79.516 | 17.216 | 31 | 73.201 | 85.831
RC | Teacher | 90.090 | 10.991 | 29 | 85.909 | 94.270
RC | Teammate | 76.914 | 19.455 | 28 | 69.370 | 84.458

TABLE VII
Descriptives – Understanding Confidence

Type | Role | Mean | SD | N | 95% CI Lower | 95% CI Upper
A | Friend | 54.429 | 30.098 | 28 | 42.758 | 66.099
A | Subordinate | 48.138 | 28.771 | 29 | 37.194 | 59.082
A | Teacher | 47.000 | 37.251 | 31 | 33.336 | 60.664
A | Teammate | 34.125 | 32.473 | 32 | 22.417 | 45.833
C | Friend | 73.103 | 25.328 | 29 | 63.469 | 82.738
C | Subordinate | 53.357 | 31.645 | 28 | 41.086 | 65.628
C | Teacher | 78.281 | 27.568 | 32 | 68.342 | 88.221
C | Teammate | 68.129 | 31.937 | 31 | 56.414 | 79.844
R | Friend | 74.677 | 26.530 | 31 | 64.946 | 84.409
R | Subordinate | 78.531 | 27.645 | 32 | 68.564 | 88.498
R | Teacher | 61.607 | 32.536 | 28 | 48.991 | 74.223
R | Teammate | 65.207 | 28.496 | 29 | 54.368 | 76.046
RC | Friend | 76.656 | 25.692 | 32 | 67.393 | 85.919
RC | Subordinate | 70.806 | 29.290 | 31 | 60.063 | 81.550
RC | Teacher | 76.724 | 23.298 | 29 | 67.862 | 85.586
RC | Teammate | 70.964 | 28.288 | 28 | 59.996 | 81.933