Norms and commitments in human-robot cooperative interactions
Convenor. Elisabeth Pacherie (Institut Jean Nicod, PSL, Paris).
Speakers & talks
- Elisabeth Pacherie (Institut Jean Nicod, PSL, Paris): “Introduction – Motivational and predictive challenges in human-robot cooperative interactions”
- Ingar Brinck (Lund University): “Social norms in Human-Robot interaction”
- Raul Hakli (University of Helsinki): “Cooperative interactions with social robots?”
- John Michael (Central European University, Vienna): “The Sense of Commitment in Human-Robot Interaction”
Recent developments in social robotics aimed at improving cooperative human-robot interactions have brought to light a number of important challenges to that enterprise. These challenges fall into two main categories: motivational and predictive.
Humans are social animals that exhibit a robust motivation to engage with others, a motivation with a variety of sources, both endogenous (e.g., the need to belong or a general pro-social tendency) and exogenous (e.g., social pressure). This motivation plays a crucial role in explaining why people engage in joint action with human partners and why they may remain engaged in a joint action even when more attractive options emerge. In contrast, humans appear much less motivated to engage in joint actions with robots, exhibiting negative attitudes (e.g., the Uncanny Valley effect) and forms of distrust toward robots.
Besides motivation, successful cooperative interactions are also premised on prediction. Agents need to coordinate their actions at various levels and, to do so, must be able to make accurate predictions regarding their partner's actions and their consequences. In human-human interactions, a variety of processes and devices, ranging from automatic processes of motor resonance all the way to explicit communication and commitments, help us predict the actions of our partners. However, research on human-robot interaction suggests that prediction can be a serious challenge in this setting. Robots are often cognitively opaque to their human partners. For instance, several findings in psychology and neuroscience suggest that humans interact differently and do not deploy the same range of predictive processes when their partner is a robot rather than a human (Sahaï et al., 2017; 2019). In addition, the gap between the expected and the actual capabilities of a robot may further impair humans' predictive capacities.
The main purpose of this symposium is to assess the extent to which an appeal to norms and commitments in human-robot cooperative interactions may help mitigate these motivational and predictive challenges, what the benefits or drawbacks of such an approach could be, and how it compares to other strategies. Among the latter are strategies that emphasize the need to build robots whose appearance and behavior appeal to positive human emotions and exploit human tendencies toward anthropomorphism, and strategies that emphasize the need to endow robots with an extensive range of human-like social cognition abilities.
Ingar Brinck: Social norms in Human-Robot interaction
Social norms are spontaneously emergent patterns of coordinated behaviour that organize how individuals behave towards each other in accordance with social expectations about what an individual ought to do in a given situation and how. They improve collective coordination among agents and reduce the cognitive costs associated with interaction generally. Basing HRI on social norms can be expected to have advantages, including meeting the motivational and predictive challenges. However, social norms present a challenge for HRI, being notoriously difficult to implement. I will discuss the advantages of an approach to HRI based on social norms, granted that the implementation problem can be handled in a satisfactory way, and compare it to other approaches. Then, I will briefly address the implementation problem and sketch a way of dealing with it.
Raul Hakli: Cooperative interactions with social robots?
My talk is concerned with social interaction with robots, which seems to be a problematic notion because the term "sociality" does not seem to be readily applicable to artifacts like robots. Even if we were to allow taking robots as intentional agents, as in functionalism or Dennett's intentional stance approach, the problem remains that social interaction is typically understood to take place between persons and to involve capacities that arguably are beyond robots. Contrary to the common way of talking in AI and robotics, I argue that robots are not autonomous agents in the philosophical sense relevant to personhood, or to moral agency that entails fitness to be held responsible. This arguably implies that, strictly speaking, they are not proper subjects of social commitments, they are not subject to norms or obligations, they are not proper objects of trust, nor proper participants in joint actions. However, they can still be programmed to behave in ways that are behaviourally equivalent to cooperative social interaction and joint action. Extending the Dennettian approach beyond the intentional stance, into what I call a social stance, creates room for taking them, for instrumental purposes, as social agents and partners in social interaction or joint action.
John Michael: The Sense of Commitment in Human-Robot Interaction
In this talk I spell out the rationale for developing means of manipulating and of measuring people's sense of commitment to robot interaction partners. A sense of commitment may lead people to be patient when a robot is not working smoothly, to remain vigilant when a robot is working so smoothly that a task becomes boring, and to increase their willingness to invest effort in teaching a robot. Against this background, I will present a set of studies that have been conducted to probe various means of boosting people's sense of commitment to robot interaction partners, and discuss the implications for our psychological and normative understanding of commitment.