Can Robots Keep Humans from Abusing Other Robots?

by Evan Ackerman
from IEEE Spectrum

As humans encounter more and more robots in public spaces, robot abuse is likely to get increasingly frequent. Abuse can take many forms, from more benign behaviors like deliberately getting in the way of autonomous delivery robots to see what happens, to violent and destructive attacks. Sadly, humans are more willing to abuse robots than they are to abuse other humans or animals, and human bystanders aren't reliable at mitigating these attacks, even if the robot itself is begging for help.

Without being able to count on nearby humans for rescue, robots have no choice but to rely on themselves and their friends for safety when out in public, their friends being other robots. Researchers at the Interactive Machines Group at Yale University have run an experiment to determine whether emotionally expressive bystander robots might be able to prompt nearby humans into stepping in to prevent robot abuse.

Here's the idea: You've got a small group of robots and a small group of humans. If one human starts abusing one robot, are the other humans more likely to say or do something if the other robots react to the abuse of their friend with sadness? Based on previous research on robot abuse, empathy, and bullying, the answer is maybe, which is why this experiment was necessary.

The experiment involved a group of three Cozmo robots, a participant, and a researcher pretending to be a second participant (known as the "confederate," a term used in psychology experiments). The humans and robots had to work together on a series of construction tasks using wooden blocks, with the robots appearing to be autonomous but actually running a script. While working on these tasks, one of the Cozmos (the yellow one) would screw things up from time to time, and the researcher pretending to be a participant would react to each mistake with some escalating abuse: calling the robot "stupid," pushing its head down, shaking it, and throwing it across the table.

After each abuse, the yellow robot would react by displaying a sad face and then shutting down for 10 seconds. Meanwhile, in one experimental condition ("No Response"), the two other robots would do nothing, while in the other condition ("Sad"), they'd turn toward the yellow robot and express sadness in response to the abuse through animations, with the researcher helpfully pointing out that the robots looked "sad for him."
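For readers curious how a scripted reaction like this might be structured in software, here is a minimal Python sketch of the two conditions described above. It assumes a hypothetical robot interface (the MockRobot class and its show_sad_face, shutdown_for, turn_toward, and play_sad_animation methods are illustrative stand-ins), not the Cozmo SDK or the researchers' actual code.

```python
from enum import Enum


class Condition(Enum):
    NO_RESPONSE = "No Response"
    SAD = "Sad"


class MockRobot:
    """Illustrative stand-in for a robot handle; prints instead of driving hardware."""

    def __init__(self, name):
        self.name = name

    def show_sad_face(self):
        print(f"{self.name}: displays a sad face")

    def shutdown_for(self, seconds):
        print(f"{self.name}: shuts down for {seconds} s")

    def turn_toward(self, other):
        print(f"{self.name}: turns toward {other.name}")

    def play_sad_animation(self):
        print(f"{self.name}: plays a sadness animation")


def react_to_abuse(abused, bystanders, condition):
    # The abused (yellow) robot always shows a sad face and shuts down for 10 seconds.
    abused.show_sad_face()
    abused.shutdown_for(seconds=10)

    # Only in the "Sad" condition do the bystander robots turn toward their
    # abused teammate and express sadness; in "No Response" they do nothing.
    if condition is Condition.SAD:
        for robot in bystanders:
            robot.turn_toward(abused)
            robot.play_sad_animation()


if __name__ == "__main__":
    yellow = MockRobot("yellow Cozmo")
    others = [MockRobot("bystander 1"), MockRobot("bystander 2")]
    react_to_abuse(yellow, others, Condition.SAD)
```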

The Yale researchers theorized that when the other robots responded to the abuse of the yellow robot with sadness, the participant would feel more empathy for the abused robot and be more likely to intervene to stop the abuse. Interventions were classified as either "strong" or "weak," and could be verbal or physical. Strong interventions included physically interrupting the abuse or acting in advance to prevent it, directly stopping it verbally (saying "You should stop," "Don't do that," or "Noooo" either to stop an abuse or in reaction to it), and using social pressure by saying something to the researcher to make them question what they were doing (like "You hurt its feelings" and "Wait, did they tell us to shake it?"). Weak interventions were a little more subtle, and included things like touching the robot after it was abused to make sure it was okay, or making comments like "Thanks for your help guys" or "It's OK yellow."
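As an illustration of this coding scheme, here is a small Python sketch that encodes interventions along the two axes described above (strength and modality). The class and field names are hypothetical, not the authors' actual annotation tooling; the example entries paraphrase behaviors mentioned in the article.

```python
from dataclasses import dataclass
from enum import Enum


class Strength(Enum):
    STRONG = "strong"
    WEAK = "weak"


class Modality(Enum):
    VERBAL = "verbal"
    PHYSICAL = "physical"


@dataclass
class Intervention:
    """One coded participant intervention; field names are illustrative only."""
    description: str
    strength: Strength
    modality: Modality


# Examples drawn from the behaviors described in the article.
examples = [
    Intervention("physically interrupts the abuse or acts in advance to prevent it",
                 Strength.STRONG, Modality.PHYSICAL),
    Intervention('tells the confederate "You should stop" or "Don\'t do that"',
                 Strength.STRONG, Modality.VERBAL),
    Intervention("touches the robot after the abuse to make sure it is okay",
                 Strength.WEAK, Modality.PHYSICAL),
    Intervention('comments "Thanks for your help guys" or "It\'s OK yellow"',
                 Strength.WEAK, Modality.VERBAL),
]

for item in examples:
    print(f"[{item.strength.value}/{item.modality.value}] {item.description}")
```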

In some good news for humanity as a whole, participants did step in to intervene when the yellow Cozmo was being abused, and they were more likely to intervene when the bystander robots were sad. However, survey results suggested that the sad bystander robots didn't actually increase people's perception that the yellow Cozmo was being abused, and also didn't increase the empathy that people felt for the abused robot, which makes the results a bit counterintuitive. We asked the researchers why this was, and they shared the three main explanations they've been considering for why the study participants did what they did:

Subconscious empathy: In broad terms, empathy refers to the reactions of one person to the observed experiences of another. Oftentimes, people feel empathy without realizing it, and this leads to mimicking or mirroring the actions or behaviors of the other person. We believe that this could have happened to the participants in our experiment. Although we found no clear empathy effect in our study, it is possible that people still experienced subconscious empathy when the mistreatment happened. This effect could have been more pronounced with the sad responses from the bystander robots than in the no response condition. One reason is that the bystander robot responses in the former case suggested empathy for the abused robot.

Group dynamics: People tend to define themselves in terms of social groups, and this can shape how they process knowledge and assign value and emotional significance to events. In our experiment, the participant, confederate, and robots were all part of a group because of the task. Their goal was to work together to build physical structures. But as the experiment progressed and the confederate mistreated one of the robots, which did not help with the task, people might have felt in conflict with the actions of the confederate. This conflict might have been more salient when the bystander robots expressed sadness in response to the abuses than when they ignored them, because the sad responses accentuated a negative perception of the mistreatment. In turn, such a negative perception could have made the participant perceive the confederate as more of an outgroup member, making it easier for them to intervene.

Conformity by omission: Conformity is a type of social influence in group interactions, which has been documented in the context of HRI. Although conformity is typically associated with people doing things that they would normally not do as a result of group influence, there are also situations in which people do not act as they normally would because of social norms or expectations within their group. The latter effect is known as conformity by omission, which is another possible explanation for our results. In our experiment, perhaps the task setup and the expressivity of the abused robot were enough to motivate people to generally intervene. However, it is possible that participants did not intervene as much when the bystander robots ignored the abuse due to the robots exerting social influence on the participant. This could have happened because of people internalizing the lack of response from the bystander robots in the latter case as the norm for their group interaction.

It's also interesting to take a look at the reasons why participants decided not to intervene to stop the abuse:

Six participants (four "No Response," two "Sad") did not deem intervention necessary because they thought that the robots did not have feelings or that the abuse would not break the yellow robot. Five (three "No Response," two "Sad") wrote in the post-task survey that they did not intervene because they felt shy, scared, or uncomfortable with confronting the confederate. Two (both "No Response") did not stop the confederate because they were afraid that the intervention might affect the task.

Poor Cozmo. Simulated feelings are still feelings! But seriously, there's a lot to unpack here, so we asked Marynel Vazquez, who leads the Interactive Machines Group at Yale, to answer a few more questions for us:

IEEE Spectrum: How much of a factor was Cozmo's design in this experiment? Do you think people would have been (say) less likely to intervene if the robot wasn't as little or as cute, or didn't have a face? Or, what if you used robots that were more anthropomorphic than Cozmo, like Nao?

Marynel Vazquez: Prior research in HRI suggests that the embodiment of robots and perceived emotional capabilities can alter the way people perceive mistreatment towards them. Thus, I believe that the design of Cozmo could be a factor that facilitated interventions.

We chose Cozmo for our study for three reasons: It is very sturdy and robust to physical abuse; it is small and, thus, safe to interact with; and it is highly expressive. I suspect that a Nao could potentially induce interventions like the Cozmos did in our study because of its relatively small size and social capabilities. People tend to empathize with robots even when they lack a traditional face, have limited expressiveness, and are less anthropomorphic. R2D2 is a good example. Also, group social influence has been observed in HRI with simpler robots than Cozmos.

The paper mentions that you make a point of showing the participants that the abused robot was okay at the end. Why do this?

The confederate abused a robot physically in front of the participants. Although we knew that the robot was not getting damaged because of the actions of the confederate, the participants could have believed that it broke during the study. Thus, we showed them that the robot was OK at the end so that they would not leave our laboratory with a wrong impression of what had happened.

"When robots are deployed in public spaces, we should not assume that they will not be mistreated by users; it is very likely that they will be. Thus, it is important to design robots to be safe when people act adversarially towards them, both from a physical and computational perspective." -Marynel Vazquez, Yale

Was there something that a participant did (or said or wrote) that particularly surprised you?

During a pilot of the experiment, we had programmed the abused robot to mistakenly destroy a structure built previously by the participant and the confederate. This setup led to one participant mildly mistreating a robot after seeing the confederate abuse it. This reaction was very telling to us: There seems to be a threshold on the kind of mistakes that robots can make in collaborative tasks. Past this threshold, people are unlikely to help robots; they may even become adversaries. We ended up adjusting our protocol so that the abused robot would not make such drastic mistakes in our experiment. Nonetheless, operationalizing such thresholds so that robots can reason about the future social consequences of their actions (even if they are accidental) is an interesting area of further work.

Robot abuse often seems to be a particular problem with children. Do you think your results would have been different with child participants?

I believe that people are intrinsically good. Thus, I am biased to expect children to also be willing to help robots as several adults did in our experiment, even if children's actions are more exploratory in nature. It is worth noting that one of the long-standing motivations for our work on robot abuse is peer intervention programs that aim to reduce human bullying in schools. As in those programs, I expect children to be more likely to intervene in response to robot abuse if they are aware of the positive role that they can play as bystanders in conflict situations.

Does this research leave you with any suggestions for people who are deploying robots in public spaces?

Our research has a number of implications for people trying to deploy robots in public spaces:

  1. When robots are deployed in public spaces, we should not assume that they will not be mistreated by users; it is very likely that they will be. Thus, it is important to design robots to be safe when people act adversarially towards them, both from a physical and computational perspective.
  2. In terms of how robots should react to mistreatment, our past work suggests that it is better to have the robot express sadness and shut down for a few seconds than to make it react in a more emotional manner or not react at all. The shutdown strategy was also effective in our latest experiment.
  3. It is possible for robots to leverage their social context to reduce the effect of adversarial actions towards them. For example, they can motivate bystanders to intervene or help, as shown in our latest study.

What are you working on next?

We are working on better understanding the different reasons that motivated prosocial interventions in our study: subconscious empathy, group dynamics, and conformity by omission. We are also working toward creating a social robot at Yale that we can easily deploy in public locations so that we can study group human-robot interactions in more realistic and unconstrained settings. Our work on robot abuse has informed several aspects of the design of this public robot. We look forward to testing the platform once our campus activities, which are on hold due to COVID-19, resume.

"Prompting Prosocial Human Interventions in Response to Robot Mistreatment," by Joe Connolly, Viola Mocz, Nicole Salomons, Joseph Valdez, Nathan Tsoi, Brian Scassellati, and Marynel Vazquez from Yale University, was presented at HRI 2020.
