Cobots Act Like Puppies to Better Communicate with Humans

by
Evan Ackerman
from IEEE Spectrum

Human-robot interaction goes both ways. You've got robots understanding (or attempting to understand) humans, as well as humans understanding (or attempting to understand) robots. Humans, in my experience, are virtually impossible to understand even under the best of circumstances. But going the other way, robots have all kinds of communication tools at their disposal: lights, sounds, screens, haptics, and more. That doesn't mean that robot-to-human (RtH) communication is easy, though, because the ideal communication modality is something that is low cost and low complexity while also being understandable to almost anyone.

One good option for something like a collaborative robot arm can be to use human-inspired gestures (since they don't require any additional hardware), although it's important to be careful when you start having robots do human stuff, because it can set unreasonable expectations if people think of the robot in human terms. To get around this, roboticists from Aachen University are experimenting with animal-like gestures for cobots instead, modeled after the behavior of puppies. Puppies!

For robots that are low-cost and appearance-constrained, animal-inspired (zoomorphic) gestures can be highly effective at state communication. We know this because of tails on Roombas:

While this is an adorable experiment, adding tails to industrial cobots is probably not going to happen. That's too bad, because humans have an intuitive understanding of dog gestures, and this extends even to people who aren't dog owners. But tails aren't necessary for something to display dog gestures; it turns out that you can do it with a standard robot arm:

In a recent paper accepted to IEEE Robotics and Automation Letters (RA-L), first author Vanessa Sauer used puppies to inspire a series of communicative gestures for a Franka Emika Panda arm. Specifically, the arm was to be used in a collaborative assembly task and needed to communicate five states to the human user: greeting the user, prompting the user to take a part, waiting for a new command, signaling an error when a parts container was empty, and shutting down. From the paper:

For each use case, we mirrored the intention of the robot (e.g., prompting the user to take a part) to an intention a dog may have (e.g., encouraging the owner to play). In a second step, we collected gestures that dogs use to express the respective intention by leveraging real-life interaction with dogs, online videos, and literature. We then translated the dog gestures into three distinct zoomorphic gestures by jointly applying the following guidelines:

  • Mimicry. We mimic specific dog behavior and body language to communicate robot states.
  • Exploiting structural similarities. Although the cobot is functionally designed, we exploit certain components to make the gestures more dog-like, e.g., the camera corresponds to the dog's eyes, or the end-effector corresponds to the dog's snout.
  • Natural flow. We use kinesthetic teaching and record a full trajectory to allow natural and flowing movements with increased animacy.

A user study comparing the zoomorphic gestures to a more conventional light display for state communication during the assembly task showed that the zoomorphic gestures were easily recognized by participants as dog-like, even if the participants weren't dog people. And the zoomorphic gestures were also more intuitively understood than the light displays, although the classification of each gesture wasn't perfect. People also preferred the zoomorphic gestures over more abstract gestures designed to communicate the same concept. Or as the paper puts it, "Zoomorphic gestures are significantly more attractive and intuitive and provide more joy when using." An online version of the study is here, so give it a try and provide yourself with some joy.

While zoomorphic gestures (at least in this very preliminary research) aren't nearly as accurate at state communication as using something like a screen, they're appealing because they're compelling, easy to understand, inexpensive to implement, and less restrictive than sounds or screens. And there's no reason why you can't use both!
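To get a feel for how simple the software side of this could be, here is a minimal, hypothetical sketch (not from the paper) of mapping the five robot states above to pre-recorded zoomorphic gestures and replaying them. The names `CobotState`, `Gesture`, `GESTURE_LIBRARY`, and `replay_gesture`, the joint-trajectory format, and the placeholder joint values are all assumptions for illustration; in the actual work, the trajectories come from kinesthetic teaching on the Panda arm.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Dict, List


# The five robot states from the collaborative assembly task described above.
class CobotState(Enum):
    GREETING = auto()
    PROMPT_TAKE_PART = auto()
    WAITING_FOR_COMMAND = auto()
    CONTAINER_EMPTY = auto()
    SHUTTING_DOWN = auto()


@dataclass
class Gesture:
    """A zoomorphic gesture stored as a joint-space trajectory.

    In the paper, trajectories are recorded via kinesthetic teaching
    (physically guiding the arm); the numbers here are placeholders.
    """
    name: str
    trajectory: List[List[float]]  # each entry: 7 joint angles for a Panda-style arm


# Hypothetical gesture library; the dog-behavior analogies echo the article,
# but real joint values would come from the recorded demonstrations.
GESTURE_LIBRARY: Dict[CobotState, Gesture] = {
    CobotState.GREETING: Gesture("excited tail-wag-like sway", [[0.0] * 7, [0.1] * 7]),
    CobotState.PROMPT_TAKE_PART: Gesture("nudge toward the part, like offering a toy", [[0.2] * 7]),
    CobotState.WAITING_FOR_COMMAND: Gesture("attentive sit-and-watch pose", [[0.0] * 7]),
    CobotState.CONTAINER_EMPTY: Gesture("head-down droop at the empty container", [[-0.1] * 7]),
    CobotState.SHUTTING_DOWN: Gesture("curl up to sleep", [[0.0] * 7]),
}


def replay_gesture(state: CobotState, move_to_joints) -> None:
    """Replay the recorded gesture for a state through a caller-supplied motion command."""
    gesture = GESTURE_LIBRARY[state]
    for joint_positions in gesture.trajectory:
        move_to_joints(joint_positions)  # e.g., a blocking joint-position command on the arm


if __name__ == "__main__":
    # Stand-in motion function; a real arm controller would go here.
    replay_gesture(CobotState.GREETING, lambda q: print("moving to", q))
```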

For a few more details, we spoke with the first author on this paper, Vanessa Sauer.

IEEE Spectrum: Where did you get the idea for this research from, and why do you think it hasn't been more widely studied or applied in the context of practical cobots?

Vanessa Sauer: I'm a total dog person. During a conversation about dogs and how their ways of communicating with their owners have evolved over time (e.g., more expressive faces, easy to understand even for people who don't own a dog), I got the rough idea for my research. I was curious to see if the intuitive understanding many people have of dog behavior could also be applied to cobots that communicate in a similar way. Approaches using zoomorphic gestures have been explored especially in social robotics. I guess that, due to their playful nature, less research and fewer applications exist in the context of industrial robots, which often have a stronger focus on efficiency.

How complex of a concept can be communicated in this way?

In our "proof-of-concept"-style approach, we communicated rather basic robot states. The challenge with more complex robot states would be to find intuitive parallels in dog behavior. Nonetheless, I believe that more complex states can also be communicated with dog-inspired gestures.

How would you like to see your research be put into practice?

I would enjoy seeing zoomorphic gestures offered as a modality option on cobots, especially cobots used in industry. I think that could have the potential to reduce inhibitions about collaborating with robots and make the interaction more fun.

Photos: Franka Emika (robots); iStockphoto (dogs)

"Zoomorphic Gestures for Communicating Cobot States," by Vanessa Sauer, Axel Sauer, and Alexander Mertens from Aachen University and TUM, will be published in RA-L.
