
Robot Hand Manipulates Complex Objects by Touch Alone

by Evan Ackerman
from IEEE Spectrum
[A GIF shows five robotic fingers rotating an object.]

In terms of human features that robots are probably the most jealous of, fingers have to be right up there with eyeballs and brains. Our fleshy little digits have a crazy amount of dexterity relative to their size, and so many sensors packed into them that you can manipulate complex objects sight unseen. Obviously, these are capabilities that would be really nice to have in a robot, especially if we want robots to be useful outside of factories and warehouses.

There are two parts to this problem: The first is having fingers that can perform like human fingers (or as close to human fingers as is reasonable to expect); the second is having the intelligence necessary to do something useful with those fingers.

"Once we also add visual feedback into the mix along with touch, we hope to be able to achieve even more dexterity, and one day start approaching the replication of the human hand."
-Matei Ciocarlie, Columbia University

In a paper just accepted to the Robotics: Science and Systems 2023 conference, researchers from Columbia University have shown how to train robotic fingers to perform dexterous in-hand manipulation of complex objects without dropping them. What's more, the manipulation is done entirely by touch; no vision required.

Robotic fingers manipulate random objects with a level of dexterity humans master by the time they're toddlers. Columbia University

Those slightly chunky fingers have a lot going on inside of them to help make this kind of manipulation possible. Underneath the skin of each finger is a flexible reflective membrane, and under that membrane is an array of LEDs along with an array of photodiodes. Each LED is cycled on and off for a fraction of a millisecond, and the photodiodes record how the light from each LED reflects off of the inner membrane of the finger. The pattern of that reflection changes when the membrane flexes, which is what happens if the finger is contacting something. A trained model can correlate that light pattern with the location and amplitude of finger contacts.
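Based on that description alone, here's a minimal Python sketch of how such a sensing pipeline might fit together: cycle each LED, record the photodiode array, flatten the readings into a feature vector, and hand it to a trained model. All names here (LED_COUNT, read_photodiodes, ContactModel) are illustrative assumptions, not the actual hardware or software API used by the researchers.

```python
# Hypothetical sketch of the tactile-sensing pipeline described above.
import numpy as np

LED_COUNT = 30         # assumed number of LEDs under the membrane
PHOTODIODE_COUNT = 30  # assumed number of photodiodes

def read_photodiodes():
    """Stand-in for a hardware read; returns one value per photodiode."""
    return np.random.rand(PHOTODIODE_COUNT)  # placeholder for real ADC values

def capture_light_pattern():
    """Cycle each LED on briefly and record how its light reflects off
    the inner membrane, yielding an (LEDs x photodiodes) matrix."""
    pattern = np.zeros((LED_COUNT, PHOTODIODE_COUNT))
    for led in range(LED_COUNT):
        # set_led(led, on=True)   # hardware call, omitted in this sketch
        pattern[led] = read_photodiodes()
        # set_led(led, on=False)
    return pattern.ravel()  # flatten into one feature vector

class ContactModel:
    """Placeholder for the trained model that correlates light patterns
    with the location and amplitude of contacts on the fingertip."""
    def predict(self, features):
        # A real model would be trained on labeled contact data;
        # here we just return a dummy (x, y, force) estimate.
        return np.zeros(3)

model = ContactModel()
contact = model.predict(capture_light_pattern())
print("estimated contact (x, y, force):", contact)
```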

So now that the fingers know what they're touching, they also need to know how to touch something in order to manipulate it the way you want without dropping it. Some objects are robot-friendly when it comes to manipulation, and some are robot-hostile, like objects with complex shapes and concavities (L or U shapes, for example). And with a limited number of fingers, doing in-hand manipulation is often at odds with keeping the object in a stable grip. This is a skill called "finger gaiting," and it takes practice. Or, in this case, it takes reinforcement learning (which, I guess, is arguably the same thing). The trick the researchers use is to combine sampling-based methods (which find trajectories between known start and end states) with reinforcement learning to develop a control policy trained on the entire state space, as in the sketch below.
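To make that combination concrete, here is a minimal, hypothetical sketch of the idea as described: a sampling-based planner finds trajectories between known grasp states, and the states along those trajectories then serve as reset points for reinforcement-learning episodes, so the policy gets experience across the whole state space. Every function name here (sample_grasp_state, plan_trajectory, train_policy) is an illustrative assumption, not the authors' actual implementation.

```python
# Hypothetical sketch: sampling-based exploration feeding RL training.
import random

def sample_grasp_state():
    """Stand-in: return a random stable-grasp state of hand + object."""
    return tuple(random.uniform(-1, 1) for _ in range(6))

def plan_trajectory(start, goal, steps=10):
    """Stand-in sampling-based planner: linearly interpolate between two
    known states, as a real planner would return a feasible path."""
    return [tuple(s + (g - s) * t / steps for s, g in zip(start, goal))
            for t in range(steps + 1)]

# 1) Sampling phase: collect states along planned trajectories between
#    known start and end states.
reset_states = []
for _ in range(100):
    path = plan_trajectory(sample_grasp_state(), sample_grasp_state())
    reset_states.extend(path)

# 2) RL phase: train a policy, resetting episodes to sampled states so
#    exploration covers the regions the planner reached.
def train_policy(reset_states, episodes=1000):
    for _ in range(episodes):
        state = random.choice(reset_states)  # env.reset_to(state) in a real setup
        # ... roll out the policy from `state` and update it with any
        # standard RL algorithm (e.g., PPO); omitted in this sketch.
        pass

train_policy(reset_states)
```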

While this method works well, the whole no-vision thing is somewhat of an artificial constraint. This isn't to say that the ability to manipulate objects in darkness or clutter isn't super important; it's just that there's even more potential with vision, says Columbia's Matei Ciocarlie: "Once we also add visual feedback into the mix along with touch, we hope to be able to achieve even more dexterity, and one day start approaching the replication of the human hand."

"Sampling-based Exploration for Reinforcement Learning of Dexterous Manipulation," by Gagan Khandate, Siqi Shang, Eric T. Chang, Tristan Luca Saidi, Johnson Adams, and Matei Ciocarlie from Columbia University, has been accepted to RSS 2023.
