
May 30, 2019 // By Rich Pell
Low-cost tactile glove learns signatures of the human grasp
Researchers at MIT (Cambridge, MA) have created a low-cost, sensor-packed glove that captures pressure signals as its wearer interacts with a variety of objects, providing insights that, they say, could aid the future design of prosthetics, robot grasping tools, and human–robot interactions.
The researchers also used the dataset to measure the cooperation between regions of the hand during object interactions. For example, when someone uses the middle joint of their index finger, they rarely use their thumb; but the tips of the index and middle fingers always correspond to thumb usage.

"We quantifiably show, for the first time, that, if I’m using one part of my hand, how likely I am to use another part of my hand," says Sundaram.
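Co-usage statistics like these can be estimated directly from the recorded frames. The sketch below is a minimal illustration, assuming the dataset is available as a per-frame, per-sensor pressure array and that a mapping from sensors to named hand regions exists; those array names and the activation threshold are assumptions, not details from the study.

```python
import numpy as np

# Hypothetical inputs (names and shapes are assumptions, not from the paper):
#   pressures: (num_frames, num_sensors) array of recorded pressures
#   region_of_sensor: (num_sensors,) array mapping each sensor to a region id
#   region_names: list of human-readable region labels
def region_cooccurrence(pressures, region_of_sensor, region_names, threshold=0.1):
    num_regions = len(region_names)
    # A region counts as "in use" in a frame if any of its sensors
    # exceeds the pressure threshold.
    active = np.zeros((pressures.shape[0], num_regions), dtype=bool)
    for r in range(num_regions):
        active[:, r] = (pressures[:, region_of_sensor == r] > threshold).any(axis=1)

    # cooccur[a, b] estimates P(region b active | region a active)
    cooccur = np.zeros((num_regions, num_regions))
    for a in range(num_regions):
        frames_a = active[:, a]
        if frames_a.any():
            cooccur[a] = active[frames_a].mean(axis=0)
    return cooccur
```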

Prosthetics manufacturers could use such information, for example, to choose optimal spots for placing pressure sensors and to customize prosthetics to the tasks and objects people regularly interact with.

The STAG (scalable tactile glove) is laminated with an electrically conductive polymer whose resistance changes with applied pressure. Conductive threads are sewn through holes in the polymer film, from the fingertips to the base of the palm, overlapping in a way that turns them into pressure sensors. When someone wearing the glove feels, lifts, holds, and drops an object, the sensors record the pressure at each point.
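The article does not describe the readout electronics, but a conventional way to read a piezoresistive sensor of this kind is a voltage divider feeding an analog-to-digital converter. The sketch below is a hypothetical example of that approach; the supply voltage, divider resistance, and the use of uncalibrated conductance as a relative pressure value are all assumptions.

```python
import numpy as np

# Minimal sketch of turning raw sensor readings into relative pressure values.
# The divider resistance, ADC range, and supply voltage are assumptions; the
# article only says the polymer's resistance changes with applied pressure.
V_SUPPLY = 3.3        # supply voltage of the hypothetical readout circuit (V)
R_DIVIDER = 10_000.0  # fixed resistor in the voltage divider (ohm)

def adc_to_pressure(adc_counts, adc_max=4095):
    """Convert raw ADC counts from one sensor into a relative pressure value."""
    v_out = adc_counts / adc_max * V_SUPPLY
    # Voltage divider: v_out = V_SUPPLY * R_DIVIDER / (R_sensor + R_DIVIDER)
    r_sensor = R_DIVIDER * (V_SUPPLY / np.maximum(v_out, 1e-6) - 1.0)
    # Conductance rises roughly with applied force for this class of polymer,
    # so use it as an uncalibrated, relative pressure estimate.
    return 1.0 / np.maximum(r_sensor, 1e-6)
```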

The threads connect from the glove to an external circuit that translates the pressure data into "tactile maps," which are essentially brief videos of dots growing and shrinking across a graphic of a hand. The dots represent the location of pressure points, and their size represents the force: the bigger the dot, the greater the pressure.
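As a rough illustration of such a tactile map, the sketch below draws one frame as a scatter of dots, one per sensor, sized by pressure. The sensor coordinates and sensor count here are placeholders; the glove's actual layout is not given in the article.

```python
import numpy as np
import matplotlib.pyplot as plt

# Render one "tactile map" frame: dot position = sensor location on the hand,
# dot size = recorded pressure (bigger dot = greater pressure).
def plot_tactile_frame(sensor_xy, pressures, ax=None):
    ax = ax or plt.gca()
    sizes = 5 + 500 * pressures / (pressures.max() + 1e-9)
    ax.scatter(sensor_xy[:, 0], sensor_xy[:, 1], s=sizes, alpha=0.7)
    ax.set_aspect("equal")
    ax.set_title("Tactile map frame")
    return ax

# Example with random placeholder data for a few hundred sensors.
if __name__ == "__main__":
    xy = np.random.rand(500, 2)
    p = np.random.rand(500)
    plot_tactile_frame(xy, p)
    plt.show()
```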

From those maps, the researchers compiled a dataset of about 135,000 video frames from interactions with the 26 objects. A convolutional neural network (CNN) was then designed to associate specific pressure patterns with specific objects. But the trick, say the researchers, was choosing frames from different types of grasps to get a full picture of the object.
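For illustration, a small CNN of this kind might stack a handful of selected tactile frames as input channels and output scores over the 26 object classes. The layer sizes, frame count, and 32x32 frame resolution below are illustrative choices, not the architecture the researchers report.

```python
import torch
import torch.nn as nn

# Minimal sketch of a CNN mapping a stack of tactile frames to object logits.
class TactileCNN(nn.Module):
    def __init__(self, num_frames=8, num_objects=26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(num_frames, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_objects)

    def forward(self, frames):  # frames: (batch, num_frames, 32, 32)
        x = self.features(frames)
        return self.classifier(x.flatten(1))

# Example: classify a batch of 4 recordings, each summarized by 8 tactile frames.
logits = TactileCNN()(torch.randn(4, 8, 32, 32))  # -> shape (4, 26)
```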

The idea was to mimic the way humans can hold an object in a few different ways in order to recognize it without using their eyesight. Similarly, the CNN chooses up to eight semirandom frames from the tactile-map video that represent the most dissimilar grasps.
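One simple way to realize that idea is greedy farthest-point selection over the flattened frames, sketched below. This is only an illustration of picking mutually dissimilar grasps from a recording, not necessarily the exact procedure the researchers used.

```python
import numpy as np

# Pick up to k mutually dissimilar frames by greedy farthest-point selection.
def select_dissimilar_frames(frames, k=8):
    """frames: (num_frames, num_sensors) array of flattened tactile maps."""
    chosen = [int(np.random.randint(len(frames)))]  # semirandom seed frame
    dist = np.linalg.norm(frames - frames[chosen[0]], axis=1)
    while len(chosen) < min(k, len(frames)):
        nxt = int(dist.argmax())  # frame farthest from everything chosen so far
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(frames - frames[nxt], axis=1))
    return chosen
```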

