To maximize the variation among frames and give the CNN the most informative input, the system first groups similar frames together, producing distinct clusters that each correspond to a unique grasp. It then pulls one frame from each cluster, ensuring a representative sample. Finally, the CNN uses the contact patterns it learned in training to predict an object classification from the chosen frames.
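The cluster-then-sample step can be sketched with a small k-means over flattened tactile frames, picking the frame nearest each centroid as the cluster's representative. This is a hypothetical illustration: the function name, the farthest-point initialization, and the cluster count are assumptions, not details from the paper.

```python
import numpy as np

def select_representative_frames(frames, n_clusters=4, n_iter=20):
    """Cluster tactile frames with a simple k-means and return one
    frame index per cluster (the frame nearest each centroid).
    Hypothetical sketch of the "group, then pick one per cluster" idea."""
    X = frames.reshape(len(frames), -1).astype(float)
    # Farthest-point initialization: deterministic, spreads seeds apart
    idx = [0]
    for _ in range(n_clusters - 1):
        d = np.min(np.linalg.norm(X[:, None] - X[idx][None], axis=2), axis=1)
        idx.append(int(d.argmax()))
    centroids = X[idx].copy()
    for _ in range(n_iter):
        # Assign each frame to its nearest centroid, then update centroids
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)  # (n, k)
        labels = d.argmin(axis=1)
        for k in range(n_clusters):
            members = X[labels == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    # One representative frame index per cluster: closest to each centroid
    d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
    return d.argmin(axis=0)
```

The representatives, not the raw frame stream, would then be fed to the classifier, so redundant near-duplicate frames do not dominate the input.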
For weight estimation, the researchers built a separate dataset of around 11,600 frames from tactile maps of objects being picked up by finger and thumb, held, and dropped. In testing, the CNN was fed a single frame; in essence, it picked out the pressure around the hand caused by the object's weight while ignoring pressure from other factors, such as hand positioning to keep the object from slipping. It then calculated the weight from the relevant pressures.
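The idea of isolating weight-related pressure can be illustrated with a deliberately simplified linear stand-in for the CNN: subtract an empty-hand baseline map and scale the remaining excess pressure to grams. Everything here is an assumption for illustration — the real system learns this separation from data rather than using a fixed baseline and calibration constant.

```python
import numpy as np

def estimate_weight(frame, baseline, grams_per_unit=2.0):
    """Hypothetical linear sketch of weight estimation: keep only the
    pressure in excess of the empty-hand baseline (attributed to the
    object's weight) and scale it by an assumed calibration constant."""
    excess = np.clip(frame - baseline, 0, None)  # ignore sub-baseline cells
    return float(excess.sum() * grams_per_unit)
```

A learned model additionally discounts grip-related pressure (e.g. squeezing to prevent slip), which this fixed-baseline sketch cannot do.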
The researchers say the system could be combined with the torque and force sensors already present on robot joints to help robots better predict object weight.
For more, see "Learning the signatures of the human grasp using a scalable tactile glove."