Proprioception is essential for a soft robot: only if it knows its current shape and configuration in three dimensions can it interact properly with its environment.
Typically, gathering proprioceptive information for a robot (giving the machine a sense of the relative position of its own articulated parts) is done through active three-degree-of-freedom mechanisms combined with closed-loop control. For soft robotics, the trend in the literature is to embed strain and pressure sensors along the neutral bending axes of a limb to detect its curvature and touch events.
But this approach yields discrete pressure and bending data only at certain points and along certain axes, which, according to researchers from Cornell University, limits how much information the sensors can provide about a robot's configuration. More data points can of course be gathered by integrating more sensors, but doing so increases system complexity.
In a paper titled "Soft optoelectronic sensory foams with proprioception," published in Science Robotics, the researchers opted to forgo discrete pressure and strain sensors entirely and instead embedded an array of flexible optical fibers in the base layer of an elastomeric foam robotic limb (about the size of a finger in their experiments).
Each optical fiber terminated at the base layer and illuminated the bulk of the foam internally. In the research setup, the fibers not only illuminated the foam but also collected the diffuse reflected light within the limb, which was measured through a beam splitter and a camera external to the limb.
First, the researchers bent and twisted the foam to known angles and recorded the intensity of the diffuse reflected light leaving each fiber. Then, by applying machine learning techniques to this data, they produced models that predict the foam's deformation state from the internally reflected light.
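The paper's actual models and training data are not reproduced here, but the supervised-learning step can be illustrated with a minimal sketch: a vector of per-fiber light intensities is mapped to bend and twist angles. The synthetic intensity data, the fiber count, and the choice of a ridge regressor are all assumptions for the sake of a runnable example, not the researchers' method.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_fibers = 500, 30  # hypothetical calibration set and fiber count

# Known deformation states used as labels: bend and twist angles in degrees.
angles = rng.uniform(-45, 45, size=(n_samples, 2))

# Synthetic stand-in for the measured diffuse-light intensities: each fiber
# responds (here, linearly plus noise) to the deformation. Real intensity
# data would come from the camera readings, not from this toy model.
mixing = rng.normal(size=(2, n_fibers))
intensities = angles @ mixing + 0.1 * rng.normal(size=(n_samples, n_fibers))

X_train, X_test, y_train, y_test = train_test_split(
    intensities, angles, test_size=0.2, random_state=0)

# Fit a regressor that predicts deformation from the light measurements.
model = Ridge(alpha=1.0).fit(X_train, y_train)
pred = model.predict(X_test)
mean_err = np.abs(pred - y_test).mean()
print(f"mean absolute angle error: {mean_err:.2f} degrees")
```

On this toy data a linear model suffices; the real foam's optical response is nonlinear, which is presumably why the researchers turned to machine learning rather than an analytic calibration.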