The inspiration behind the design of the sensor was the efficient depth perception system that has evolved in jumping spiders - a family of spiders with two sets of eyes that can accurately pounce on unsuspecting targets from several body lengths away. The resulting sensor, which combines a multifunctional flat metalens with an ultra-efficient algorithm to measure depth in a single shot, could be used in microrobotics, augmented reality, and wearable devices, say the researchers.
"Evolution has produced a wide variety of optical configurations and vision systems that are tailored to different purposes," says Zhujun Shi, a Ph.D. candidate in the Graduate School of Arts and Sciences (GSAS) in the Department of Physics and co-first author of a paper on the sensor. "Optical design and nanotechnology are finally allowing us to explore artificial depth sensors and other vision systems that are similarly diverse and effective."
Current depth sensors use integrated light sources and multiple cameras to measure distance. Humans, by contrast, measure depth using stereo vision: when looking at an object, each eye collects a slightly different image, and the brain calculates distance from the difference between the two images. That calculation, the researchers say, is computationally burdensome - one that human brains can handle but the brains of jumping spiders cannot, so the spiders have instead evolved a more efficient system.
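The stereo calculation described above can be sketched with the standard triangulation formula, where depth is proportional to the camera baseline and focal length and inversely proportional to the disparity between the two images. This is a minimal illustration of the general principle, not the researchers' algorithm; the numbers used are hypothetical.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Triangulated depth (meters) for one matched pixel pair.

    focal_px     -- focal length expressed in pixels
    baseline_m   -- distance between the two cameras (or eyes), in meters
    disparity_px -- horizontal shift of the object between the two images, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    # Similar triangles: Z = f * B / d
    return focal_px * baseline_m / disparity_px

# Illustrative values: 700 px focal length, 6.5 cm baseline, 10 px disparity
print(round(stereo_depth(700, 0.065, 10), 2))  # 4.55 (meters)
```

Note that finding the disparity itself requires matching each point across the two images, which is the computationally expensive step the article refers to; the triangulation afterward is cheap.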
Jumping spiders have two sets of eyes: two large principal eyes and two small lateral eyes. The lateral eyes sense the motion of an object, such as a fly, which the spider then zeros in on using its principal eyes.
Each principal eye has a few semi-transparent retinae arranged in layers, and these retinae measure multiple images with different amounts of blur. For example, if a jumping spider looks at a fruit fly with one of its principal eyes, the fly will appear sharper in one retina's image and blurrier in another. This