The method for calculating holograms is up to 56 times faster than conventional algorithms and does not require power-hungry graphics processing units (GPUs), running instead on the ordinary processing cores found in PCs. This opens the way to compact, power-efficient, next-generation augmented reality devices, including 3D navigation on car windshields and eyewear.
Holography, the science of recording light in 3D, is used everywhere, from microscopy and fraud prevention on banknotes to state-of-the-art data storage. However, its most obvious application, truly 3D displays that need no special glasses, has yet to become widespread. Even though recent advances have brought virtual reality (VR) technologies to market, the vast majority rely on optical tricks that convince the human eye to see in 3D. Such tricks are not always feasible and limit where and how 3D displays can be used.
One of the reasons for this is that generating the hologram of an arbitrary 3D object is computationally heavy. Every calculation is slow and power-hungry, a serious limitation when you want to display large 3D images that change in real time. Most implementations require specialized hardware such as graphics processing units (GPUs), the energy-guzzling chips that power modern gaming, which severely limits where 3D displays can be deployed.
To solve this problem, a team led by Assistant Professor Takashi Nishitsuji looked at how holograms are calculated. They realized that not all applications need a full rendering of 3D polygons. By focusing solely on drawing the edges of 3D objects, they significantly reduced the computational load of hologram calculations. In particular, they could avoid fast Fourier transforms (FFTs), the intensive math routines that power hologram generation for full polygons.
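To give a rough sense of the idea, the sketch below computes a hologram pattern by summing the light contributions of points sampled only along an object's edges, rather than over filled polygon surfaces. This is a generic point-source (Fresnel approximation) hologram calculation restricted to edge samples; it is an illustrative assumption, not the team's actual algorithm, and all function names and parameters here are hypothetical.

```python
import numpy as np

def edge_hologram(edge_points, wavelength=532e-9, pitch=8e-6, size=256):
    """Illustrative computer-generated hologram from edge sample points.

    Sums Fresnel-approximation spherical-wave phases from each point
    sampled along an object's edges onto the hologram plane. This is a
    generic point-based sketch, NOT the published edge-based method.
    """
    # Hologram-plane pixel coordinates (centered grid, in meters)
    coords = (np.arange(size) - size / 2) * pitch
    x, y = np.meshgrid(coords, coords)

    field = np.zeros((size, size), dtype=complex)
    for xj, yj, zj in edge_points:
        # Paraxial (Fresnel) phase of a point emitter at depth zj
        phase = np.pi / (wavelength * zj) * ((x - xj) ** 2 + (y - yj) ** 2)
        field += np.exp(1j * phase)

    # Phase-only pattern, as displayed on a phase spatial light modulator
    return np.angle(field)

# Sample points only along the edges of a square outline at depth 0.1 m,
# skipping the (much larger) set of points needed to fill its surface.
t = np.linspace(-0.5e-3, 0.5e-3, 50)
edges = ([(s, -0.5e-3, 0.1) for s in t] + [(s, 0.5e-3, 0.1) for s in t]
         + [(-0.5e-3, s, 0.1) for s in t] + [(0.5e-3, s, 0.1) for s in t])

holo = edge_hologram(edges)
```

The key saving illustrated here is that the cost scales with the number of edge samples rather than with the area of the object, and no FFT is needed anywhere in the loop.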