Optical neural networks, which use optical phenomena to accelerate computation, have already been shown to run much faster and more efficiently than electrical AI chips. But as both traditional and optical neural networks grow more complex, they consume more power, requiring so-called “AI accelerators,” specialized chips that improve the speed and efficiency of training and testing neural networks.
Yet for electrical chips, including most AI accelerators, there is a theoretical minimum limit on energy consumption. In a paper titled “Large-Scale Optical Neural Networks Based on Photoelectric Multiplication,” published in Physical Review X, the MIT researchers describe a new photonic accelerator that uses more compact optical components and optical signal-processing techniques to drastically reduce both power consumption and chip area. That allows the chip to scale to neural networks several orders of magnitude larger than its counterparts.
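The idea behind photoelectric multiplication can be sketched numerically. The following is a minimal illustration, not the authors’ code: it assumes an input value and a weight are encoded as optical field amplitudes, and that a balanced detector measures the difference in optical power between their sum and difference, which for real amplitudes is |x + w|² − |x − w|² = 4xw, i.e., proportional to the product.

```python
import numpy as np

def photoelectric_multiply(x, w):
    # Balanced-detection model: the difference of the two measured
    # powers |x + w|^2 and |x - w|^2 equals 4*x*w for real amplitudes,
    # so dividing by 4 recovers the elementwise product.
    plus = np.abs(x + w) ** 2
    minus = np.abs(x - w) ** 2
    return (plus - minus) / 4.0

x = np.array([0.5, -0.2, 0.8])   # hypothetical neuron inputs
w = np.array([0.3, 0.7, -0.1])   # hypothetical weights

products = photoelectric_multiply(x, w)  # elementwise x*w
dot = products.sum()                     # accumulates to dot(x, w)
print(products, dot)
```

Summing the per-element products reproduces the dot product at the heart of a neural-network layer, which is why performing the multiplication optically, rather than in transistors, can cut the energy cost of each operation.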
Simulated training of neural networks on the MNIST image-classification dataset suggests the accelerator can theoretically process neural networks more than 10 million times below the energy-consumption limit of traditional electrical accelerators and about 1,000 times below the limit of existing photonic accelerators. The researchers are now working on a prototype chip to experimentally validate these results.
“People are looking for technology that can compute beyond the fundamental limits of energy consumption,” says Ryan Hamerly, a postdoc in the Research Laboratory of Electronics. “Photonic accelerators are promising … but our motivation is to build a [photonic accelerator] that can scale up to large neural networks.”
Practical applications for such technologies include reducing energy consumption in data centers.