The library, called Neural Tangents, is offered as an easy-to-use neural network library that builds finite- and infinite-width versions of neural networks simultaneously. It is designed to address the difficulty of taking a finite-width model to its corresponding infinite-width network, a process that could otherwise take months, say the researchers.
Deep neural networks (DNNs) that are allowed to become infinitely wide converge to another, simpler class of models called Gaussian processes. In this limit, complicated phenomena boil down to simple linear algebra equations.
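The article does not spell out the linear algebra involved, but the Gaussian-process correspondence means that exact prediction reduces to a single linear solve against a kernel matrix. A minimal sketch with a generic squared-exponential kernel (standing in for the network-derived kernel; the lengthscale and noise values here are illustrative choices, not from the source):

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.1):
    # Squared-exponential kernel: k(x, x') = exp(-||x - x'||^2 / (2 l^2)).
    # The short lengthscale is an illustrative choice for this 1-D demo.
    sq = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-sq / (2 * lengthscale**2))

def gp_posterior_mean(X_train, y_train, X_test, noise=1e-6):
    # "Training" a GP is one linear solve: mu_* = K_*x (K_xx + s^2 I)^{-1} y.
    K_xx = rbf_kernel(X_train, X_train)
    K_sx = rbf_kernel(X_test, X_train)
    return K_sx @ np.linalg.solve(K_xx + noise * np.eye(len(X_train)), y_train)

X = np.linspace(0, 1, 10)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
mu = gp_posterior_mean(X, y, X)  # predictions at the training points closely recover y
```

The key point is that no gradient descent occurs: the posterior mean is obtained in closed form from the kernel, which is what makes the infinite-width limit analytically tractable.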
Insights from these infinitely wide networks frequently carry over to their finite counterparts. As such, say the researchers, infinite-width networks can not only be used as a lens to study deep learning, but also as useful models in their own right.
With Neural Tangents, data scientists can construct and train ensembles of these infinite-width networks at once using only five lines of code. Networks built using Neural Tangents, say the researchers, can be applied to any problem to which scientists could apply a regular neural network.
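The article does not reproduce the five lines themselves, so as an illustrative sketch (not the library's actual API) here is how "training" an infinitely wide fully connected ReLU network reduces to kernel regression. The NNGP kernel of such a network follows the known arc-cosine recursion; this sketch assumes He-style weight variance 2/fan-in and no biases, so each layer preserves the self-covariances:

```python
import numpy as np

def nngp_relu_kernel(X1, X2, depth=3):
    """NNGP kernel of an infinitely wide ReLU MLP via the arc-cosine recursion.
    Sketch only: assumes weight variance 2/fan_in and no biases, under which
    the self-covariances K(x, x) are unchanged from layer to layer."""
    d = X1.shape[1]
    K = X1 @ X2.T / d                 # input-layer covariance
    k1 = np.sum(X1 * X1, 1) / d       # self-covariances (layer-invariant here)
    k2 = np.sum(X2 * X2, 1) / d
    norms = np.sqrt(np.outer(k1, k2))
    for _ in range(depth):
        theta = np.arccos(np.clip(K / norms, -1.0, 1.0))
        K = norms / np.pi * (np.sin(theta) + (np.pi - theta) * np.cos(theta))
    return K

def infinite_width_predict(X_train, y_train, X_test, depth=3, noise=1e-6):
    # Exact "training" of the infinite-width network: one linear solve
    # against the NNGP kernel, i.e. Gaussian-process regression.
    K_xx = nngp_relu_kernel(X_train, X_train, depth)
    K_sx = nngp_relu_kernel(X_test, X_train, depth)
    return K_sx @ np.linalg.solve(K_xx + noise * np.eye(len(X_train)), y_train)

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, -1.0, 0.5])
preds = infinite_width_predict(X, y, X)  # exact fit of the training targets
```

The same structure is why ensembles come cheaply: the infinite-width ensemble mean is this single closed-form prediction, rather than the average of many separately trained finite networks.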
"We invite everyone to explore the infinite-width versions of their models with Neural Tangents," say the researchers, "and help us open the black box of deep learning."
For more see "Neural Tangents: Fast and Easy Infinite Neural Networks in Python." Also available are a Neural Tangents Colab Cookbook and the Neural Tangents GitHub repository.