Dynamical ‘liquid’ neural network adapts to new data


Technology News
By Rich Pell



Researchers at MIT have developed a type of neural network that learns on the job, not just during its training phase. Dubbed “liquid” networks, these flexible algorithms change their underlying equations to continuously adapt to new data inputs. This advance, say the researchers, could aid decision making based on data streams that change over time, including those involved in medical diagnosis and autonomous driving.

“This is a way forward for the future of robot control, natural language processing, video processing – any form of time series data processing,” says Ramin Hasani, the lead author of a study on the research. “The potential is really significant.”

Time series data – a sequence of data points indexed in time order – are both ubiquitous and vital to our understanding of the world, say the researchers.

“The real world is all about sequences,” says Hasani. “Even our perception – you’re not perceiving images, you’re perceiving sequences of images. So, time series data actually create our reality.”

Time series data appear in applications central to society, say the researchers – video processing, financial data, and medical diagnostics among them – but such data streams are ever changing and can be unpredictable. Yet analyzing these data in real time, and using them to successfully anticipate future behavior, can boost the development of emerging technologies like self-driving cars.

With this goal in mind, the researchers designed a neural network that can adapt to the variability of real-world systems. Neural networks are algorithms that recognize patterns by analyzing a set of “training” examples, and are often said to mimic the processing pathways of the brain.

In this case, say the researchers, they drew inspiration directly from the microscopic nematode, C. elegans.

“It only has 302 neurons in its nervous system,” says Hasani, “yet it can generate unexpectedly complex dynamics.”

The researchers coded the neural network with careful attention to how C. elegans neurons activate and communicate with each other via electrical impulses, while allowing the parameters to change over time based on the results of a nested set of differential equations. This flexibility is key, say the researchers, as most neural networks’ behavior is fixed after the training phase, which means they’re bad at adjusting to changes in the incoming data stream.
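To make that mechanism concrete, the update rule at the heart of these networks is the liquid time-constant (LTC) cell, in which a bounded nonlinearity modulates both each neuron’s effective time constant and its drive toward a bias state. Below is a minimal Python/numpy sketch of one LTC step using the fused explicit/implicit Euler update proposed in the paper referenced at the end of this article; the class name LTCCell, the tanh nonlinearity, the weight shapes, and the random initialization are illustrative assumptions, not the authors’ exact implementation.

    # Minimal liquid time-constant (LTC) cell sketch (numpy).
    # Assumptions: tanh nonlinearity, Gaussian random weights, and the
    # class/parameter names are illustrative, not the authors' code.
    import numpy as np

    class LTCCell:
        def __init__(self, n_inputs, n_neurons, dt=0.1, seed=0):
            rng = np.random.default_rng(seed)
            self.W_in = rng.normal(0.0, 0.5, (n_neurons, n_inputs))    # input weights
            self.W_rec = rng.normal(0.0, 0.5, (n_neurons, n_neurons))  # recurrent weights
            self.b = np.zeros(n_neurons)   # synaptic bias
            self.tau = np.ones(n_neurons)  # base time constants
            self.A = np.ones(n_neurons)    # per-neuron bias (target) state
            self.dt = dt                   # solver step size

        def f(self, x, inputs):
            # Bounded nonlinearity driven by the current state and input;
            # this term is what lets the equations change with the data.
            return np.tanh(self.W_rec @ x + self.W_in @ inputs + self.b)

        def step(self, x, inputs):
            # Fused explicit/implicit Euler step of
            #   dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A
            fx = self.f(x, inputs)
            return (x + self.dt * fx * self.A) / (1.0 + self.dt * (1.0 / self.tau + fx))

    # Roll the cell over a toy 3-feature time series.
    cell = LTCCell(n_inputs=3, n_neurons=8)
    x = np.zeros(8)
    for inputs in np.sin(np.linspace(0.0, 6.0, 50))[:, None] * np.ones(3):
        x = cell.step(x, inputs)
    print(x)  # final hidden state after 50 steps

Because fx depends on the current input, the denominator of the update – and with it each neuron’s effective time constant – shifts from step to step; that input-dependent time constant is the adaptive, “liquid” behavior the researchers describe.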

The fluidity of the “liquid” network makes it more resilient to unexpected or noisy data – for example, if heavy rain obscures the view of a camera on a self-driving car. Another advantage of the network’s flexibility, say the researchers, is that it’s more interpretable and exhibits less of the inscrutability common to other neural networks.

“Just changing the representation of a neuron [with the differential equations],” says Hasani, “you can really explore some degrees of complexity you couldn’t explore otherwise.”

Thanks to the small number of highly expressive neurons in the network, say the researchers, it’s easier to peer into the “black box” of the network’s decision making and diagnose why the network made a certain characterization.

The model itself is also richer in expressivity, say the researchers, which could help engineers understand and improve the liquid network’s performance. In tests, the network reportedly edged out other state-of-the-art time series algorithms by a few percentage points in accurately predicting future values in datasets ranging from atmospheric chemistry to traffic patterns.

“In many applications, we see the performance is reliably high,” says Hasani.

Plus, say the researchers, the network’s small size meant it completed the tests without a steep computing cost.

“Everyone talks about scaling up their network,” says Hasani. “We want to scale down, to have fewer but richer nodes.”

Looking ahead, the researchers say they plan to keep improving the system and ready it for industrial application.

“We have a provably more expressive neural network that is inspired by nature,” says Hasani. “But this is just the beginning of the process. The obvious question is how do you extend this? We think this kind of network could be a key element of future intelligence systems.”

For more, see the researchers’ paper “Liquid Time-constant Networks.”

