Distributed deep learning technology for edge computing

August 25, 2020 | By Rich Pell
Researchers at telecommunications company NTT say they have developed an asynchronous distributed deep learning technology - which they call edge-consensus learning - for machine learning on edge computing.

The researchers are investigating a training algorithm that obtains a global model as if it were trained by aggregating data on a single server, even when the data are spread across distributed servers, as in edge computing. The proposed technology, say the researchers, is of both academic and practical interest: it enables users to obtain a global model - a trained model that uses all the data in a single place - even when (1) statistically nonhomogeneous data subsets are placed on multiple servers, and (2) the servers only asynchronously exchange variables related to the model.

Currently, machine learning - especially deep learning - generally involves training models for tasks such as image/speech recognition by aggregating data at a fixed location such as a cloud data center. However, in the IoT era, where everything is connected to networks, aggregating vast amounts of data in the cloud becomes impractical.

More and more people are demanding that data be kept on local servers/devices due to privacy concerns. Legal regulations, including the EU’s General Data Protection Regulation (GDPR), have also been enacted to guarantee data privacy. As a result, say the researchers, interest is growing in edge computing, which decentralizes data processing/storage servers to reduce processing load and response times on cloud/communication networks and to protect data privacy.

One technical challenge toward this goal is enabling data aggregation, model training, and processing in a decentralized manner. The researchers' proposed training algorithm can obtain a global model even when different/nonhomogeneous data subsets are placed on multiple servers and their communication is asynchronous. Instead of aggregating or exchanging data (e.g., images or speech) between servers, the servers asynchronously exchange variables associated with the models they train locally, and these exchanges yield a global model.
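The article does not give NTT's actual update rules, so the following is only a minimal sketch in Python of the general idea. The helpers `local_update` and `exchange` are hypothetical names, and a simple least-squares model stands in for a deep network; the point is that servers train on their own data and share only model variables, never the data itself.

```python
import numpy as np

def local_update(params, X, y, lr=0.01):
    # Train on this server's own data subset only: one gradient step
    # on a least-squares loss (a stand-in for deep-network training).
    grad = X.T @ (X @ params - y) / len(y)
    return params - lr * grad

def exchange(params_a, params_b):
    # Share model variables (not raw data) between two neighboring
    # servers, pulling both toward consensus by averaging.
    avg = (params_a + params_b) / 2.0
    return avg.copy(), avg.copy()
```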

The training algorithm is composed of two processes: a procedure that updates the variables inside each server, and a variable exchange between servers. In their experiments, the researchers used a ring network of servers.
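Continuing the sketch above, here is how those two processes might interleave on a ring of servers. The network size, data distribution, and exchange schedule below are assumptions for illustration, not NTT's experimental settings.

```python
rng = np.random.default_rng(0)
n_servers, dim = 8, 5          # hypothetical sizes, not NTT's settings
true_w = rng.normal(size=dim)  # ground-truth model the servers should recover

# Statistically nonhomogeneous subsets: each server's inputs are shifted differently.
data = []
for s in range(n_servers):
    X = rng.normal(loc=s - n_servers / 2, size=(100, dim))
    y = X @ true_w + 0.1 * rng.normal(size=100)
    data.append((X, y))

params = [np.zeros(dim) for _ in range(n_servers)]

for step in range(3000):
    # Process 1: each server updates its variables on its local data.
    for s in range(n_servers):
        X, y = data[s]
        params[s] = local_update(params[s], X, y)
    # Process 2: one randomly chosen ring edge exchanges variables
    # (no global synchronization barrier, mimicking asynchrony).
    s = int(rng.integers(n_servers))
    t = (s + 1) % n_servers  # neighbor on the ring
    params[s], params[t] = exchange(params[s], params[t])

# All servers should end up close to the same global model.
print(max(np.linalg.norm(p - true_w) for p in params))
```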

