
Federated Learning

A New Horizon for Confidential Machine Learning

Let's first consider the most common case of machine learning today: a single entity possesses enough data to train a machine learning model.

The model can be arbitrary, ranging from deep neural networks to linear regression.

Training the model on this data yields a solution to a practical problem, such as object detection, audio transcription, and so on.

In reality, the data that the model has to work with does not originate on the machine where training takes place; it is created elsewhere.

That is, to conduct the analysis, data from various sources must be collected on some central server, e.g., in the cloud.

It is easy to imagine a situation where it is not possible to transfer the data from each source to the server, for example, for the following reasons:
Reluctance of the parties to share their confidential information with a third party in the clear
The combined volume of data exceeds the capacity of the central storage
Regulatory restrictions: GDPR (Europe), CCPA (California), PIPEDA (Canada), LGPD (Brazil), and others protect sensitive data from being transferred
The local data of each party is individually insufficient for training a high-quality model

Thus, we have arrived at the challenge that led to the emergence of the innovative concept of Federated Learning.

Federated Learning (FL) is a machine learning paradigm that enables the training of a global model by clients without sharing their local data.

This approach to machine learning not only solves data privacy issues, but also opens new horizons for developing secure and efficient models.
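Formally, FL is usually stated as minimizing a weighted sum of the clients' local objectives. The notation below is the standard one from the FL literature, not specific to this article: there are $K$ clients, client $k$ holds a private dataset $D_k$ of size $n_k$, and $n = n_1 + \dots + n_K$:

$$\min_{w}\; F(w) = \sum_{k=1}^{K} \frac{n_k}{n}\, F_k(w), \qquad F_k(w) = \frac{1}{n_k} \sum_{(x_i,\, y_i) \in D_k} \ell(w;\, x_i, y_i)$$

Here $\ell$ is the loss function the parties agree on, and the weights $n_k/n$ are exactly the ones used for aggregation by FedAvg, introduced below.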

FL vs Centralized ML: centralized ML moves the data to the computation, whereas FL moves the computation to the data.

The Basic Principles of FL Step by Step

Step 0

Clients agree on the global model, loss function, and data preprocessing procedures. The central server initializes the global model either randomly or using a pre-trained checkpoint.
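As a minimal sketch of this step, where a plain NumPy parameter vector stands in for "the model" (the function name and checkpoint path are illustrative assumptions, not part of any particular FL framework):

```python
import numpy as np
from typing import Optional

def initialize_global_model(dim: int, checkpoint: Optional[str] = None) -> np.ndarray:
    """Step 0: the server creates the initial global parameters."""
    if checkpoint is not None:
        # Warm start from a pre-trained checkpoint the parties agreed on.
        return np.load(checkpoint)
    # Otherwise initialize randomly; a fixed seed keeps the run reproducible.
    return np.random.default_rng(seed=0).normal(scale=0.01, size=dim)
```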

Step 1

The server distributes the parameters of the global model to connected clients. It's important to note that each client starts training using the same model parameters.
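In a simulation, this step amounts to handing each client its own copy of the parameter vector. The copy matters: clients will mutate their instance during local training, and the server's version must stay intact. A sketch in the same NumPy style as above:

```python
import numpy as np

def broadcast(global_params: np.ndarray, num_clients: int) -> list[np.ndarray]:
    """Step 1: every client receives an identical, independent copy."""
    return [global_params.copy() for _ in range(num_clients)]
```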

Step 2

Clients perform local training of their model instances on their own data. Local training can range from a few gradient steps to several epochs, depending on what was agreed in Step 0.
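A sketch of one client's local update, assuming a mean-squared-error objective for a linear model and plain mini-batch gradient descent (the loss, learning rate, and epoch count are illustrative stand-ins for whatever the parties agreed on in Step 0):

```python
import numpy as np

def local_update(params: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.05, epochs: int = 1, batch_size: int = 32) -> np.ndarray:
    """Step 2: train a local model instance on this client's own data."""
    w = params.copy()
    rng = np.random.default_rng()
    n_batches = max(1, len(X) // batch_size)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(len(X)), n_batches):
            # Gradient of the MSE loss for a linear model y ≈ X @ w.
            grad = 2.0 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
            w -= lr * grad
    return w
```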

Step 3

After local training, clients have models with differing parameters due to variations in their local datasets. Clients return these model parameters to the central server.

Step 4

The server receives the model parameters from clients and aggregates them to update the parameters of the global version.

There are various approaches to parameter aggregation, but the most popular method is FedAvg (Federated Averaging), where the received parameters are averaged in a weighted manner according to the sizes of the local datasets.
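In code, FedAvg reduces to one weighted sum once the updates have arrived, i.e. w_new = Σ_k (n_k / n) · w_k with each client's dataset size n_k as its weight. A sketch:

```python
import numpy as np

def fedavg(client_params: list[np.ndarray], client_sizes: list[int]) -> np.ndarray:
    """Step 4: dataset-size-weighted average of the clients' parameters."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()  # n_k / n for each client k
    return sum(a * p for a, p in zip(weights, client_params))
```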

Step 5

Steps 1-4 constitute one round of FL, which is repeated until the model converges.
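Putting Steps 0-4 together, a self-contained toy simulation of several rounds might look as follows (three clients, synthetic linear-regression data, full-batch local updates; every name and constant here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, true_w = 5, np.arange(5, dtype=float)

# Three clients with local datasets of different sizes; the raw data never leaves them.
clients = []
for n_k in (200, 500, 800):
    X = rng.normal(size=(n_k, dim))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=n_k)))

global_w = rng.normal(scale=0.01, size=dim)                 # Step 0: initialize

for _ in range(20):                                         # Step 5: repeat rounds
    updates, sizes = [], []
    for X, y in clients:
        w = global_w.copy()                                 # Step 1: broadcast a copy
        for _ in range(5):                                  # Step 2: a few local GD steps
            w -= 0.05 * 2.0 * X.T @ (X @ w - y) / len(X)
        updates.append(w)                                   # Step 3: only parameters go back
        sizes.append(len(X))
    shares = np.asarray(sizes, float) / sum(sizes)
    global_w = sum(s * u for s, u in zip(shares, updates))  # Step 4: FedAvg

print(np.round(global_w, 2))  # should be close to true_w = [0. 1. 2. 3. 4.]
```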

It is important to note that the data itself remains local: only model updates are transmitted to the central server (and, in some setups, to other devices). This approach preserves data privacy by avoiding the need for centralized data accumulation.

Like any other technology, FL solves its intended problem while bringing both positive and negative aspects.

The positive aspects of the FL procedure

Data Confidentiality

Because the raw data is never transmitted, FL minimizes the risk of leaks or unauthorized access to confidential information.

Scalability

This approach allows for efficient processing of large volumes of data and scaling to a large number of devices without significantly increasing network load or computational resources.

Distributed Structure

Keeping local copies of the model and leaving the data distributed among clients helps minimize the vulnerabilities associated with central server failures.

Efficient Resource Utilization

FL enables significant parallelization of the global model training process, eliminating the need for GPU-equipped central servers.

Reduction of Data Drift Effect

The path from data source to model becomes shorter, reducing the likelihood of data becoming outdated or distorted.

The complexities brought by federated learning

Coordination Complexity

Managing the training process with multiple clients requires a complex system of coordination and agreement, which can complicate system deployment and support.

Data Consistency Issues

Differences in clients' datasets (non-IID data) can lead to inconsistencies between local models or degrade the quality of the aggregated global model.

Computational Constraints

Computational resources on user devices may be limited, complicating the training of complex models or requiring additional algorithm optimization.

Security Threats

The possibility of attacks on individual devices or servers storing data or model updates necessitates increased attention to cybersecurity and fraud protection.

Need for Cooperation

It can be difficult to find owners of similar data who are willing to solve the same practical problem.

Nevertheless, FL has found wide application in fields where sensitive data must be processed, such as medicine (analyzing medical images and patient records), financial services (transaction analysis and fraud detection), and the Internet of Things (processing data from sensors and smart devices).

Thus, FL represents an important technological innovation in the field of PPML (Privacy Preserving Machine Learning), capable of reshaping the landscape of machine learning by making it safer and more accessible across various sectors of the economy.

This approach not only safeguards data confidentiality but also promotes the development of new methods for processing and analyzing information while preserving privacy.

However, successful implementation requires careful consideration and management of the shortcomings and challenges inherent to this method.