Robust federated learning with noisy communication
IEEE Transactions on Communications, 2020•ieeexplore.ieee.org
Federated learning is a communication-efficient training process that alternates between local training at the edge devices and averaging of the updated local models at the central server. Nevertheless, perfect acquisition of the local models over wireless links is impractical due to noise, which also seriously degrades federated learning. To tackle this challenge, in this paper we propose a robust design for federated learning that mitigates the effect of noise. Considering the noise in the two aforementioned steps, we first formulate the training problem as a parallel optimization for each node under an expectation-based model and a worst-case model. Due to the non-convexity of the problem, a regularizer approximation method is proposed to make it tractable. Regarding the worst-case model, we utilize a sampling-based successive convex approximation algorithm to develop a feasible training scheme that handles the unavailability of the maximum or minimum noise condition and the non-convexity of the objective function. Furthermore, the convergence rates of both new designs are analyzed from a theoretical point of view. Finally, the improvement in prediction accuracy and the reduction in loss function value achieved by the proposed designs are demonstrated via simulation.
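The abstract does not give the concrete formulation, but the training loop it describes can be sketched at a high level. The following Python/NumPy sketch assumes additive Gaussian noise on the model transferred over both the downlink (server to device) and the uplink (device to server), and uses a simple proximal regularizer in local training as a stand-in for robustification; the names (local_update, federated_round, lam, sigma_dl, sigma_ul) are illustrative, and the paper's regularizer approximation and sampling-based successive convex approximation scheme are not reproduced here.

```python
# Sketch of federated averaging over noisy links, under the assumptions above.
# Not the authors' algorithm: the regularizer approximation and SCA step of the
# paper are replaced by a generic proximal term for illustration.
import numpy as np

rng = np.random.default_rng(0)

def local_loss_grad(w, X, y):
    """Gradient of a least-squares local loss 0.5 * ||X w - y||^2 / n."""
    return X.T @ (X @ w - y) / len(y)

def local_update(w_recv, X, y, lam=0.1, lr=0.05, steps=20):
    """Regularized local training: local loss + (lam/2) * ||w - w_recv||^2."""
    w = w_recv.copy()
    for _ in range(steps):
        grad = local_loss_grad(w, X, y) + lam * (w - w_recv)
        w -= lr * grad
    return w

def add_noise(w, sigma):
    """Additive channel noise on a transmitted parameter vector."""
    return w + sigma * rng.standard_normal(w.shape)

def federated_round(w_global, datasets, sigma_dl=0.01, sigma_ul=0.01):
    """One round: noisy broadcast, regularized local training, noisy uploads, averaging."""
    uploads = []
    for X, y in datasets:
        w_recv = add_noise(w_global, sigma_dl)        # noisy downlink
        w_local = local_update(w_recv, X, y)          # robust local step
        uploads.append(add_noise(w_local, sigma_ul))  # noisy uplink
    return np.mean(uploads, axis=0)                   # server-side averaging

# Toy usage: a few edge devices with linear-regression data drawn from w_true.
d = 5
w_true = np.arange(d, dtype=float)
datasets = []
for _ in range(4):
    X = rng.standard_normal((50, d))
    y = X @ w_true + 0.1 * rng.standard_normal(50)
    datasets.append((X, y))

w = np.zeros(d)
for t in range(30):
    w = federated_round(w, datasets)
print("final estimation error:", np.linalg.norm(w - w_true))
```

The proximal term keeps each local model close to the (noisy) broadcast it received, which is one common way to limit noise amplification across rounds; the paper's expectation-based and worst-case designs address the same issue through their own formulations.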