Dec 24, 2019 · We present a novel aggregation algorithm with residual-based reweighting to defend federated learning.
To tackle the challenge of robust federated learning, we propose a reweighted aggregation algorithm that dynamically assigns a weight to each local model based on the residuals of its parameters.
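The idea can be sketched for a single scalar parameter. This is a hypothetical simplification, not the paper's exact formulation: here the robust center is a plain median and the confidence weight is an illustrative stand-in for the paper's repeated-median, IRLS-style reweighting; `residual_reweighted_aggregate` and `delta` are names introduced for this sketch.

```python
from statistics import median

def residual_reweighted_aggregate(updates, delta=1.0):
    """Aggregate one scalar parameter across clients by weighting
    each client's update inversely to its normalized residual from
    a robust center (illustrative sketch of residual-based reweighting)."""
    center = median(updates)
    residuals = [abs(u - center) for u in updates]
    scale = median(residuals) + 1e-8          # robust spread estimate
    # Confidence weight: near 1 for small residuals, shrinking toward 0
    # for outliers (an illustrative choice, not the paper's formula).
    weights = [1.0 / (1.0 + (r / (delta * scale)) ** 2) for r in residuals]
    return sum(w * u for w, u in zip(weights, updates)) / sum(weights)

# A poisoned client submits 100.0 among benign updates near 1.0;
# the reweighted aggregate stays close to 1.0, while a plain
# average would be pulled to roughly 20.8.
agg = residual_reweighted_aggregate([0.9, 1.0, 1.1, 1.0, 100.0])
```

The large residual of the malicious update drives its weight toward zero, so it barely influences the aggregate.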
Targeted Model Poisoning
• Derived from data poisoning.
• Label-flipping attack: change the labels of training data so that the model misclassifies test samples.
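A label-flipping attack on a client's local data can be sketched as follows (a minimal illustration; `flip_labels` and its parameters are hypothetical, and real attacks operate on the training set feeding the client's local update):

```python
import random

def flip_labels(labels, src, dst, fraction, seed=0):
    """Label-flipping attack sketch: relabel a fraction of the
    source-class examples as the target class."""
    rng = random.Random(seed)
    flipped = list(labels)
    idx = [i for i, y in enumerate(flipped) if y == src]
    for i in rng.sample(idx, int(len(idx) * fraction)):
        flipped[i] = dst
    return flipped

# Malicious client relabels every "1" as "7" in its local data.
labels = [1, 7, 1, 1, 7, 1]
poisoned = flip_labels(labels, src=1, dst=7, fraction=1.0)
```

Training on the poisoned labels biases the local update so that the global model misclassifies the source class at test time.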
This is a PyTorch implementation of our paper.
Jan 8, 2021 · To make federated learning more attack-resistant, we develop an aggregation algorithm that is robust against label-flipping and backdoor attacks.
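Applied coordinate-wise across client model vectors, the reweighting limits the influence of a single malicious update. Again a hedged sketch: `robust_coordinate_aggregate` is a hypothetical helper using a median-anchored weighting, not the paper's exact repeated-median formulation.

```python
from statistics import median, mean

def robust_coordinate_aggregate(client_models):
    """Aggregate client model vectors coordinate by coordinate,
    down-weighting values with large residuals from the per-coordinate
    median (illustrative sketch of residual-based reweighting)."""
    aggregated = []
    for j in range(len(client_models[0])):
        vals = [m[j] for m in client_models]
        center = median(vals)
        residuals = [abs(v - center) for v in vals]
        scale = median(residuals) + 1e-8
        weights = [1.0 / (1.0 + (r / scale) ** 2) for r in residuals]
        aggregated.append(
            sum(w * v for w, v in zip(weights, vals)) / sum(weights)
        )
    return aggregated

benign = [[0.5, -1.0], [0.6, -0.9], [0.4, -1.1]]
backdoored = [[5.0, 4.0]]            # one malicious client's update
robust = robust_coordinate_aggregate(benign + backdoored)
fedavg = [mean(col) for col in zip(*(benign + backdoored))]
```

The robust aggregate stays near the benign values, while plain FedAvg is dragged toward the backdoored update.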
Code for Attack-Resistant Federated Learning with Residual-based Reweighting
Explore all code implementations available for Attack-Resistant Federated Learning with Residual-based Reweighting.
Attack-Resistant Federated Learning with Residual-based Reweighting · 2 code implementations • 24 Dec 2019 • Shuhao Fu, Chulin Xie, Bo Li, Qifeng Chen.
In our paper, we present a novel aggregation algorithm based on, and extending, residual-based reweighting to defend against poisoning attacks in federated learning.