FedKG: Model-optimized federated learning for local client training with non-IID private data
H Wang, L Wang - 2021 Ninth International Conference on Advanced Cloud and Big Data, 2022 - ieeexplore.ieee.org
In recent years, growing concerns about personal privacy have motivated federated learning (FL). Traditional FL aggregates on the server the gradient parameters sent by each client (FedAvg), so the amount of transmitted parameters is proportional to the size of the client model. We refine the client model to reduce its parameter count and improve communication efficiency, but an important prerequisite for model refinement is that the training data sets share a similar structure. To reduce the impact of non-IID data across client devices, we use a generative adversarial network (GAN) to expand and pre-process the data. We propose a new method that combines knowledge distillation and a GAN to reduce communication cost and improve model performance. Experiments show that, compared with FedAvg, our method improves the accuracy score by 9.76% within fewer iterations. When the amount of transmitted parameters is reduced by 72.19%, the accuracy score still improves by 2.20%.
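For context on the FedAvg baseline the abstract compares against, the sketch below shows server-side weighted parameter averaging, which makes clear why the per-round traffic scales with model size. The function and variable names are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of FedAvg-style server aggregation (illustrative only).
# Each client uploads its full set of model parameters, so the traffic per
# round grows with the model size -- the overhead FedKG aims to reduce.
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """Weighted average of client parameter sets.

    client_params: list of dicts mapping layer name -> np.ndarray
    client_sizes:  list of local dataset sizes (aggregation weights)
    """
    total = float(sum(client_sizes))
    global_params = {}
    for name in client_params[0]:
        # Weight each client's tensors by its share of the total data.
        global_params[name] = sum(
            (n / total) * params[name]
            for params, n in zip(client_params, client_sizes)
        )
    return global_params

# Hypothetical usage: two clients with a single-layer model.
clients = [
    {"w": np.array([1.0, 2.0]), "b": np.array([0.5])},
    {"w": np.array([3.0, 4.0]), "b": np.array([1.5])},
]
print(fedavg_aggregate(clients, client_sizes=[100, 300]))
```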