EdgeMove: Pipelining device-edge model training for mobile intelligence

Z Dong, Q He, F Chen, H Jin, T Gu, Y Yang - Proceedings of the ACM Web Conference 2023 - dl.acm.org
Training machine learning (ML) models on mobile and Web-of-Things (WoT) devices has been widely acknowledged and employed as a promising solution to privacy-preserving ML. However, these end devices often have constrained resources and cannot accommodate increasingly large ML models that demand substantial computation power. Partially offloading ML models to the cloud for training offers a trade-off between privacy preservation and resource requirements. However, device-cloud training incurs communication overheads that substantially delay model training. This paper presents EdgeMove, the first device-edge training scheme that enables fast pipelined model training across edge devices and edge servers. It employs probing-based mechanisms to tackle the new challenges raised by device-edge training. Before training begins, it probes nearby edge servers' training performance and bootstraps model training by constructing a training pipeline with an approximate model partitioning. During training, EdgeMove accommodates user mobility and system dynamics by adaptively probing nearby edge servers' training performance and proactively adapting the training pipeline. Extensive experiments are conducted with two popular DNN models trained on four datasets for three ML tasks. The results demonstrate that EdgeMove achieves a 1.3×-2.1× speedup over the state-of-the-art scheme.
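The abstract describes bootstrapping the pipeline from an approximate model partitioning driven by probed edge-server performance. Below is a minimal sketch of how such a split point might be chosen, assuming probed per-layer training times and link bandwidth are available; the function name, inputs, and the bottleneck-stage objective are illustrative assumptions, not the paper's actual algorithm.

```python
# Hedged sketch (not the authors' code): picking an initial device/edge
# split point for pipelined training from probed per-layer timings.
# All names and numbers are illustrative assumptions.

def pick_split(device_ms, edge_ms, act_mb, bandwidth_mbps):
    """Return the layer index after which to offload to the edge server.

    device_ms[i]   -- probed per-layer training time on the device (ms)
    edge_ms[i]     -- probed per-layer training time on the edge server (ms)
    act_mb[i]      -- size of layer i's output activations (MB)
    bandwidth_mbps -- probed device-edge link bandwidth (Mbit/s)

    In a pipeline, throughput is limited by the slowest stage, so we
    minimize the maximum of device time, transfer time, and edge time.
    (Backward-pass traffic is ignored here for simplicity.)
    """
    n = len(device_ms)
    best_k, best_cost = 0, float("inf")
    for k in range(n):  # device runs layers 0..k, edge runs k+1..n-1
        t_device = sum(device_ms[: k + 1])
        t_link = act_mb[k] * 8 / bandwidth_mbps * 1000  # MB -> Mbit -> ms
        t_edge = sum(edge_ms[k + 1 :])
        cost = max(t_device, t_link, t_edge)  # pipeline bottleneck stage
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k, best_cost

if __name__ == "__main__":
    # Toy probe results for a 6-layer model.
    device_ms = [4.0, 6.0, 9.0, 12.0, 15.0, 10.0]
    edge_ms = [1.0, 1.5, 2.0, 3.0, 4.0, 2.5]
    act_mb = [2.0, 1.5, 1.0, 0.5, 0.25, 0.1]
    k, cost = pick_split(device_ms, edge_ms, act_mb, bandwidth_mbps=100)
    print(f"split after layer {k}, bottleneck stage ~= {cost:.1f} ms")
```

Re-running this selection whenever fresh probe results arrive would mirror, at a very high level, the adaptive re-partitioning the abstract attributes to EdgeMove under user mobility and system dynamics.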