Fully asynchronous distributed optimization with linear convergence in directed networks

J Zhang, K You - arXiv preprint arXiv:1901.08215, 2019 - arxiv.org
We consider the distributed optimization problem, the goal of which is to minimize the sum of local objective functions over a directed network. Though it has been widely studied recently, most existing algorithms are designed for synchronized or randomly activated implementations, which may create deadlocks in practice. In sharp contrast, we propose a \emph{fully} asynchronous push-pull gradient algorithm (APPG) in which each node updates without waiting for any other node, using (possibly stale) information from its neighbors. Thus, it is both deadlock-free and robust to any bounded communication delay. Moreover, we construct two novel augmented networks to theoretically evaluate its performance from the worst-case point of view and show that if the local functions have Lipschitz-continuous gradients and their sum satisfies the Polyak-\L{}ojasiewicz condition (convexity is not required), each node of APPG converges to the same optimal solution at a linear rate of $\mathcal{O}(\lambda^k)$, where $\lambda \in (0,1)$ and the virtual counter $k$ increases by one no matter which node updates. This largely elucidates its linear speedup efficiency and shows its advantage over the synchronous version. Finally, the performance of APPG is numerically validated via a logistic regression problem on the \emph{Covertype} dataset.
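The push-pull structure described above admits a compact local update: a node mixes the latest (possibly stale) copies of its neighbors' decision variables it has received (pull, over a row-stochastic weight matrix) and their gradient trackers (push, over a column-stochastic matrix), then corrects the tracker with its newest local gradient. The Python sketch below illustrates this mechanism under stated assumptions; the function name local_step, the dummy least-squares objective, and the mixing weights are illustrative choices, not the authors' exact APPG recursion or step-size rule.

import numpy as np

# One node's asynchronous push-pull-style update, written as a standalone
# function. All names, weights, and objectives here are illustrative
# assumptions; this is a sketch of the mechanism the abstract describes,
# not the authors' APPG code.

def local_step(i, x_buf, y_buf, g_prev, grad_i, R_row, C_row, alpha):
    """Perform one update at node i using buffered (possibly stale) data.

    x_buf, y_buf : (n, d) arrays holding the latest x_j / y_j copies node i
                   has received from its in-neighbors (row i holds its own values)
    g_prev       : gradient of the local objective f_i at node i's previous iterate
    grad_i       : callable returning the gradient of f_i
    R_row, C_row : node i's rows of the row-stochastic (pull) and
                   column-stochastic (push) mixing matrices
    alpha        : local step size
    """
    # Pull: mix the buffered decision variables, then step along the tracker.
    x_new = R_row @ x_buf - alpha * y_buf[i]
    # Push: mix the buffered gradient trackers and add the newest local
    # gradient correction so the tracker keeps following the average gradient.
    g_new = grad_i(x_new)
    y_new = C_row @ y_buf + g_new - g_prev
    return x_new, y_new, g_new

# Tiny usage example: node 1 of a directed ring 0 -> 1 -> 2 -> 0 with a dummy
# least-squares objective f_1(x) = 0.5 * ||A @ x - b||^2.
rng = np.random.default_rng(0)
d = 2
A, b = rng.standard_normal((4, d)), rng.standard_normal(4)
grad_1 = lambda x: A.T @ (A @ x - b)
x_buf = rng.standard_normal((3, d))                     # stale copies of x_0, x_1, x_2
y_buf = np.stack([grad_1(x_buf[j]) for j in range(3)])  # stale tracker copies (dummy values)
g_prev = grad_1(x_buf[1])
R_row = np.array([0.5, 0.5, 0.0])                       # node 1 pulls from node 0 and itself
C_row = np.array([0.5, 0.5, 0.0])                       # node 1's push weights (illustrative)
x1, y1, g1 = local_step(1, x_buf, y_buf, g_prev, grad_1, R_row, C_row, alpha=0.05)
print(x1)

Because the buffers may lag behind the neighbors' true values, the node never has to wait: when it wakes up it simply uses whatever copies it currently holds, which is the deadlock-freedom and delay-robustness property the abstract emphasizes.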