Distributed approximate Newton algorithms for constrained optimization


Tor Anderson, Chin-Yao Chang and Sonia Martínez
Automatica, vol. 109, article 108538, 2019

Abstract:

Motivated by economic dispatch and linearly-constrained resource allocation problems, this paper proposes a class of novel distributed approximate Newton algorithms that approximate the standard Newton optimization method. We first develop the notion of an optimal edge weighting for the communication graph over which agents implement the second-order algorithm, and propose a convex approximation for the nonconvex weight-design problem. We next build on the optimal weight design to develop a discrete-time distributed approximate Newton algorithm, which converges exponentially to the optimal solution for economic dispatch problems with unknown cost functions and relaxed local box constraints. For the full box-constrained problem, we develop a continuous-time distributed approximate Newton algorithm, inspired by first-order saddle-point methods, and rigorously prove its convergence to the primal and dual optimizers. A main property of each of these distributed algorithms is that they only require agents to exchange constant-size communication messages, which lends itself to scalable implementations. Simulations demonstrate that the distributed approximate Newton algorithms with our weight design have superior convergence properties compared to existing weighting strategies for first-order saddle-point and gradient descent methods.
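To make the problem class concrete, here is a minimal sketch (not the paper's distributed algorithm) of the centralized Newton/KKT solve for a simple economic dispatch instance: separable quadratic generation costs subject to a total demand-balance constraint. The function name `dispatch_newton` and the cost coefficients are hypothetical; the paper's contribution is to approximate this second-order step in a distributed way, with agents exchanging only constant-size messages over a weighted communication graph.

```python
import numpy as np

def dispatch_newton(a, b, d):
    """Solve  min sum_i (a_i*x_i**2 + b_i*x_i)  s.t.  sum_i x_i = d
    via a single Newton step on the KKT system (exact for quadratics)."""
    n = len(a)
    H = np.diag(2.0 * a)            # Hessian of the separable quadratic cost
    # KKT system:  [H   1] [x  ]   [-b]
    #              [1^T 0] [lam] = [ d]
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = H
    K[:n, n] = 1.0
    K[n, :n] = 1.0
    rhs = np.concatenate([-b, [d]])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n]          # primal allocation x, dual price lam

# Illustrative data (hypothetical): three generators, total demand 10.
a = np.array([1.0, 2.0, 4.0])
b = np.array([0.5, 0.0, 1.0])
x, lam = dispatch_newton(a, b, d=10.0)
```

The returned dual variable `lam` is the common marginal price at the optimum: stationarity gives `2*a_i*x_i + b_i + lam = 0` for every generator, so cheaper units absorb more of the demand.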


File: (ArXiv version)


Bib-tex entry:

@article{TA-CYC-SM:19-auto,
author = {T. Anderson and C.-Y. Chang and S. Mart{\'\i}nez},
journal = {Automatica},
title = {Distributed approximate Newton algorithms for constrained optimization},
year = {2019},
volume = {109},
pages = {108538}
}