Distributed line search via dynamic convex combinations


Jorge Cortés and Sonia Martínez
Proceedings of the 52nd IEEE Int. Conference on Decision and Control, Florence, Italy, December 2013

Abstract:

This paper considers multi-agent systems seeking to optimize an aggregate objective function. We assume the gradient of this function is distributed, meaning that each agent can compute its corresponding partial derivative using state information from itself and its neighbors. In such scenarios, the discrete-time implementation of the gradient descent method poses the fundamental challenge of determining appropriate agent stepsizes that guarantee the monotonic evolution of the objective function. We provide a distributed algorithmic solution to this problem employing convex combinations of stepsizes. Asymptotically, this algorithm converges to a convex combination that enables each agent to decrease the function by an amount proportional to the decrease achievable along its own direction of descent.
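
To give a rough sense of the idea, here is a minimal sketch only, not the algorithm in the paper: for a convex objective, if each agent, moving alone along its own partial-derivative direction with a stepsize obtained by local backtracking, is guaranteed to decrease the function, then a joint update formed as a convex combination of those individual moves also decreases the function, by at least the same convex combination of the individual decreases (Jensen's inequality). The ring graph, the coupled quadratic objective, and the uniform weights below are placeholder assumptions made only for illustration; the paper's contribution is to compute suitable weights dynamically and in a distributed fashion.

import numpy as np

n = 5
b = np.arange(1.0, n + 1.0)
nbrs = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # assumed ring communication graph

def f(x):
    # Convex quadratic: disagreement with neighbors plus a local term (illustrative choice).
    coupling = 0.5 * sum((x[i] - x[j]) ** 2 for i in range(n) for j in nbrs[i] if i < j)
    return coupling + 0.5 * np.sum((x - b) ** 2)

def grad_i(x, i):
    # Partial derivative computable from agent i's own state and its neighbors' states.
    return (x[i] - b[i]) + sum(x[i] - x[j] for j in nbrs[i])

def local_armijo(x, i, alpha0=1.0, beta=0.5, c=1e-4):
    # Backtracking so that agent i, moving alone, satisfies the Armijo decrease condition.
    g = grad_i(x, i)
    e = np.zeros(n); e[i] = 1.0
    alpha = alpha0
    while f(x - alpha * g * e) > f(x) - c * alpha * g ** 2:
        alpha *= beta
    return alpha

x = np.zeros(n)
for k in range(100):
    moves = []
    for i in range(n):
        alpha = local_armijo(x, i)
        step = np.zeros(n); step[i] = -alpha * grad_i(x, i)
        moves.append(step)
    # Joint update as a convex combination of the individual tentative moves;
    # uniform weights are a static placeholder for the paper's dynamic weights.
    w = np.full(n, 1.0 / n)
    x = x + sum(w[i] * moves[i] for i in range(n))

print(f(x))   # decreases monotonically across iterations by convexity of f
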


File: main.pdf


Bib-tex entry:

@InProceedings{JC-SM:13-cdc,
author = {J. Cort\'es and S. Mart{\'\i}nez},
booktitle = {52nd IEEE Int. Conf. Decision and Control},
title = {Distributed line search via dynamic convex combinations},
year = {2013},
pages = {2346--2351},
address = {Florence, Italy}
}