Distributed variational inference for online supervised learning


Parth Paritosh, Nikolai Atanasov and Sonia Martínez
IEEE Transactions on Signal Processing, submitted

Abstract:

Developing efficient solutions for inference problems in intelligent sensor networks is crucial for the next generation of location, tracking, and mapping services. This paper develops a scalable distributed probabilistic inference algorithm that applies to continuous variables, intractable posteriors, and large-scale real-time data in sensor networks. In a centralized setting, variational inference is a fundamental technique for performing approximate Bayesian estimation, in which an intractable posterior density is approximated with a parametric density. Our key contribution lies in the derivation of a separable lower bound on the centralized estimation objective, which enables distributed variational inference with one-hop communication in a sensor network. Our distributed evidence lower bound (DELBO) consists of a weighted sum of observation likelihood and divergence to prior densities, and its gap to the measurement evidence is due to consensus and modeling errors. To solve binary classification and regression problems while handling streaming data, we design an online distributed algorithm that maximizes DELBO, and specialize it to Gaussian variational densities with non-linear likelihoods. The resulting distributed Gaussian variational inference (DGVI) algorithm efficiently inverts a rank-one correction to the covariance matrix. Finally, we derive a diagonalized version for online distributed inference in high-dimensional models, and apply it to multi-robot probabilistic mapping using indoor LiDAR data.
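The efficient rank-one covariance inversion mentioned in the abstract can be illustrated with the Sherman-Morrison identity, which recovers the inverse of a rank-one-corrected matrix in O(d^2) instead of O(d^3). This is a generic sketch of that identity, not the paper's exact DGVI update; the function name and test matrices are illustrative.

```python
import numpy as np

def rank_one_inverse_update(A_inv, u, v):
    """Sherman-Morrison: given A^{-1}, return (A + u v^T)^{-1} in O(d^2).

    Avoids re-inverting the full matrix after a rank-one correction,
    the kind of update the abstract's DGVI step exploits (sketch only).
    """
    Au = A_inv @ u
    vA = v @ A_inv
    denom = 1.0 + v @ Au          # scalar; must be nonzero for the update
    return A_inv - np.outer(Au, vA) / denom

# Illustrative usage: rank-one correction to a covariance matrix.
rng = np.random.default_rng(0)
d = 4
M = np.eye(d) + 0.1 * rng.standard_normal((d, d))
Sigma = M @ M.T                    # symmetric positive definite
u = rng.standard_normal(d)

fast = rank_one_inverse_update(np.linalg.inv(Sigma), u, u)
direct = np.linalg.inv(Sigma + np.outer(u, u))
assert np.allclose(fast, direct)
```

For a symmetric correction (v = u) to a positive-definite covariance, the denominator 1 + u^T A^{-1} u is strictly positive, so the update is always well defined.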


File: (arXiv version)


Bib-tex entry:

@article{PP-NA-SM:23-tsp,
author = {P. Paritosh and N. Atanasov and S. Mart{\'\i}nez},
title = {Distributed variational inference for online supervised learning},
journal = {IEEE Transactions on Signal Processing},
pages = {},
volume = {},
number = {},
note = {Under review},
year = {2023}
}