# [Review] Sensor Coverage with Mixed Gaussian Processes

February 21, 2019 · 4 minute read

# Adaptive Sampling and Online Learning in Multi-Robot Sensor Coverage with Mixture of Gaussian Processes

Luo, Wenhao, and Katia Sycara. 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 6359-6364. IEEE, 2018.

## The Problem

The paper addresses the multi-robot sensor coverage (locational optimisation) problem together with adaptive sampling: finding the optimal placement of robots so that their samples best describe the distribution of the environment. The optimal solution is given by a Centroidal Voronoi Tessellation of the environment's density, with one cell per robot. This paper attempts to solve the problem in an unknown environment in a semi-distributed fashion.

## Aims of the Paper

This paper aims to address two major difficulties of optimal sensor coverage in an unknown environment.

1. Efficiently learning the density function online while optimising coverage performance.
2. Fusing information from the distributed network of robots into one unified model of the environment.

It approaches these difficulties by using a local Gaussian process for each robot and a Gaussian mixture model to generate the overall model of the environment. Each Gaussian process approximates the density of its robot's sensed region.

## Paper Summary

An example temperature field is given for the purposes of explaining the concepts. The field, or density, is non-uniform, and the optimal multi-robot coverage is given by finding the centroids relative to the density. Each robot explores a given region and fits its own Gaussian process to samples taken from spatial locations in the environment; from this information it can then find the position in the field from which it will sense optimally. In previous work, the combination step merged the distributed Gaussian processes into a single unimodal Gaussian process, but this does not effectively model densities with multiple peaks.

The paper clearly defines the problem statement and the goals of the new method in Section 3. The environment is divided into Voronoi cells, and a sensing cost function $\mathcal{H}$ is defined that accounts for both the sensed intensity/density and the distance to the sensed point. The optimal position is then the minimiser of this cost function, which is equivalent to the centre of mass (the centroid) of a given Voronoi cell. The optimal control law is then a proportional controller on the error between the current position and the calculated centroid.
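The centroid-seeking control law above can be sketched as follows. This is a minimal single-robot illustration, not the paper's implementation: the "cell" is simplified to a fixed grid over the whole workspace, the density is a toy Gaussian bump, and names like `weighted_centroid` and the gain `k_p` are my own choices.

```python
import numpy as np

def weighted_centroid(cell_points, density):
    """Centre of mass of a cell under the sensed density phi(q)."""
    w = density(cell_points)                          # phi at each grid point
    return (cell_points * w[:, None]).sum(axis=0) / w.sum()

def control_step(position, cell_points, density, k_p=0.5):
    """Proportional controller driving the robot toward its cell centroid."""
    centroid = weighted_centroid(cell_points, density)
    return position + k_p * (centroid - position)

# Toy example: a Gaussian density peaked near (0.7, 0.3) on a unit-square grid.
xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([xs.ravel(), ys.ravel()])
phi = lambda q: np.exp(-20 * ((q[:, 0] - 0.7) ** 2 + (q[:, 1] - 0.3) ** 2))

pos = np.array([0.1, 0.9])
for _ in range(100):                                  # converges geometrically
    pos = control_step(pos, grid, phi)
```

With a known density the robot simply settles at the centroid; the paper's contribution is doing this while the density itself is being learned online.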

An explanation of Gaussian processes is given for each individual robot, where the process is parameterised by a mean and covariance updated by successive sensor samples; a squared-exponential kernel is used. An interesting part is the Gaussian process mixture model, which effectively combines the various robots' sensor readings: the mixture is a weighted combination of the individual Gaussian processes, and each cell of the environment has its own mixture model of the density. One major contribution is the idea that we require a location which maximises the predicted value of the phenomenon, in other words one that maximises the information received. To this end they define an information criterion $h(q) = \mu + \beta \sigma^2$ on each cell, dependent on the Gaussian process mixture parameters. An Expectation-Maximisation procedure is then used for prediction over the remaining un-sampled areas of the environment.
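The pieces above (local GPs with an SE kernel, a weighted mixture, and the criterion $h(q) = \mu + \beta \sigma^2$) can be sketched in a 1-D toy form. This is my own simplification: the mixture weights here are fixed constants rather than the paper's EM-learned posterior weights, and the variance combination uses plain moment matching.

```python
import numpy as np

def se_kernel(A, B, length=0.2, sigma_f=1.0):
    """Squared-exponential kernel between two sets of 1-D points."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return sigma_f ** 2 * np.exp(-0.5 * d2 / length ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-3):
    """Standard GP regression: posterior mean and variance at X_test."""
    K = se_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = se_kernel(X_test, X_train)
    mu = K_s @ np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, K_s.T)
    var = np.diag(se_kernel(X_test, X_test)) - np.sum(K_s * v.T, axis=1)
    return mu, var

def mixture_predict(local_gps, weights, X_test):
    """Weighted combination of the robots' local GP predictions
    (moment-matched mixture; a simplification of the paper's scheme)."""
    mus, vars_ = zip(*[gp_predict(X, y, X_test) for X, y in local_gps])
    mus, vars_ = np.array(mus), np.array(vars_)
    w = np.array(weights)[:, None]
    mu = (w * mus).sum(axis=0)
    var = (w * (vars_ + mus ** 2)).sum(axis=0) - mu ** 2  # law of total variance
    return mu, var

def information_criterion(mu, var, beta=2.0):
    """h(q) = mu + beta * sigma^2: high predicted value plus high uncertainty."""
    return mu + beta * var

# Toy demo: two robots each sample half of a 1-D sine field.
X1 = np.linspace(0.0, 0.5, 5); y1 = np.sin(2 * np.pi * X1)
X2 = np.linspace(0.5, 1.0, 5); y2 = np.sin(2 * np.pi * X2)
X_test = np.linspace(0.0, 1.0, 20)
mu, var = mixture_predict([(X1, y1), (X2, y2)], [0.5, 0.5], X_test)
h = information_criterion(mu, var)
next_goal = X_test[np.argmax(h)]   # candidate sampling location
```

Maximising $h$ rather than the mean alone is what trades off exploitation (high predicted density) against exploration (high predictive variance), with $\beta$ setting the balance.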

A decent evaluation of the system is done in simulation on the Intel Berkeley Lab dataset. The robot trajectories are shown to converge mostly to the optimal locations, and the predicted densities resemble the original density. A comparison with a unimodal Gaussian process and with entropy maximisation shows improvements in both RMS error and maximum prediction error.
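For reference, the two reported metrics are straightforward to state; a quick sketch (my own helper names, not the paper's code):

```python
import numpy as np

def rmse(pred, truth):
    """Root-mean-square prediction error over the whole field."""
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

def max_error(pred, truth):
    """Worst-case absolute prediction error, the paper's second metric."""
    return float(np.max(np.abs(pred - truth)))
```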

## Paper Review

This paper was well written and very complete in its explanation of the methodology. The details in the problem statement and the explanation of Gaussian processes were useful for understanding the methodology and the reasoning behind the new criterion. The evaluation provides some very promising results, although the diagrams are not fully clear about what they represent, so I am dubious as to whether the method is actually as good as claimed. The comparisons also show a large increase in the average prediction variance, which the authors do not mention at all. I believe the aims of the paper were reached, and it was very informative.