Timeline
14:15-15:15 | Junior Richard-von-Mises-Lecture by Daniel Walter: Towards Optimal Sensor Placement for Sparse Inverse Problems with Random Noise
15:15-15:45 | Coffee Break
15:45-16:45 | Richard-von-Mises-Lecture by Aad van der Vaart: On the Bernstein-von Mises theorem
16:45-17:00 | Discussions
Abstracts
On the Bernstein-von Mises theorem
Aad van der Vaart (TU Delft)
The Bayesian statistical approach consists of updating a prior probability distribution of a set of unknown parameters to a posterior distribution by reweighting the prior with the likelihood of the observables. We are interested in this posterior distribution from the non-Bayesian point of view that the observables follow some fixed probability distribution. In this setting the classical Bernstein-von Mises theorem says that the posterior distribution of a parameter of a smoothly parametrised statistical model can be approximated by a certain normal distribution. A definitive mathematical formulation of the theorem was obtained in the 1960s and 1970s by Lucien Le Cam, but the theorem goes back in some form to Laplace, almost contemporary with the conception of Bayesian statistical inference.

Besides making some historical remarks (also about the name of the theorem), we review the role of the theorem in justifying Bayesian uncertainty quantification in the non-Bayesian, general statistical setup. This consists of using the spread of the posterior distribution (often in the form of so-called "credible sets") as a measure of statistical uncertainty. We then discuss extensions of the theorem within the modern setup of an infinite-dimensional parameter (e.g. a regression function, a density function, or the initial value or potential function in a noisy inverse problem), and its significance for uncertainty quantification in that setup. For a genuinely infinite-dimensional parameter, equipped with a strong norm, the naive analogue of the theorem is known to fail, and credible sets must balance the centering and spread of the posterior distribution (bias and root-variance). However, versions of the Bernstein-von Mises theorem apply to smooth functionals of the parameter. The appropriate assertion can be understood from the point of view of semiparametric information calculus and efficiency.
We give some examples, including some from noisy estimation of PDEs, and some involving the famous Dirichlet prior on a probability distribution, and discuss the (in)appropriateness of these results as a justification of uncertainty quantification of infinite-dimensional Bayesian methods. The talk will be addressed to a general mathematical audience.
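For orientation, the classical parametric form of the theorem mentioned in the abstract can be sketched as follows. The notation (i.i.d. data, Fisher information, an efficient estimator) is assumed here for illustration and is not taken from the abstract itself:

```latex
% Bernstein--von Mises theorem, classical parametric case (sketch, notation assumed):
% data X_1, \dots, X_n drawn i.i.d. from P_{\theta_0} in a smoothly parametrised
% model, a prior with positive density near \theta_0, and an efficient estimator
% \hat\theta_n (e.g. the maximum likelihood estimator). Then the posterior is
% approximated in total variation by a normal distribution:
\[
  \bigl\| \Pi\bigl(\cdot \mid X_1, \dots, X_n\bigr)
    - N\bigl(\hat\theta_n,\; n^{-1} I_{\theta_0}^{-1}\bigr) \bigr\|_{\mathrm{TV}}
  \;\longrightarrow\; 0
  \quad \text{in } P_{\theta_0}\text{-probability},
\]
% where I_{\theta_0} denotes the Fisher information at \theta_0. As a consequence,
% posterior credible sets asymptotically coincide with frequentist confidence sets
% of the same level, which is the justification of Bayesian uncertainty
% quantification reviewed in the talk.
```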
Towards Optimal Sensor Placement for Sparse Inverse Problems with Random Noise
Daniel Walter (Humboldt-Universität zu Berlin)
In this talk, we study the identification of a linear combination of point sources from a finite number of measurements contaminated by random noise. Problems of this form play an important role, e.g., in the context of airborne contamination spreading. Our approach relies on two main ingredients: first, a convex but nonsmooth Tikhonov point estimator over the space of Radon measures and, second, a suitable mean-squared error based on the Hellinger-Kantorovich distance of the estimator to the ground truth. Assuming standard non-degenerate source conditions and applying careful linearization arguments, we derive a computable upper bound on the latter. On the one hand, this allows us to obtain asymptotic convergence results for the mean-squared error of the estimator in the small-variance regime. On the other hand, the upper bound explicitly depends on the measurement setup and can thus be used as a design functional for the optimization of the measurement setup. We present extensive numerical experiments demonstrating the sharpness of these estimates as well as the necessity and impact of an optimal choice of the measurement setup.
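As an illustration of the first ingredient, a sparse Tikhonov point estimator over the space of Radon measures typically takes the following form. This is a sketch with assumed generic notation (the symbols K, y^δ, and α do not appear in the abstract):

```latex
% Nonsmooth Tikhonov estimator over the space M(\Omega) of Radon measures
% (sketch, notation assumed): K maps a measure to its finitely many noisy sensor
% readings y^\delta, \alpha > 0 is the regularization parameter, and
% \|\mu\|_{M(\Omega)} is the total variation norm, whose nonsmoothness promotes
% sparse minimizers, i.e. finite linear combinations of Dirac point sources.
\[
  \hat\mu \in \operatorname*{arg\,min}_{\mu \in M(\Omega)}
  \; \frac{1}{2}\, \bigl\| K\mu - y^{\delta} \bigr\|_{2}^{2}
  \;+\; \alpha\, \|\mu\|_{M(\Omega)}
\]
```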
Link to a more detailed abstract