
Humboldt-Universität zu Berlin - Mathematisch-Naturwissenschaftliche Fakultät - Institut für Mathematik

Forschungsseminar Mathematische Statistik

For the Statistics Section

G. Blanchard, M. Reiß, V. Spokoiny, W. Härdle



Weierstrass-Institut für Angewandte Analysis und Stochastik
Mohrenstrasse 39
10117 Berlin



Wednesdays, 10:00-12:30



18 April 2018
25 April 2018
Nicolai Baldin (Cambridge)
Optimal link prediction with matrix logistic regression
Abstract: In this talk, we will consider the problem of link prediction, based on partial observation of a large network, and on side information associated to its vertices. The generative model is formulated as a matrix logistic regression. The performance of the model is analysed in a high-dimensional regime under a structural assumption. The minimax rate for the Frobenius-norm risk is established and a combinatorial estimator based on the penalised maximum likelihood approach is shown to achieve it. Furthermore, it is shown that this rate cannot be attained by any (randomised) algorithm computable in polynomial time under a computational complexity assumption. (joint work with Q. Berthet)
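The generative model above can be sketched numerically; the following is a minimal illustration of matrix logistic regression, where the low-rank structural assumption, the parameter Theta, and all sizes are illustrative choices, not taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 30, 5                      # n vertices, d-dimensional side information
X = rng.normal(size=(n, d))       # covariates attached to the vertices
Theta = np.zeros((d, d))          # hypothetical parameter matrix
Theta[:2, :2] = 1.0               # structural assumption: low rank (rank 2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Edge probabilities P(A_ij = 1) = sigmoid(x_i^T Theta x_j)
P = sigmoid(X @ Theta @ X.T)
A = (rng.uniform(size=(n, n)) < P).astype(int)   # sampled adjacency matrix
```

An estimator in this setting would recover Theta from partially observed entries of A, e.g. by penalised maximum likelihood as in the talk.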
02 May 2018
09 May 2018
Gitta Kutyniok (TU Berlin)
Optimal Approximation with Sparsely Connected Deep Neural Networks
Abstract: Despite the outstanding success of deep neural networks in real-world applications, most of the related research is empirically driven and a mathematical foundation is almost completely missing. One central task of a neural network is to approximate a function, which for instance encodes a classification task. In this talk, we will be concerned with the question of how well a function can be approximated by a neural network with sparse connectivity. Using methods from approximation theory and applied harmonic analysis, we will derive a fundamental lower bound on the sparsity of a neural network. By explicitly constructing neural networks based on certain representation systems, so-called $\alpha$-shearlets, we will then demonstrate that this lower bound can in fact be attained. Finally, we present numerical experiments, which surprisingly show that already the standard backpropagation algorithm generates deep neural networks obeying those optimal approximation rates. This is joint work with H. Bölcskei (ETH Zurich), P. Grohs (Uni Vienna), and P. Petersen (TU Berlin).
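A toy illustration of the approximation-rate theme, not of the shearlet construction from the talk: a one-hidden-layer ReLU network with O(M) nonzero weights can exactly represent a continuous piecewise-linear function with M interior knots, so interpolating a smooth target on a uniform grid already exhibits a sup-error rate of O(M^-2). The target f(x) = x^2 and all grid sizes are illustrative:

```python
import numpy as np

# f(x) = x^2 on [0, 1], approximated by piecewise-linear interpolation with
# M interior knots; a one-hidden-layer ReLU network with O(M) nonzero weights
# represents exactly such a function.
f = lambda x: x ** 2

def sup_error(M):
    knots = np.linspace(0.0, 1.0, M + 2)        # M interior knots + endpoints
    x = np.linspace(0.0, 1.0, 10_001)
    approx = np.interp(x, knots, f(knots))      # piecewise-linear interpolant
    return np.max(np.abs(approx - f(x)))

errs = [sup_error(M) for M in (4, 8, 16)]
# Doubling M should cut the sup error by roughly a factor of 4 (rate M^-2).
```

Doubling the number of nonzero weights cuts the error by about four, the rate expected for twice-differentiable targets.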
16 May 2018
Moritz Jirak and Martin Wahl (TU Braunschweig / HU Berlin)
Relative perturbation bounds with applications to empirical covariance operators
Abstract: A problem of fundamental importance in quantitative science is to estimate how a perturbation of a covariance operator affects the corresponding eigenvalues and eigenvectors. Due to its importance, this problem has been heavily investigated and discussed in the literature. In this talk, we present general perturbation expansions for a class of symmetric, compact operators. Applied to empirical covariance operators, these expansions allow us to describe how perturbations carry over to eigenvalues and eigenvectors in terms of necessary and sufficient conditions, characterising the perturbation transition. We demonstrate the usefulness of these expansions by discussing PCA and FPCA in various setups, including more exotic cases where the data is assumed to have high persistence in the dependence structure or exhibits (very) heavy tails. This talk is jointly given by Moritz Jirak and Martin Wahl, and divided into two parts.
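A small simulation in the spirit of the abstract (the polynomially decaying spectrum and the sample size are illustrative choices): sampling noise perturbs the covariance operator, and the relative eigenvalue errors can be inspected directly.

```python
import numpy as np

rng = np.random.default_rng(1)

d, n = 20, 500
eigvals_true = 1.0 / np.arange(1, d + 1) ** 2     # polynomially decaying spectrum
Sigma = np.diag(eigvals_true)                     # true covariance operator (matrix)

X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
Sigma_hat = X.T @ X / n                           # empirical covariance

# Relative (rather than absolute) eigenvalue errors: perturbation theory
# suggests leading eigenvalues are recovered well in relative terms.
lam_hat = np.sort(np.linalg.eigvalsh(Sigma_hat))[::-1]
rel_err = np.abs(lam_hat - eigvals_true) / eigvals_true
```

The relative error of the leading eigenvalue shrinks at the usual n^{-1/2} rate, while trailing eigenvalues are much harder in relative terms.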
23 May 2018
Randolf Altmeyer and Markus Reiß (HU Berlin)
A nonparametric estimation problem for linear SPDEs
Abstract: It is well known that parameters in the drift part of a stochastic ordinary differential equation, observed continuously on a time interval [0,T], are generally only identifiable if either T→∞, the driving noise becomes small, or a sequence of independent samples is observed. On the other hand, in the case of a linear stochastic partial differential equation dX(t,x) = ϑ AX(t,x)dt + dW(t,x), x ∈ Ω ⊂ R^d, for a nonpositive self-adjoint operator A and an unknown parameter ϑ > 0, [1] showed that consistent estimation of ϑ is also possible in finite time T < ∞ if ⟨X(t,·), e_k⟩ is observed continuously on [0,T] for k = 1,...,N as N→∞, where the test functions e_k are the eigenfunctions of A. Our goal is to study this estimation problem for general test functions e_k. Using an MLE-inspired estimator, we extend the results of [1] and give a precise understanding of how the estimation error depends on the interplay between A and the test functions e_k. In particular, we show that more localized test functions improve the estimation considerably. It turns out that one local measurement ⟨X(t,·), u_h⟩ is already sufficient for identifying ϑ, as long as h→0, where u_h(x) = h^{-d/2} u(x/h) for a smooth kernel u. Central limit theorems are provided as well. We further show that the same techniques extend to the more difficult nonparametric estimation problem, when ϑ is space-dependent. Indeed, we can show that ϑ(x_0) at x_0 ∈ Ω is identifiable using only local information. The rate of convergence, however, is affected by the bias, which is non-local and difficult to analyse, even when T→∞. Possible solutions are discussed, along with questions of efficiency.
References: [1] M. Huebner and B. L. Rozovskii. On asymptotic properties of maximum likelihood estimators for parabolic stochastic PDE's. Probability Theory and Related Fields 103 (1995), 143-163.
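The scaling u_h(x) = h^{-d/2} u(x/h) is exactly the one that keeps the L^2 norm of the local test function fixed as h → 0, which is what makes a single local measurement informative. A quick numerical check in d = 1, where a Gaussian bump is an illustrative stand-in for the smooth kernel u:

```python
import numpy as np

# The rescaling u_h(x) = h^{-1/2} u(x/h) (d = 1) preserves the L2 norm of u.
def u(x):
    return np.exp(-x ** 2)

x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]

def l2_norm_sq(h):
    u_h = h ** -0.5 * u(x / h)
    return np.sum(u_h ** 2) * dx    # Riemann sum for the integral of u_h^2

norms = [l2_norm_sq(h) for h in (1.0, 0.5, 0.1)]
# All three values agree with the exact integral sqrt(pi/2).
```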
30 May 2018
Florian Schäfer (Caltech)
Compression, inversion, and approximate PCA of dense kernel matrices at near-linear computational complexity
Abstract: Many popular methods in machine learning, statistics, and uncertainty quantification rely on priors given by smooth Gaussian processes, like those obtained from the Matérn covariance functions. Furthermore, many physical systems are described in terms of elliptic partial differential equations. Therefore, implicitly or explicitly, numerical simulation of these systems requires an efficient numerical representation of the corresponding Green's operator. The resulting kernel matrices are typically dense, leading to (often prohibitive) O(N^2) or O(N^3) computational complexity.
In this work, we prove rigorously that the dense N × N kernel matrices obtained from elliptic boundary value problems and measurement points distributed approximately uniformly in a d-dimensional domain can be Cholesky factorised to accuracy ε in computational complexity O(N log^2(N) log^{2d}(N/ε)) in time and O(N log(N) log^d(N/ε)) in space. For the closely related Matérn covariances we observe very good results in practice, even for parameters corresponding to non-integer order equations. As a byproduct, we obtain a sparse PCA with near-optimal low-rank approximation property and a fast solver for elliptic PDEs. We emphasise that our algorithm requires no analytic expression for the covariance function.
Our work is inspired by the probabilistic interpretation of the Cholesky factorisation, the screening effect in spatial statistics, and recent results in numerical homogenisation.
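For scale, here is the dense baseline the result improves on: a kernel matrix assembled from approximately uniform measurement points and factorised with a dense O(N^3) Cholesky. The exponential kernel (a Matérn covariance with smoothness 1/2), the length scale, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Measurement points roughly uniform in [0, 1]; exponential (Matern-1/2) kernel.
N = 200
pts = np.sort(rng.uniform(size=N))
K = np.exp(-np.abs(pts[:, None] - pts[None, :]) / 0.2)

# Dense Cholesky factorisation: O(N^3) work, the cost the near-linear
# algorithm of the talk avoids. A tiny jitter guards positive definiteness.
L = np.linalg.cholesky(K + 1e-10 * np.eye(N))
recon_err = np.max(np.abs(L @ L.T - K))
```

The factor L here is dense; the point of the talk is that, in the right elimination ordering, an accurate sparse factor exists and can be computed in near-linear time.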
06 June 2018
Igor Cialenco (Chicago)
Parameter estimation problems for parabolic SPDEs
Abstract: In the first part of the talk we will discuss the parameter estimation problem, using a Bayesian approach, for the drift coefficient of some linear (parabolic) SPDEs driven by a multiplicative noise of special structure. We assume that one path of the first N Fourier modes of the solution is continuously observed over a finite time interval, and we derive Bayesian-type estimators for the drift coefficient. As customary in Bayesian statistics, we prove a Bernstein-von Mises theorem for the posterior density, and consequently we derive some asymptotic properties of the proposed estimators as N goes to infinity. In the second part of the talk we will study parameter estimation problems for discretely sampled SPDEs. We will discuss some general results on the derivation of consistent and asymptotically normal estimators based on computing the p-variations of stochastic processes and their smooth perturbations, which are then conveniently applied to SPDEs. Both the drift and the volatility coefficients are estimated using two sampling schemes: observing the solution at a fixed time on a discrete spatial grid, and at a fixed space point at discrete time instances over a finite interval. The theoretical results will be illustrated via numerical examples.
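The p-variation idea in its simplest form, with p = 2 and a scaled Brownian motion standing in for the solution observed at a fixed space point; sigma and the sampling grid are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# Volatility estimation from discrete samples via realised quadratic variation
# (p = 2): for X_t = sigma * W_t, the sum of squared increments over [0, T]
# converges to sigma^2 * T as the time grid is refined.
sigma_true, T, n = 1.5, 1.0, 100_000
dt = T / n
increments = sigma_true * np.sqrt(dt) * rng.normal(size=n)

qv = np.sum(increments ** 2)        # realised quadratic variation
sigma_hat = np.sqrt(qv / T)         # consistent estimator of sigma
```

The estimator is asymptotically normal with error of order n^{-1/2}, which is the pattern the talk's SPDE results generalise.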
13 June 2018
Alain Celisse (Lille)
Early stopping rule and discrepancy principle in reproducing kernel Hilbert spaces
Abstract: The main focus of this work is the nonparametric estimation of a regression function by means of reproducing kernels and several iterative learning algorithms such as gradient descent, spectral cut-off, and Tikhonov regularization. First, we exploit the general framework of filter estimators to provide a unified analysis of these different algorithms. For Tikhonov regularization, we discuss the influence of the parametrization on the interaction between the condition number of the Gram matrix and the number of iterations. More generally, we also discuss existing links between the qualification assumption and the filter estimators used. Second, we introduce an early stopping rule derived from the so-called discrepancy principle. Its behavior is compared with that of other existing stopping rules and analyzed through the dependence of the empirical risk on influential parameters (Gram matrix eigenvalues, cumulative step size, initialization). An oracle-type inequality is derived to quantify the finite-sample performance of the proposed stopping rule. The practical performance of the procedure is also assessed empirically in several simulation experiments.
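A minimal sketch of the discrepancy principle for one of the filters mentioned above, gradient descent (Landweber iteration) for kernel least squares; the Gaussian kernel, bandwidth, noise level, and stopping threshold are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)

# Gradient descent for kernel least squares, stopped by the discrepancy
# principle: iterate until the empirical residual falls to the (here known)
# noise level sigma.
n, sigma = 100, 0.3
x = np.linspace(0.0, 1.0, n)
f_true = np.sin(2 * np.pi * x)
y = f_true + sigma * rng.normal(size=n)

K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.1 ** 2))  # Gaussian Gram matrix
step = 1.0 / np.linalg.eigvalsh(K).max()                      # safe step size

alpha = np.zeros(n)
stopped_at = None
for t in range(20_000):
    resid = y - K @ alpha
    if np.sqrt(np.mean(resid ** 2)) <= sigma:   # discrepancy principle
        stopped_at = t
        break
    alpha += step * resid                       # Landweber update

f_hat = K @ alpha                               # early-stopped estimator
```

Stopping when the residual reaches the noise level prevents the later iterations from fitting noise; the talk's oracle inequality quantifies the risk of exactly this kind of rule.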
20 June 2018
Zuoqiang Shi (Tsinghua University, Beijing, China)
Low dimensional manifold model for image processing
Abstract: In this talk, I will introduce a novel low-dimensional manifold model for image processing problems. This model is based on the observation that for many natural images, the patch manifold usually has a low-dimensional structure. We then use the dimension of the patch manifold as a regularizer to recover the original image. Using formulas from differential geometry, this problem is reduced to solving a Laplace-Beltrami equation on the manifold, which is solved by the point integral method. Numerical tests show that this method gives very good results in image inpainting, denoising, and super-resolution problems. This is joint work with Stanley Osher and Wei Zhu.
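The key observation, that patches of a structured image concentrate near a low-dimensional set, can be checked in a few lines. The synthetic image and the 99% energy threshold are illustrative, and PCA is only a linear proxy for the manifold dimension used in the talk:

```python
import numpy as np

# Patches of a smooth synthetic "image" concentrate near a low-dimensional
# set: here the image is a sum of one sinusoid per axis, so every 8x8 patch
# lies in a 4-dimensional linear subspace of the 64-dimensional patch space.
size, p = 64, 8
xx, yy = np.meshgrid(np.arange(size), np.arange(size))
img = np.sin(2 * np.pi * xx / size) + np.cos(2 * np.pi * yy / size)

patches = np.array([
    img[i:i + p, j:j + p].ravel()
    for i in range(size - p)
    for j in range(size - p)
])
patches -= patches.mean(axis=0)

# Spectrum of the patch set: only a few directions carry energy.
svals = np.linalg.svd(patches, compute_uv=False)
energy = np.cumsum(svals ** 2) / np.sum(svals ** 2)
eff_dim = int(np.searchsorted(energy, 0.99) + 1)   # directions for 99% energy
```

For natural images the patch set is curved rather than linear, which is why the model regularises with the manifold dimension instead of a PCA rank.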
27 June 2018
Stanislav Nagy (Charles University Prague)
04 July 2018
11 July 2018
No seminar (IRTG Summer Camp)
18 July 2018
Jan van Waaij (HU Berlin)

All interested are cordially invited.

For inquiries, please contact:

Ms Andrea Fiebig

Email: fiebig@mathematik.hu-berlin.de
Phone: +49-30-2093-5860
Fax: +49-30-2093-5848
Humboldt-Universität zu Berlin
Institut für Mathematik
Unter den Linden 6
10099 Berlin, Germany