
Humboldt-Universität zu Berlin - Mathematisch-Naturwissenschaftliche Fakultät - Institut für Mathematik

Forschungsseminar Mathematische Statistik

For the Statistics Section

S. Greven, W. Härdle, M. Reiß, V. Spokoiny



Weierstrass-Institut für Angewandte Analysis und Stochastik
Mohrenstrasse 39
10117 Berlin



Wednesdays, 10:00 - 12:30



16 October 2019
Alexei Onatski (University of Cambridge)
Spurious Factor Analysis
Abstract: This paper draws parallels between the Principal Components Analysis of factorless high-dimensional nonstationary data and the classical spurious regression. We show that a few of the principal components of such data absorb nearly all the data variation. The corresponding scree plot suggests that the data contain a few factors, which is corroborated by the standard panel information criteria. Furthermore, the Dickey-Fuller tests of the unit root hypothesis applied to the estimated 'idiosyncratic terms' often reject, creating an impression that a few factors are responsible for most of the non-stationarity in the data. We warn empirical researchers of these peculiar effects and suggest always comparing the analysis in levels with that in differences.
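The central effect can be reproduced in a few lines. The following hedged toy simulation (not the authors' code; all parameter values are illustrative) generates factorless data as independent random walks and compares the variance shares absorbed by the leading principal components in levels versus in first differences.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 50  # time periods, cross-section size

# Factorless nonstationary data: N independent random walks, no common factors.
X = np.cumsum(rng.standard_normal((T, N)), axis=0)

def pc_variance_shares(data, k=5):
    """Share of total variance absorbed by the leading k principal components."""
    Z = data - data.mean(axis=0)
    eigvals = np.linalg.svd(Z, compute_uv=False) ** 2
    return eigvals[:k] / eigvals.sum()

shares_levels = pc_variance_shares(X)            # PCA in levels
shares_diffs = pc_variance_shares(np.diff(X, axis=0))  # PCA in first differences

print("levels:     ", np.round(shares_levels, 3))
print("differences:", np.round(shares_diffs, 3))
```

In levels, the first component typically absorbs a large fraction of the variation despite the absence of any factor, while in differences the shares are flat, which is exactly the levels-versus-differences comparison the abstract recommends.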
23. Oktober 2019
Vladimir Spokoiny (WIAS und HU Berlin)
Bayesian inference for nonlinear inverse problems
Abstract: We discuss the properties of the posterior for a wide class of statistical models, including nonlinear generalised regression and deep neural networks, nonlinear inverse problems, nonparametric diffusion, error-in-operator and IV models. The new calming approach helps to treat all such problems in a unified manner and to obtain tight finite-sample results about Gaussian approximation of the posterior with an explicit error bound in terms of the so-called effective dimension.
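The calming approach itself is beyond a short sketch, but the classical Laplace (Gaussian) approximation of a posterior, which results of this kind quantify, can be illustrated on a toy one-dimensional nonlinear regression. The model, grid, and sample size below are illustrative assumptions, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0.0, 1.0, n)
theta_true, sigma = 0.7, 0.1
y = np.exp(theta_true * x) + sigma * rng.standard_normal(n)

# Log-likelihood of the nonlinear model y = exp(theta * x) + noise on a grid (flat prior).
grid = np.linspace(0.4, 1.0, 2001)
h = grid[1] - grid[0]
loglik = np.array([-0.5 * np.sum((y - np.exp(t * x)) ** 2) / sigma**2 for t in grid])

# Normalised posterior density on the grid (Riemann-sum normalisation).
post = np.exp(loglik - loglik.max())
post /= post.sum() * h

# Gaussian (Laplace) approximation: centred at the MAP, variance from the curvature.
i_map = int(np.argmax(loglik))
t_map = grid[i_map]
curv = -(loglik[i_map + 1] - 2 * loglik[i_map] + loglik[i_map - 1]) / h**2
gauss = np.exp(-0.5 * curv * (grid - t_map) ** 2)
gauss /= gauss.sum() * h

# Total-variation distance between the posterior and its Gaussian approximation.
tv_error = 0.5 * np.sum(np.abs(post - gauss)) * h
print(f"MAP = {t_map:.3f}, TV error of Gaussian approximation = {tv_error:.4f}")
```

With a moderately large sample the posterior is close to Gaussian; finite-sample results of the kind discussed in the talk bound such approximation errors explicitly.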
30 October 2019
6 November 2019
Charles Manski (Northwestern University, USA)
** This is the Hermann Otto Hirschfeld Lecture 2019 **
Patient Care under Uncertainty
Abstract: https://press.princeton.edu/titles/30223.html
13 November 2019
Merle Behr (University of California, Berkeley)
Learning compositional structures
Abstract: Many data problems, in particular in biogenetics, come with a highly complex underlying structure, which often makes it difficult to extract interpretable information. In this talk we demonstrate that such complex structures are often well approximated by a composition of a few simple parts, which provides very descriptive insight into the underlying data-generating process. We demonstrate this with two examples.
In the first example, the single components are finite-alphabet vectors (e.g., binary components), which encode some discrete information. For instance, in genetics a binary vector of length n can encode whether or not a mutation (e.g., a SNP) is present at location i = 1,…,n in the genome. At the population level, studying genetic variation is often highly complex, as various groups of mutations are present simultaneously. However, in many settings a population may be well approximated by a composition of a few dominant groups. Examples are Evolve&Resequence experiments, where the external supply of genetic variation is limited and thus, over time, only a few haplotypes survive. Similarly, in a cancer tumor, often only a few competing groups of cancer cells (clones) come out on top.
In the second example, the single components relate to separate branches of a tree structure. Tree structures, showing hierarchical relationships between samples, are ubiquitous in genomic and biomedical sciences. A common question in many studies is whether there is an association between a response variable and the latent group structure represented by the tree. Such a relation can be highly complex in general. However, it is often well approximated by a simple composition of relations associated with a few branches of the tree.
For both of these examples, we first study theoretical aspects of the underlying compositional structure, such as identifiability of single components and optimal statistical procedures under a probabilistic data model. Based on this, we derive insights into practical aspects of the problem, namely how to actually recover such components from data.
20 November 2019
Nikita Zhivotovskiy (Google Zürich)
Robust covariance estimation for vectors with bounded kurtosis
Abstract: Let X be a centered random vector and assume that we want to estimate its covariance matrix. In this talk I will discuss the following result: if the random vector X satisfies a bounded-kurtosis assumption, there is a covariance matrix estimator that, given a sequence of n independent random vectors distributed according to X, exhibits the optimal performance one would expect had X been a Gaussian vector. The procedure also improves the current state of the art regarding high-probability bounds in the sub-Gaussian case (sharp results were previously known only in expectation or with constant probability). In both scenarios the new bound does not depend explicitly on the dimension, but rather on the effective rank of the covariance matrix of X. The talk is based on joint work with S. Mendelson, "Robust covariance estimation under L4-L2 norm equivalence", to appear in the Annals of Statistics, 2019.
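The estimator from the talk is not reproduced here; as a hedged illustration of the robust-covariance theme, the sketch below compares the plain sample covariance with a simple element-wise median-of-means covariance on heavy-tailed (Student-t) data. The block structure, parameters, and the estimator itself are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, k = 1000, 5, 10          # samples, dimension, number of blocks

# Heavy-tailed data: Student-t with df = 5 has finite kurtosis,
# and its true covariance matrix is df / (df - 2) times the identity.
df = 5.0
X = rng.standard_t(df, size=(n, d))
true_cov = df / (df - 2.0) * np.eye(d)

def mom_covariance(X, k):
    """Element-wise median-of-means covariance: split the sample into k
    blocks and take the entrywise median of the block second-moment
    matrices (the data are generated with mean zero, so no centering)."""
    blocks = np.array_split(X, k)
    covs = np.stack([b.T @ b / len(b) for b in blocks])
    return np.median(covs, axis=0)

err_sample = np.linalg.norm(X.T @ X / n - true_cov, ord=2)   # operator norm
err_mom = np.linalg.norm(mom_covariance(X, k) - true_cov, ord=2)
print(f"sample covariance error: {err_sample:.3f}, median-of-means error: {err_mom:.3f}")
```

Median-of-means is only one of several robustification devices; the talk's estimator achieves Gaussian-type high-probability guarantees under the L4-L2 equivalence, which this toy comparison does not attempt to certify.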
27 November 2019
Alain Celisse (U Lille)
4 December 2019
Nils Bertschinger (U Frankfurt)
Systemic Greeks: Measuring risk in financial networks
Abstract: Since the latest financial crisis, the idea of systemic risk has received considerable interest. In particular, contagion effects arising from cross-holdings between interconnected financial firms have been studied extensively. Drawing inspiration from the field of complex networks, these approaches have largely ignored models and theories for the credit risk of individual firms. Here, we note that recent network valuation models extend the seminal structural risk model of Merton (1974). Furthermore, we formally compute sensitivities to various risk factors -- commonly known as Greeks -- in a network context. In the end, we present some numerical illustrations and discuss possible implications for measuring systemic risk as well as for insurance pricing.
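The single-firm building block, Merton's (1974) structural model, values equity as a call option on the firm's assets; its sensitivity to the asset value is the corresponding Greek. The sketch below implements only this textbook single-firm case with illustrative parameter values; the network extensions discussed in the talk are not included.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_equity(V, D, r, sigma, T):
    """Merton (1974): equity is a European call on firm assets V with strike
    equal to the face value of debt D maturing at T (Black-Scholes formula).
    Returns (equity value, delta), where delta = d(equity)/d(assets)."""
    d1 = (math.log(V / D) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    equity = V * norm_cdf(d1) - D * math.exp(-r * T) * norm_cdf(d2)
    delta = norm_cdf(d1)  # the Greek: sensitivity of equity to asset value
    return equity, delta

eq, delta = merton_equity(V=100.0, D=80.0, r=0.02, sigma=0.25, T=1.0)
print(f"equity = {eq:.2f}, delta = {delta:.3f}")
```

In a network context, each firm's asset value in turn depends on claims on other firms, so such sensitivities must be propagated through the cross-holding structure rather than computed firm by firm.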
11 December 2019
18 December 2019
8 January 2020
Dominik Liebl (Universität Bonn)
15 January 2020
Sven Wang (U Cambridge)
22 January 2020
Jorge Mateu (Universitat Jaume I)
29 January 2020
Nadja Klein (HU Berlin)
5 February 2020
Tim Sullivan (FU Berlin)
12 February 2020
Alexandra Carpentier (Universität Magdeburg)

 All interested parties are cordially invited.

For inquiries, please contact:

Ms Andrea Fiebig

Email: fiebig@mathematik.hu-berlin.de
Phone: +49-30-2093-5860
Fax: +49-30-2093-5848
Humboldt-Universität zu Berlin
Institut für Mathematik
Unter den Linden 6
10099 Berlin, Germany