Humboldt-Universität zu Berlin - Mathematisch-Naturwissenschaftliche Fakultät - Institut für Mathematik

Forschungsseminar Mathematische Statistik

For the field of Statistics


A. Carpentier, S. Greven, W. Härdle, M. Reiß, V. Spokoiny

 

Location

Weierstrass-Institut für Angewandte Analysis und Stochastik
Erhard-Schmidt-Raum
Mohrenstrasse 39
10117 Berlin

 

Time

Wednesdays, 10:00-12:00


Program

 

Please note!
The seminar will be held in hybrid form and streamed via Zoom. According to the current hygiene recommendations, our lecture room (ESH) has a capacity of only 16 people. If you intend to attend some of the talks in person, you must register for our mailing list with Andrea Fiebig (fiebig@math.hu-berlin.de). Prior to each talk, a Doodle poll will be circulated by e-mail; signing in there is mandatory for attending in person. If 16 guests have already registered, please follow the streamed talk instead via the Zoom link (available on request from fiebig@math.hu-berlin.de).
The so-called "3G rule" applies at the Weierstrass Institute.
 
19 October 2022
Otmar Cronie (Chalmers University of Technology & University of Gothenburg) (10:00-11:00)
Point Process Learning: A Cross-validation-based Approach to Statistics for Point Processes
David Frazier (Monash University, Melbourne, Australia) (approx. 11:00-12:00)
Guaranteed Robustness via Semi-Modular Posterior Inference
Abstract: Even in relatively simple settings, model misspecification can cause Bayesian inference methods to fail spectacularly. In situations where the underlying model is built by combining different modules, an approach to guard against misspecification is to employ cutting feedback methods. These methods modify conventional Bayesian posterior inference algorithms by artificially limiting the information flows between the (potentially) misspecified and correctly specified modules. By artificially limiting the flow of information when updating our prior beliefs, we essentially "cut" the link between these modules, and ultimately produce a posterior that differs from the exact posterior. However, it is generally unknown when one should prefer this "cut posterior" over the exact posterior. Rather than choosing a single posterior on which to base our inferences, we propose a new Bayesian method that combines both posteriors in such a way that we can guard against misspecification, and decrease posterior uncertainty. We derive easily verifiable conditions under which this new posterior produces inferences that are guaranteed to be more accurate than using either posterior by itself. We demonstrate this new method in a host of applications.
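To make the cutting-feedback idea in the abstract more concrete, the following minimal Python sketch contrasts a cut posterior with the exact posterior in a hypothetical two-module Gaussian model. It is purely illustrative and not the speaker's method; the model, the parameter names (phi, theta, tau) and all numbers are invented for the toy example, and conjugacy is used so that both posteriors can be sampled directly.

import numpy as np

rng = np.random.default_rng(1)

# hypothetical two-module toy model (all names and numbers are invented):
#   module 1:  Z_i ~ N(phi, 1)                           -- assumed well specified
#   module 2:  Y_j ~ N(phi + theta, 1),  theta ~ N(0, tau^2)
# the Y data are generated with an extra bias, so module 2 is misspecified
phi_true, theta_true, bias = 1.0, 0.5, 2.0
n_z, n_y, tau = 50, 50, 0.2
Z = rng.normal(phi_true, 1.0, n_z)
Y = rng.normal(phi_true + theta_true + bias, 1.0, n_y)
zbar, ybar = Z.mean(), Y.mean()
n_draws = 10_000

# conditional posterior variance of theta given (phi, Y): normal prior, normal likelihood
var_theta = 1.0 / (1.0 / tau**2 + n_y)

# --- cut posterior: phi is updated by Z only, then theta | phi, Y -------------
phi_cut = rng.normal(zbar, np.sqrt(1.0 / n_z), n_draws)
theta_cut = rng.normal(var_theta * n_y * (ybar - phi_cut), np.sqrt(var_theta))

# --- exact posterior (flat prior on phi): Y also feeds back into phi ----------
# marginally ybar | phi ~ N(phi, tau^2 + 1/n_y), so precision weighting gives
prec_phi = n_z + 1.0 / (tau**2 + 1.0 / n_y)
mean_phi = (n_z * zbar + ybar / (tau**2 + 1.0 / n_y)) / prec_phi
phi_full = rng.normal(mean_phi, np.sqrt(1.0 / prec_phi), n_draws)
theta_full = rng.normal(var_theta * n_y * (ybar - phi_full), np.sqrt(var_theta))

print(f"true phi = {phi_true:.2f}")
print(f"cut   posterior for phi: mean {phi_cut.mean():.2f}, sd {phi_cut.std():.2f}")
print(f"exact posterior for phi: mean {phi_full.mean():.2f}, sd {phi_full.std():.2f}")
print(f"cut / exact posterior means for theta: {theta_cut.mean():.2f} / {theta_full.mean():.2f}")

In this construction the cut posterior for phi stays centred near the estimate from the well-specified Z module, while the exact posterior is pulled towards the biased Y module; when and how to trade off the two is precisely the question the combined posterior proposed in the talk is designed to answer.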
26 October 2022
N.N.  
02 November 2022
Johannes Schmidt-Hieber (University of Twente) 
tba
09 November 2022
Claudia Schillings (FU Berlin)  
tba
16 November 2022
Aila Särkkä (Chalmers University of Technology and University of Gothenburg) 
Anisotropy analysis and modelling of spatial point patterns
23 November 2022
Alexey Kroshnin (WIAS Berlin)
tba
30 November 2022
Anatoly Juditsky (Université Grenoble Alpes)
tba
07 December 2022
Matthias Vetter (Universität Kiel)
tba
14 December 2022
N.N.
04 January 2023
Franz Besold (WIAS Berlin)
Adaptive Weights Community Detection
Abstract: Due to the technological progress of the last decades, Community Detection has become a major topic in machine learning. However, there is still a huge gap between practical and theoretical results, as theoretically optimal procedures often lack a feasible implementation and vice versa. This paper aims to close this gap and presents a novel algorithm that is both numerically and statistically efficient. Our procedure uses a test of homogeneity to compute adaptive weights describing local communities. The approach was inspired by the Adaptive Weights Community Detection (AWCD) algorithm by Adamyan et al. (2019). This algorithm delivered some promising results on artificial and real-life data, but our theoretical analysis reveals its performance to be suboptimal on a stochastic block model. In particular, the involved estimators are biased and the procedure does not work for sparse graphs. We propose significant modifications, addressing both shortcomings and achieving a nearly optimal rate of strong consistency on the stochastic block model. Our theoretical results are illustrated and validated by numerical experiments.
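The abstract's key ingredient is a pairwise test of homogeneity that produces adaptive weights. The toy Python sketch below mimics that idea on a small two-block stochastic block model; it is not the authors' AWCD procedure. The two-proportion z-statistic, the critical value 1.64, the spectral read-off of the communities, and all model parameters (n, p_in, p_out) are stand-ins chosen only for illustration.

import numpy as np

rng = np.random.default_rng(0)

# simulate a small 2-block stochastic block model (SBM)
n, p_in, p_out = 60, 0.6, 0.1
labels = np.repeat([0, 1], n // 2)
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = np.triu(rng.binomial(1, P), 1)
A = A + A.T                              # symmetric adjacency, no self-loops

def z_stat(A, i, j):
    """Two-proportion z-statistic: does node i connect more often inside than
    outside the neighbourhood of node j?  (A stand-in for the paper's test.)"""
    others = np.ones(len(A), dtype=bool)
    others[[i, j]] = False
    inside = (A[j] == 1) & others
    outside = (A[j] == 0) & others
    n1, n0 = inside.sum(), outside.sum()
    if n1 == 0 or n0 == 0:
        return 0.0
    p1, p0 = A[i, inside].mean(), A[i, outside].mean()
    pool = A[i, others].mean()
    se = np.sqrt(max(pool * (1 - pool), 1e-12) * (1 / n1 + 1 / n0))
    return (p1 - p0) / se

# adaptive 0/1 weights from the pairwise tests (ad-hoc critical value 1.64)
W = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        z = 0.5 * (z_stat(A, i, j) + z_stat(A, j, i))
        W[i, j] = W[j, i] = float(z > 1.64)

# crude read-off of two communities: sign split of the leading eigenvector
# of the centred weight matrix (a simple spectral step, not part of AWCD)
_, vecs = np.linalg.eigh(W - W.mean())
est = (vecs[:, -1] > 0).astype(int)

agree = max(np.mean(est == labels), np.mean(est == 1 - labels))
print(f"fraction of nodes recovered correctly: {agree:.2f}")

Because the resulting weights are dense within the planted blocks and sparse across them, even this crude read-off should typically recover the two communities; the actual algorithm's weight updates, critical values and consistency guarantees on sparse graphs are the subject of the talk.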
11 January 2023
Leonhard Held (Universität Zürich)
tba
18 January 2023
Tim Jahn (Universität Bonn)
tba
25 January 2023
Maria Grith (Erasmus University Rotterdam)
tba
01 February 2023
N.N.
08 February 2023
N.N.
15 February 2023
N.N.

 

 
 


Everyone interested is cordially invited.

For inquiries, please contact:

Ms Andrea Fiebig

Mail: fiebig@mathematik.hu-berlin.de
Phone: +49-30-2093-45460
Fax: +49-30-2093-45451
Humboldt-Universität zu Berlin
Institut für Mathematik
Unter den Linden 6
10099 Berlin, Germany