Title : Sampling using Normalizing Flows and Quasi-Monte Carlo
Abstract : Monte Carlo simulation consists in sampling from a distribution to estimate an expectation or integral, and is broadly used in statistics, physics, machine learning, and beyond. In recent years, new methods coming from the machine learning community have been used to enhance existing Monte Carlo methods. In particular, normalizing flows, a class of generative models parametrized by a neural network, can approximate a distribution using only its pointwise evaluation. On the other hand, (randomized) quasi-Monte Carlo methods have been used for decades to perform numerical integration: they replace the random sample of Monte Carlo with a sequence that covers the hypercube more uniformly, which allows better convergence rates than the $\mathcal{O}(n^{-1/2})$ of plain Monte Carlo. In this talk, we will apply a normalizing flow on top of a quasi-Monte Carlo sample to obtain a new estimator with (hopefully) smaller variance than with a classic sample, and finally we will look at numerical experiments illustrating this method.
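As a minimal sketch of the idea (not the talk's actual implementation), one can push a scrambled Sobol' sequence through a fixed invertible map — here the Gaussian inverse CDF stands in for a trained normalizing flow — and compare the resulting estimator against plain Monte Carlo on the same integrand:

```python
import numpy as np
from scipy.stats import norm, qmc

# Target: E[f(X)] with X ~ N(0, 1) and f(x) = x^2, whose true value is 1.
f = lambda x: x**2
n = 2**12  # a power of 2, as recommended for Sobol' sequences

# Plain Monte Carlo: i.i.d. uniforms pushed through the inverse CDF.
rng = np.random.default_rng(0)
u_mc = rng.random(n)
est_mc = f(norm.ppf(u_mc)).mean()

# Randomized quasi-Monte Carlo: a scrambled Sobol' sequence covers [0, 1)
# more uniformly; the inverse CDF plays the role of the (here fixed,
# untrained) normalizing flow mapping uniform points to the target law.
sobol = qmc.Sobol(d=1, scramble=True, seed=0)
u_qmc = sobol.random(n).ravel()
est_qmc = f(norm.ppf(u_qmc)).mean()

print(est_mc, est_qmc)  # both estimates are close to 1
```

In the talk's setting, the inverse-CDF map is replaced by a learned flow in higher dimension, but the structure is the same: a low-discrepancy point set in the hypercube, transported to the target distribution by an invertible transform.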