Piecewise deterministic Markov processes (PDMPs) have recently gained interest in MCMC algorithms. They appear as limits of lifted Markov chains or of rejection-free Metropolis algorithms, designed to improve the speed of convergence. We will briefly introduce these processes and present some of the arguments that motivate their use for sampling.
Several variants of PDMPs can be used to sample from posterior distributions, namely the Randomized Hamiltonian Monte Carlo, the Zig-Zag process and the Bouncy Particle Sampler. The hypocoercivity techniques proposed in [DMS2015] produce spectral gap estimates with explicit dependence on the parameters of the dynamics, for a very general class of PDMPs. Moreover, the general framework we consider allows a quantitative comparison of the bounds on the asymptotic variance obtained for the different methods.
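To make the dynamics behind these estimates concrete, the following is a minimal sketch (not from the abstract, and not the construction of [DMS2015]) of a one-dimensional Zig-Zag sampler for a standard Gaussian target, where the switching rate max(0, v U'(x)) = max(0, v x) can be integrated and inverted in closed form; the function name and all parameters are illustrative assumptions.

```python
import numpy as np

def zigzag_gaussian(horizon=1000.0, x0=0.0, v0=1.0, seed=0):
    """Minimal 1D Zig-Zag sampler for a standard Gaussian target U(x) = x^2 / 2.

    The switching rate is max(0, v * U'(x)) = max(0, v * x); along the linear
    path x + v * t (with v = +/-1) its integral can be inverted exactly, so no
    thinning is needed.  Returns the event skeleton (times, positions,
    velocities); the continuous path is linear interpolation between events.
    """
    rng = np.random.default_rng(seed)
    t, x, v = 0.0, x0, v0
    times, xs, vs = [t], [x], [v]
    while t < horizon:
        a = v * x                                      # current value of v * U'(x)
        e = rng.exponential()                          # Exp(1) threshold
        tau = np.sqrt(max(a, 0.0) ** 2 + 2.0 * e) - a  # exact next switching time
        t += tau
        x += v * tau                                   # deterministic linear motion
        v = -v                                         # flip the velocity at the event
        times.append(t); xs.append(x); vs.append(v)
    return np.array(times), np.array(xs), np.array(vs)

# Time averages along the piecewise linear path between events approximate
# expectations under the standard Gaussian target.
times, xs, vs = zigzag_gaussian(horizon=5000.0)
```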
We consider a particle moving at constant speed inside a convex body, which is reflected in a random direction whenever it hits the boundary. The process of the particle's positions at the jump times is a Markov chain on the boundary of the convex body, while the position/velocity process is a piecewise deterministic Markov process, called a "stochastic billiard". In this talk we will study the speed of convergence of these processes to their respective invariant measures, by constructing explicit couplings.
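For illustration, here is a minimal sketch of such a stochastic billiard in the unit disk, with the post-reflection direction drawn from the cosine (Knudsen) law, one common choice of random reflection; the abstract does not specify the reflection law, so this choice and all names are assumptions.

```python
import numpy as np

def stochastic_billiard_disk(n_bounces=10_000, seed=0):
    """Minimal stochastic billiard in the unit disk (illustrative sketch).

    The particle moves in a straight line at unit speed; when it hits the
    circle it is re-emitted with a direction drawn from the cosine (Knudsen)
    law with respect to the inward normal.  Returns the boundary hit points,
    i.e. the Markov chain on the boundary mentioned in the abstract.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(2)                           # start at the centre
    v = np.array([1.0, 0.0])                  # initial unit velocity
    hits = []
    for _ in range(n_bounces):
        # Travel time to the boundary: smallest t > 0 with |x + t v| = 1.
        b = x @ v
        t = -b + np.sqrt(b * b + 1.0 - x @ x)
        x = x + t * v                         # boundary hit point
        hits.append(x.copy())
        n = -x                                # inward unit normal at the hit point
        tangent = np.array([-n[1], n[0]])
        s = rng.uniform(-1.0, 1.0)            # cosine law: sin(theta) ~ Unif(-1, 1)
        v = np.sqrt(1.0 - s * s) * n + s * tangent
    return np.array(hits)

# For the cosine reflection law, arc-length (surface) measure on the circle is
# invariant for this boundary chain.
hits = stochastic_billiard_disk()
```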
In recent years piecewise deterministic Markov processes (PDMPs) have emerged as a promising alternative to classical MCMC algorithms. In particular, PDMP-based algorithms have good convergence properties and allow for efficient subsampling. Although many different PDMP-based algorithms can be designed, two play a fundamental role: the Bouncy Particle Sampler and the Zig-Zag sampler. In this talk both algorithms will be introduced and their properties will be compared, including recent results on ergodicity and on scaling with respect to dimension.
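As a hedged companion to this comparison, here is a minimal sketch of a Bouncy Particle Sampler for a standard Gaussian target in R^d, with velocity refreshments at a constant rate; the target, the refreshment rate and the function names are illustrative assumptions, not the exact setting of the talk.

```python
import numpy as np

def bps_gaussian(d=2, horizon=1000.0, refresh_rate=1.0, seed=0):
    """Minimal Bouncy Particle Sampler for a standard Gaussian target in R^d.

    With U(x) = |x|^2 / 2, the bounce rate max(0, <v, grad U(x)>) = max(0, <v, x>)
    is affine in time along each linear segment, so bounce times can be drawn
    exactly.  Velocity refreshments (rate `refresh_rate`) keep the sampler
    ergodic.  Returns the event skeleton (times, positions).
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    v = rng.standard_normal(d); v /= np.linalg.norm(v)       # unit-speed velocity
    t = 0.0
    times, xs = [t], [x.copy()]
    while t < horizon:
        a = v @ x                                             # <v, grad U(x)>
        e = rng.exponential()
        tau_bounce = np.sqrt(max(a, 0.0) ** 2 + 2.0 * e) - a  # exact bounce time
        tau_refresh = rng.exponential(1.0 / refresh_rate)     # next refreshment
        tau = min(tau_bounce, tau_refresh)
        t += tau
        x = x + tau * v                                       # straight-line motion
        if tau_bounce < tau_refresh:
            g = x                                             # grad U(x) for the Gaussian
            v = v - 2.0 * (v @ g) / (g @ g) * g               # reflect v off grad U
        else:
            v = rng.standard_normal(d); v /= np.linalg.norm(v)  # refresh velocity
        times.append(t); xs.append(x.copy())
    return np.array(times), np.array(xs)
```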