Adap'ski, poster abstracts
INFERENCE ON DIFFUSION PROCESSES BY
POPULATION MONTE CARLO METHOD
Roberto Casarin
CEREMADE, University Paris IX Dauphine, France and University of
Brescia, Italy
E-mail: casarin@ceremade.dauphine.fr
ABSTRACT
In this work we apply Bayesian simulation-based inference to the
estimation of the parameters in the drift and diffusion terms of a
stochastic differential equation (SDE) from discretely observed data.
The method proposed here requires the discretisation of the stochastic
differential equation and some assumptions on the prior distribution
of the drift and diffusion parameters. Parameter estimates are then
obtained by averaging the values simulated from the posterior
distribution by sequential importance sampling. In particular, we use
the Population Monte Carlo (PMC) method, which improves on basic
importance sampling by resampling values with higher probability and
adapting the importance distribution to the sampling dynamics.
Finally, we provide a comparison, on synthetic data, between
traditional Gibbs sampling and the PMC method, also combining them
with the high-frequency data augmentation principle.
Keywords: Population Monte Carlo, Gibbs Sampling, Sequential Importance
Sampling, Monte Carlo Bridging, Stochastic Differential Equation, Itô
Process, CEV Process.
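As a concrete illustration of the PMC scheme sketched in the abstract, the following is a minimal toy example (not the author's implementation): the SDE posterior is replaced by a simple conjugate-normal posterior, and a hypothetical Gaussian random-walk importance kernel is adapted across iterations, with resampling proportional to the importance weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for the SDE posterior: data y_i ~ N(theta, 1)
# with a N(0, 10) prior on theta (assumed setup, not from the abstract).
y = rng.normal(1.5, 1.0, size=50)

def log_target(theta):
    # log prior + log likelihood, vectorised over a particle array
    lp = -0.5 * theta**2 / 10.0
    ll = -0.5 * np.sum((y[None, :] - theta[:, None])**2, axis=1)
    return lp + ll

N, T = 1000, 10          # number of particles, PMC iterations
theta = rng.normal(0.0, 3.0, size=N)
scale = 1.0              # importance-kernel std, adapted each iteration

for t in range(T):
    # propose from a Gaussian kernel centred at the current particles
    prop = theta + rng.normal(0.0, scale, size=N)
    # importance weights: target density over proposal density
    # (the common Gaussian normalising constant cancels)
    logw = log_target(prop) + 0.5 * (prop - theta)**2 / scale**2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # resample values with probability proportional to their weights
    idx = rng.choice(N, size=N, p=w)
    theta = prop[idx]
    # crude adaptation: match the kernel scale to the particle spread
    scale = max(float(np.std(theta)), 1e-3)

print("posterior mean estimate:", theta.mean())
```

The adaptation rule here is deliberately simple; the PMC literature allows richer updates of the importance distribution, such as mixtures with adapted component weights.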
Genetic algorithms and Markov Chain Monte Carlo:
Differential Evolution Markov Chain makes Bayesian computing easy
Cajo J. F. ter Braak
Biometris, Wageningen University and Research Centre, Box 100,
6700 AC Wageningen, The Netherlands.
E-mail: Cajo.terbraak@wur.nl
Differential Evolution (DE) is a simple genetic algorithm for numerical
optimization in real parameter spaces. In a statistical context one
would want not just the optimum but also its uncertainty. The
uncertainty distribution can be obtained by a Bayesian analysis (after
specifying prior and likelihood) using Markov Chain Monte Carlo (MCMC)
simulation. Here the essential ideas of DE and MCMC are integrated into
Differential Evolution Markov Chain (DE-MC). DE-MC is a population MCMC
algorithm in which multiple chains are run in parallel. DE-MC solves an
important problem in MCMC, namely that of choosing an appropriate scale
and orientation for the jumping distribution. In DE-MC the jumps are
simply a multiple of the difference of two random parameter vectors
that are currently in the population. Simulations and examples
illustrate the potential of DE-MC. The advantages of DE-MC over
conventional MCMC are simplicity, speed of calculation and convergence,
even for nearly collinear parameters and multimodal densities. For the
expert, DE-MC is a form of parallel adaptive direction sampling in
which the Gibbs step in the parallel direction is replaced by a
Metropolis step with a near-optimal step size. A pdf of the paper is
available at http://www.biometris.nl/Markov%20Chain.pdf.
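The jump rule described above can be sketched in a few lines. The following toy example (a sketch, not the paper's code) targets a strongly correlated 2-D Gaussian; the scale factor 2.38/sqrt(2d) is the commonly quoted near-optimal choice, and the small noise term is assumed here to keep the chain ergodic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative target: a nearly collinear 2-D Gaussian log-density
cov = np.array([[1.0, 0.95], [0.95, 1.0]])
prec = np.linalg.inv(cov)

def log_pi(x):
    return -0.5 * x @ prec @ x

d = 2                           # parameter dimension
N = 10                          # population size (parallel chains)
gamma = 2.38 / np.sqrt(2 * d)   # near-optimal jump scale
eps = 1e-4                      # small jitter for ergodicity

pop = rng.normal(0.0, 5.0, size=(N, d))
samples = []

for it in range(3000):
    for i in range(N):
        # pick two distinct other members of the current population
        r1, r2 = rng.choice([j for j in range(N) if j != i],
                            size=2, replace=False)
        # DE-MC jump: a multiple of the difference of two random
        # parameter vectors currently in the population
        prop = pop[i] + gamma * (pop[r1] - pop[r2]) + rng.normal(0, eps, d)
        # Metropolis accept/reject
        if np.log(rng.uniform()) < log_pi(prop) - log_pi(pop[i]):
            pop[i] = prop
    if it >= 1000:              # discard burn-in generations
        samples.append(pop.copy())

draws = np.concatenate(samples)
print("sample covariance:\n", np.cov(draws.T))
```

Because the proposal differences are drawn from the population itself, their scale and orientation automatically track the target, which is why the method copes with the collinear case without hand tuning.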
Time-Stability Condition and Ergodicity of Adaptive MC
Krzysztof Latuszynski, Warsaw School of Economics, Warsaw, Poland
Ergodicity results for adaptive Monte Carlo algorithms usually assume
"time-stability" of transition kernels. On the other hand, a large
class of time-inhomogeneous Markov Chains is ergodic. This suggests
the existence of adaptive MC algorithms which fail to satisfy the
"time-stability" condition but are still ergodic. I present a simple
modification of the Atchadé-Rosenthal ergodicity Theorems (3.1 and
3.2 in [1]) that does not assume "time-stability" of transition kernels,
and provide some examples.
[1] Atchadé, Y. F. and Rosenthal, J. S. "On Adaptive Markov Chain Monte
Carlo Algorithms".
A DISTANCE-BASED DIAGNOSTIC FOR TRANS-DIMENSIONAL MARKOV CHAINS
Yanan Fan and S. A. Sisson
Over the last decade the use of trans-dimensional sampling
algorithms has become endemic in the statistical literature. Despite
their widespread application, however, very few attempts have been
made to assess whether the chain has reached its stationary
distribution. Those methods that have been proposed tend to
associate convergence of at most a small number of marginal
parametric functionals of subsets of the parameter vector with
that of the full chain. In this article we present a
distance-based method for the comparison of trans-dimensional
Markov chain sample-paths for a broad class of models that
naturally incorporates the full set of parameters for each visited
model, and an arbitrary number of ``marginal'' functionals.
Illustration of the analysis of Markov chain sample-paths is
presented in two common modelling situations: finite mixture
analysis and a change-point problem.
BAYESIAN RECONSTRUCTION OF LOW RESOLUTION MAGNETIC RESONANCE IMAGING
MODALITIES
John Kornak, Karl Young, Norbert Schuff and Michael Weiner
University of California, San Francisco, USA
Magnetic resonance imaging (MRI) data are available in a large variety
of modalities that individually can provide anatomical, metabolic,
physiological, and functional descriptions of the brain. Unfortunately,
not all of these modalities are attainable with high signal-to-noise
ratio (SNR) and hence some modalities are necessarily acquired at low
spatial resolution. This research aims to utilise information from the
high SNR modality of structural MRI in order to boost the effective
resolution of lower SNR MRI modalities.
Structural MRI of the brain provides high resolution maps of the
spatial structure of tissue that can readily be segmented into gray
matter, white matter and cerebro-spinal fluid. These maps are here
employed as prior information to enable improved resolution
reconstruction of lower SNR MRI modalities such as: Magnetic Resonance
Spectroscopy Imaging for measuring metabolite levels; Perfusion
Imaging for measuring blood flow; Diffusion Tensor Imaging for
measuring diffusion in tissue; and functional Magnetic Resonance
Imaging for measuring blood oxygenation levels and hence functional
activation.
A Markov random field (MRF) prior distribution relates the segmented
tissue maps with the imaging modality to be reconstructed by
describing the expected behaviour of the signal within the different
homogeneous tissue regions, as well as the behaviour across tissue
boundaries. The likelihood relates the low-resolution imaging
modality's signal (observed in Fourier/$k$-space) to the map to be
reconstructed in standard image space. The ensuing posterior
distribution is then optimised via Markov chain Monte Carlo (MCMC)
methods in order to obtain reconstructed maps at the resolution of the
sMRI. However, the complex dependence structure relating every point
in Fourier-space to every point in image space leads to a high
computational burden for MCMC sampling.
IMPROVED EFFICIENCY IN APPROXIMATE BAYESIAN COMPUTATION
Scott A. Sisson, Paola Bortot and Stuart G. Coles
Recently a number of rejection-based algorithms have been proposed for
approximating the posterior distribution when it is impossible, or
computationally prohibitive, to evaluate the likelihood function.
Observed data are instead compared with the parameter set via a
surrogate data set, simulated according to a known process. The utility
of these algorithms has been improved by the adoption of a sampling
scheme based on a Markov chain (Marjoram et al., 2003; see Beaumont et
al., 2002 for an alternative approach); however, the resulting sampler
can be inefficient. Here we present an extension to the Markov chain
approach based on an augmented state space, in direct analogy with the
simulated tempering algorithm, that substantially improves sampler
efficiency (Bortot et al., 2004). We demonstrate the proposed
methodology on a stereological analysis of inclusions.
Marjoram, P., J. Molitor, V. Plagnol and S. Tavaré. Markov chain Monte
Carlo without likelihoods. PNAS, 100: 15324--15328, 2003.
Beaumont, M. A., W. Zhang and D. J. Balding. Approximate Bayesian
computation in population genetics. Genetics, 162: 2025--2035, 2002.
Bortot, P., S. G. Coles, and S. A. Sisson. Inference for stereological
extremes. Technical Report, 2004.
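The likelihood-free Markov chain scheme of Marjoram et al. referred to in the abstract above can be sketched on a toy problem. Everything problem-specific below (the normal model, the sample-mean summary statistic, the tolerance, the starting value) is an illustrative assumption, not taken from the abstract: proposals are accepted only when a surrogate data set, simulated from the proposed parameter, matches the observed summary within a tolerance.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: data from N(mu, 1) with unknown mu; the
# likelihood is treated as unavailable and data are only simulated.
y_obs = rng.normal(2.0, 1.0, size=100)
s_obs = y_obs.mean()            # summary statistic of the observed data

def simulate(mu):
    # surrogate data set, reduced to the same summary statistic
    return rng.normal(mu, 1.0, size=100).mean()

def log_prior(mu):              # vague N(0, 10) prior
    return -0.5 * mu**2 / 100.0

eps = 0.1                       # ABC tolerance on the summary distance
mu = y_obs.mean()               # start near a crude estimate so the
                                # chain begins in a region it can accept
chain = []

for it in range(20000):
    prop = mu + rng.normal(0.0, 0.5)    # symmetric random-walk proposal
    s_sim = simulate(prop)
    # accept only if the surrogate matches the observations within eps,
    # with the usual Metropolis correction on the prior (the proposal
    # is symmetric, so its ratio cancels)
    if abs(s_sim - s_obs) <= eps and \
       np.log(rng.uniform()) < log_prior(prop) - log_prior(mu):
        mu = prop
    chain.append(mu)

post = np.array(chain[5000:])
print("ABC posterior mean:", post.mean())
```

The low acceptance rate of this plain scheme for small tolerances is precisely the inefficiency the abstract addresses; the augmented state space of Bortot et al. treats the tolerance itself as a variable of the sampler, in analogy with the temperature in simulated tempering.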