Bayesian Core: A Practical Approach to Computational Bayesian Statistics. Jean-Michel MARIN and Christian P. ROBERT. New York: Springer, 2007.
ISBN 978-0-387-38979-0. xiii + 255 pp. $74.95, £46.00, €64.15.
Wolfgang Polasek
Recent times have seen several new books introducing Bayesian computing. This
book is an introduction at a higher level. “The purpose of this book is
to provide a self-contained entry to practical and computational
Bayesian Statistics using generic examples from the most common models.”
Yes and no: the purpose is to introduce Bayesian statistics and its
computational tools jointly. We thus aim at self-containedness, as
indicated in the quote. The review seems to question the purpose of the
book as well as the intended audience... |
The contents of the book are as follows:
- Introductory chapter.
- Normal models: Conditional distributions, priors, posteriors, improper priors, conjugate priors, exponential families, tests, Bayes factors, decision theory, importance sampling.
- Regression and variable selection: G-priors, non-informative priors, Gibbs sampling, variable selection.
- Generalised linear models: Probit, logit and log–linear models, Metropolis–Hastings algorithms, model choice.
- Capture–recapture experiments: Sampling models, open populations, accept-reject algorithm, Arnason–Schwarz model.
- Mixture models: Completion, variable dimensional models, label switching, tempering, reversible jump MCMC.
- Dynamic models: AR, MA, and ARMA models, state-space representation, hidden Markov models, forward–backward algorithm.
- Image analysis: k-nearest-neighbour, supervised classification, segmentation, Markov random fields, Potts models.
The authors recommend this book for a course of about seven blocks, or
“roughly 12–14 weeks of teaching (with 3 h of lectures per week),
depending on the intended level and the prerequisites imposed on the students.” In
my opinion this is feasible only for mathematics students with a fairly
advanced background in statistics.
We
[and others] have been teaching from this book for more than three
years and the outcome varies quite a lot. We acknowledge that the book
cannot be covered in one semester for an undergrad audience, but if R
computer labs are added to the three hours of lectures, graduates (with
a proper math background,
indeed) can achieve a complete picture. Again, there is variability
across countries and backgrounds... |
The nice things about the book are the data sets and the R programs
that can be downloaded from the authors’ website:
http://www.ceremade.dauphine.fr/~xian/BCS/. In addition, a solutions
manual covering all exercises is available on the Springer webpage
(www.springer.com), but only for instructors.
The book
will not be easy to use as a stand-alone textbook, and should be used
together with other introductory textbooks (see Bibliography).
This
recommendation somehow kills the whole purpose of the book! For
instance, the book by Dani Gamerman and Hedibert Lopes covers very
similar ground [maybe at a slightly more advanced level?] and it would not make much
sense to have students using both books together. The MCMC book with
George Casella is a reference book, so using Bayesian Core
in addition does not make sense either, especially because the audience
cannot be the same: we are primarily aiming at those students who have
had no previous [real] exposure to computational methods and/or
Bayesian inference and whose goal is to develop a practical feeling
about them. So they can be majoring in other fields, like Economics or
Biology, with a minor in Statistics. For example, our initial cohort of
students in Paris was enrolled in a professional degree in statistical and
financial engineering. |
Furthermore, the notation does not follow that of mainstream Bayesian
papers, at least not in Econometrics. A table of abbreviations would have helped. The
extensive use of g-priors is surprising, especially in the context of
Gibbs sampling, since MCMC should be used to overcome such “convenient”
assumptions for priors.
This is more a matter of taste than of usage since the notations are
the same as in Monte Carlo Statistical Methods and in The Bayesian
Choice. If the notations are different in Econometrics, we are unaware
of this fact (and this is the first time we have received a complaint about notation). A
table of distributions and of common terms would be useful indeed and
can be incorporated in the next edition. As for the choice of the g-priors,
there is again a misunderstanding about the purpose of the book: we are
quite aware that alternatives can be used for modelling prior
information in regression models, but we want to give the student an
effective tool by the end of the chapter and g-priors
are such tools, especially when using our non-informative version! This
point is stressed several times: Bayesian experts will find our choices
restrictive and rightly so, but making firm choices about our prior
distributions is the only way to achieve an operational tool in 240
pages. |
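As a side note to the g-prior discussion above, here is a minimal R sketch of what such an operational tool looks like in practice. It is not taken from the book or its website; the function name and the simulated data are ours. Under a Zellner g-prior centred at zero, beta | sigma^2 ~ N(0, g sigma^2 (X'X)^{-1}), the posterior mean of the regression coefficients is simply the least-squares estimate shrunk by the factor g/(g+1):

  # Minimal sketch (not the book's code): posterior mean under a Zellner
  # g-prior centred at zero, i.e. (g / (g + 1)) times the least-squares estimate.
  gprior_post_mean <- function(y, X, g = length(y)) {
    beta_hat <- solve(crossprod(X), crossprod(X, y))  # least-squares estimate (X'X)^{-1} X'y
    (g / (g + 1)) * beta_hat                          # shrinkage towards the prior mean 0
  }

  # Toy illustration on simulated data (not a data set from the book)
  set.seed(1)
  X <- cbind(1, rnorm(50))
  y <- X %*% c(2, -1) + rnorm(50)
  gprior_post_mean(y, X)

With the common default g = n, the shrinkage factor n/(n + 1) keeps the Bayes estimate close to least squares while the prior remains proper.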
The main recent use of Bayesian model averaging (BMA) is not mentioned,
but the reversible jump algorithm is briefly described. Special features
of the book are the chapters on capture–recapture models and image
analysis. With 240 pages it only scratches the surface of some important
topics in Bayesian modelling, and the reader has to make up for the
“missing links” either by solving the exercises or by looking up other
books and papers. Unfortunately, the reference list could have been much
more extensive for this purpose.
Same
thing: we deliberately kept the reference list to a minimum, in order
to avoid confusing the students. If they want to pursue studies in
Bayesian inference, we provide the major textbooks in the area. Journal
papers are not appropriate for a crash course, since it is unrealistic
to expect students to work on practical implementation and to peruse research papers. In a sense the reference list is already too long! As for solving the exercises, as explained in the User's Manual, they are essential for the comprehension of the book, which means that they must
be solved as they come! There are certainly many missing topics for
mature statisticians [and we plan to add at least two more chapters in
the coming second edition!], but this cannot be avoided in
a one-semester course. |
Many researchers and Ph.D. students will find the
R programs in the book a nice starting point for their own problems and an innovative source for further developments.
Bibliography
Gamerman D, Lopes HF (2006) Markov chain Monte Carlo: stochastic simulation for Bayesian inference, 2nd edn. Texts in Statistical Science. Chapman & Hall/CRC, Boca Raton.
Geweke J (2005) Contemporary Bayesian econometrics and statistics. Wiley, New York.
Geweke J, Groenen PJF, Paap R, van Dijk HK (2007) Computational techniques for applied econometric analysis of macroeconomic and financial processes. Comput Stat Data Anal 51(7):3506–3508.
Robert CP, Casella G (2004) Monte Carlo statistical methods, 2nd edn. Springer Texts in Statistics. Springer, New York.