Thomson Reuters ScienceWatch

David Spiegelhalter
Featured Scientist from Essential Science Indicators℠

According to Essential Science Indicators from Thomson Reuters, the paper currently ranked at #3 in the field of Mathematics is "Bayesian measures of model complexity and fit," (Spiegelhalter DJ, et al., J. Roy. Stat. Soc. Ser. B-Stat. Met. 64: 583-616, Part 4, 2002). For the period ending October 31, 2008, this paper garnered 651 cites.

Lead author Dr. David Spiegelhalter is a Senior Scientist in the MRC Biostatistics Unit at the Institute of Public Health in Cambridge, UK. Since October 2007, he has also served as Winton Professor of the Public Understanding of Risk in the Statistical Laboratory of the Centre for Mathematical Sciences at the University of Cambridge. His record in Essential Science Indicators includes 32 papers cited a total of 1,235 times. He is also a Highly Cited Researcher in the field of Mathematics.

 In the interview below, ScienceWatch.com talks with Dr. Spiegelhalter about this paper and its impact on the research world.

What was your inspiration for your paper?

I should first give a bit of background. Bayesian methods for data analysis allow great flexibility for handling complex problems and including multiple sources of evidence, but for years they were hampered by computational problems. Around 1990 it was realized that simulation-based techniques (Markov chain Monte Carlo) could be adapted to Bayesian computation. This led to an explosion of activity, and we developed our BUGS software, which has become the most popular program for carrying out Bayesian analysis.

"… I feel confident that the basic idea is sound, but still hope that a better version can be developed."

But we quickly realized this new flexibility meant that people needed a way of comparing all the different models they could now fit, and no existing method would do. So it was a real practical demand that drove the research, and the paper summarizes what our multi-institution team (me from Cambridge, Nicky Best from Imperial College London, Brad Carlin from the University of Minnesota, and Angelika van der Linde from the University of Bremen) came up with after a lot of effort.

How did you develop this new method?

People had come up with all sorts of theoretical ideas, but we needed something that was reasonably straightforward, could be put into our software, gave equivalent answers to existing methods in simple circumstances, and gave sensible answers in new areas. We tried all sorts of options from 1995 onwards, but then in 1997 we came up with an idea that seemed to work well and had a reasonable theoretical justification. This was a very exciting time—we knew we had got something powerful—although it was another five years before the paper finally appeared.

Would you sum up your findings for our readers?

If you fit a complex model to some data, possibly making use of additional expert opinion, the method provides a way of assessing the essential complexity of the model that you have fitted and the adequacy with which it explains the past data; adding these two things up gives you an overall measure of how well you would expect the fitted model to predict new data. This final measure is called the DIC (Deviance Information Criterion), and can be used to compare any models you want. It can be applied generally and does not require specific software.
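
In outline, the paper works with the deviance of a model, and the two quantities Dr. Spiegelhalter describes (complexity and fit) and their sum can be summarized as:

    \[
    D(\theta) = -2\log p(y \mid \theta), \qquad
    p_D = \overline{D(\theta)} - D(\bar\theta), \qquad
    \mathrm{DIC} = \overline{D(\theta)} + p_D = D(\bar\theta) + 2\,p_D,
    \]

where the overline denotes a posterior mean and \(\bar\theta\) is the posterior mean of the parameters: \(p_D\) measures the effective complexity of the fitted model, the posterior mean deviance measures how well it explains the data, and models with smaller DIC are expected to predict new data better.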

How was this paper received by the community?

"Bayesian methods for data analysis allow great flexibility for handling complex problems and including multiple sources of evidence, but had been hampered for years by computational problems."

Shall we say that there was a mixed reception? Methodological purists have not particularly liked it because, although the theoretical justification is reasonable, the specific estimation techniques are quite basic in order to make them widely applicable. But practitioners have loved it as it did something they just could not do before.

Where have you taken your research since the publication of this paper?

We and others have tried to widen the theory but with limited success. We have identified potential problems with the specific implementation, and alternative versions have been suggested—I feel confident that the basic idea is sound, but still hope that a better version can be developed. An added spin-off is increased interest in the general problem of estimating the essential complexity of a model and the residual "effective degrees of freedom."
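
As a minimal sketch of the sort of calculation involved (a toy one-parameter normal model, not code from the paper or from BUGS; in practice you would substitute the deviance of whatever model you have actually fitted), pD and DIC can be estimated directly from posterior draws:

    import numpy as np

    def deviance(theta, y):
        # -2 * log-likelihood of y under a toy Normal(theta, 1) model;
        # replace with the deviance of your own fitted model.
        return -2.0 * np.sum(-0.5 * np.log(2.0 * np.pi) - 0.5 * (y - theta) ** 2)

    def dic_from_draws(theta_draws, y):
        # Posterior mean deviance (fit) and deviance at the posterior mean.
        d_bar = np.mean([deviance(t, y) for t in theta_draws])
        d_hat = deviance(np.mean(theta_draws), y)
        p_d = d_bar - d_hat            # effective number of parameters
        return p_d, d_bar + p_d        # DIC = posterior mean deviance + pD

    rng = np.random.default_rng(0)
    y = rng.normal(1.0, 1.0, size=50)          # fake data for illustration
    # Mock posterior draws; in practice these come from an MCMC sampler.
    theta_draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=2000)
    p_d, dic = dic_from_draws(theta_draws, y)
    print(f"pD  ~= {p_d:.2f} (close to 1 for this one-parameter model)")
    print(f"DIC ~= {dic:.2f}")

The estimated pD here comes out near 1, the nominal parameter count for this simple model; in hierarchical models the effective complexity is typically much smaller than the raw number of parameters, which is the general problem referred to above.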

What should the "take-away lesson" be for this paper and for your work?

If there is a real practical problem, and you worry away at it long enough and follow your intuition, then a solution can be found. It also helps to give others the means to try your ideas on problems that really concern them: our software is free and is used by thousands worldwide, and the citations we have received have largely come from practical papers that were trying to answer scientific questions. So another lesson is that a good "product" that fills a hole in the market will be successful. I just wish that everything I did had the same sort of impact.

David Spiegelhalter
Senior Scientist
MRC Biostatistics Unit
Institute of Public Health
Cambridge, UK
and
Winton Professor of the Public Understanding of Risk
Statistical Laboratory, Centre for Mathematical Sciences
University of Cambridge
Cambridge, UK

David Spiegelhalter's current most-cited paper in Essential Science Indicators, with 651 cites:
Spiegelhalter DJ, et al., "Bayesian measures of model complexity and fit," J. Roy. Stat. Soc. Ser. B-Stat. Met. 64: 583-616, Part 4, 2002. Source: Essential Science Indicators from Thomson Reuters.
Additional Information:

David Spiegelhalter is featured in ISIHighlyCited.com.

Keywords: Bayesian methods, data analysis, complex problems, models, BUGS software, DIC.
