Kosuke Imai, Gary King, &
Elizabeth Stuart talk with ScienceWatch.com and
answer a few questions about this month's New Hot Paper in
the field of Economics & Business.
Article Title: Misunderstandings between
experimentalists and observationalists about causal inference
Authors: Imai, K; King, G; Stuart, EA
Journal: Journal of the Royal Statistical Society, Series A (Statistics in Society)
Year: 2008 (Part 2)
* Princeton Univ, Dept Polit, Princeton, NJ 08544
* Harvard Univ, Cambridge, MA 02138 USA.
* Johns Hopkins Bloomberg Sch Publ Hlth, Baltimore, MD USA.
Why do you think your paper is highly cited?
The results in our article have the potential to increase the effectiveness
of causal inference in research across many fields of science. Most
scholars make causal inferences, and all those making causal inferences use
either experimental or observational studies. We found that the scholarly
fields emphasizing experimental and observational studies have evolved in
isolation to some degree, resulting in serious misunderstandings and
fallacies about the advantages of each.
In fact, this isolation has paradoxically even led researchers to
misunderstand and misuse their own research methods. We show how to avoid
these problems for both groups and offer new conceptualizations and
statistical methods that improve causal inference in many areas of research.
Does it describe a new discovery, methodology, or
synthesis of knowledge?
The common misunderstandings and fallacies we address concern some of the
most basic advantages and disadvantages of observational and experimental
research designs. Problems include improper use of hypothesis tests for
covariate balance between the treated and control groups, and the
consequences of using random assignment, blocking before randomization, and
matching after treatment assignment to achieve covariate balance. Applied
researchers in a wide range of scientific disciplines seem to fall prey to
one or more of these fallacies and, as a result, make suboptimal design or analysis decisions.
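As a minimal sketch of the balance-test fallacy (a toy simulation in base R, not an analysis from the article): discarding observations raises a t-test's p-value simply because the test loses power, even though the standardized imbalance in the covariate is unchanged.

    # Balance-test fallacy: pruning units raises the p-value without
    # improving balance, because the p-value depends on sample size.
    set.seed(42)
    x_treated <- rnorm(1000, mean = 0.2)  # covariate among treated units
    x_control <- rnorm(1000, mean = 0.0)  # covariate among control units

    std_diff <- function(xt, xc) {        # standardized mean difference
      (mean(xt) - mean(xc)) / sqrt((var(xt) + var(xc)) / 2)
    }

    # Full sample: the modest imbalance is "statistically significant"
    t.test(x_treated, x_control)$p.value  # small p-value
    std_diff(x_treated, x_control)        # roughly 0.2

    # Randomly pruned sample: identical imbalance, much larger p-value
    keep <- sample(1000, 100)
    t.test(x_treated[keep], x_control[keep])$p.value
    std_diff(x_treated[keep], x_control[keep])    # still roughly 0.2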
To help researchers avoid these problems, we derive a new four-part
decomposition of the key estimation errors in making causal inferences. We
then show how this decomposition can help scholars from different
experimental and observational research traditions better understand each
other's inferential problems and attempted solutions by thinking about the
trade-offs of different study designs.
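In simplified notation (a sketch of the decomposition's form, not its exact statement in the article), the estimation error splits into sample-selection and treatment-imbalance components, each further split by observed covariates X and unobserved covariates U:

    \Delta \;=\; \underbrace{\Delta_{S_X} + \Delta_{S_U}}_{\text{sample selection}}
           \;+\; \underbrace{\Delta_{T_X} + \Delta_{T_U}}_{\text{treatment imbalance}}

Roughly speaking, random sampling removes the selection terms on average, random treatment assignment removes the imbalance terms on average, blocking eliminates observed imbalance exactly, and matching reduces it; seeing which terms a given design does and does not control is what clarifies the trade-offs.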
Would you summarize the significance of your paper
in layman's terms?
Our article shows how to make more accurate and reliable causal inferences,
a subject fundamental to most scientific research and even to large parts
of ordinary life. Whenever you ask "why" or "how," you are asking a
question that involves causal inference. Whenever you wonder whether taking
a medicine will improve your health, implementing a public policy will
improve the life of the citizenry, driving this way or that will get you to
work faster, or getting married or divorced will make you happier, you are
asking causal questions, requiring causal inferences.
A large part of scientific research is designed to make these inferences
with far less uncertainty and likelihood of bias than our qualitative
judgments in daily life. Our article reveals inferential fallacies and
other problems in the way scientists make causal inferences and offers ways
to greatly improve them going forward.
How did you become involved in this research, and
were there any problems along the way?
We worked together on an article that developed matching methods for
reducing model dependence and improving causal inference in observational
data, which, coincidentally, was itself featured by
Clarivate Analytics ScienceWatch.com last year.
In part of that article, we clarified misunderstandings that
observationalists had about experimental work. But when we pushed farther,
we were surprised to learn that experimentalists made these same errors in
their own work. That insight opened a new vista for making progress on
causal inference in both areas of science.
Where do you see your research leading in the future?
We have already written and made publicly available open-source software
that implements some of our methods for improving causal inferences
("MatchIt: Nonparametric Preprocessing for Parametric
Causal Inference,") and applied our methods in a large evaluation of the
Mexican universal health care system, as well as in evaluating
school-based programs to prevent delinquent behavior. We hope to
continue to develop methods and apply them in a variety of areas of
social science and public health.
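As a minimal usage sketch of MatchIt (assuming the package is installed; the lalonde data bundled with it serve purely as an example):

    library(MatchIt)
    data("lalonde", package = "MatchIt")

    # Nearest-neighbor propensity-score matching as nonparametric
    # preprocessing before the parametric analysis
    m.out <- matchit(treat ~ age + educ + re74 + re75,
                     data = lalonde, method = "nearest")

    summary(m.out)                # covariate balance before and after matching
    matched <- match.data(m.out)  # matched data set for subsequent modeling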
Do you foresee any social or political implications
for your research?
Whether measured in the millions of federal (and other) research dollars
that can be saved or in the increase in knowledge that can be gained for the
same research funding, the new methods and understandings offered in our
article, for both experimentalists and observationalists, seem to have much
potential across a wide variety of applications.
These methods and research designs, which we've discussed and improved
upon, are routinely used in studying numerous social and political
questions in public policy, political science, medicine, and public health.
The implications of these improved methods for public policy and for our
understanding of democratic representation are likely to be widespread.
Kosuke Imai
Department of Politics
Princeton University
Princeton, NJ, USA
Gary King
David Florence Professor of Government
and Director, Institute for Quantitative Social Science
Harvard University
Cambridge, MA, USA
Elizabeth Stuart
Department of Mental Health
Department of Biostatistics
Johns Hopkins Bloomberg School of Public Health
Baltimore, MD, USA
KEYWORDS: AVERAGE TREATMENT EFFECTS; BLOCKING; COVARIATE
BALANCE; MATCHING; OBSERVATIONAL STUDIES; RANDOMIZED EXPERIMENTS.