Gelman’s Analysis

Andrew Gelman, 'Induction and Deduction in Bayesian Data Analysis', RMM Vol. 2, 2011, 67–78 (Special Topic: Statistical Science and Philosophy of Science, eds. D. G. Mayo, A. Spanos and K. W. Staley).

From my experience using and developing Bayesian methods in social and environmental science, I have found model checking and falsification to be central in the modeling process.

To [falsificationists], any Bayesian model necessarily represented a subjective prior distribution and as such could never be tested. The idea of testing and p-values were held to be counter to the Bayesian philosophy.

Model checking plays an uncomfortable role in statistics. A researcher is typically not so eager to perform stress testing, to try to poke holes in a model or estimation procedure that may well represent a large conceptual and computational investment. And the model checking that is done is often invisible to the reader of the published article. Problems are found, the old model is replaced, and it is only the new, improved version that appears in print.

I abandon a model when a new model allows me to incorporate new data or to fit existing data better.

Our key departure from the mainstream Bayesian view … is that we do not attempt to assign posterior probabilities to models or to select or average over them using posterior probabilities. Instead, we use predictive checks to compare models to data and use the information thus learned about anomalies to motivate model improvements.

My second explanation for the tenacity of the subjective Bayesian approach (in the face of the much-noted general tendency toward Popperian objectivism among working scientists) is simple logic: the argument made … that any complete set of inferences must be either Bayesian or incoherent (see Savage 1954). I believe this argument fails because of the imperfections of any statistical model—Bayesian or otherwise—in real-world settings; nonetheless, it has been a powerful motivation for the subjective inductive philosophy.

What rejection tells us … that certain potentially important aspects of the data are not captured by the model.

Comment

The subjective Bayesian approach is useful to a researcher in helping them see where their prejudices lead, and hence in uncovering mistakes sooner. For example, someone who believed in the efficient markets hypothesis, and hence that crashes such as that of 2008 were extremely improbable, might be led to reconsider their assumptions when the crash did occur. But the subjective reasoning of others has little purchase on our own thinking unless we happen to share their beliefs.

In some areas it is supposed that a community will have ‘reasonable’ subjective assumptions, and that these will lead to reasonable conclusions. A subjective Bayesian might conclude that a certain statement has a certain probability. We might suppose that their probability is approximately correct, subject to certain conditions, one of which is that their hypotheses adequately span the range of possible models, while another is that the data is – in some sense – stably stochastic. Given these concerns, it seems prudent to check the fit of the data to the model, as Good advocates.
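To make this concrete, here is a minimal sketch (my own illustration, not Gelman's code) of the kind of predictive check he describes and that such a fit check might involve: fit a deliberately over-simple normal model to toy data, simulate replicated data sets from the posterior, and compare a test statistic with its observed value. The model, data and numbers are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "observed" data: in practice this would be the real data set.
y = rng.normal(loc=1.0, scale=2.0, size=50)

# Toy model: y_i ~ Normal(mu, sigma^2) with sigma treated as known and a flat
# prior on mu, so the posterior for mu is Normal(ybar, sigma^2 / n).
sigma = 1.0                 # deliberately understates the true spread (2.0)
n, ybar = len(y), y.mean()

# Posterior predictive check: draw mu from its posterior, simulate replicated
# data under the model, and record a test statistic (here the sample sd).
T_obs = y.std(ddof=1)
T_rep = np.empty(4000)
for i in range(4000):
    mu = rng.normal(ybar, sigma / np.sqrt(n))     # posterior draw of mu
    y_rep = rng.normal(mu, sigma, size=n)         # replicated data set
    T_rep[i] = y_rep.std(ddof=1)

# Posterior predictive p-value: values near 0 or 1 flag an aspect of the data
# (here, its spread) that the model fails to reproduce.
p = (T_rep >= T_obs).mean()
print(f"observed sd = {T_obs:.2f}, predictive p-value = {p:.3f}")
```

A predictive p-value near 0 or 1 signals that the model fails to reproduce that aspect of the data; on Gelman's account this is a prompt to revise the model, not to compute posterior probabilities over a fixed set of models.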

In contrast, Gelman notes an academic reluctance to consider that the assumptions may be wrong, because if so one doesn't have a paper. Similarly, many organizations need a 'definite method' for analyzing data, because they need to 'operationalise' it. A narrow Bayesian approach can yield such a method, whereas there is no such method for generating new models. To me, the critical issue is to develop methods that can be operated as 'definite methods' in the short run, but which allow for oversight and intervention in the longer run. The current assumption seems to be (as here) that the overall Bayesian approach is fixed, while models are monitored for fit and corrected or supplemented as necessary. This is clearly better than not monitoring or altering, but is it always adequate?

More generally, it seems to me that while Bayesian methods are informative, one has to be careful about the interpretation of their results.

See Also

Don’t know, can’t know.

Dave Marsay
