Interesting piece in *The Guardian* by Philip Ball in the Saturday *Critical Scientist* slot (replacing Ben Goldacre, and still worth reading): http://www.guardian.co.uk/commentisfree/2011/dec/23/critical-scientist-higgs-boson

He’s discussing the value of statistical analysis of the results in the Higgs boson and ‘faster-than-light neutrinos’ studies.

> In any experiment, all sorts of complications can influence results. So if you see something interesting, you need to make sure it’s not just a random fluctuation. That depends on how widely spread out your results are: the bigger the fluctuations, the more you’re apt to be misled by them. The spread is measured by a quantity called sigma. The bigger your “interesting” signal is relative to sigma, the more “statistically significant” it is: the more likely it is worth heeding.

In psychology, we use p, the probability of a result at least as extreme occurring by chance, rather than sigma, the number of standard deviations from the mean of a chance distribution, but the basic principle is the same.
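The two scales translate directly into one another, since both refer to the same normal ‘chance’ distribution. Here’s a minimal Python sketch of the conversion, using only the standard library:

```python
import math

def one_tailed_p(sigma):
    """One-tailed p-value for a signal `sigma` standard deviations
    above the mean of a normal (chance) distribution."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# Particle physics' five-sigma 'discovery' threshold, in p terms:
print(one_tailed_p(5))      # about 2.9e-7
# Psychology's conventional p < .05 sits below two sigma:
print(one_tailed_p(1.645))  # about 0.05
```

So physics’ five-sigma convention is a vastly stricter criterion than psychology’s customary p < .05, which corresponds to well under two sigma.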

> …these statistics don’t put numbers on the probability of a particular hypothesis being right or wrong, because experiments don’t care a hoot about your hypothesis. They just show the universe doing its thing.

> And to interpret what the universe just did requires that we take into account what we know already: as evidence changes, so do the degrees of belief we may hold in a theory. This is commonly called Bayesian reasoning, after the 18th-century mathematician Thomas Bayes.

Ball’s argument is that he’s pretty well prepared to accept Higgs boson results with low statistical significance, but even high levels of unlikeliness and statistical significance won’t be very convincing for the ‘faster than light’ results. The one result is in line with what we know about the universe: the other isn’t. As he says:

> You could put it crudely this way: the real question about the faster-than-light neutrinos experiment is not “what is the chance it disproves relativity?” but “what is the chance that it disproves relativity given that your GPS system (which relies on relativity) works?”
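A toy calculation makes the point concrete. Bayes’ rule combines the strength of the evidence with how plausible the hypothesis was beforehand; the priors and error rates below are made-up illustrative numbers, not anything from the actual experiments:

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: updated belief in a hypothesis after seeing evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# The same quality of evidence (a 1-in-10,000 chance of arising by fluke)
# applied to a plausible and a wildly implausible hypothesis:
plausible   = posterior(prior=0.5,  p_evidence_if_true=1.0, p_evidence_if_false=1e-4)
implausible = posterior(prior=1e-9, p_evidence_if_true=1.0, p_evidence_if_false=1e-4)
print(plausible)    # about 0.9999: near-certainty
print(implausible)  # about 1e-5: still very unlikely
```

Identical evidence, radically different conclusions: when the prior is tiny, even very ‘significant’ data leaves the hypothesis improbable. That is the scientific sense behind refusing to be moved by the sigmas alone.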

What’s that got to do with psychology and Schools of Thought? Well, it fits with the fact that many well-known effects in social psychology are demonstrated by a small number of classic experiments with rather low levels of statistical significance, and could fit in the category of comfortable myths. But they fit with other stuff we know, and probably with our non-scientific expectations as well, so it’s not that unreasonable to accept fairly low-grade evidence for them. On the other hand, as I said in the lecture about ‘unacceptable ideas’, there are bodies of research with some pretty impressive reported significance levels which I’m not going to believe in, whatever the stats: telekinesis and precognition, for instance. I presented my beliefs there as being in some way unscientific, but Ball (and Bayes) show how there’s scientific sense as well as prejudice in my judgement.

Relying too much on statistical significance is complicated because very unlikely things do happen by chance all the time. There’s a line in a Paul Simon song about that. After all, 14+ million to one is pretty long odds, but a 14m-to-1 chance comes off most weeks, when someone matches the lottery numbers and wins the jackpot. If people buy 20m+ tickets each week, that’s not surprising. Just knowing that people *do* win doesn’t make it any more likely that you will, but the fact that it seems to be vanishingly unlikely doesn’t make it any *less* real for those who do win.
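The arithmetic bears this out. Taking the classic 6-from-49 lottery odds and assuming, as above, around 20 million tickets sold in a week:

```python
p_jackpot = 1 / 13_983_816  # odds of matching six numbers in a 6-from-49 draw
tickets   = 20_000_000      # assumed weekly ticket sales

expected_winners = tickets * p_jackpot
p_at_least_one   = 1 - (1 - p_jackpot) ** tickets
print(expected_winners)  # about 1.43
print(p_at_least_one)    # about 0.76
```

With that many independent tries, a jackpot winner most weeks is exactly what chance predicts: an event that is astronomically unlikely for any one ticket is thoroughly unremarkable across all of them.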

So I’m with Ball when he says:

Which is why I’m only being scientific when I say screw the sigmas: I’d place a tenner (but not a ton) on the Higgs, while offering to join Jim Al-Khalili in eating my shorts if neutrinos defy relativity.
