
Category Archives: 1. Schools of Thought in Psychology

Chaos, Determinism, & Psychology

I’ve been rereading James Gleick’s excellent book Chaos (1988), and it started me thinking about the practical usefulness of a deterministic psychology.

Determinism in psychology has always been a personal problem for me, because it’s difficult to reconcile the rigid determinism that the science of psychology must lead to (‘varying factor X will result in effect Y’) with the feeling of free will and choice which is an everyday experience. As a scientist, I have to go with determinism; as an individual, I feel I have free will, and I regret the bad choices I continually make. OK, that’s an existential problem, but what about the practical usefulness of a deterministic psychology?

I think understanding chaotic systems and how they work gives us some ideas about this.

Here’s the creation myth of chaos theory: a meteorologist called Lorenz constructed a simple mathematical weather model in 1961 consisting of a dozen non-linear equations. These describe things like the relationship between temperature and atmospheric pressure, and pressure and windspeed. He fed data on these variables into a computer model and let it run to see what weather it would predict. In those days, computers were slow and calculations took a long time to run. On one occasion, he restarted the calculation that he had had to stop partway through by retyping in the figures that the incomplete run had produced.

To give the machine its initial conditions, he typed the numbers straight from the earlier printout. Then he walked down the hall to get away from the noise and drink coffee. When he returned an hour later, he saw something unexpected, something that planted the seed for a new science.

The new run should have exactly duplicated the old. Lorenz had copied the numbers into the machine himself. The program had not changed. Yet as he stared at the new printout, Lorenz saw his weather diverging so rapidly from the pattern of the last run that, within just a few months, all resemblance had disappeared. He looked at one set of numbers, then back at the other. He might as well have chosen two random numbers out of a hat. His first thought was that another vacuum tube had gone bad.

Suddenly he realised the truth. There had been no malfunction. The problem lay in the numbers he had typed. In the computer’s memory, six decimal places were stored: .506127. On the printout, to save space, just three appeared: .506. Lorenz had entered the shorter, rounded-off numbers, assuming that the difference – one part in a thousand – was inconsequential.
Gleick (1988), p16

But it wasn’t inconsequential. What Lorenz had discovered was that even a tiny change in the starting conditions of a process which depends on several non-linear functions can lead to unpredictable and far-reaching changes in final outcomes. This is what we now call the ‘Butterfly Effect’: a tiny change in weather conditions in one part of the world may lead to large unpredictable changes elsewhere. Because of this, it is now generally recognised that long-term weather prediction is practically impossible, no matter how sophisticated our computer models or how extensive and precise our measurements of the conditions are.
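You can see the effect Lorenz stumbled on without a weather model at all. The sketch below is my own illustration, not anything from Gleick’s book: it uses the logistic map – a standard one-line chaotic system – to compare two trajectories whose starting values differ only by the rounding Lorenz used, .506127 versus .506:

```python
# A minimal illustration of sensitive dependence on initial conditions.
# This is NOT Lorenz's twelve-equation weather model; the logistic map
# (x -> r*x*(1-x), with r = 4) is a standard one-line chaotic system
# that makes the same point.

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Lorenz's rounding: six decimal places vs three.
full = logistic_trajectory(0.506127, 60)
rounded = logistic_trajectory(0.506, 60)

# Compare the two runs at a few points along the way.
for step in (0, 10, 20, 30):
    print(step, round(full[step], 6), round(rounded[step], 6))
```

The two runs track each other closely for the first few steps, then part company: after a few dozen iterations they are as unrelated as two random numbers out of a hat, exactly the behaviour Lorenz saw on his printout.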

I think the same applies in psychology. Although we can describe some psychological functions in terms of how factor X leads to effect Y, those functions are generally non-linear. A trivial but obvious example is the effect of the amount of alcohol consumed on how good you feel. At low levels, increasing the amount consumed increases the sense of well-being in many people; at higher levels, increasing the amount consumed just leads to the resolution to never, ever, do this again.

Now, if the deterministic relationships which control our behaviour are non-linear, and we are complex systems in which many of these non-linear relationships interact, we are perfect examples of a chaotic system. As such, no matter how well we understand the relationships, nor how precisely we can measure (or control) the starting conditions, we cannot make long-term predictions of the outcomes.

Gleick sums this up later in the book in describing the views of psychiatrist Arnold Mandell:

To Mandell, the discoveries of chaos dictate a shift in clinical approaches to treating psychiatric disorders. By any objective measure, the modern business of ‘psychopharmacology’ – the use of drugs to treat everything from anxiety and insomnia to schizophrenia itself – has to be judged a failure. Few patients, if any, are cured. The most violent manifestations of mental illness can be controlled, but with what long-term consequences, no one knows. Mandell offered his colleagues a chilling assessment of the most commonly used drugs. Phenothiazines, prescribed for schizophrenia, make the fundamental disorder worse. Tricyclic antidepressants “increase the rate of mood cycling, leading to long-term increases in numbers of relapsing psychopathological episodes.” And so on. Only lithium has any real medical success, Mandell said, and only for some disorders.

As he saw it, the problem was conceptual. Traditional methods of treating this “most unstable, dynamic, infinite-dimensional machine” were linear and reductionist. “The underlying paradigm remains: one gene – one peptide – one enzyme – one neurotransmitter – one receptor – one animal behaviour – one clinical syndrome – one drug – one clinical rating scale. It dominates almost all research and treatment in psychopharmacology. More than 50 transmitters, thousands of cell types, complex electromagnetic phenomenology, and continuous instability-based autonomous activity at all levels, from proteins to the electroencephalogram – and still the brain is thought of as a chemical point-to-point switchboard.” To someone exposed to the world of non-linear dynamics the response could only be: how naïve. Mandell urged his colleagues to understand the flowing geometries that sustain complex systems like the mind.
Gleick (1988), pp 298-299 (Gleick gives a reference to Mandell’s original writing: I’ve put that at the end).

We might not be quite as pessimistic as Mandell about the effectiveness of psychopharmacology (though even 25 years later I’m not sure that much has changed), and his description of the models used is a bit of a caricature, but the basic point about the unpredictable, chaotic nature of the human system is surely valid.

So, even if it were the case that we were completely deterministic systems (like weather systems), and we could determine the relationships within those systems (which we are clearly a very long way from being able to do at the moment), would that give us a fully descriptive, fully predictive psychology?

Well, yes and no. We now know that long-term fine-grained meteorological prediction is impossible, but short-term local weather forecasts can still be very useful, even though we don’t expect them to be completely accurate. Similarly (until we started messing the climate around with ever-rising CO2 levels, at least), we can make reasonably reliable long-term general predictions. We know how April in Spain will generally differ from August in Spain, and how the weather there will generally differ from the weather in Finland at the same times of year. In many cases, that’s good enough to be going on with, but we are always aware of the possibility of ‘freak’, ‘unpredictable’ weather events.

Similarly, we can make pretty good short-term psychological predictions, certainly in terms of predicting the general outcome of experimental manipulations, and generally useful long-term predictions, based on the climatic differences between ‘introvert’ and ‘extrovert’, or convergent and divergent thinkers.

In fact, in a chaotic deterministic model, failures of prediction, such as the unpredictably extroverted behaviour of some introverts, or people’s ability to switch from convergent to divergent thinking in certain circumstances, might not be disconfirming evidence for our models. Some unpredictability is to be expected. As long as we limit predictions to the very short term or to generalities, and have some idea of the amount of unpredictability to be expected (which chaos theory can give us), our models may serve pretty well. That is, they can serve understanding of the processes involved, but may be much less useful for control or categorisation. Even in a fully deterministic world, the ‘gene for believing in flying saucers’ is not going to be simplistically effective, and the test for leadership potential is not going to unerringly detect potential leaders.

So where does this leave the effective usefulness of a completely deterministic psychology, and what does it mean for the existential problem of the possible illusion of free will? I think it shows that the aim of describing, understanding and controlling human behaviour through deterministic (and reductionist) models is over-optimistic. We can make some weather-forecaster-like predictions, but more holistic and phenomenological ways of understanding are going to be equally useful. I think the same applies to determinism and free will. It may be that all my thoughts, reactions, and behaviours are determined, but if so, since they are determined in a way which is unpredictable (and may be unfathomable), carrying on behaving as though I have free will and am responsible for the choices I make not only seems to work, but might be the most practical alternative. We are aware that we are to some extent determined; we have ideas of internal and external compulsion, but we also have ideas about ways of working with that, and to the extent that these ideas work, they are practically, humanly, useful – even if fundamentally illusory. This is the solution that that old determinist Skinner came to in his book Beyond Freedom and Dignity: although he felt that behaviour was determined by reinforcement contingencies, if we have the ability to understand and manipulate those contingencies, we can somehow choose to create better or worse worlds.

In some ways this is similar to the practical solution of the Cartesian problem, that we can never be sure that the world we experience is as it seems to be – that it is not an illusion produced by our senses. It could well be an illusion, but unless someone is offering us the red pill or the blue pill, there is no way of establishing that, and the only sensible thing we can do is to operate in the world as we experience it. What other world could we operate in? Also, we know that some parts of our world experience are illusory, and the understanding of that gives us a more secure basis for operating in good faith in other parts of the world.

Yes I know that’s simplistic, and ignores problems like the false consciousness associated with late-phase capitalism, but it works for me. Just as Samuel Johnson established the existence of the stone by kicking it*, my world of free will is established by the consequences of the good and bad choices I seem to be making, and the pleasure I experience in looking at the trees and birds which seem to be in front of me.


Gleick, James (1988) Chaos: Making a new science London: Cardinal

Mandell, Arnold J. (1985) From Molecular Biological Simplification to more Realistic Central Nervous System Dynamics: an Opinion. In Cavenar et al. (eds) Psychiatry: Psychobiological Foundations of Clinical Psychiatry New York: Lippincott (cited in Gleick, 1988)

Skinner, B.F. (1971) Beyond Freedom and Dignity New York: Knopf

*After we came out of the church, we stood talking for some time together of Bishop Berkeley’s ingenious sophistry to prove the non-existence of matter, and that every thing in the universe is merely ideal. I observed, that though we are satisfied his doctrine is not true, it is impossible to refute it. I never shall forget the alacrity with which Johnson answered, striking his foot with mighty force against a large stone, till he rebounded from it, “I refute it thus.”
— James Boswell, Life of Johnson (1820), Vol. 1, p. 218.

One of the foundation myths of modern psychology: “Brain Scans Show”

I’ve written about this before, but reading through Dorothy Bishop’s excellent BishopBlog, I came across a post of hers which made the points more clearly than I can:

Bishop also links to a post from Neuroskeptic, who makes similar points. Neuroskeptic’s argument is not as carefully organised as Bishop’s (and ends by dismissing the James-Lange theory of emotions as obviously rubbish, which isn’t really justified), but is pleasantly forceful.

Neuroskeptic also discusses the Bennett et al. (2009) ‘brain scan of emotion-judging activity in a dead fish’ study which Christina mentioned in her lecture. The original poster by Bennett et al. (it didn’t make it into a peer-reviewed journal, as far as I know) is available online.

Why do we believe these stories, and believe that brain scans are the royal road to an understanding of the unconscious (or at least a way of answering psychological questions)? I’ll try to explain in my next lecture.

Exam prep: how much further reading should you do?

Question on first year revision from a student:

I’m wondering if you could advise me on how much revision I should be doing. I am now using books to read over and make notes from in addition to the lecture notes. So how much more would you say is necessary for me to do in addition to revising lecture notes? I ask because I am finding some modules harder than others and perhaps would like to balance the revision evenly to give more priority to the modules I am struggling with.

That’s a good question (i.e. there isn’t a simple answer).
I’ve organised my ideas below in terms of what kind of mark each level of further reading might lead to at first year level.

Basic safe pass (50s, low/mid 60s)
I think the most important thing to go for is understanding  the basic lecture content. Two reasons for that:

  • Understanding (not just being able to repeat) the material is what we’re aiming at, so exam questions will be testing that.
  • We know from research in memory that people remember meanings better than specific details, that meaningful (i.e. understood) material is easier to remember than meaningless material, and that having a structure of understanding (a schema) makes it easier to remember new material which is related to that schema.

Now, we may be great at explaining things in lectures, so it’s always perfectly clear, and you may be great at making notes, so that you can always understand everything you’ve noted down afterwards – but I wouldn’t bet on either of those. So the first use of further reading is to read different accounts of the lecture material – in textbooks, websites, whatever. Different people will explain stuff in different ways, and the chances are that if you don’t understand one version clearly, another, different, version will work for you.
Once you’ve got a solid understanding of the main lecture material, you’re likely to be able to get a good mark in the exam (provided you can remember it in the exam and you use that understanding to actually answer the question).

Good pass (mid/high 60s)
But it’s worthwhile going further. (In the following two sections, I’ve guessed at what the basic lecture material was. If my ‘further’ examples actually were part of your basic content, then I hope you can think of equivalent examples.)
To start with, test your understanding and develop your schematic overview of the material.
For instance, if you know about the three-colour-receptor explanation of colour vision, what could you predict about different ways of being ‘colour-blind’? And why is it unlikely that people with anomalous colour vision simply see in shades of grey? Then go and read up on anomalous colour vision, and see if your guesses are confirmed. Again, from what we know about memory, it’s likely that information gained as a result of active exploration like this is retained better than stuff that’s more or less passively read.

Excellent performance (70+)
Then, pick up on any extensions or complications of the main lecture material. For example, our main account of brain activity is in terms of nerve cells communicating with each other, and all the other brain structures, like glial cells, are just there to support the neurons. But you’ve probably seen some hints that people are beginning to think these other cells are also important in brain activity. OK, see if you can find any stuff about that.
The disadvantage to this ‘going further’ approach is that you’ll find that the picture gets more complicated the further you go (all this stuff is very complicated: that’s why we start out with the simple, ‘mythical’ versions to get you started) – but if you have a good basic understanding, you should be able to build the complications into your model, rather than finding them too confusing.

To go back to the original question: “I ask because I am finding some modules harder than others and perhaps would like to balance the revision evenly to give more priority to the modules I am struggling with”. I think that should be your primary guideline. If ‘struggling with’ means ‘don’t really understand all of it’, then the most important thing to do is to read around, at a fairly basic level, until you’re happy that you do understand the basic stuff in all the modules. Once you’ve got that basis of confidence, then it’s time to go for some more detail (and more complication).

Comments (from students: ‘I don’t get this’; or other teachers: ‘no, you’re wrong, because…’) are welcome.

Does having thin friends give you anorexia? …and should there be government intervention about that?

This is a long post about something people might have heard me going on about before, but I think there are some useful points near the end – and a personal confession. If you don’t want to go through all my nit-picking about the research behind these headlines, just scroll down to where it says RANT STARTS HERE.

A story cropping up all over this week (but please read on, past these headlines – because I don’t think the headlines are at all justified):

The Guardian: Anorexia research finds government intervention justified: Economic analysis finds that banning very skinny models from catwalk and pictures from magazines may prevent ‘epidemic’

Vox: Research-based policy analysis and commentary from leading economists: When distorted self-image takes its toll: The effects on the health of European females, by Joan Costa-i-Font & Mireia Jofre-Bonet (authors of the original article)
Striving for the perfect body can take its toll, both physically and mentally. This column shows how excessive preoccupation with self-appearance can give rise to preventable eating disorders, such as anorexia and bulimia, among European females. It is time for policy action to shift people’s perceptions of their ideal body closer to what is healthiest.

The Age (an Australian newspaper): Skinny model ban ‘could curb anorexia’
Governments are justified in using the law to stop modelling agencies using very skinny women on catwalks and prevent magazines from printing photographs that suggest extreme thinness is attractive, according to research from the London School of Economics.

These are based on an upcoming paper in Economica. It’s not available online yet, though it looks as though Economica makes the current issue available free online, which is good, so you might be able to get to it in a few months. The authors link to CEP Discussion Paper No 1098 November 2011: Anorexia, Body Image and Peer Effects: Evidence from a Sample of European Women Joan Costa-Font and Mireia Jofre-Bonet in their Vox piece above, which looks as though it’s likely to be very similar to the forthcoming article.

The article is long and complicated and based on economic modelling, with lots of equations. I don’t understand the modelling process, and even if I did, I couldn’t follow the maths. So perhaps I shouldn’t comment, but I think I get the drift of the argument and the evidence – and how that relates, or doesn’t, to the headlines. I don’t think the article justifies the conclusions above, and I think it misses an important psychological point about eating disorders.

The research is a piece of economic modelling about the relative utility of health and body image, and how that might be influenced by various social and demographic factors, which comes to the conclusion:

Our results were consistent with the assumption that individuals trade off health against self-image.

Also, there’s a demonstration that ‘severe anorexia’ rates – defined by the number of women who had a very low BMI (body mass index, i.e. who were extremely thin), who saw themselves as being ‘fine’ or ‘too fat’, and who also thought they were eating adequately – are higher in those European countries where women have lower BMIs generally. I don’t think that’s a great definition of extreme anorexia. But OK then: what are their conclusions?

Also, in agreement with the epidemiological literature, we found that weight-related food disorders happen mostly at younger ages and require attention before they extend to older age groups. Note that the findings showed that anorexia primarily affected women aged between 15 and 34, and that it was primarily socially induced. These results have serious policy implications. They call for urgent action on individual identity, probably while it is still being formed, so as to prevent severe damage to women’s health and in order to improve their well-being and that of their families and friends.

Well, we sort-of knew about the younger-ages bit, and could have guessed that there are social influences (in accordance with a number of ‘you catch being fat from your friends & family’ findings), but does that really provide solid backing for the conclusions in the last two sentences?

Both the newspaper headline stories above talk about how this research supports a government ban on thin models. All I could find out about that in the article was the final paragraph:

In the light of this study, government intervention to adjust individual biases in self-image would be justified to curb or at least prevent the spread of a potential epidemic of food disorders. The distorted self-perception of women with food disorders and the importance or the peer effects may prompt governments to take action to influence role models and compensate for social pressure on women driving the trade-off between ideal weight and health. However, given the nature of the data and the absence of natural experiments we can’t prove our results as being causal and should be taken with caution.

Nothing about banning thin catwalk models (actually nothing about models at all) in the paper. The authors did try out a measure of exposure to inappropriate images by using subscription rates to ‘women’s magazines’ – and found it unrelated to anorexia rates. They comment:

The result of non-significance for the women’s magazine circulation per capita was quite puzzling as it was not consistent with some specific studies on the subject (Turner et al., 1997). This may be due to the crudeness of the country measure and the possibility that the categories are not comparable across countries; perhaps better quality data was required to measure the effect of environmental or media-related variables.

In other words, as good scientists, if we don’t find the results we wanted we presume it must be a problem with our measurements (this isn’t meant to be a snide criticism of the authors: that’s the way most people react to disconfirmation, really – and their measure was pretty crude). More importantly, it was NOTHING to do with skinny catwalk models.

Actually I think other measures in the paper seem pretty crude and/or inappropriate: for instance, their measure of health-consciousness was “the declared number of gynaecological screenings taken in the last 6 months.” The study uses a big general-purpose European dataset, so they have to use whatever measures were taken in compiling the dataset, rather than choosing appropriate measures – but it might be better not to force too much meaning into those measures.

The index of ‘severe anorexia’, as defined above, for women aged 15-34 varies a lot across European countries, from over 4% in Austria to 0.0% in Northern Ireland, what used to be West Germany, Greece, France and the Netherlands. What used to be East Germany (right next to West Germany) has a rate of 1.45%, and Ireland (right next to Northern Ireland) has 2.66%, so those are medium and high rates compared with other European countries. It’s a bit surprising that what you might think are closely related countries have such different rates of severe anorexia – though the mean BMIs of young women in those countries (the peer comparison measure) do go in the appropriate direction: higher in West Germany and Northern Ireland than in East Germany and Ireland.

So, I’m not convinced by the evidence, and it looks as though the ‘government should ban skinny models’ stuff just comes out of reporters’ fevered imaginations (or, more likely, the headline they’ve used several times before without thinking about it properly then, either). But on top of that….


Two things to rant about.

Yes, anorexia can be dreadful, both for those individuals who want to starve themselves and for those around them, but it’s not the important weight epidemic. Overeating and obesity are what kill many more people, and look to be becoming a bigger and bigger problem. So if underweight models really do encourage young women (and men) to eat less – bring them on. Starve them more: their sacrifice will be worth it for the good of the nation. When I walk down the street, I don’t see much evidence of the malign influence of skinny models; more the effect of cheap calories and low-effort transportation, and I’m sure that’s the case in the diabetes clinics, too. [Disclaimer: my BMI is around 30, so I could definitely do with some of that influence, if it worked.]


Whenever I read first-hand accounts of anorexia, what strikes me most are issues of control, not body dysmorphia or inappropriate models. A couple of recent, anecdotal, examples: Gok Wan, talking about his anorexia on TV last week: ‘I felt I couldn’t control anything in my life except what I put in my mouth, so I started to control that’.

Laurie Penny (identifying herself as a recovered anorexic) in the New Statesman, 5 March 2012 (this article may appear on her blog, which has other interesting stuff on it, though it’s not there as I write):

The most important thing to recognise about eating disorders is that starving, bingeing, purging and puking are not causes of distress, they are symptoms of it. The diseases are replete with contradictions, at once about denying hunger – for food, for rest, for fun, for sex, for freedom – while the sufferer [displays] a curious combination of aggression and compliance. Eating disorders are what happens when youthful rebellion cannibalises itself.

She compares anorexia with work-to-rule strikes:

Women, precarious workers, young people, and others for whom the stakes of social non-conformity are high, lash out by doing only what is required of them, to the point of extremity. Work hard; eat less; consume frantically; push yourself to the point of collapse.
We followed all the rules, sufferers seem to be saying – now look what you made us do.

Seems a more psychologically (and socially) sophisticated account to me – and suggests that even if we locked up all those skinny models, the problem won’t go away.

Final admission That’s my position, intellectually, but actually, deep down, I’m influenced by the skinny models too, and they’ve led me into dysmorphia. I would love to be able to put on a light-coloured linen suit and look like Bill Nighy or Dan Cruickshank, but when I look in the mirror, all I see is Sydney Greenstreet (Casablanca, The Maltese Falcon).

Why does Scottish Country Dancing (SCD) make you happy?

Michael Argyle, who I think is one of the unrecognised founders of the positive psychology movement, always used to maintain that, as well as being married, having a religion, and various other things, taking part in Scottish Country Dancing made people more likely to be happy. Argyle liked to play the part of the English Eccentric; he enjoyed Scottish country dancing, and he knew people would think it ridiculous and eccentric to propose it as a route to happiness.
But, actually, if you look at modern guidance about things that tend to promote well-being, as in the headings below, SCD does fit a lot of the criteria. I think Michael Argyle knew this, and that’s why he used to mention it – but I think it just made him happy, and he didn’t see why it shouldn’t work for others.
For Argyle, it was SCD, but many types of traditional social dancing fit this pattern, and have probably evolved for just this reason, just as many traditional board games are optimised to support flow.

Moderate levels of exercise With the option of making it more or less strenuous to fit your needs, without upsetting the rest of the group.

Mindfulness/alertness/awareness You have to concentrate on the patterns of the dance, fitting in with the music, matching your movements to your partner’s – and you swap partners as you go through most dances, so you have to be aware of and responsive to a number of people. You also have to keep track of where you are on the dance floor, and where others are – there should be a coherent pattern of movement within each set, and several sets often dance together in a space which is a bit too small, so you have to avoid collisions with people from other sets.

Sociality Needs a number of people, and is likely to be an organised occasion which puts pressure on you to go and be sociable, whether you feel like it or not. The structure of many dances ensures you look at and touch a number of other people, so if you came with a partner, you can’t ignore everyone else. If you don’t have a partner, whoever you start the dance with only has to put up with you for a small proportion of the dance, so people are fairly likely to agree to dance together, even if the prospective partner doesn’t look promising. On top of that, the dance structure requires certain numbers of couples, so there is social pressure on the unchoosing and unchosen to pair up and join in to make the dance possible.

Cooperativeness Obvious, to make the dance work, but also skilled dancers are motivated to help/tolerate/support unskilled dancers (especially ones who are uncertain about the figures) to enable the dance to proceed, and to make the experience satisfying for themselves.

Varying/developing skill levels, so encouraging flow Can be done by novices (simple patterns, support from others, you don’t have to get the steps right as long as you get the main movements right) but capable of developing a long way in precision, delicacy, vigour, etc. Experienced dancers will choose more complex dances and more subtle tunes.

Are the ideas of positive psychology an example of the fundamental attribution error?

Two short pieces in The Guardian on 22 Feb, by David Harper, reader in clinical psychology at the University of East London:

The sad truth about the Action for Happiness movement
Being happy isn’t only down to the individual

and Peter Stratton, professor of family therapy at Leeds University:

Wellbeing is not about the individual – it’s about relationships
We won’t cure anxiety and depression by ignoring people’s social connections

Both raise doubts about simple-minded ideas from positive psychology. Harper, criticising Lord Layard’s Action for Happiness initiative, suggests that there are problems with the idea that action for happiness should focus on the individual:

…the approach is based on two flawed assumptions: that the source of unhappiness lies inside people’s heads – in how they see the world, and that the solution lies in change at the level of the individual.

Surely being put in positions of threat, powerlessness, deprivation* is likely to cause unhappiness, he argues, which some people might be able to overcome, but it’s unreasonable to blame those who are made unhappy by such things as being lacking in ‘resilience’ and ‘well-being’.

A person’s ability to make changes in their lives depends not only on the individual but on their social context – whether they have supportive relationships, a reasonable income and so on. Unfortunately, we have a tendency to attribute a person’s behaviour to individual factors such as intelligence or moral strength, rather than their social context such as poverty or child abuse. This is such a common research finding that psychologists have a term for it: the fundamental attribution error.

Harper points out the well-known case made by Wilkinson & Pickett in The Spirit Level that “mental health problems are highest in those countries with the greatest gaps between rich and poor, and lowest in countries with smaller differences”. This doesn’t really contrast with the other well-known findings that national ‘happiness’ scores aren’t much related to national GDP (for instance Inglehart & Klingemann, 2000) – at least beyond a GDP per capita of about $13,000 in 1995 – and that US happiness didn’t increase noticeably between 1950 and 2000, although average buying power tripled over the period (Myers, 2000)†. One parallel of recent growth in wealth in both the UK and the USA is a considerable increase in inequality: could any positive effects of increases in income beyond $13,000 have been cancelled out by the increase in inequality?

Harper suggests that:

To increase happiness we need firm action on inequality, rather than this vague Action for Happiness.

Stratton is also criticising the individualistic focus of Cognitive Behaviour Therapy, the NHS treatment of choice for depression. If there’s

 a recognition that our problems of “social recession” are rooted in society’s undermining of our core human need for confirming and mutually supportive relationships….

[….] the things that matter are security, connectedness to others, authenticity and autonomy, and feeling competent. Can you imagine anyone achieving these without drawing strength and resources from family and other relationships? Can you draw from relationships without putting into them? Why, then, are we clinging to the notion that individually focused “cures” are what will turn us into a society of “happier” people?

Stratton quotes Madeleine Bunting’s Guardian article of 20 February 2012, ‘Britain is at last waking up to the politics of wellbeing’ (not available online), saying that our focus on the individual has left us with “an unpleasant cocktail of celebrity, cool, acquisitiveness and depression”.

Perhaps that means we should be thinking more about well-being as a collective social process: ‘positive sociology’ rather than ‘positive psychology’. This starts to sound dangerously like the ‘social engineering’ we’re all encouraged to be wary of ‡.

*For an extreme example, see the story posted by Marie Colvin from Syria this week – shortly before she was herself killed in Homs – and then wonder whether the stuff I’m talking about here really matters much.

† My well-being and happiness have definitely improved since I started working part-time and lost £20,000 or so in income, but I have the social support of the NTU choir (next performances 15 & 16 April, Albert Hall, Nottingham and Birmingham Town Hall: tickets available) – and I still have enough money to go to see Toumani Diabaté when he comes to the UK, so I’m in a privileged position.

‡ I’ve always been puzzled by the fear of social engineering. You wouldn’t cheer up airline passengers by saying ‘thank goodness, Boeing has avoided the temptation to apply aeronautical engineering to this 787 Dreamliner: I feel much safer now’ or decide that your new phone is rubbish because Nokia persist in building circuits which follow the principles of electronic engineering. If there is such a thing as society (and Thatcher was wrong), what’s wrong with trying to work out ways to make it go well? And aren’t cities, road numbering, schools (state and private), elections, and the rules of etiquette all forms of social engineering, anyway?

Inglehart, Ronald & Klingemann, Hans-Dieter (2000) Genes, Culture, Democracy and Happiness, in Ed Diener & Eunkook M. Suh (eds) Culture and Subjective Well-Being. Cambridge, MA: The MIT Press.

Myers, David (2000) The Funds, Friends, and Faith of Happy People. American Psychologist, 55(1), 56-67.

Was: Cognitive Psychology as the science of killing people; now: Neuroscience as the science of….

In this week’s lecture, I’ll present the case that the rise of cognitive psychology in the 50s and 60s, and then the development of computational models in psychology in the 80s, and cognitive neuroscience more recently, were heavily financed by the military, because they helped to provide the knowledge required to enable soldiers to operate increasingly complex weapons systems, and more recently to replace soldiers with smart weapons.

I admit that my view of the development of cognitive psychology may be biased because many years ago, as a hard-line pacifist, I refused to apply for an attractive post-doc research job (in visual search, the topic of my PhD thesis) because it was financed by the Navy – and maybe my career has been downhill ever since. I’m still a hard-line pacifist: show me a war and I’ll march against it (never seems to do much good)*.

But, every time I start thinking this is just an eccentric personal concern, something comes along which reminds me that psychological research is useful to the military, they do finance it, and it is something to be concerned about.

An example from 2008: ‘You really can smell fear, say scientists’, an article in The Guardian by James Randerson. Great study involving parachutists’ armpits and brain scanners, looking for a ‘fear pheromone’ (psychologists know how to have fun). And the fourth paragraph reads:

The research was funded by the US Defence Advanced Research Projects Agency – the Pentagon’s military research wing – raising speculation that it is a first step to isolating the fear pheromone for use in warfare, perhaps to induce terror in enemy troops. But DARPA denied that it had any military plans for fear pheromones or plans to fund further research into the field.

I was preparing this year’s lecture, and thinking that example was a bit dated, when along came (7 February 2012) ‘Rise of the man-machines: how troops could plug their brains into weapons’, by Ian Sample in The Guardian. That’s an over-sensationalist title – like most such titles, it should have a compulsory ‘sometime, maybe’ added at the end – but it’s a serious article about a just-released report by the (UK) Royal Society which “considers some of the potential military and law enforcement applications arising from key advances in neuroscience”. Both the intro to the report and the full report are available on the Royal Society’s website.

From The Guardian article:

The authors argue that while hostile uses of neuroscience and related technologies are ever more likely, scientists remain almost oblivious to the dual uses of their research.

The article quotes Vince Clark, a US researcher who is using transcranial direct current stimulation to enable soldiers to spot targets more quickly, as saying:

As a scientist I dislike that someone might be hurt by my work. I want to reduce suffering, to make the world a better place, but there are people in the world with different intentions, and I don’t know how to deal with that.
If I stop my work, the people who might be helped won’t be helped. Almost any technology has a defence application.

Clark’s work is also potentially useful for dementia sufferers, so I hope he makes a lot of progress in time for it to be useful to me, but still…. (Actually, another article by Sample the same day points out “How dementia drugs could be used by the military”.)

Both the article and Royal Society report are fascinating reading, but I was struck that the Royal Society’s first recommendation for the scientific community is:

There needs to be fresh effort by the appropriate professional bodies to inculcate the awareness of the dual-use challenge (i.e., knowledge and technologies used for beneficial purposes can also be misused for harmful purposes) amongst neuroscientists at an early stage of their training.

So, that’s what I’m doing in my lecture (and here). All you early-stage neuroscientists, think about this. Just saying.

* Bring home our boys from Iran. I’d like to claim you read it here first, but Mad Magazine got there before me.

Good luck with that, Vince. You, me, and most Miss World contestants, they say.

There’s more to scientific judgement than statistical significance

Interesting piece in The Guardian by Philip Ball in the Saturday Critical Scientist slot (replacing Ben Goldacre, and still worth reading).

He’s discussing the value of statistical analysis of the results in the Higgs Boson and ‘faster-than-light neutrinos’ studies.

In any experiment, all sorts of complications can influence results. So if you see something interesting, you need to make sure it’s not just a random fluctuation. That depends on how widely spread out your results are: the bigger the fluctuations, the more you’re apt to be misled by them. The spread is measured by a quantity called sigma. The bigger your “interesting” signal is relative to sigma, the more “statistically significant” it is: the more likely it is worth heeding.

In psychology, we use p to express the likelihood of a result occurring by chance, rather than sigma, the number of standard deviations from the mean of a chance distribution, but the basic principle is the same.
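The two conventions are directly interconvertible. As a quick sketch (my own illustration, not from Ball’s piece), the two-tailed p corresponding to a result sigma standard deviations from the mean of a normal chance distribution can be computed with the complementary error function:

```python
from math import erfc, sqrt

def sigma_to_p(sigma):
    """Two-tailed p-value for a result `sigma` standard deviations
    from the mean of a normal chance distribution."""
    return erfc(sigma / sqrt(2))

# Psychology's conventional p < .05 corresponds to roughly 2 sigma...
print(sigma_to_p(2))   # ~0.0455
# ...while particle physics' 5-sigma discovery threshold is far stricter.
print(sigma_to_p(5))   # ~5.7e-07
```

So the physicists’ 5-sigma criterion is several orders of magnitude stricter than the p < .05 convention, which sits at about two sigma.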

…these statistics don’t put numbers on the probability of a particular hypothesis being right or wrong, because experiments don’t care a hoot about your hypothesis. They just show the universe doing its thing.
And to interpret what the universe just did requires that we take into account what we know already: as evidence changes, so do the degrees of belief we may hold in a theory. This is commonly called Bayesian reasoning, after the 18th-century mathematician Thomas Bayes.

Ball’s argument is that he’s pretty well prepared to accept Higgs Boson results with low statistical significance, but even high levels of unlikeliness and statistical significance won’t be very convincing for the ‘faster than light’ results. The one result is in line with what we know about the universe: the other isn’t. As he says:

You could put it crudely this way: the real question about the faster-than-light neutrinos experiment is not “what is the chance it disproves relativity?” but “what is the chance that it disproves relativity given that your GPS system (which relies on relativity) works?”

What’s that got to do with psychology and Schools of Thought? Well, it fits with the fact that many well-known effects in social psychology are demonstrated by a small number of classic experiments with rather low levels of statistical significance, and could fit in the category of comfortable myths – but they fit with other stuff we know, and probably with our non-scientific expectations as well. So it’s not that unreasonable to accept that fairly low-grade evidence. On the other hand, as I said in the lecture about ‘unacceptable ideas’, there are bodies of research with some pretty impressive reported significance levels which I’m not going to believe in, whatever the stats: telekinesis and precognition, for instance. I presented my beliefs there as being in some way unscientific, but Ball (and Bayes) show how there’s scientific sense as well as prejudice in my judgement.
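Ball’s Bayesian point can be put numerically. In this toy sketch (the numbers are my own and purely illustrative), the same strength of evidence produces very different posterior beliefs depending on the prior plausibility of the hypothesis:

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior probability of a hypothesis,
    given its prior probability and the likelihood ratio of the evidence."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

lr = 370  # illustrative: data 370x likelier under the hypothesis than under chance

# A plausible hypothesis (Higgs-like): modest prior, and the evidence convinces.
print(posterior(0.5, lr))    # ~0.997

# An implausible one (faster-than-light neutrinos): same evidence, still very unlikely.
print(posterior(1e-6, lr))   # ~0.00037
```

The identical experimental result leaves the implausible hypothesis implausible – which is exactly why the sigmas alone don’t settle the matter.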

Relying too much on statistical significance is complicated because very unlikely things do happen by chance all the time. There’s a line in a Paul Simon song about that. After all, 14+ million to one is pretty long odds, but a 14m:1 chance comes off most weeks, when someone matches the lottery numbers and wins the jackpot. If people buy 20m+ tickets each week, that’s not surprising. Just knowing that people do win doesn’t make it any more likely that you will, but the fact that it seems to be vanishingly unlikely doesn’t make it any less real for those who do win.
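The lottery arithmetic is easy to check (treating each ticket as an independent random pick, and using the rough figures above):

```python
# Chance that at least one of n independent tickets hits a 1-in-14,000,000 draw.
p_win = 1 / 14_000_000
n_tickets = 20_000_000

p_someone_wins = 1 - (1 - p_win) ** n_tickets
print(p_someone_wins)   # ~0.76: a jackpot winner most weeks

# Your own single ticket remains vanishingly unlikely to win.
print(p_win)            # ~7.1e-08
```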

So I’m with Ball when he says:

Which is why I’m only being scientific when I say screw the sigmas: I’d place a tenner (but not a ton) on the Higgs, while offering to join Jim Al-Khalili in eating my shorts if neutrinos defy relativity.

Abuse and changes in adolescents’ and children’s brains

Two studies reported recently on changes in the brains of adolescents and children who have suffered abuse. Despite my prejudice against ‘we’ve found some kind of brain activity, so that explains everything’ research, this does look interesting, and maybe meaningful.
First, ‘past abuse leads to loss of gray matter in the brains of adolescents’, reported in both Medical News Today and PsyPost (you probably don’t need both: they say very much the same things, being lifted from the same Yale University press release). The study was on ‘forty-two adolescents without psychiatric diagnoses’. Hilary Blumberg, one of the authors, has published quite a bit on brain changes in people with bipolar disorder (and so is looking for Szasz’ ‘bad brains’: for all the criticism there is of strictly medical models of mental illness, it’s quite possible that some problems do have physical origins or physical accompaniments).

The brain areas impacted by maltreatment may differ between boys and girls, may depend on whether the youths had been exposed to abuse or neglect, and may be linked to whether the neglect was physical or emotional.
[…]The reduction of gray matter was seen in prefrontal areas, no matter whether the adolescent had been physically abused or emotionally neglected. However, in other areas of the brain the reductions depended upon the type of maltreatment the youth had experienced. For example, emotional neglect was associated with decreases in areas that regulate emotions.
The researchers also found gender differences in patterns of gray matter decreases. In boys, the reduction tended to be concentrated in areas of the brain associated with impulse control or substance abuse. In girls, the reduction seemed to be in areas of the brain linked to depression.

The original paper is Edmiston, E.E., Wang, F., Mazure, C.M., Guiney, J., Sinha, R., Mayes, L.C. & Blumberg, H.P. (2011) Corticostriatal-Limbic Gray Matter Morphology in Adolescents With Self-reported Exposure to Childhood Maltreatment. Arch Pediatr Adolesc Med., 165(12), 1069-1077. The abstract is available online.

Blumberg points out that adolescents’ brains are still pretty malleable, so these changes may not have long-term significance.

Here’s another related finding:

When children have been exposed to family violence, their brains become increasingly “tuned” for processing possible sources of threat, a new study reports. The findings, reported in the December 6th issue of Current Biology, a Cell Press publication, reveal the same pattern of brain activity in these children as seen previously in soldiers exposed to combat.
The study is the first to apply functional brain imaging to explore the impact of physical abuse or domestic violence on the emotional development of children, according to the researchers.
“Enhanced reactivity to a biologically salient threat cue such as anger may represent an adaptive response for these children in the short-term, helping keep them out of danger,” said Eamon McCrory of University College London. “However, it may also constitute an underlying neurobiological risk factor increasing their vulnerability to later mental health problems, and particularly anxiety.

The stimuli used were pictures of angry, neutral and sad women’s faces. The heightened response was shown to angry faces, but not sad faces. The children had been ‘exposed to documented violence in home’ and were matched with controls. In the .pdf version, I can’t see any information about the age of the children, but there were 20 in the experimental sample.
The reference is McCrory, De Brito, Sebastian, Mechelli, Bird, Kelly and Viding (2011) Heightened neural reactivity to threat in child victims of family violence. Current Biology, 21(23), R947-R948. The full article is available online.

Again, this looks as though it might be saying something useful, though the ‘long-term’ claims would maybe depend on plasticity again.

Both news releases on PsyPost have the same old useless ‘brain’ picture on them.

British Psychological Society rubbishes DSM-5, backs Miller

The British Psychological Society* has posted a response to the American Psychiatric Association’s (APA) invitation to comment on the development of the DSM-5 (the latest revision of the Diagnostic and Statistical Manual of Mental Disorders).

There are two big points here:

  • The BPS is effectively trashing the whole idea of a multi-category medical classification of mental health problems.
  • What the BPS say brilliantly backs up what I said in this Tuesday’s ‘Ways of Being Mad’ lecture – and no, I hadn’t read their response before doing the lecture.

The BPS group that reviewed the DSM-5 proposals did include Richard Bentall, who I highlighted in the lecture as throwing doubt on the whole Kraepelinian schizophrenia/bipolar model, so perhaps it’s not surprising that they were doubtful about the DSM’s systematic classification, but they go further than that.

Their central statement (repeated over and over in their analysis of specific categories):

We believe that classifying these problems as ‘illnesses’ misses the relational context of problems and the undeniable social causation of many such problems. For psychologists, our well-being and mental health stem from our frameworks of understanding of the world, frameworks which are themselves the product of the experiences and learning through our lives.(emphasis added)

I’ve ended up quoting the BPS document very extensively, because they do make a lot of (what I think are) good points.
First of all, they’re speaking up for a more psychological and less medical approach:

The Society is concerned that clients and the general public are negatively affected by the continued and continuous medicalisation of their natural and normal responses to their experiences; responses which undoubtedly have distressing consequences which demand helping responses, but which do not reflect illnesses so much as normal individual variation.

They point out, that although the overall model is medical, the criteria that are used aren’t medical/biological ones:

The putative diagnoses presented in DSM-V are clearly based largely on social norms, with ‘symptoms’ that all rely on subjective judgements, with little confirmatory physical ‘signs’ or evidence of biological causation. The criteria are not value-free, but rather reflect current normative social expectations. Many researchers have pointed out that psychiatric diagnoses are plagued by problems of reliability, validity, prognostic value, and co-morbidity.

This echoes Szasz’s (admittedly simplistic and over-the-top) distinction between ‘bad brains’ (illness) and ‘bad behaviours’ (expressions of problems in living). I think you can see Szasz’ (‘there is no such thing as mental illness’) and Laing’s (‘schizophrenia is a way of trying to cope with unbearable social relationships’) criticisms as extreme extensions of the BPS position, and not quite as weird and irrational as they appear at first.

The BPS response is politely phrased, but the underlying message seems to me to be ‘this is a load of rubbish, and is based on a mistaken, and over-inclusive, idea of mental illness’. The model the APA is using fits well with the ‘Neo-Kraepelinian Manifesto’ that I showed you in the lecture.
The BPS ends up by saying:

Diagnostic systems such as these therefore fall short of the criteria for legitimate medical diagnoses. They certainly identify troubling or troubled people, but do not meet the criteria for categorisation demanded for a field of science or medicine (with a very few exceptions such as dementia.) We are also concerned that systems such as this are based on identifying problems as located within individuals. This misses the relational context of problems and the undeniable social causation of many such problems. For psychologists, our wellbeing and mental health stem from our frameworks of understanding of the world, frameworks which are themselves the product of the experiences and learning through our lives.

The Society recommends a revision of the way mental distress is thought about, starting with recognition of the overwhelming evidence that it is on a spectrum with ‘normal’ experience, and that psychosocial factors such as poverty, unemployment and trauma are the most strongly-evidenced causal factors. Rather than applying preordained diagnostic categories to clinical populations, we believe that any classification system should begin from the bottom up – starting with specific experiences, problems or ‘symptoms’ or ‘complaints’. Statistical analyses of problems from community samples show that they do not map onto past or current categories (Mirowsky, 1990, Mirowsky & Ross, 2003). We would like to see the base unit of measurement as specific problems (e.g. hearing voices, feelings of anxiety etc). These would be more helpful too in terms of epidemiology.

I think it’s unlikely that the BPS views will affect the new version of the DSM much. As I said in the lecture, psychiatric care is an industry and, particularly in the US, the insurance industry is a large and powerful part of the system, and such industries need to be able to apply industrial standards, which is what the DSM does.

Read on if you want a bit more detail about the BPS response (and how it fits with the lecture):

They point out the overall lack of specificity of the apparently very specific diagnostic approach used in the DSM:

Finally, disorders categorised as ‘not otherwise specified’ are huge (running at 30% of all personality disorder diagnoses for example).

They make Bentall’s point that the basic categorisation system doesn’t hold up well:

Since – for example – two people with a diagnosis of ‘schizophrenia’ or ‘personality disorder’ may possess no two symptoms in common, it is difficult to see what communicative benefit is served by using these diagnoses. We believe that a description of a person’s real problems would suffice. Moncrieff and others have shown that diagnostic labels are less useful than a description of a person’s problems for predicting treatment response, so again diagnoses seem positively unhelpful compared to the alternatives.

They point out that ‘showing some symptoms’ isn’t really an indication of mental illness, but needs to be taken in context:

Personality disorder and psychoses are particularly troublesome as they are not adequately normed on the general population, where community surveys regularly report much higher prevalence and incidence than would be expected. This problem – as well as threatening the validity of the approach – has significant implications. If community samples show high levels of ‘prevalence’, social factors are minimised, and the continuum with normality is ignored. Then many of the people who describe normal forms of distress like feeling bereaved after three months, or traumatised by military conflict for more than a month, will meet diagnostic criteria.

This fits with the (brief) discussion in the lecture of statements like “1 in 4 British adults experience at least one diagnosable mental health problem in any one year, and one in six experiences this at any given time.” Perhaps all this means is that lots of people feel depressed/anxious/confused/persecuted from time to time, and there’s often a good reason for that, and these are not necessarily ‘symptoms of mental illness’ in themselves, even though people who do have serious mental problems may show the same responses.

They’re also dubious about the vagueness and ‘borderline-ness’ of some categories:

In this context, we have significant concerns over consideration of inclusion of both “at-risk mental state” (prodrome) and “attenuated psychosis syndrome”. We recognise that the first proposal has now been dropped – and we welcome this. But the concept of “attenuated psychosis system” appears very worrying; it could be seen as an opportunity to stigmatize eccentric people, and to lower the threshold for achieving a diagnosis of psychosis.

This parallels the point I made in the lecture about Bleuler’s (1911) worryingly general category of ‘latent schizophrenia’:

“There is also a latent schizophrenia, and I am convinced that it is the most frequent form, although admittedly these people hardly ever come for treatment. It is not necessary to give a detailed description of the various manifestations of latent schizophrenia… irritable, odd, moody, withdrawn or exaggeratingly punctual people arouse, among other things, the suspicion of being schizophrenic.” Bleuler, 1911, quoted by Bentall in Understanding Madness

“irritable, odd, moody, withdrawn or exaggeratingly punctual”? Yes, that’s me.

They’re particularly concerned about ADHD diagnoses and medication with children:

We have particular concerns about the inclusion of Attention Deficit/Hyperactivity Disorder in this categorisation. Many of the concerns about the scientific validity and utility of diagnoses per se (articulated above) apply to ADHD. We are very concerned at the increasing use of this diagnosis and of the increasing use of medication for children, and would be very concerned to see these increase further.

*I have often rubbished the BPS in the past, and compared it unfavourably with the American Psychological Association, which I think is more socially aware and constructively self-critical than the BPS (one of the reasons I recommend so many articles from American Psychologist), but maybe I should apologise: I think this is well done.