
Monthly Archives: October 2011

Conscientiousness and curiosity contribute to academic success (who knew?)

Doing the work and being prepared to be interested in it is as important to academic success as being clever, research shows

This post is mainly a long quote from a press release about a paper in Perspectives on Psychological Science, based on “a meta-analysis, gathering the data from about 200 studies with a total of about 50,000 students.” It’s one of those cases where loads of psychological effort goes into telling you stuff you knew perfectly well anyway – but it’s always good to get some evidence.

Note that traits like ‘conscientiousness’ and ‘curiosity’ are related to/part of the Big Five personality model, and one view is that traits like these are pretty static – you’re born conscientious or open to experience, or not, and that’s all there is to it. Others think it’s much more dynamic – these traits can develop out of intention and experience. Either way, if you want to be a successful student, it’s worth developing/using/faking your conscientiousness and curiosity.

Intelligence is important to academic performance, but it’s not the whole story. Everyone knows a brilliant kid who failed school, or someone with mediocre smarts who made up for it with hard work. So psychological scientists have started looking at factors other than intelligence that make some students do better than others.

One of those is conscientiousness – basically, the inclination to go to class and do your homework. People who score high on this personality trait tend to do well in school. “It’s not a huge surprise if you think of it, that hard work would be a predictor of academic performance,” says Sophie von Stumm of the University of Edinburgh in the UK. She co-wrote the new paper with Benedikt Hell of the University of Applied Sciences Northwestern Switzerland and Tomas Chamorro-Premuzic of Goldsmiths, University of London.

Sophie von Stumm and her coauthors wondered if curiosity might be another important factor. “Curiosity is basically a hunger for exploration,” von Stumm says. “If you’re intellectually curious, you’ll go home, you’ll read the books. If you’re perceptually curious, you might go traveling to foreign countries and try different foods.” Both of these, she thought, could help you do better in school.

The researchers performed a meta-analysis, gathering the data from about 200 studies with a total of about 50,000 students. They found that curiosity did, indeed, influence academic performance. In fact, it had quite a large effect, about the same as conscientiousness. When put together, conscientiousness and curiosity had as big an effect on performance as intelligence.

I couldn’t find the original paper online (the Medical News Today version of the press release doesn’t give details, and it may still be in press), but here’s the web page of one of the authors, which references more of her work.

Unacceptable ideas?

From the science correspondent of The Guardian: Sally Morgan challenged to prove her psychic powers on Halloween
Sceptics have invited Sally Morgan to demonstrate her ability to communicate with the dead in a specially designed test

I was talking in Tuesday’s lecture about how psychic powers were (for me) an Unacceptable Idea in psychology – and a story about scientists testing psychic powers crops up within a week! Is this spooky? No. It’s a coincidence, but our, very sensible, tendency to be on the watch for patterns and connections leads us into seeing connections in chance occurrences – just like seeing shapes in clouds (“very like a whale”*).
But it is interesting.  I was too general in talking about psychic powers in the lecture. Here are some different categories:

Communicating with the dead: that’s the skill that’s being tested here. That’s an unacceptable idea to me, because it challenges too much about my ideas about life and consciousness. When Garry talks to you about the mind/brain problem ask him what relevance evidence about being able to talk to the dead would have.
Reading other people’s minds: OK, I’d go along with this, if I could see a scientifically acceptable mechanism for it – but see later.
Moving things with the power of your mind: as I said in the lecture, this violates too much of what I understand about physics and causation to be acceptable to me. It would also mean that no physically-based experiment would be reliable – someone could be reaching in with their mind and moving things around. Having said that, for quite a long time it’s been possible to monitor brain activity and use that (via non-psychic mechanisms, like switches and levers) to change or move things in the world. Back when I was an undergraduate, W. Grey Walter was able to get people to learn how to turn a light on and off by thinking about it – or to be more precise, he was able to pick up a specific EEG pattern that someone could learn to produce, and use that as a signal for a mechanism which operated the light. There are really exciting developments in that area now, which could enable people with disabilities to operate a wider range of aids, or even regain control of paralysed limbs. That would look like magic/psychic powers, but “any sufficiently advanced technology is indistinguishable from magic”.

This suggests another reason for some ideas being unacceptable – accepting them would mean rearranging too much of your current understanding of the world.

The ‘challenge’ being set to Sally Morgan seems pretty straightforward:

In the challenge, Morgan will be shown photographs of 10 deceased women and asked to match each to an entry on a list of their first names, by connecting with their spirits. Singh said the test was expected to last 20 minutes. To pass, Morgan will be required to match seven or more names to the right photographs.

The test was designed by “Professor Chris French, head of the anomalistic psychology research unit at Goldsmiths, University of London”. The two problems are assessing the level of accuracy required to be convincing (French wants seven out of ten or better), which is basic stats/probability, and ruling out trickery, which can be very difficult. Scientists aren’t very good at dealing with cheats, because we expect the world to be fairly regular and lawful – and consistent. I used to do a classroom demo of mindreading with a colleague (who was the real magician: I was just the attractive assistant), in which we invited the class to hypothesise how we did it and set up simple experiments to test their hypotheses. We could fool them all the time, because we had three, different, alternative methods. They often guessed accurately that we were using method A and devised a way of preventing that. We just switched to method B so their method A hypothesis seemed to fail – so they ruled that out, and went on to test their hypothesis about method B. We just switched back to method A, and fooled them.
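On the ‘basic stats/probability’ side, here’s a rough sketch. If we assume Morgan simply guesses each of the 10 photos independently, with a 1-in-10 chance of picking the right first name each time (an assumption on my part – if each name on the list can be used only once, the sums change slightly, but not the conclusion), the chance of hitting the pass mark by luck alone is tiny:

```python
from math import comb

def p_at_least(k_min, n=10, p=0.1):
    """Probability of getting k_min or more matches right out of n,
    if each guess independently has probability p of being correct."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# Pass mark for the test: seven or more correct out of ten.
chance = p_at_least(7)
# This comes out at well under one in ten thousand, so passing
# by blind guessing is effectively ruled out.
```

That’s why seven out of ten is a sensible criterion: it’s generous to a genuine psychic, but far beyond what chance could plausibly deliver.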
French says, in the online version of the Guardian story:

With the right controls in place, we can perform an experiment where anyone who is deluded or who wants to cheat would find it very hard to be successful, but someone with genuine psychic ability, as Sally claims to have every night in her sold-out shows, should find the whole thing a breeze.

Good luck, Chris.

By the way, one of the ‘important questions in psychology’ from session 2 was ‘what is Derren Brown?’ My answer in class was: ‘he’s an entertainer’, which is a bit unfair to Brown. He knows a lot of psychology, and is very skilled and inventive in applying that knowledge (along with a lot of other skills) in his act. One of his demonstrations (reading people’s characters) is a straight re-run of a classic psychology experiment (Forer 1949). Works great.
Not surprisingly, Brown was asked by The Guardian to comment on the Sally Morgan test. Here’s what he said in the online article:

It’s important people don’t think that a test is a way of debunking or disproving. It’s a great way of anyone making amazing claims to show that they hold up and are not just a result of trickery or self-deception. The test should be both scientifically rigorous and yet fair to the psychic: it would show, if the psychic is successful, that what he or she does is real.

Such tests are important because it’s too easy for a person to fool others (or themselves) into thinking he or she has special abilities. If someone is going to put you in touch with your dead child you’d want to know if they were real, deluded or a scam artist.

The print version left out the first paragraph, which actually changes the meaning for me – Brown seems a bit less dismissive of the possibility of Morgan actually being able to do this in the fuller version.

A Guardian request to Morgan for comment on the challenge was passed to her lawyers, who did not respond.

Forer, B. R. (1949). The fallacy of personal validation: A classroom demonstration of gullibility. Journal of Abnormal and Social Psychology, 44, 118-123.

* Harmless time-waster: find the source of the quotations

BPA and kids’ brains: Small scale psych controversy supports NTU lecturer’s points brilliantly

Here’s an example of the kind of thing Christina was talking about in Tuesday’s lecture – only worse, really.
The link here is to a post by John Grohol on the PsychCentral blog.
It’s about a paper in Pediatrics (a peer-reviewed journal) about a study on whether pre-natal exposure to a possibly damaging chemical, BPA (used in plastics production, I think), affects hyperactivity and aggression in 2-year-olds. The headline for the paper on PsychCentral (probably based on a press release from the journal or university) says it does:
BPA Prenatal Exposure Linked to Behavioral Problems in Kids
– then the first paragraph says it doesn’t, really:

New research suggests fetal exposure to a chemical used to make plastic containers and other consumer goods called BPA is associated with a slight but nonsignificant increase in behavioral and emotional problems in young girls.

‘slight but nonsignificant’ means that we can’t be confident that the difference didn’t arise by chance, and as scientists, we shouldn’t find that evidence convincing (though strictly, a non-significant result fails to reject the null hypothesis, rather than proving it true). So this is really an example of the kind of negative result Christina was talking about.
So, how to deal with it? It could be stuffed away in the bottom drawer, though it would be worth publishing as a negative finding:
“We know that BPA has been shown to have neurotoxic effects elsewhere, but it doesn’t look as though moderate levels of prenatal exposure have much effect on hyperactivity and aggression in toddlers, so that’s one thing less to worry about.”
– no! Much better to publish it as a positive finding, even though it isn’t: ‘Deadly chemical poisons our kids’ brains!’
That gets round Christina’s negative result problem nicely*.
If you look at the detail Grohol gives further down the blog, it actually gets a bit worse. Grohol points out that the accepted level of difference for significance (established in the norming of the original scales) on the scores for the scales the researchers used is 10 points, and only two of the published 40 differences (for different age groups, boys and girls, etc) reach or exceed that level. Grohol comments:

Here’s a study that looked at a total of 44 variables (when you count the analysis of gestational BPA versus childhood BPA levels) and found significance in only 2 of them.
To me, that’s an interesting correlation.

Hang on: if we’re talking about significant at the 0.05 level, that means we would expect a result like this to arise by chance in one out of 20 trials. So, we find two results significant at this level in 44 trials? Isn’t that really very close to what we’d expect to find by chance? Don’t we teach you that if you do lots and lots of comparisons, you have to allow for the odd apparently significant result which WILL pop up, just by chance? Anyone who thinks it’s positive evidence is a statistical ignoramus. Here’s a recipe for scientific success, kids – do lots and lots of comparisons: some of them are bound to be significant – and statistical significance is the only thing that counts, right? (Actually, wrong – but maybe that’s another blog post.)
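To make the arithmetic concrete (a sketch that treats the 44 comparisons as independent, which real subscale scores won’t quite be):

```python
alpha, n_tests = 0.05, 44

# Expected number of 'significant' results if every null hypothesis is true:
expected_false_positives = n_tests * alpha  # 44 * 0.05 = 2.2

# Chance of seeing two or more 'significant' results purely by chance,
# from the binomial distribution:
p_none = (1 - alpha) ** n_tests
p_one = n_tests * alpha * (1 - alpha) ** (n_tests - 1)
p_two_or_more = 1 - p_none - p_one
# p_two_or_more is roughly 0.65 - so finding exactly two 'hits' in 44
# comparisons is just about what chance alone predicts.
```

In other words, two significant results out of 44 isn’t evidence of an effect; it’s almost exactly the false-positive count the 0.05 criterion builds in.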
Grohol comments:

It seems like a month doesn’t go by when this journal is publishing more crappy science, and then draping it in a public relations campaign that gets everyone’s attention. (Actually, to be fair, the science is sometimes fine; it’s the over-reaching conclusions drawn by the researchers and the PR media machine that is truly vomit-inducing.)

I think that’s a good summary (though ‘vomit-inducing’ is both a bit strong and wimpy: this stuff doesn’t make me want to throw up: it makes me want to put my fist through the computer screen). Always suspend judgement on the headline, until you’ve read down to the 27th paragraph – the truth is often down there in the details. Better still, look at the original paper, if you can. In this case, it’s:
Braun, Kalkbrenner, Calafat, Yolton, Ye, Dietrich & Lanphear (2011). Impact of Early-Life Bisphenol A Exposure on Behavior and Executive Function in Children. Pediatrics.

*Actually, my analysis above is over-simplified. The results do look non-significant and unconvincing, but they are (mostly) in a negative direction. So we might say ‘it looks a bit as though there might be a negative effect, but we can’t be (scientifically) sure about that’. But hang on: these are our children’s lives we’re talking about! Do you mean that there’s even a slight risk that exposure to BPA would lead my child to grow up to be a London rioter or to be like Paris Hilton? Shouldn’t we think about banning it right away, just to be safe? (Review Christina’s points about large and small effects here.) Maybe this study is at least a basis for further research after all.
…but it’s also politically complicated. Let’s say someone, somewhere, tried to make a court case about BPA and psychological damage. This might be energetically opposed by companies who find using BPA in their products convenient or profitable, or who might be liable for damages (this has happened with asbestos and tobacco). So the plastics company calls an expert witness:

Counsel: One of the pieces of evidence the opposition has produced is the paper by Braun & al, which shows that two results out of 44 apparently showed a significant negative effect of my client’s product. Dr Miller, how would you comment on such an interpretation?
Miller: Errr, it’s not very convincing.
Counsel: Didn’t you write, in 2011, that anyone who thought that such a result was convincing was a ‘statistical ignoramus’?
Miller: OK, yes.
Counsel: So we’re dealing here with a case based on statistical ignoramicity?
Miller: You could say that, yes.

So dodgy results like this are, all at the same time, (strictly) scientifically worthless; potentially significant (in the non-statistical sense) pointers; fodder for misleading scare stories; and hostages to fortune if used to support a case in the real world. This stuff is complicated (one of the messages of Schools of Thought, after all).

If you look at the original paper, the conclusion is:

The results of this study suggest that gestational BPA exposure might be associated with anxious, depressive, and hyperactive behaviors related to impaired behavioral regulation at 3 years of age. This pattern was more pronounced for girls, which suggests that they might be more vulnerable to gestational BPA exposure than boys. In contrast, childhood BPA exposure did not exhibit associations with behavior and executive function at 3 years of age. There is considerable debate regarding the toxicity of low-level BPA exposure, and the findings presented here warrant additional research.

…which is pretty much what I was suggesting (note the use of words like ‘suggest’ and ‘might’), so maybe we (and Grohol) shouldn’t be too hard on the authors.

17th century Iroquois and late 19th century Austrian Jew share psychological insights

Here’s an extract from Apologies to the Iroquois by Edmund Wilson, an informal journalistic anthropology of Six Nations Peoples, published in 1960:

Quoting Jesuit priest Fr Paul Ragueneau, writing in 1648:

“In addition” he says, “to the desires which we generally have which are free, or at least voluntary in us, [and] which arise from a previous knowledge of some goodness that we imagine to exist in the thing desired, the Hurons believe that our souls have other desires, which are, as it were, inborn and concealed. […]
“Now, they believe that our soul makes these desires known by means of dreams, which are its language.  Accordingly, when these desires are accomplished, it is satisfied; but, on the contrary, if it be not granted what it desires, it becomes angry, and not only does not give the body the good and the happiness that it wished to procure for it, but it often also revolts against the body, causing various diseases, and even death.”

According to Wilson, Ragueneau thought the Hurons* were mistaken about this, but a few hundred years later Sigmund Freud thought “The interpretation of dreams is the royal road to a knowledge of the unconscious activities of the mind.” (in The Interpretation of Dreams, 1900) – and the first paragraph above is a pretty good rough description of the Freudian unconscious.

A couple of pages later in Wilson, there’s a description of a myth which is very similar to the Judaeo-Christian myth of Abraham & Isaac – testing faith to the limit of inhuman practice, with a last-minute reprieve for the tested one. Funny how these ideas go round and round.

Here’s another chunk of Freud which seems to fit with shamanic practices (which is probably partly what he was talking about): “It can easily be imagined, too, that certain practices of mystics may succeed in upsetting the normal relations between the different regions of the mind, so that, for example, the perceptual system becomes able to grasp relations in the deeper layers of the ego and in the id which would otherwise be inaccessible to it.” (in New Introductory Lectures on Psychoanalysis).

OK, to be fair, ideas about the unconscious and the revelatory nature of dreams have been around for ever, but that’s not to devalue the insights of the Native American philosophers, or of good old Sigismund Shlomo.

*note for John LaR: all right, the Huron weren’t actually part of the Iroquoian Federation, but they’re part of the same language group.

Professor Mark Griffiths talks sense, Baroness Greenfield doesn’t*†

*so, what else is new?
† Or, at least, she never produces any evidence for what she’s saying, and it does seem nonsensical

A couple of years ago, I was teaching a third year option on Psychology & Media, and we monitored psych stories as they appeared in the press, on TV and online, and tried to work out where they came from and why so many seem so wrong. Many come from university press releases, and are usually basically OK, though they may suffer from over-simplification or over-enthusiastic headlines, like some I’ve already discussed here. But there was also a category of spurious-seeming psychology stories which run and run.

The ‘computer games are ruining our kids’ brains’ story is a good example, especially in the version promoted by Baroness Susan Greenfield, who used to be an eminent neuroscientist, but is now widely publicised for speeches she makes at minor, usually non-scientific, functions. Essentially the same story has been coming up every six months or so for several years. Here’s this week’s version:

Computer games leave children with ‘dementia’ warns top neurologist!!!!!

(OK, I put the scary red bits and silly exclamation marks in myself)
Although I have great disrespect for the Daily Mail generally, what we found when tracking psychology stories in it was that they were generally quite informative and reasonably accurate, though they often had very inappropriate headlines. The problem here isn’t the Mail, it’s the talk that’s being reported (and the headline, maybe, if you read the whole article).

Eminent neurologist Baroness Susan Greenfield said yesterday that spending time online gaming and browsing internet sites such as Facebook could pose problems for millions of youngsters.

She told attendees at a Dorset conference that an unhealthy addiction to technology could disable connections in the brain, literally ‘blowing the mind.’

The ‘Dorset Conference’ was actually ‘the opening of a £2.5million science centre at Sherborne Girls’ School’ (not a state school).
The article goes on to say:

However, she did not reveal any research that had made a connection between screen technologies and brain degeneration.

To repeat and emphasise: she did not reveal any research. And she never does: I have a clip from an interview from a serious TV programme where she says there’s no evidence.
The Mail goes on to say:

Professor Mark Griffiths, a psychologist and Director of Nottingham Trent University’s International Gaming Research Unit, said he knew of no scientific evidence that such a link existed.

Go, Mark! To repeat and emphasise: he knew of no scientific evidence that such a link existed. That sounds like a responsible social scientist to me. Is it responsible to go on and on and on (and on) peddling a scare story for which no scientific evidence exists?
The Daily Telegraph fell for it too, and so did The Sun.

Good grief.

Further reading: Ben Goldacre’s take on all this.

If you’re a Schools of Thought student, you could try out my ‘why do we believe this stuff?’ list on this.

Just what are fMRI scans supposed to be ‘proving’?

Why We Remain Optimistic In The Face Of Reality Revealed By Brain Imaging

I won’t go into the detail of the study this time, because you can follow the link to a nice clear press release (and I want to get on with my rant), but it was basically about how people are likely to modify their idea of the chances of something bad happening if they’re presented with evidence that it’s less likely than they thought, but less likely to modify their judgement if the evidence suggests it’s more likely than they thought – a mechanism for optimism in the face of disconfirming evidence from the world (something we all need). I haven’t explained that well – the original is clearer.

But the bit I want to go on about is the interpretation of the research. People were observed making these ‘reasonably’ and ‘unreasonably’ optimistic judgements in an fMRI scanner, and different decisions were associated with different activity in the frontal cortex. They also looked to see if people who scored higher on an ‘optimism’ questionnaire showed different brain activity.
The results show:

… that our failure to alter optimistic predictions when presented with conflicting information is due to errors in how we process the information in our brains.


….the more optimistic a participant was (according to the personality questionnaire), the less efficiently activity in these frontal regions coded for it, suggesting they were disregarding the evidence presented to them.

“The less efficiently activity in these frontal regions coded…”? – just what kind of ‘coding’ are we talking about here? What does ‘efficient’ coding mean: what’s efficient and inefficient (not in psychologically using the information: we’re not talking about that here, but in coding in the frontal cortex)? And just what is the evidence for that? Some bits of the frontal cortex are more active than others, I guess – but even if we knew functionally what ‘efficient coding’ was, I don’t think that we know anything about how that shows itself in brain activity.

And the final sentence of the quote is the most exasperating: ‘suggesting they were disregarding the evidence presented to them’. You needed a fancy fMRI machine to tell you that? You’d already established that with pencil and paper: you presented them with information and they disregarded it – you DON’T need fMRI to tell you that what you’ve just solidly observed in other ways ‘really’ happens.

Think of it the other way round: what if you did know how to identify ‘efficient coding’ in the frontal cortex, and you did this study and found that the more optimistic respondents (who have, behaviourally, already ‘disregarded the evidence’) didn’t show any difference – would you say this suggests that they DON’T disregard the evidence presented to them? That would be nonsense. So the final sentence in the quote above adds absolutely nothing to our psychological understanding.

Don’t get me wrong. Like Dobbs, I’m not saying fMRI studies are rubbish – but as a psychologist, I am really exasperated by the way some people talk about fMRI studies as a way of ‘proving’ what we already know perfectly well, and have demonstrated perfectly well, as psychologists – when those fMRI studies don’t yet add anything to our understanding. And it is worrying if these high-powered researchers make what seem to me to be elementary logical errors in their statements – maybe they’re not rocket scientists after all.

When is neuroscience not neuroscience?

Here’s a story:
Specific Social Difficulties in People with Autism
New finding provides insight into the psychology of autism-spectrum disorders

This post of mine is a sort-of commentary on the Dobbs Fact or Phrenology? article. The story headlined above is a report on neuroscience/fMRI research which doesn’t actually contain any neuroscience at all.

I need to explain the study and results (which are interesting, anyway) before getting back to my point.

There is a theory that one factor associated with autism is the lack of a developed ‘theory of mind’ (something you’ll come across in developmental psych sometime). Do a search on ‘autism’ and “theory of mind” (the double quotes are useful there) and you’ll get lots of explanation. Put very simply, people with autism are not aware of what other people are thinking.
In this study, they gave people the opportunity to donate real money to a good cause, either in private or in front of someone else. People without autism tend to give more in front of someone else – we’re sensitive to wanting to look good/generous in front of others. “By contrast, participants with autism gave the same amount of money regardless of whether they were being watched or not. The effect was extremely clear.” Keise Izuma, the first author, is quoted as saying.

They checked out whether this was just a result of the participants with autism not paying any attention to the other person, by trying people with a maths test. It is normal to do worse on a maths test when you have an audience – distraction, audience anxiety, maybe. The participants with autism were put off by an audience to the same extent as others when doing maths – so they show awareness of presence, but they either aren’t aware of the opinions of others, or don’t care about them, so feel no pressure to look good when donating.

OK, that’s an interesting study, which maybe tells us a bit more about autism (though the basic theory of mind idea behind it is well known and quite well-researched) – but the Press Release headline says “Caltech Neuroscientists Pinpoint…” – where’s the ‘neuroscientific pinpointing’?
The last paragraph, apart from the credits, is:

“Next up for the team: MRI studies to investigate what occurs in the brain during such social interactions, as well as other investigations into the biology and psychology of autism.”

So this study is (just) straight psychology, and the neuroscience may come later. It looks from the credits as though the study was carried out by one neuroscientist, one undefined person (probably a neuroscientist) and one ‘professor of behavioural economics’.

So what’s my point? I think there are two ways of looking at this. One is that the headline is misleading because it foregrounds (more newsworthy) neuroscience over plain old, boring old experimental social psychology – but the job of the PR writer (Deborah Williams-Hedges) is to get stories about her university into the news, and this duly appeared in my Twitter feed after being passed on by PsyPost, so that worked. A quick Google search shows that lots of places have picked up and reprinted or reposted the story.

The other way of looking at it is that the title does fairly represent the study (probably two out of three of the researchers were neuroscientists), but the research team is establishing a solid psychological understanding of what they’re looking at before doing the neuroscience, which seems excellent practice, and fits with the recommendations of Dobbs in the fMRI article.

More about myth – and relating it back to psychology a bit

This is to do with psychology, and with the Schools of Thought course, I promise – eventually. Just trust me and keep reading.

Zoe Williams, writing in The Guardian* about Theresa May’s ‘strategic cat fib’, goes on to talk about other powerful non-truths:

“However, Cameron used exactly that tactic, in his not-very-famous “health and safety” speech of December 2009: “I think we’d all concede that something has gone seriously wrong with the spirit of health and safety in the past decade. When children are made to wear goggles by their headteacher to play conkers … When village fetes are cancelled because residents can’t face jumping through all the bureaucratic hoops … ”
Now these examples were untrue, of course, but the interesting bit is that they were the very examples that the Health and Safety Executive’s website had given in illustration of the stupid, untrue things that people say about them. Cameron wasn’t just perpetuating myths as part of a melange of things he didn’t like, some of which may or may not have been true. He was actively, one has to assume knowingly, disseminating untruths because his version of the underlying truth – that an overweening state is against common sense and ruins all our fun – was best served by vivid illustration, and fantasy is nothing if not vivid.”

The crucial phrase there is the one I’ve highlighted: knowingly, disseminating untruths because his version of the underlying truth….was best served by vivid illustration. That’s what myths are about, and for. The ‘saved by the cat’ myth (still a myth, even though I heard on the radio this morning that Conservative Central Office have claimed to find a case where a real criminal really was saved from deportation because he had a cat, really – honest [later update: that story vanished quickly – maybe because that one wasn’t true, either]) is a powerful myth because it expresses what Theresa May, and many other people, think is an underlying truth: ‘article 8 of the Human Rights Act has driven a coach and horses through our immigration law’ (quote from Williams again). The ‘Health & Safety’ myths also express some people’s genuine concern that fun stuff is being disallowed because it’s (maybe, a bit) dangerous, or that workers are being given a legal basis for avoiding dangerous working conditions, and that will damage profits.
[Just for the record, I don’t agree with either of those concerns, but I can still see the real issue behind the myth]

OK, what’s that got to do with psychology? As you’ll see in the lecture, Little Albert was not straightforwardly fear-conditioned, with classic generalisation effects; Milgram didn’t show that everyone obeys inhumane commands mindlessly, and his demonstration doesn’t have much to do with massacres like My Lai or the death squads in Poland in WWII, still less with Rwanda; Zimbardo’s Stanford Prison Experiment didn’t show that personality is subsumed by role, and can’t be used simplistically (ie in a foolishly over-simplified way) to explain what happened at Abu Ghraib.

But…why did your teachers (and some of the textbooks) tell you all those lies, then? Because they express underlying truths (and, probably, because they suppress other, less welcome truths – as the ‘human rights’ and ‘health and safety’ myths also do).
People can learn fears by association; generalisation is a feature of classical conditioning (and probably most kinds of learning), and has been shown in lots of studies – just not (at least not simply) in the Little Albert case.
People will treat others inhumanely because the system or authority requires it, and will say 'it's not my fault; I'm just applying the rules'. The Tuskegee Experiment, or, indeed, decisions made about asylum seekers (the people formerly known as refugees) both fit the Milgram scheme quite well, I think.
(Some) People will misbehave and bully and abuse those in their power if the situation allows it, and if there’s tacit approval from those in power for doing that. Dominant, charismatic, abusive individuals can lead others to follow their example (this was a feature of the Stanford Prison Experiment, and of Abu Ghraib, though it isn’t part of the dominant myth).

So – the standard, misleading, accounts of Little Albert, Milgram, and the Stanford Prison Experiment do contain basic truths – but we should recognise that they’re just the stories we tell (and they’re good stories: any decent myth has to be a good story) to the uninitiated to get them to understand the deeper truths of psychology. Now, as undergraduates, you’re moving from the role of A-level uninitiated worshippers to certified (by the BPS) members of the theocracy, and you are being allowed to see the truths behind these myths. Some of you will go on to teach psychology, and maybe perpetuate these myths – for the good of your students, of course.

…and what are the uncomfortable truths which these myths hide, as I mentioned above? With Little Albert, I think it's recognition that there could be different kinds of causes for neurotic fears, maybe even psychodynamic ones (you'll hear, later in the course, about conflict between these kinds of explanations. That was a big deal when the 'behaviourist or Freudian?' debate was significant. Nowadays, we just know it's a matter of flawed thinking, and Cognitive Behavioural Therapy will sort people out OK). I think there's also a deeper hidden truth: mad or neurotic behaviour causes enormous amounts of damage and misery, and we don't really know how to understand it, or how to deal with it. This is like the problem of death, and the religious responses to that.
And Milgram and the SPE? The hidden truth behind applying these to real-life abuse and atrocity is racism (as, pretty obviously, in the Tuskegee Experiment). Not a feature in the original studies, but very often a feature (and maybe THE feature) of the cases we try to explain by referring back to these studies/myths.

*you’ll have guessed by now that I hardly read anything else

Of course, the myths aren’t all psychological…..

You may have been reading about the Home Secretary’s embarrassing blunder over the ‘criminal can’t be deported because he had a cat’ myth.
Here's a breakdown, which points out that it's a myth that's been around for some time, and that it derives from misleading headlines in newspaper articles – and which also shows that even though there's lots of disconfirming evidence and information, it doesn't go away – just like lots of myths in psychology.
You could try to do the ‘why do we believe these things?’ analysis for yourself on this one – though you will need to assume some racism or at least xenophobia in the people who keep it going.

Footnote: why are these people so worried about human rights that they have to use lies to argue against them?

Computers are rewiring our kids’ brains: how does that myth work?

I’ve just posted about some online material which has been distorted to fit the ‘computers are rewiring our kids’ brains’ myth, so I thought it might be useful to talk about how that fits with the ‘myth-making principles’ I’ll be talking about in my lecture later this term. (Also see the previous post on the myth of water.)

In that lecture, I’ll say we keep on repeating myths because:

  • They’re partly true
  • People thought they were true once
  • They express things we think are true, really
  • They express things we think ought to be true
  • We like confirmatory stuff
  • They help us to make sense of psychology
  • They help us to help you to make sense of psychology

Partly true: It is possible to trace changes in brain chemistry or activity as a result of experience – though we’re not very sure how that works or what it means, and the brain may be just as ‘rewired’ by eating a can of baked beans as by playing Grand Theft Auto – and learning and singing the alto part to Mozart’s Requiem is likely to rewire you even more.
People thought they were true once: well, since this is new technology, there can't be a historical explanation – though it's worth pointing out that novels, movies, horror comics and death metal have all been proposed as things which will ruin our children (the 'rewiring' bit is more recent: it's not a concept they used much in the anti-novel backlash).
Express things we think are true, really: it seems likely that working and communicating in different ways might change how we think and react.
Express things we think ought to be true: The world is going to pot; kids are getting dumber (and ruder); there’s no regard for Proper Culture any more. This must be true, because generation after generation have felt this way for thousands of years (you will too, just wait) – and since it’s obviously not the fault of the universities, schools, BBC4, etc, it’s got to be the fault of either computer games or the Daily Mail.
We like confirmatory stuff: How much press coverage would a story like 'What rats see doesn't change their brains much, it turns out' or 'Computer games probably a waste of time, but completely harmless' get? Actually the rat story in the previous post didn't tell us anything about the reduced attention span* of today's kids, but it could be spun so it did give confirmation to the 'rewiring' myth – and so it gets picked up.
Help us make sense of psychology (or the world generally): Well, given all the problems noted above, we need some explanation – doesn’t this sound like a good one? All those bankers, too: they’re like that because they played Space Invaders too much. Once we have a population of bankers who’ve had their brains rewired by Doom, we’ll be in real trouble.

*'reduced attention span' could be re-interpreted as 'quick-witted', 'capable of doing several things at once' or 'doesn't pay attention to anything I say'.