Just another site

Category Archives: Cognitive Psychology

Psychologist* shows that you can see almost anything in complex, ambiguous figures

A piece in Psychology Today (July 29, 2012) by Neel Burton in his ‘Hide & Seek’ column: The Creation of God: Michelangelo’s awesome hidden message.

It’s an analysis of Michelangelo’s famous picture from the Sistine Chapel, ‘The Creation of Adam’, claiming to show that “what almost everyone has missed is the hidden message that Michelangelo inserted: a human brain dissimulated in the figure of God.”

Although the Creation of Adam was painted around 1511, it is not until 1990 that Frank Lynn Meshberger, a physician in Anderson, Indiana, publicly noted in the Journal of the American Medical Association that the figures and shapes that make up the figure of God also make up an anatomically accurate figure of the human brain. Take a close look at the picture above and you will see the Sylvian fissure that divides the frontal lobe from the parietal and temporal lobes: it is represented by a bunching up of the cape by one of the angels and by a fold in God’s tunic. The bottom-most angel that appears to support the weight of God is the brainstem, and his trailing scarf the vertebral artery. The foot of another angel is the pituitary gland, and his bent knee the optic chiasm where the optic nerves from the eyes partially cross over. The ingenuity and level of detail is simply staggering, and a lasting testament to Michelangelo’s extraordinary—and, for the time, very unusual—knowledge of human anatomy.

Yeah, right. Take another look and you’ll see that if it is an image of the brain, then the cerebellum has been blown to ribbons, and there’s a very unhealthy-looking overdevelopment of the occipital lobe at the back of the brain. Also most of God’s body is in the midbrain, with a bit of His head sticking through into the frontal lobes. Given that God is the primary image in the right-hand cloud, what was M’s meaning in slicing up his body so randomly amongst different brain areas, given his “extraordinary—and, for the time, very unusual—knowledge of human anatomy”?

I think this is just another example of the powerful and compelling ability we have to extract meaningful information from very complex or confusing input – which sometimes leads us to ‘see’ clearly things which just aren’t there. You will have heard of images of the Virgin Mary  on pieces of toast, or sliced tomatoes (I regularly see Lao Tsu in my porridge).

An old example is this image:

Hidden face

Said to be originally a photograph of a snowy mountainside, but ‘revealing’ an image of a bearded man with long hair (some say Christ, some say Jerry Garcia: it’s probably Allen Ginsberg) if you look at it long enough. If you don’t see it, don’t worry. The face will pop out at you sooner or later, and once you’ve seen it, you won’t be able to go back to the meaningless blobs.

We all do this kind of thing with clouds, as Shakespeare noted some time ago:

Hamlet: Do you see yonder cloud that’s almost in shape of a camel?
Polonius: By the mass, and ’tis like a camel, indeed.
Hamlet: Methinks it is like a weasel.
Polonius: It is backed like a weasel.
Hamlet: Or like a whale?
Polonius: Very like a whale.

— Hamlet, III.ii

OK, this is Hamlet mocking Polonius for always agreeing with what the boss says, but it only makes sense because we all know that we can see all kinds of things in the complex, ambiguous patterns of clouds.

Charles Schulz used the idea too, in Peanuts:

Peanuts strip

In case you can’t read the speech bubbles above, or the image link stops working:

Lucy Van Pelt: Aren’t the clouds beautiful? They look like big balls of cotton. I could just lie here all day and watch them drift by. If you use your imagination, you can see lots of things in the cloud formations. What do you think you see, Linus?

Linus Van Pelt: Well, those clouds up there look to me like the map of the British Honduras on the Caribbean. [points up] That cloud up there looks a little like the profile of Thomas Eakins, the famous painter and sculptor. And that group of clouds over there… [points] …gives me the impression of the Stoning of Stephen. I can see the Apostle Paul standing there to one side.

Lucy Van Pelt: Uh huh. That’s very good. What do you see in the clouds, Charlie Brown?

Charlie Brown: Well… I was going to say I saw a duckie and a horsie, but I changed my mind.

So, I think Neel Burton is wrong – and I think he’s missed an even more remarkable clue: God is passing the spark of life from his finger to Adam – just like the spark of life which ignites the petrol/air mixture in the combustion chamber of the petrol engine (M. did actually do drawings of a flat-four hemihead air-cooled engine, intended to power his famous helicopter, but they were lost in the 19th century). So every time I fire up the Bristol, I reflect on M’s secret message about the true meaning of life.

If you really want to get into the ‘M’s secret messages’ stuff, here’s Orion in the Vatican by Daniel A. Wilten, an online book (only $9.99):

Witness the Orion nebula hidden in high altars and in famous frescoes by masters such as Michelangelo since the early 1500’s
Discover the famous fresco depicting the Orion constellation in the main vault of the mother Jesuit church
Discover the true origin of the winged disk and where the ancients derived its symbolism
Learn why Hermes Trismegistos declared Egypt the image of heaven
See proof of the Hall of Records and the Orion nebula matching recent development in Egypt
Rediscover mystical knowledge uncovered after thousands of years
Learn man’s connection to the Orion nebula and its association to consciousness
Learn why the Orion nebula is the master code

*Not a psychologist, actually. Psychology Today says: “Neel Burton, M.D., is a psychiatrist, philosopher, and writer who lives and teaches in Oxford, England.”


Scientists find excuse for Comic Sans!*

Just found out about an interesting piece of research on the effects of making things difficult to read on learning:

Diemand-Yauman, Connor, Daniel M. Oppenheimer & Erikka B. Vaughan. (2011) Fortune favors the bold (and the italic): Effects of disfluency on educational outcomes. Cognition, 118(1), 111-115.
(also available in a pre-print form)

Abstract: Previous research has shown that disfluency – the subjective experience of difficulty associated with cognitive operations – leads to deeper processing. Two studies explore the extent to which this deeper processing engendered by disfluency interventions can lead to improved memory performance. Study 1 found that information in hard-to-read fonts was better remembered than easier to read information in a controlled laboratory setting. Study 2 extended this finding to high school classrooms. The results suggest that superficial changes to learning materials could yield significant improvements in educational outcomes.

The lab study used Comic Sans and Bodoni Italic in a smaller size (12pt) and 60% grey compared with 16pt Arial in full black, and tested recall of fairly simple facts. The school study used teachers’ own existing learning materials – worksheets and PowerPoint slides – and used two classes for each teacher to give a per-teacher control (there was a good effort to make the study ecologically valid).  “The fonts of the learning material in the disfluent condition were either changed to Haettenschweiler [a heavy Gothicy font], Corsiva [light and flowing script-style] or Comic Sans italics [ugh], if the material was on PowerPoint, or were copied disfluently (by moving the paper up and down during copying) when electronic documents were unavailable.” I don’t quite understand the last bit – motion-smeary photocopies?

The children who had the disfluent presentations scored better in “exams”/“classroom tests” (I think these mean the same: no details of the tests are given) in English (at various levels), Physics (at various levels) and History, but not in Chemistry. There weren’t significant differences between the disfluent fonts.

Diemand-Yauman et al. conclude:

This study demonstrated that student retention of material across a wide range of subjects (science and humanities classes) and difficulty levels (regular, Honors and Advanced Placement) can be significantly improved in naturalistic settings by presenting reading material in a format that is slightly harder to read. While disfluency appears to operate as a desirable difficulty, presumably engendering deeper processing strategies (c.f. Alter et al., 2007), the effect is driven by a surface feature that prima facie has nothing to do with semantic processing.

Interesting – and suggests that all the effort I put into my PowerPoints – allowing room for uncrowded text and reasonable point sizes, breaking lines for meaning, trying to find simple, clear, sentence structures… – might be wasted or counterproductive. It’s worth noting that D-Y&Al were careful to avoid illegibility. They just wanted to add some slight difficulty, and they speculate that the disfluency effect might be U-shaped, and so interfere with learning at higher levels of disfluency.

I picked this up from an article by Martha Gill (a distant relative of Eric Gill, she points out) in the New Statesman. Thanks, Martha. The article is headed How Comic Sans got useful. Useful maybe; acceptable, no. In particular, anyone who uses Comic Sans to suggest anything to do with children and their writing should have to read Finnegans Wake in condensed Haettenschweiler, or better still Wingdings – and take a test on the content. That’s what I’d call disfluency.

*There is no excuse for Comic Sans

This is one of those cases, like Rind, Tromovitch & Bauserman (1998), discussed in Garrison & Kobor (2002) [this is a Schools of Thought reference], where science has come up with an unacceptable result.

Alter, A. L., Oppenheimer, D. M., Epley, N., & Eyre, R. (2007). Overcoming intuition: Metacognitive difficulty activates analytic reasoning. Journal of Experimental Psychology: General, 136(4), 569–576.

Diemand-Yauman, Connor, Daniel M. Oppenheimer & Erikka B. Vaughan. (2011) Fortune favors the bold (and the italic): Effects of disfluency on educational outcomes. Cognition, 118(1), 111-115.

Garrison, Ellen & Kobor, Patricia (2002) Weathering a Political Storm: a contextual perspective on a psychological research controversy. American Psychologist, 57(3), 165-175.

Rind, Bruce, Tromovitch, Philip & Bauserman, Robert (1998) A Meta-analytic Examination of Assumed Properties of Child Sexual Abuse Using College Samples. Psychological Bulletin, 124(1), 22-53.

it deosn’t mttaer in waht oredr the ltteers in a wrod are

it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae… it doesn’t matter in what order the letters in a word are, the only important thing is that the first and last letter be at the right place

Lots of people have seen this, and it’s fun – but what does it really show?

It’s not actually true that it’s ‘Aoccdrnig to rscheearch at Cmabrigde Uinervtisy’, as some versions have it, but a researcher at Cambridge University (Matt Davis at the MRC Cognition and Brain Sciences Unit) has been thinking about it, and published a fascinating page taking the meme apart:

This has versions in many languages: Hebrew, Czech, Russian, Icelandic…. (he’d like to know if it works in Thai or Chinese). It does vary from language to language; it’s fine in French and Spanish (even I, with basic French and very little Spanish, can read it), but apparently not in Hebrew (no vowels) or Finnish (long complex words, and all those vowels can pile up a bit).

Davis has traced some previous research by Graham Rawlinson in 1976, and also shows that the ‘first and last letters’ thing doesn’t necessarily work, even in English, and goes on to take apart the standard version, relating to what we know about reading, to demonstrate that the usual example is quite carefully tailored to be easier than many other passages in English might be.
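Out of curiosity, the scrambling rule itself is easy to play with. Here’s a minimal Python sketch (my own toy version, not Rawlinson’s or Davis’s actual procedure) that keeps the first and last letters of each word fixed and shuffles the interior:

```python
import random

def scramble_word(word, rng):
    """Shuffle a word's interior letters, keeping first and last fixed."""
    if len(word) <= 3:
        return word  # three letters or fewer: nothing to shuffle
    middle = list(word[1:-1])
    rng.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

def scramble_text(text, seed=None):
    """Apply scramble_word to every whitespace-separated token."""
    rng = random.Random(seed)
    return " ".join(scramble_word(w, rng) for w in text.split())

print(scramble_text("it doesn't matter in what order the letters in a word are"))
```

Note that the punctuation handling is crude (the apostrophe in “doesn’t” gets shuffled along with the letters), and that short words pass through untouched, which is exactly Davis’s point: the standard demo is easy partly because so many of its words are too short to be scrambled much at all.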

A fascinating bit of real-life, non-anglocentric research, and then applying standard theories about reading to an unconventional example. Would be the basis of a good theories-of-reading lecture, I think. I don’t teach cognitive psych any more, but it could be an idea for someone else. Thanks, Matt.

(….and also thanks to Bart van Leeuwen who posted the link in the middle of a fairly heated argument about proper spelling and punctuation on a photography discussion group – no, I can’t understand how that got started, either — well, actually, if you know what discussion groups are like sometimes, you can understand it.)

My typing is awful, and I make many mistakes, often reversing the order of adjacent letters when one is typed right-handed (-fingered, actually) and the other left-handed. I’ve gone back through this post correcting those errors, as usual, but I need not hvae btoherd, raelly.

You can rewire your brain! Well, maybe

As usual, a psychology story in the press which made me think ‘yes, but…’.

This is in today’s (Weds 13 June) Guardian: How Barbara Arrowsmith-Young rebuilt her own brain:

Barbara Arrowsmith-Young had a phenomenal memory but was ‘living in a fog’. She realised that part of her brain was not functioning properly so she devised a series of cognitive exercises to develop it. The results changed her life – and now she has helped thousands of children with learning disabilities

It looks as though this is a PR-inspired article. The second paragraph has the line: “She has just published a groundbreaking, widely praised and enthralling book called The Woman Who Changed Her Brain”. The online version of the article comes with a link to the book in the Guardian bookshop:
Some quick research turned up various online interviews and articles from various parts of the world in the last month or two, like this Australian book fair video, and she was on at the Hay Festival on 5 June, so I guess the Guardian article is part of a world tour publicising the book – and her Arrowsmith cognitive program for children with learning disabilities.

So I think it’s important to note that this is a story that promotes a commercial operation from Arrowsmith-Young’s point of view, though that’s presumably not why The Guardian thought it worth publishing. That doesn’t mean it’s not psychologically interesting, or (more important) that there might be something here which really could benefit people with cognitive problems.

This is the story. AY (sorry, I’m too lazy to keep on typing Arrowsmith-Young) was a child with multiple cognitive problems: in the Australian video linked to above she describes a wider range of problems than are identified in the Guardian article. The basic point seems to be, though, that although she had a “phenomenal” memory, she “didn’t understand anything. Meaning never crystallised. Everything was fragmented, disconnected.” For example, she couldn’t grasp the relationship between hands of a clock and the time. “I was just not attaching meaning to symbols.” In spite of this, by hard work and memory power, she was able to pass school and university courses.
Then she came across two pieces of psychological research. The first was a case study by Alexander Luria of a Russian soldier who had been shot in the head* and suffered damage to the left occipital-temporal-parietal region:

I recognised somebody describing exactly what I experienced. His expressions were the same: living life in a fog. His difficulties were the same: he couldn’t tell the time from a clock, he couldn’t understand bigger and smaller without drawing pictures, he couldn’t tell the difference between the sentences ‘The boy chases the dog’ and ‘The dog chases the boy.’ I began to see that maybe an area of my brain wasn’t working.” [Luria’s book, The Man With a Shattered World (1972), which describes this case, is still available. There’s a useful, but very basic, summary at]

and then:

She read about the work of Mark Rosenzweig, an American researcher who found that laboratory rats given a rich and stimulating environment, with play wheels and toys, developed larger brains than those kept in a bare cage. Rosenzweig concluded that the brain continues developing, reshaping itself based on life experiences, rather than being fixed at birth: a concept known as neuroplasticity. Arrowsmith-Young decided that if rats could grow bigger and better brains, so could she. [Some details of Rosenzweig’s work further down]
So she started devising brain stimulation exercises for herself that would work the parts of her brain that weren’t functioning. She drew 100 two-handed clockfaces on cards, each one telling a different time, and wrote the time each told on the back of the card. Then she started trying to tell the time from each, checking on the back each time to see if she was right. She did this eight to 10 hours a day. Gradually, she got faster and more accurate. Then she added a third hand, to make the task more difficult. Then a fourth, for tenths of a second, and a fifth, for days of the week.
“I was experiencing a mental exhaustion like I had never known,” she says, “so I figured something was happening. And by the time I’d done that for three or four months, it really felt like something had shifted, something had fundamentally changed in my brain, allowing me to process and understand information. I watched an edition of 60 Minutes, with a friend, and I got it. I read a page of Kierkegaard – because philosophy is obviously very conceptual, so had been impossible for me – and I understood it. I read pages from 10 books, and every single one I understood. I was like, hallelujah! It was like stepping from darkness into light.”

After all that (some years ago), AY has moved on to become able to talk “fluently and passionately and with great erudition” about her book and about her program for helping children with cognitive deficits. She has developed a range of mental exercises for helping a range of cognitive functions (The Guardian says 19) to help thousands of children diagnosed with ADD or ADHD over the years in 35 schools in the US and Canada.

OK, that’s the story, and it’s very interesting. But a few things worry me.

The first one was wondering how someone with no clear idea of cause and effect, and not able to understand a television news programme (she gives the ability to understand such a program after her exercises as evidence that they had worked), could understand the ideas and implications of Luria’s and Rosenzweig’s work, and then make the conceptual jump from that to the clockface card exercise. I think I need more information to understand how that worked. I guess I should read the book.
The second worrying thing is that I don’t know of any peer-reviewed research to support this. A quick search in Google Scholar shows links to stuff published on her website, but not much else. I do know of research which suggests that ‘muscle-style’ training of cognitive abilities doesn’t seem to do much good. So Melby-Lervåg & Hulme (2012), after a meta-analysis of twenty-three studies of working memory training, conclude in their abstract:

Meta-analyses indicated that the programs produced reliable short-term improvements in working memory skills. For verbal working memory, these near-transfer effects were not sustained at follow-up, whereas for visuospatial working memory, limited evidence suggested that such effects might be maintained. More importantly, there was no convincing evidence of the generalization of working memory training to other skills (nonverbal and verbal ability, inhibitory processes in attention, word decoding, and arithmetic). The authors conclude that memory training programs appear to produce short-term, specific training effects that do not generalize.

The third thing is my generalised cynicism about the spurious convincingness of explanations which depend on brain function. Now, I may be being unfair to AY, but there is evidence for this spurious convincingness as a general effect**. Weisberg, Keil, Goodstein, Rawson, and Gray’s (2008) paper The Seductive Allure of Neuroscience Explanations tried out good and bad explanations for psychological phenomena, and found that adding a bit of neuroscience flannel enhanced credibility, at least for non-experts. Here’s their abstract:

Explanations of psychological phenomena seem to generate more public interest when they contain neuroscientific information. Even irrelevant neuroscience information in an explanation of a psychological phenomenon may interfere with people’s abilities to critically consider the underlying logic of this explanation. We tested this hypothesis by giving naïve adults, students in a neuroscience course, and neuroscience experts brief descriptions of psychological phenomena followed by one of four types of explanation, according to a 2 (good explanation vs. bad explanation) × 2 (without neuroscience vs. with neuroscience) design. Crucially, the neuroscience information was irrelevant to the logic of the explanation, as confirmed by the expert subjects. Subjects in all three groups judged good explanations as more satisfying than bad ones. But subjects in the two nonexpert groups additionally judged that explanations with logically irrelevant neuroscience information were more satisfying than explanations without. The neuroscience information had a particularly striking effect on non-experts’ judgments of bad explanations, masking otherwise salient problems in these explanations.

For comparison, here’s an account of the AY approach from AY’s commercial website:

Recent discoveries in neuroscience have conclusively demonstrated that, by engaging in certain mental tasks or activities, we actually change the structure of our brains–from the cells themselves to the connections between cells. The capability of nerve cells to change is known as neuroplasticity, and Arrowsmith-Young has been putting it into practice for decades. With great inventiveness, after combining two lines of research, Barbara developed unusual cognitive calisthenics that radically increased the functioning of her weakened brain areas to normal and, in some areas, even above normal levels. She drew on her intellectual strengths to determine what types of drills were required to target the specific nature of her learning problems, and she managed to conquer her cognitive deficits.

I’d prefer some empirical evidence for determining “what types of drills were required”, rather than drawing on AY’s “intellectual strengths”, but the main point is that I think the opening statement is only really supportable in a fairly trivial sense: “by engaging in certain mental tasks or activities, we actually change the structure of our brains–from the cells themselves to the connections between cells.” Well, yes: to the extent that we’re cognitively changed by what we do, our brains change. What else could be happening? Those changes can affect our experience qualitatively, even in later life. Some years of struggling with singing in a choir, and trying to cope with big books full of notes, have made me almost able to read music directly and recognise intervals in a way which is experientially quite different from my earlier strictly by-ear experience of music, and I encourage anyone to try it – your brain will work better, and you’ll experience things you didn’t before!! – but I don’t see that as a neurological breakthrough. Is the AY statement a neurologically-enhanced not-much-of-an-explanation? Certainly the Rosenzweig*** studies, while important and fascinating, don’t take us into AY territory. You can read an original 1964 Bennett, Diamond, Krech & Rosenzweig paper here: (It’s always really valuable to read the originals), and a later 1996 summary (Rosenzweig and Bennett, 1996) here:
R&B were mainly concerned with increase in brain size and connectivity, and later on with improvements in memory and learning (obvious things to look at in rats). I always took that research as being more of a warning about the damaging effects of deprivation than about the enhancing effects of stimulation (though the B&Al paper does distinguish between non-deprivation and extra stimulation). I’m not up-to-date on this stuff, so I’d be interested to hear of more recent evidence which might suggest changes in more advanced cognitive functioning as a result of changed experience (apart from the non-result of M-L&H, cited above).

Am I being too sceptical here?

*People getting shot in the head is a valuable source for psychological/neurological research. If we ever run out of wars (unfortunately, not likely) we’ll have to make do with motorcyclists (note to my friend John: be careful out there).

**I got this reference from one of Ben Goldacre’s blogs about Mind Gym. Goldacre is wonderfully scathing, and funny, about Brain Gym, which also has some neurological explanations which don’t convince me (actually he’s wonderfully funny and scathing about lots of Bad Science – read the book, follow the blog, follow him on Twitter [for an interesting example of one way of using Twitter, including crowdsourcing advice about what to eat in your fridge]). This particular blogpost was
(The link G gives at the end to the Weisberg & al paper doesn’t work, but the ones I give here are OK – in June 2012, anyway)

***I can’t resist pointing out that Rosenzweig’s grandparents were asylum seekers (the people formerly known as refugees) or economic migrants (as with many valuable contributors to their new host society) – and no, not ‘bogus asylum seekers’ – what’s the point of seeking bogus asylum? Or even not really (bogusly) seeking asylum?: ‘Oh, thanks for giving me refugee status, but I don’t really want it: it was just a windup, actually.” Anyone who uses that phrase needs to take a (non-subsidised) course to improve their understanding of English and logic, and then be deported (to whatever planet they came from) if they fail. [Mild trolling here]

Bennett, Diamond, Krech & Rosenzweig (1964) Chemical and Anatomical Plasticity of Brain. Science, 146, 610-619.

Melby-Lervåg, M., & Hulme, C. (2012). Is Working Memory Training Effective? A Meta-Analytic Review. Developmental Psychology. Advance online publication. doi: 10.1037/a0028228
A short writeup about this paper: No Evidence That Working Memory Training Programs Improve General Cognitive Performance

Rosenzweig, Mark R. and Edward L. Bennett. (1996) Psychobiology of plasticity: effects of training and experience on brain and behavior. Behavioural Brain Research, 78, 57-65.

Weisberg, Deena Skolnick, Frank C. Keil, Joshua Goodstein, Elizabeth Rawson, and Jeremy R. Gray (2008) The Seductive Allure of Neuroscience Explanations Journal of Cognitive Neuroscience 20:3, pp. 470–477

Was: Cognitive Psychology as the science of killing people; now: Neuroscience as the science of….

In this week’s lecture, I’ll present the case that the rise of cognitive psychology in the 50s and 60s, and then the development of computational models in psychology in the 80s, and cognitive neuroscience more recently, were heavily financed by the military, because they helped to provide the knowledge required to enable soldiers to operate increasingly complex weapons systems, and more recently to replace soldiers with smart weapons.

I admit that my view of the development of cognitive psychology may be biased because many years ago, as a hard-line pacifist, I refused to apply for an attractive post-doc research job (in visual search, the topic of my PhD thesis) because it was financed by the Navy – and maybe my career has been downhill ever since. I’m still a hard-line pacifist: show me a war and I’ll march against it (never seems to do much good)*.

But, every time I start thinking this is just an eccentric personal concern, something comes along which reminds me that psychological research is useful to the military, they do finance it, and it is something to be concerned about.

An example from 2008: ‘You really can smell fear, say scientists’, an article in The Guardian by James Randerson. Great study involving parachutists’ armpits and brain scanners, looking for a ‘fear pheromone’ (psychologists know how to have fun). And the fourth paragraph reads:

The research was funded by the US Defence Advanced Research Projects Agency – the Pentagon’s military research wing – raising speculation that it is a first step to isolating the fear pheromone for use in warfare, perhaps to induce terror in enemy troops. But DARPA denied that it had any military plans for fear pheromones or plans to fund further research into the field.

I was preparing this year’s lecture, and thinking that example was a bit dated, when along came (7 February 2012): Rise of the man-machines: how troops could plug their brains into weapons, by Ian Sample in The Guardian. That’s an over-sensationalist title: like most articles like that, the title should have a compulsory ‘sometime, maybe’ added at the end, but it’s a serious article about a just-released report by the (UK) Royal Society which “considers some of the potential military and law enforcement applications arising from key advances in neuroscience”. The intro to the report is at, and the full report is at:

From The Guardian article:

The authors argue that while hostile uses of neuroscience and related technologies are ever more likely, scientists remain almost oblivious to the dual uses of their research.

The article quotes Vince Clark, a US researcher who is using transcranial direct current stimulation to enable soldiers to spot targets more quickly, as saying:

As a scientist I dislike that someone might be hurt by my work. I want to reduce suffering, to make the world a better place, but there are people in the world with different intentions, and I don’t know how to deal with that.
If I stop my work, the people who might be helped won’t be helped. Almost any technology has a defence application.

Clark’s work is also potentially useful for dementia sufferers, so I hope he makes a lot of progress in time for it to be useful to me, but still…. (Actually another article by Sample the same day points out “How dementia drugs could be used by the military”.)

Both the article and Royal Society report are fascinating reading, but I was struck that the Royal Society’s first recommendation for the scientific community is:

There needs to be fresh effort by the appropriate professional bodies to inculcate the awareness of the dual-use challenge (i.e., knowledge and technologies used for beneficial purposes can also be misused for harmful purposes) amongst neuroscientists at an early stage of their training.

So, that’s what I’m doing in my lecture (and here). All you early-stage neuroscientists, think about this. Just saying.

* Bring home our boys from Iran. I’d like to claim you read it here first, but Mad Magazine got there before me.

Good luck with that, Vince. You, me, and most Miss World contestants, they say.

Today’s newspaper: some spoilers for Schools of Thought later in the year

Just a post to show how the kind of thing we talk about in Schools of Thought comes up in the mainstream press (well, The Guardian, anyway). Today (Mon 20th November) there are two stories about topics which we’ll be taking up later in the year:
One is about the ideas behind the Siri personal assistant on the latest iPhone (you know you want one). Two themes here. The original voice recognition/artificial intelligence/natural language recognition research was financed by the US military, along with most of the rest of cognitive psychology (as I’ll discuss in Cognitive Psychology as the Science of Killing People), and also how it’s possible to build computer systems which mimic how we understand everyday speech (something which is still a big problem for psychology to understand). Christina will be talking about the usefulness of the computer simulation approach in The Rise and Fall of Computational Psychology.

The other story is about a veteran forensic psychotherapist who uses a psychoanalytic/Freudian approach. The subheading and first few paragraphs sound as though it’s about really weird ideas, but read on – it gets more sensible. The point here, apart from the intrinsic interest, is that this is a three-page article about a psychoanalyst – in 2011. I’ll be arguing that Psychoanalysis is Alive and Well next term (well, maybe more accurate to say that Psychoanalysis’s Zombie is Differently Alive* and Still Shambling Among Us).

*Who is trying to rehabilitate the undead by (politically correctly) calling them the differently alive?

Was: More Psychology Disguised as Physiology. Now: Psychology: the Secret of Life

“Researcher examines how brain perceives shades of gray” ran the headline (italics added).

From Psypost:

…but the actual study was on how people perceive shades of grey (or gray, depending on which language you’re writing in).

I intended this to be a minor moan about mislabelling interesting psychological research, but it developed into a discussion of the nature of Life and Meaning itself (sort of): see the bit after BUT… below.

This is about the interesting perceptual problem that we see white things as white even in rather dim light, when they’re reflecting thousands of times less light than they do in bright light – and also much less light than black things reflect in bright light, which still look really black, even though they’re actually reflecting lots of light back at us. We must be working to some kind of baseline (‘what’s the grey-level brightness here?’), or maybe a ratio (‘I’ll see the brightest thing here as white, and anything 100 times less bright as black’), or something – but I don’t think that’s been worked out.
The study referred to here tests the ‘ratio’ explanation by asking people to estimate brightness levels on a checkerboard spanning a very wide range of brightnesses.

Sarah Allred, an assistant professor of psychology at Rutgers–Camden […] conducted the research with Alan L. Gilchrist, a professor of psychology at Rutgers–Newark, and professor David H. Brainard and post-doctoral fellow Ana Radonjic, both of the University of Pennsylvania. Their research will be published in the journal Current Biology. [No further reference given]
Participants were asked to look at a 5×5 checkerboard composed of grayscale squares with random intensities spanning the 10,000-to-1 range. They were asked to report what shades of gray a target square looked like by selecting a match from a standardized gray scale.
If the visual system relied only on ratios to determine surface lightness, then the ratio of checkerboard intensities the participants reported should have had the same ratio as that of the black and white samples on the reflectance scale, about 100-to-1.
Instead, the researchers found that this ratio could be as much as 50 times higher, more than 5,000-to-1.

Sounds like good evidence against the ratio model, which is interesting, because it’s probably the most obvious explanation for how we do this. I’d be tempted to follow this up and find out more. (It’s also interesting for a photographer: understanding the difference between how we appreciate tones and contrast, and how the much more ‘objective’ chain of camera sensor, digital representation, and monitor or paper output does it, would help in getting things to look the way we want them to look – given that the world of light is wilder and more varied than either our eyes or the digital systems can really cope with. Playing around with all the sliders and controls in Photoshop can help to give some idea of what the issues are.)
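For the numerically inclined, here is a rough sketch of what the ratio (anchoring) idea predicts, and why the 5,000-to-1 finding counts against it. This is my own toy illustration with made-up luminance values, not the researchers’ actual model or data:

```python
import math

def ratio_model_lightness(luminance, luminances):
    """Toy 'ratio' model: perceived lightness (0 = black, 1 = white)
    depends only on the ratio of a patch's luminance to the brightest
    patch in the scene, which is anchored as 'white'."""
    anchor = max(luminances)      # brightest patch is seen as white
    ratio = luminance / anchor
    black_ratio = 1 / 100         # ~100:1 white-to-black reflectance range
    if ratio <= black_ratio:
        return 0.0                # anything darker should just look black
    # log-compress between black and white, as lightness scales roughly do
    return math.log(ratio / black_ratio) / math.log(1 / black_ratio)

# A checkerboard spanning a 10,000:1 luminance range, as in the study
patches = [1, 10, 100, 1000, 10000]
for lum in patches:
    print(lum, round(ratio_model_lightness(lum, patches), 2))
```

On this account, every patch more than 100 times dimmer than the brightest one collapses to the same black – the three dimmest patches above all come out at 0.0. The study’s participants, though, reported distinct greys over a range of up to 5,000-to-1, which is exactly what a pure ratio model can’t do.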


BUT…

…you may guess what I’m going to say next. Why is this represented as brain research, rather than people research? The researcher is quoted as saying:

She continues, “In addition, even though we used behavioral rather than physiological measures, our results provide insight into the neural mechanisms that must underlie the behavioral results.”

“even though we used behavioural rather than physiological measures”?! Good grief. Yes, the neural mechanisms are (probably equally) interesting, and understanding them will extend our understanding of the (apparently less valuable) behavioural processes – but isn’t the only* reason we want to find out about the brain processes that the behaviour/experience is interesting/puzzling/maybe practically important? If we didn’t have all those puzzles of consciousness and complex behaviour, would anyone give a toss about the brain processes?

Now I’ve got going on this, I could take it further: what’s the root cause here? What’s driving the evolution and development of our brains? There can’t be evolutionary selection of brains: it’s not the tissue, or even the wiring (it’s not really wiring, of course, but that’s our late-20th century metaphor for whatever weird things are going on in there) which is selected: it’s actions in the world, which result in survival for long enough, and then success in mating, to produce viable, adaptive offspring – in other words, behavioural and psychological things.

Now, brain processes may limit the range of adaptation possible: in Terry Pratchett’s books, trolls are fick because their silicon-based nervous systems don’t run as fast as carbon-based ones at Ankh-Morpork temperatures (any real-life examples?), and physiological changes may give behavioural advantages which pay off in evolutionarily useful behaviour – like trichromatic vision in primates, which is said to give us better ability to distinguish between ripe and unripe fruits (and therefore allows better nutrition) than the crappy old two-colour system allowed – but the physiology is only important, and only selected for, as it’s mediated through behaviour.

So there you go – it’s psychology which got us where we are now; the physiology was just dragged along as a necessary underpinning. It’s about time we started misrepresenting brain research as psychology, just to make it sound important, not the other way round.

*OK, some nerds might be interested in it for its own sake, just like some people like to speculate about prime numbers – and that kind of interest can sometimes be useful in the long term.

I admit that some of the physiology can be adaptive in straightforwardly physiological ways: so the mutation which makes some people resistant to the malaria parasite clearly improves their ability to survive and reproduce without that effect being psychologically mediated or significant, but I don’t see that as applying to brain-based evolution.

Young woman with amnesia unable to hold a single face in short-term memory… unless it’s Paris Hilton!

The headline above is not mine: it’s from the press release about this story. Turns out, as far as I can tell, it’s not just Paris Hilton, but any familiar face. Just as well: only being able to remember Paris Hilton sounds like a fate worse than death.
Here’s the press release, from Baycrest*, and the original (not-yet-in-print) journal article – or at least the abstract: I can’t find an institutional access link, just ‘purchase PDF’ for the whole article. Maybe it’ll be more available once it’s appeared in the journal.
The press release, including a one-minute video of Nathan Rose, the lead researcher, talking about the case (posted below), gives quite a lot of information and explanation. Worth looking at the press release before reading the rest of this post, maybe.

I think there are two things interesting about this: one is that it seems to give new insight into hippocampus damage and how it relates to ‘short term memory’, ‘long term memory’ and ‘working memory’ (and maybe how those are unhelpful/inaccurate ways of describing what people do: certainly the labels are used a bit loosely in the article).

The other thing I think is interesting, though, is mentioned almost in passing in the release:

Despite HC’s severe memory impairment – the result of experiencing hypoxia (loss of oxygen) in the first week of life – she is a relatively normal functioning individual and college graduate, who is an avid film buff and celebrity watcher.

She’s described as having “a profound long-term memory deficit” (about 40 seconds into the video). I guess that means the common amnesia problem of transferring immediate memory into something that is accessible over longer periods. But she also has “relatively preserved semantic memory” (from the journal article abstract). Her brain damage is described as:

“This woman is missing 50 percent of the normal volume of her hippocampus with no obvious damage to other parts of her brain. This provides an extraordinary opportunity to generate new insights about how this crucial memory centre of the brain affects both short-term and long-term memory,” said lead investigator Nathan Rose, a post-doctoral fellow in Cognitive Neuroscience at Baycrest’s Rotman Research Institute.

Right: so she’s had a severe problem from just after birth, but has developed a “relatively preserved semantic memory”, and is normally functioning and a college graduate. OK, ‘relatively preserved’ could mean relative to her severe deficit in other areas, and still be quite poor, but if she’s normally functioning and can follow films well enough to enjoy them, she sounds pretty normal. But wouldn’t the standard ‘short-term transfer to long-term’ amnesia deficit mean that she couldn’t really build up any kind of semantic memory? We know that other individuals with anterograde amnesia can accumulate procedural and implicit memories, but the kind of learning/memory required to learn to make sense of the world (and to get a college degree, though making sense of the world is harder, I think) seems to be different from that.
I guess my interpretation is that people can be better at coping than the brain science suggests – and if the damage occurs early enough, there may be ways of getting round it or compensating for it – but also that our categories of STM, LTM, semantic, procedural, explicit, and implicit, while very useful for describing, and maybe theorising about, memory, are probably crude over-simplifications of an activity which is much more complex and fluid than these models allow.

…and: you always need a good headline:
Young woman remembers mum’s face better than strangers’ faces
– which I think is also an accurate reading of the press release (though it admittedly leaves out the psychological point) – wouldn’t generate much interest. Maybe there’s a positive side to Paris Hilton after all. Maybe.

*“Headquartered on a 22-acre campus in Ontario and fully affiliated with the University of Toronto, Baycrest is the global leader in developing and providing innovations in aging and brain health.” From what I can see on the website, I think it’s a combined hospital, care centre, and research centre.

Understanding narratives, in film and elsewhere, actually requires a lot of cognitive abilities

One of the most damaging (for her) early features of my mum’s gradually developing vascular dementia was her inability to follow (and therefore enjoy) stories, films, radio plays. On the other hand, I wasn’t that worried when my son, at about age four, enjoyed watching the video of Star Wars again, and again, and again, and again. This was valuable cognitive development stuff: he was coming to understand about sequences of events, and narrative – and probably about film representation of the real world (well, Tatooine, anyway). He grew up to take a Drama degree….