Just another site

Monthly Archives: June 2012

You Are Not a Gadget: Jaron Lanier on Technology and Personhood

Saw Jaron Lanier on Newsnight last night, adding some sensible wider perspectives to a debate on ways of filtering internet porn, and was reminded of how interesting his ideas have been over the last twenty years or so.

Lanier is a virtual reality programmer and internet activist from way back. He looks like a dreadlocked Buddha (on Newsnight like a dreadlocked Buddha who has let himself go a bit), but that shouldn’t undermine his authority (actually, for me, it probably enhances it).

His recent (2010, 2011) book You Are Not a Gadget (here’s the book on Amazon*) is a fascinating discussion of the social and philosophical implications of the particular ways we have chosen to structure computer systems. The basic idea is that particularly successful systems, like the World Wide Web, the mouse-and-windows interface, MIDI, and the UNIX operating system, both structure our reality and lock us in to those systems, pre-empting other ways of doing things (and therefore pre-empting other ways of thinking about things – and maybe pre-empting other ways of being). The installed-base/lock-in problem isn’t a new phenomenon which appeared with computing. Other examples are the qwerty typewriter/keyboard layout (did you know it was originally designed to slow down typing? Repeated attempts to introduce faster, easier-to-use layouts have been complete failures – because too many people know how to do it the qwerty way) and the steering-wheel/two-or-three-pedals way of controlling motor vehicles (actually probably quite a good system, from what we know about multi-tasking, but probably the result of a few technological accidents 100+ years ago). These things aren’t just technology and design issues, though: they can have psychological and social implications, which is why I’m discussing his book here. One section is headed “Digital Reification: Lock-in Turns Philosophy into Reality”, which sums up the starting idea of the book well, I think.

Here’s Lanier’s opening statement about the book:

You Are Not a Gadget argues that certain specific, popular Internet designs of the moment – not the Internet as a whole – tend to pull us into life patterns that gradually degrade the ways in which each of us exists as an individual. These unfortunate designs are more orientated towards treating people as relays in a global brain. De-emphasising personhood, and the intrinsic value of an individual’s unique internal experience and creativity, leads to all sorts of maladies, many of which are explored in these pages. While the core argument might be described as “spiritual,” there are also profound political and economic implications.
p. x

And here’s something about the mechanisms of how it affects us:

The most important thing about a technology is how it changes people
When I work with experimental digital gadgets, like new variations on virtual reality, in a lab environment, I’m always reminded of how small changes in the details of a digital design can have profound unforeseen effects on the experiences of the humans who are playing with it. The slightest change in something as seemingly trivial as the ease of use of a button can sometimes completely alter behaviour patterns.
For instance, Stanford University researcher Jeremy Bailenson has demonstrated that changing the height of one’s avatar in immersive virtual reality transforms self-esteem and social self-perception. [Here’s the publications page at Bailenson’s Virtual Human Interaction Lab at Stanford: interesting stuff. Specific refs at ** below] Technologies are extensions of ourselves, and, like the avatars in Jeremy’s lab, our identities can be shifted by the quirks of gadgets. It is impossible to work with information technology without also engaging in social engineering.
One might ask, “If I am blogging, twittering, and wikiing a lot, how does that change who I am?” or “if the ‘hive mind’ is my audience, who am I?” Inventors of digital technologies are like stand-up comedians or neurosurgeons, in that our work resonates with the philosophical questions; unfortunately, we’ve proven to be poor philosophers lately.
When developers of digital technologies design a program that requires you to interact with a computer as if it were a person, they ask you to accept in some corner of your brain that you might also be conceived of as a program. When they design an Internet service that is edited by a vast anonymous crowd, they are suggesting that a random crowd of humans is an organism with a legitimate point of view.
Different media designs stimulate different potentials in human nature. We shouldn’t seek to make the pack mentality as efficient as possible. We should instead seek to inspire the phenomenon of individual intelligence.
“What is a person?” If I knew the answer to that, I might be able to program an artificial person in a computer. But I can’t. Being a person is not a passive formula, but a quest, a mystery, a leap of faith.
pp. 4-5

Now, I guess I might disagree with Lanier here: I believe a random crowd of humans does have a legitimate point of view, at least a legitimate artistic/aesthetic one, as I argued in talking about the evolutionary view of traditional music. So I’d go for more of a dialectic between the social and the individual, because “inspiring individual intelligence” also seems a good idea. Lanier isn’t a swivel-eyed individualist, though:

A happy surprise
The rise of the web was a rare instance when we learned new, positive information about human potential. Who would have guessed (at least at first) that millions of people would put so much effort into a project without the presence of advertising, commercial motive, threat of punishment, charismatic figures, identity politics, exploitation of the fear of death, or any of the other classic motivators of mankind? In vast numbers, people did something cooperatively solely because it was a good idea, and it was beautiful.
Some of the more wild-eyed eccentrics in the digital world had guessed it would happen – but even so it was a shock when it actually did come to pass. It turns out that even an optimistic, idealistic philosophy is realisable. Put a happy philosophy of life in software, and it might very well come true!

It’s an interesting book. You might want to read it.

Lanier, Jaron (2010, 2011) You Are Not a Gadget: A Manifesto. Alfred A. Knopf, 2010; Penguin Books, 2011 (‘with updated material’)

*Remember that you shouldn’t really be buying stuff from Amazon UK unless you’re a citizen of Luxembourg, where they pay their taxes.

**Yee, N. & Bailenson, J. (2007). The Proteus Effect: The effect of transformed self-representation on behavior. Human Communication Research, 33(3), 271-290.

Yee, N., Bailenson, J.N., & Ducheneaut, N. (2009). The Proteus effect: Implications of transformed digital self-representation on online and offline behavior. Communication Research, 36(2), 285-312.

Here’s Nick Yee’s 2007 doctoral dissertation, The Proteus Effect, which describes a range of similar effects.

Warning: if you read this post, your hard disk will be wiped and all the sweet fluffy kittens within a two mile radius will die horribly!!!!!!

This warning was issued by Microsoft* this morning… you know the rest.

BUT we should take these warnings seriously – because they are themselves viruses which are evolving and spreading through our systems and our minds.
Another post about some kind of evolution; I’ll stop after this one.

This post is a summary of a paper presented {sometime} at {some conference or other} that I went to. I think the 1998 IRISS conference in Bristol, but I’m not sure. I don’t know who presented it either. If anyone knows, please tell me, so I can credit them properly, because it was a great presentation.

Generally, people know that paedophiles aren’t harvesting baby pictures from Facebook, and that watching YouTube videos doesn’t allow Russian gangsters access to your building society account – but the dreadful warnings keep coming. Why do these memes do so well?

Humans, because of sophisticated but fallible information transmission systems (talking and singing), are good vehicles for meme evolution. That’s how traditional music works, after all (see last post). The world of blogs and Twitter is a competitive memeocracy, but there’s some information or aesthetic gain there. What makes the useless, stupid virus warnings viable? They are alive and well out there: I glimpse one passing through my patch of the Facebook jungle about once a month.

The case presented at the conference was this: virus warnings have access to mechanisms for rapid multiplication and transmission, so they can quickly reproduce themselves millions of times to allow for very high fatality rates (like oceanic fish); they have very low energy needs (copy-and-paste or a click on ‘share’ is all they need to survive); and – this is the bit I liked – they have a mutation mechanism to provide the variation they need for evolution. Although the lowest-energy form of reproduction is to pass them on directly, people find it difficult to do that without changing something: removing line breaks, changing the spelling, adding or removing exclamation marks…

Compare these versions of the ‘Budweiser Frogs’ virus warning:

URGENT READ IMMEDIATELY. NOT A JOKE!! READ IMMEDIATELY AND PASS ON TO EVERYONE YOU KNOW! Someone is sending out a very cute screensaver of the Budweiser Frogs. If you download it, you will lose everything! Your hard drive will crash and someone from the Internet will get your screen name and password! DO NOT DOWNLOAD IT UNDER ANY CIRCUMSTANCES! It just went into circulation yesterday. Please distribute this message. This is a new, very malicious virus and not many people know about it. This information was announced yesterday morning from Microsoft. Please share it with everyone that might access the Internet. Once again, Pass This on Please!!!!!!

READ AND PASS ON TO EVERYONE YOU KNOW Someone is sending out a very cute screensaver of the Budweiser Frogs.
If you download it, you will lose everything! Your hard drive will crash and someone from the Internet will get your screen name and password! DO NOT DOWNLOAD IT UNDER ANY CIRCUMSTANCES!
It just went into circulation yesterday. Please distribute this message.This is a new, very malicious virus and not many people know about it. This information was announced yesterday morning from Microsoft. Please share it with everyone that might access the Internet.
Press the forward button on your email program and send this notice to EVERYONE you know.
Let’s keep our email safe for everyone

Both these examples were found online: thanks.

No, you may never have heard of the Budweiser frogs: it was a long time ago. But that’s the thing about these parasites; they can evolve and change their hosts. The same warning will appear linked to the Jimmy Carr sex tape, when it emerges.

It’s difficult to resist the tamper urge. I deliberately didn’t insert the missing space in the second example, but I did reformat it a bit to fit the layout of this blog. Of course. That’s what you do.
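That low-energy-reproduction-plus-accidental-mutation mechanism is simple enough to caricature in code. Here’s a toy sketch (the message, the edit operations and all the probabilities are invented for illustration, not taken from any study): each copy gets shared on to two more people per round, and every retransmission has some chance of a small, tamper-urge edit.

```python
import random

random.seed(1)

SEED_MESSAGE = "do not download it under any circumstances!"

def retransmit(msg):
    # copying is cheap, but people can't resist small edits:
    # tweak case, spacing, or exclamation marks at random
    ops = [
        lambda m: m.upper(),
        lambda m: m.replace("!", "!!"),
        lambda m: m.rstrip("!"),
        lambda m: m.replace(" ", "  ", 1),
    ]
    return random.choice(ops)(msg) if random.random() < 0.3 else msg

# each "generation", every copy is shared on to two more people
population = [SEED_MESSAGE]
for generation in range(8):
    population = [retransmit(m) for m in population for _ in range(2)]

print(f"{len(population)} copies, {len(set(population))} distinct variants")
```

Even with cheap, near-perfect copying, the pool of variants grows – which is all the raw variation an evolutionary process needs.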

I guess/hope someone is studying these things systematically, but I couldn’t find anything in a quick search. Please let me know if you know of any research.

It’s not just the reproductive mechanism, of course: there’s information content as well, which is probably where they adapt, through random editing, and evolve into currently viable forms. These messages show who we’re afraid of: paedophiles, communists, Russian gangsters, your future employer, council snoopers – or just ‘someone from the Internet’. In content, these warnings are related to urban folktales. One explanation for urban folktales is that they express our hidden fears: in this case, distrust of technology, and the uneasy feeling that people out there can reach out and fiddle with your computer without you knowing (Microsoft messes with my computer while I’m asleep: I got a warning from them this morning).

Urban folktales are very adaptable: stories like The Twopenny Lean, The Phantom Hitchhiker, The Holland Handkerchief, The Fatal Hairdo and The Rich Beggar go on from generation to generation and get changed according to social conditions and fashions. I first heard The Fatal Hairdo about a beehive hairdo (about 1960-65), but it only took that form for a few years before it moved on. If you don’t know about these, a good source is the series of books by Jan Harold Brunvand.

There must be some online versions by now: an email which mysteriously arrives with a request to pass it on to the sender’s mother, which turns out to have been sent (from an IP address that doesn’t exist) by someone who died just a year before, or a Facebook account which was mysteriously wiped at the exact moment the tsunami hit (yes, I know neither of these really makes technical sense, but that’s not important: neither does The Fatal Hairdo). If anyone knows any of these, I’d love to hear them. I’ve heard the one about the real origin of the term ‘bug’. There are a few examples of scams and warnings collected online, but no real social media ones.

Of course, there’s always the one about video games rewiring our kids’ brains.

*Why never Apple? Is it because the folks at Apple are too cool to care about those kittens?

How does music evolve?

Two questions about music and evolution. How did humans evolve to be musical? (last post) How does music evolve? (below)

 Warning: this starts with interesting stuff about the psychology of music and evolutionary mechanisms applied to non-biological systems, but then drifts off into quite a lot about traditional music.

There’s been an experimental demonstration of how random sounds can evolve into something that seems quite musical by means of human selection. Here’s an intro to the project on PsyPost.

It’s more fully written up in the paper Evolution of music by public choice by MacCallum, Mauch, Burt, and Leroi of Imperial College London and the National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan, published in the Proceedings of the National Academy of Sciences of the United States of America (PNAS) in 2012. Here’s their abstract:

Music evolves as composers, performers, and consumers favor some musical variants over others. To investigate the role of consumer selection, we constructed a Darwinian music engine consisting of a population of short audio loops that sexually reproduce and mutate. This population evolved for 2,513 generations under the selective influence of 6,931 consumers who rated the loops’ aesthetic qualities. We found that the loops quickly evolved into music attributable, in part, to the evolution of aesthetically pleasing chords and rhythms. Later, however, evolution slowed. Applying the Price equation, a general description of evolutionary processes, we found that this stasis was mostly attributable to a decrease in the fidelity of transmission. Our experiment shows how cultural dynamics can be explained in terms of competing evolutionary forces.
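(For reference: the Price equation they mention is a completely general piece of evolutionary bookkeeping, not something specific to music. In its standard two-term form, for a trait z and fitness w,

```latex
\bar{w}\,\Delta\bar{z} \;=\; \operatorname{Cov}(w_i, z_i) \;+\; \operatorname{E}\!\left(w_i\,\Delta z_i\right)
```

the first term is change due to selection – here, consumers preferring some loops over others – and the second is change during transmission – here, whatever the splicing and mutation do to a loop between generations. The ‘decrease in the fidelity of transmission’ they report presumably lives in that second term.)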

You can find examples of the evolved music on the DarwinTunes site, where they’ve now got up to 3,500 generations, and you can also take part in the study. The ‘selective influence’ is just asking people to rate the clips – do they like them or not? The ‘sexual reproduction’ is done by splitting the clips and mixing them with each other to simulate chromosome mixing (sex is good for mixing up genes), and the ‘mutation’ is introducing a bit of random variation. So that looks like a nice model of reproductive selection, and what comes out sounds more and more like music as you go down the generations. In fact, there may even be new species evolving: a tweet today says: “Amazing stuff on the main channel right now – a whole new phenotype has emerged – inter-loop chord changes and more!” (Yes, you can follow them on Twitter at @darwintunes.)
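That rate–splice–mutate loop is a standard genetic algorithm, and a toy version fits in a few lines. To be clear, this is not the DarwinTunes code: the ‘loops’ here are just lists of pitch-like numbers, and the rating function (distance from an arbitrary ‘pleasing’ pattern) is my stand-in for thousands of human listeners.

```python
import random

random.seed(42)

LOOP_LEN = 16                                          # a "loop" is 16 pitch values
TARGET = [60 + (i % 4) * 4 for i in range(LOOP_LEN)]   # an arbitrary "pleasing" pattern

def rating(loop):
    # stand-in for listener ratings: closer to the pleasing pattern scores higher
    return -sum(abs(a - b) for a, b in zip(loop, TARGET))

def crossover(a, b):
    # splice two parent loops at a random point, like DarwinTunes' loop mixing
    cut = random.randrange(1, LOOP_LEN)
    return a[:cut] + b[cut:]

def mutate(loop, rate=0.1):
    # a little random variation on each gene
    return [g + random.gauss(0, 2) if random.random() < rate else g for g in loop]

def evolve(pop_size=50, generations=100):
    pop = [[random.uniform(40, 80) for _ in range(LOOP_LEN)] for _ in range(pop_size)]
    history = [sum(map(rating, pop)) / pop_size]
    for _ in range(generations):
        pop.sort(key=rating, reverse=True)
        parents = pop[: pop_size // 2]          # "listeners" keep the top-rated half
        pop = [mutate(crossover(*random.sample(parents, 2)))
               for _ in range(pop_size)]
        history.append(sum(map(rating, pop)) / pop_size)
    return history

h = evolve()
print(f"mean rating: gen 0 = {h[0]:.1f}, gen 100 = {h[-1]:.1f}")
```

The mean rating climbs quickly and then flattens out, a shape much like the quick-evolution-then-stasis the paper describes (though here the flattening is just the population closing in on the target, not a transmission-fidelity effect).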

Well, that’s fascinating and fun, but as an old folky my reaction was ‘Duh! I thought everyone knew that music evolved.’ A long-established theory of the development of traditional music is one of evolution, with variation provided by imperfect recall and bits of musical innovation, and selection provided by people’s preference for what they would like to hear and play again, or maybe just by what sticks in memory.

Here’s the definition from the International Folk Music Council (no, I didn’t know there was one of those, either) in 1954:

…folk music is the product of a musical tradition that has been evolved through the process of oral transmission. The factors that shape the tradition are: i) continuity which links the present with the past; ii) variation which springs from the creative impulse of the individual or group; iii) selection by the community, which determines the form or forms in which the music survives. (Quoted in Lloyd 1975, p. 15)

Cecil Sharp said much the same kind of thing in 1920:

…the most typical qualities of the folk-song have been laboriously acquired during its journey down the ages, in the course of which its individual angles and irregularities have been rounded and smoothed away just as the pebble on the seashore has been rounded by the action of the waves; that the suggestions, unconsciously made by individual singers, have at every stage of the evolution of the folk-song been weighed and tested by the community, and accepted or rejected by their verdict; and the life history of the folk-song has been one of continuous growth and development, always tending to approximate the form which should be at once congenial to the taste of the community, and expressive of its feelings, aspirations, and ideals. (p. viii)

Sharp was looking at it from the point of view of National Song. Lloyd, a Marxist, uses a different framework:

…the formulation is valuable for its clear suggestion of the vital dialectic of folksong creation, that is, the perpetual struggle for synthesis between the collective and individual, between tradition and innovation, between what is received from the community and what is supplied out of personal fantasy, in short, the blending of continuity and variation. (Lloyd, 1975, p. 16)

Gerould points out in The Ballad of Tradition (1932, 1957) that this process can also produce a range of equally admirable (in his terms: equally viable, for the evolutionary argument) variants. He does want to bring artistic judgement and ability into it:

…the existence of many variants, both melodic and contextual, which are manifestly not due to haphazard, undirected substitution for what has been forgotten shows a widespread power of musical and poetic expression. (p. 183)

…and I guess that’s fair enough. What Mississippi John Hurt or Harry Cox brought to the tradition is probably a step which goes beyond natural selection.

It also seems to me that the biological idea of hybrid vigour shows when different musical traditions cross: what happened when Scotch-Irish ballads met African-derived music in the Appalachians*, or when Toumani Diabaté (and others) combined the power of West African classical music with other traditions**.

A nice modern summary comes from the blogger The Irate Pirate, in a 2009 post on his Wrath of the Grapevine blog:

Like most musics, I suppose, the more you listen to folk music the more you develop a taste for it. But part of the fascination that’s particular to folk music is that you’ll hear bits and pieces of one song that you could have sworn you heard in a completely different song. And you’d be right. Because folk music is an evolved music, and like humans & chimpanzees, there are uncanny similarities lurking just below the surface that point to some invisible, unknowable ancestral precedent. And, like all things subject to evolution by natural selection, the essential parts are maintained and the extraneous, inconsequential bits fall aside. What this means in terms of folk music, particularly these old traditional ballads, is that while a song may be quirky and seemingly obtuse, at some level (often a non-conscious, irrational level), the song is deeply meaningful and helps people to negotiate the trials and uncertainties of this muddled mortal existence.

And, of course, since folksong-evolution is an organic process in an oral tradition, sometimes bits and pieces get lost along the way and we’re left with only fragments (you could say this too is a product of natural selection: the part that remains is that which is most memorable). And since it is sung by people who weren’t professional musicians, it had to relate to things that everyday people could relate to, rather than abstruse musical concepts and the self-indulgent wankery that professional artists are susceptible to. The universal subjects are thus revealed: love, death, nature, heartbreak, childhood, remorse, dream/spiritual encounters, and leaving home. These themes can be found recurring in folk music and most great narrative art across time, from Homer to Shakespeare to Stan Brakhage. It’s as if these subjects keep coming back because they’re the moments in our lives that stay with us, and we need songs & stories like these to help mark those moments and distill meaning from them.

So, the process that produced the Lowlands of Holland or the Leaves of Life is rather similar to the process that produced the cheetah or the kingfisher (and the warthog and the platypus, to be fair). It’s not surprising that traditional music is so good.

Gerould, G.H. (1932, 1957) The Ballad of Tradition. London: Galaxy, OUP

Lloyd, A.L. (1975) Folk Song in England. St Albans: Paladin (orig. publ. Lawrence & Wishart, 1967)

MacCallum, Robert M., Matthias Mauch, Austin Burt, & Armand M. Leroi (2012) Evolution of music by public choice. PNAS (no paper version yet; available online)

Sharp, Cecil (1920) English Folk Songs, 2nd ed. London: Novello

* Here’s Clarence Ashley doing CooCoo Bird (music doesn’t start until 3.30).

** TD with the AfroCubism band.

…and playing Cantelowes.

How did humans evolve to be musical?

I just came across interesting stuff relating to two questions about music and evolution. How did humans evolve to be musical? and How does music evolve?

For this post: How did humans evolve to be musical?

There’s a paper by Geoffrey Miller (no relation), Evolution of human music through sexual selection, which works through the idea that being musical and producing music might give a mating advantage, and so evolve through sexual selection.

One thing I like about this paper is that it carefully works through the mechanisms and criteria for thinking that a behaviour or characteristic can be evolutionarily selected, rather than just making up a sort-of convincing story that some characteristic could be a mating advantage and leaving it at that. It does give you a sort-of convincing story as well:

Consider Jimi Hendrix, for example. This rock guitarist extraordinaire died at the age of 27 in 1970, overdosing on the drugs he used to fire his musical imagination. His musical output, three studio albums and hundreds of live concerts, did him no survival favours. But he did have sexual liaisons with hundreds of groupies, maintained parallel long-term relationships with at least two women, and fathered at least three children in the U.S., Germany, and Sweden. Under ancestral conditions before birth control, he would have fathered many more. Hendrix’s genes for musical talent probably doubled their frequency in a single generation, through the power of attracting opposite-sex admirers. As Darwin realized, music’s aesthetic and emotional power, far from indicating a transcendental origin, point to a sexual-selection origin, where too much is never enough. Our ancestral hominid-Hendrixes could never say, “OK, our music’s good enough, we can stop now”, because they were competing with all the hominid-Eric-Claptons, hominid-Jerry-Garcias, and hominid-John-Lennons. The aesthetic and emotional power of music is exactly what we would expect from sexual selection’s arms race to impress minds like ours.

…which is great, but he then goes on to carefully work through the mechanisms of selection to build up quite a convincing argument. He also points out that Darwin suggested much the same idea, though it wasn’t taken up in mainstream evolutionary thought.

It’s quite a long and detailed paper, but I think it’s well worth reading, if only as an example of the careful thought that I think is often missing in evolutionary psychology. There’s also a bit later on which I found fascinating, about possible runaway effects in sexual selection – the kind of thing which leads to hyper-exaggerated characteristics which don’t seem very adaptive, like the enormous antlers of the Irish elk or the peacock’s ludicrous tail. You may be able to think of human equivalents. Miller cites mathematical models for the effect, and notes: “Only when the courtship trait’s survival costs became very high might the runaway effect reach an asymptote.”

The power of the runaway theory is that it can explain the extremity of sexual selection’s outcomes: how species get caught up in an endless arms race between unfulfillable sexual demands and irresistible sexual displays. Most relevant for us, the preferences involved need not be cold-blooded assessments of a mate’s virtues, but can be deep emotions or lofty cognitions. Any psychological mechanism used in mate choice is vulnerable to this runaway effect, which makes not only the displays that it favors more extreme, but makes the emotions and cognitions themselves more compelling. Against the claim that evolution could never explain music’s power to emotionally move and spiritually inspire, the runaway theory says: any emotional or spiritual preferences that influence mate choice, no matter how extreme or subjectively overwhelming, are possible outcomes of sexual selection (cf. Dissanayake, 1992). If music that emotionally moves or spiritually inspires tended to sexually attract as well, over ancestral time, then sexual selection can explain music’s appeal at every level.
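The runaway mechanism can be caricatured in a few lines. This is a toy individual-based sketch, not any of the models Miller cites – the population size, cost constant and inheritance noise are all arbitrary assumptions: each individual carries a display trait and a preference trait, choosers weight potential mates by preference × display, and extravagant displays pay a survival cost. Because offspring inherit both traits together, display and preference become genetically correlated, which is what drives the runaway.

```python
import math
import random

random.seed(7)

POP, GENS, COST = 200, 60, 0.01

# each individual is (display, preference); both traits are heritable
pop = [(random.gauss(0, 1), random.gauss(0.5, 0.5)) for _ in range(POP)]

def next_generation(pop):
    offspring = []
    for _ in range(POP):
        mother = random.choice(pop)
        # mother's preference weights potential fathers by their display,
        # discounted by a quadratic survival cost on extravagant displays
        weights = [math.exp(mother[1] * d - COST * d * d) for d, _ in pop]
        father = random.choices(pop, weights=weights)[0]
        # offspring inherit the midparent value of each trait, plus noise
        offspring.append(((mother[0] + father[0]) / 2 + random.gauss(0, 0.3),
                          (mother[1] + father[1]) / 2 + random.gauss(0, 0.3)))
    return offspring

start = sum(d for d, _ in pop) / POP
for _ in range(GENS):
    pop = next_generation(pop)
end = sum(d for d, _ in pop) / POP
print(f"mean display: {start:.2f} -> {end:.2f}")
```

With these toy numbers the mean display drifts steadily upwards until the quadratic cost starts to bite – a small-scale version of the asymptote Miller mentions.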

Although I’m usually very suspicious of evolutionary explanations of psychological phenomena (mainly because they often seem to me to be based on sloppy evolutionary theory), I think Miller makes a good case. There is one thing that worries me, though. The explanation is based on mate selection, which Miller says is primarily females selecting males. The peacock and Irish elk examples show characteristics developed in males rather than females. So this looks like an explanation for why human males develop musicality and the ability to produce music, while human females need only develop musicality – the ability to appreciate the music that males produce to improve their chances of being selected. So where’s the explanation for women’s musical skills? Many of my favourite musicians, and most of my favourite singers, are women. Is this musical skill an epiphenomenon? Just a matter of gene leakage from the male-selected characteristic?

The other question is how does music evolve? See the next post for that.

Miller, G. F. (2000). Evolution of human music through sexual selection. In N. L. Wallin, B. Merker, & S. Brown (Eds.), The Origins of Music, MIT Press.

Scientists find excuse for Comic Sans!*

Just found out about an interesting piece of research on the effects of making things difficult to read on learning:

Diemand-Yauman, Connor, Daniel M. Oppenheimer & Erikka B. Vaughan. (2011) Fortune favors the bold (and the italic): Effects of disfluency on educational outcomes. Cognition, 118 (1),111-115
(available online in pre-print form)

Abstract: Previous research has shown that disfluency – the subjective experience of difficulty associated with cognitive operations – leads to deeper processing. Two studies explore the extent to which this deeper processing engendered by disfluency interventions can lead to improved memory performance. Study 1 found that information in hard-to-read fonts was better remembered than easier to read information in a controlled laboratory setting. Study 2 extended this finding to high school classrooms. The results suggest that superficial changes to learning materials could yield significant improvements in educational outcomes.

The lab study used Comic Sans and Bodoni Italic in a smaller size (12pt) and 60% grey compared with 16pt Arial in full black, and tested recall of fairly simple facts. The school study used teachers’ own existing learning materials – worksheets and PowerPoint slides – and used two classes for each teacher to give a per-teacher control (there was a good effort to make the study ecologically valid).  “The fonts of the learning material in the disfluent condition were either changed to Haettenschweiler [a heavy Gothicy font], Corsiva [light and flowing script-style] or Comic Sans italics [ugh], if the material was on PowerPoint, or were copied disfluently (by moving the paper up and down during copying) when electronic documents were unavailable.” I don’t quite understand the last bit – motion-smeary photocopies?

The children who had the disfluent presentations scored better in “exams”/”classroom tests” (I think these mean the same: no details of the tests are given) in English (at various levels), Physics (at various levels) and History, but not in Chemistry. There weren’t significant differences between the disfluent fonts.

Diemand-Yauman et al. conclude:

This study demonstrated that student retention of material across a wide range of subjects (science and humanities classes) and difficulty levels (regular, Honors and Advanced Placement) can be significantly improved in naturalistic settings by presenting reading material in a format that is slightly harder to read. While disfluency appears to operate as a desirable difficulty, presumably engendering deeper processing strategies (c.f. Alter et al., 2007), the effect is driven by a surface feature that prima facie has nothing to do with semantic processing.

Interesting – and it suggests that all the effort I put into my PowerPoints – allowing room for uncrowded text and reasonable point sizes, breaking lines for meaning, trying to find simple, clear sentence structures… – might be wasted or counterproductive. It’s worth noting that Diemand-Yauman et al. were careful to avoid illegibility. They just wanted to add some slight difficulty, and they speculate that the disfluency effect might be U-shaped, and so interfere with learning at higher levels of disfluency.

I picked this up from an article by Martha Gill (a distant relative of Eric Gill, she points out) in the New Statesman. Thanks, Martha. The article is headed How Comic Sans got useful. Useful maybe; acceptable, no. In particular, anyone who uses Comic Sans to suggest anything to do with children and their writing should have to read Finnegans Wake in condensed Haettenschweiler, or better still Wingdings – and take a test on the content. That’s what I’d call disfluency.

*There is no excuse for Comic Sans

This is one of those cases, like Rind, Tromovitch & Bauserman (1998), discussed in Garrison & Kobor (2002) [this is a Schools of Thought reference], where science has come up with an unacceptable result.

Alter, A. L., Oppenheimer, D. M., Epley, N., & Eyre, R. (2007). Overcoming intuition: Metacognitive difficulty activates analytic reasoning. Journal of Experimental Psychology, 136(4), 569–576.

Diemand-Yauman, Connor, Daniel M. Oppenheimer & Erikka B. Vaughan. (2011) Fortune favors the bold (and the italic): Effects of disfluency on educational outcomes. Cognition, 118 (1),111-115

Garrison, Ellen & Kobor, Patricia (2002) Weathering a Political Storm: a contextual perspective on a psychological research controversy. American Psychologist, 57(3), 165-175

Rind, Bruce, Tromovitch, Philip & Bauserman, Robert (1998) A Meta-analytic Examination of Assumed Properties of Child Sexual Abuse Using College Samples. Psychological Bulletin, 124(1), 22-53

it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae… it doesn’t matter in what order the letters in a word are, the only important thing is that the first and last letter be at the right place

Lots of people have seen this, and it’s fun – but what does it really show?

It’s not actually true that it’s ‘Aoccdrnig to rscheearch at Cmabrigde Uinervtisy’, as some versions have it, but a researcher at Cambridge University (Matt Davis at the MRC Cognition and Brain Sciences Unit) has been thinking about it, and published a fascinating page taking the meme apart:

The page has versions in many languages: Hebrew, Czech, Russian, Icelandic… (he’d like to know if it works in Thai or Chinese). The effect does vary from language to language: it’s fine in French and Spanish (even I, with basic French and very little Spanish, can read it), but apparently not in Hebrew (no vowels) or Finnish (long complex words, and all those vowels can pile up a bit).

Davis has traced some previous research by Graham Rawlinson in 1976, shows that the ‘first and last letters’ thing doesn’t necessarily work, even in English, and goes on to take the standard version apart, relating it to what we know about reading, to demonstrate that the usual example is quite carefully tailored to be easier than many other passages in English would be.

A fascinating bit of real-life, non-anglocentric research, followed by the application of standard theories about reading to an unconventional example. It would be the basis of a good theories-of-reading lecture, I think. I don’t teach cognitive psych any more, but it could be an idea for someone else. Thanks, Matt.

(….and also thanks to Bart van Leeuwen who posted the link in the middle of a fairly heated argument about proper spelling and punctuation on a photography discussion group – no, I can’t understand how that got started, either — well, actually, if you know what discussion groups are like sometimes, you can understand it.)

My typing is awful, and I make many mistakes, often reversing the order of two letters when one is typed right-handed (-fingered, actually) and the other left-handed. I’ve gone back through this post correcting those errors, as usual, but I need not hvae btoherd, raelly.
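For anyone who wants to generate their own examples, the trick behind the meme – keep the first and last letters fixed and shuffle the interior – takes only a few lines of Python. This is my own toy sketch, nothing to do with Davis’s or Rawlinson’s materials, and it ignores punctuation, which the carefully tailored versions handle more cunningly:

```python
import random

def scramble_word(word, rng):
    """Shuffle a word's interior letters, keeping the first and last in place."""
    if len(word) <= 3:
        return word  # too short to scramble
    interior = list(word[1:-1])
    rng.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def scramble_text(text, seed=0):
    """Apply scramble_word to every whitespace-separated token."""
    rng = random.Random(seed)
    return " ".join(scramble_word(w, rng) for w in text.split())

print(scramble_text("according to research at Cambridge University"))
```

Run it a few times with different seeds and you quickly see Davis’s point: some shuffles are much harder to read than others, so the canonical example is doing more work than it lets on.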

You can rewire your brain! Well, maybe

As usual, a psychology story in the press which made me think ‘yes, but…’.

This is in today’s (Weds 13 June) Guardian: How Barbara Arrowsmith-Young rebuilt her own brain:

Barbara Arrowsmith-Young had a phenomenal memory but was ‘living in a fog’. She realised that part of her brain was not functioning properly so she devised a series of cognitive exercises to develop it. The results changed her life – and now she has helped thousands of children with learning disabilities

It looks as though this is a PR-inspired article. The second paragraph has the line: “She has just published a groundbreaking, widely praised and enthralling book called The Woman Who Changed Her Brain”. The online version of the article comes with a link to the book in the Guardian bookshop. Some quick research turned up various online interviews and articles from various parts of the world in the last month or two, including an Australian book fair video, and she was on at the Hay Festival on 5 June, so I guess the Guardian article is part of a world tour publicising the book – and her Arrowsmith cognitive program for children with learning disabilities.

So I think it’s important to note that this is a story that promotes a commercial operation from Arrowsmith-Young’s point of view, though that’s presumably not why The Guardian thought it worth publishing. That doesn’t mean it isn’t psychologically interesting, or (more importantly) that there isn’t something here which really could benefit people with cognitive problems.

This is the story. AY (sorry, I’m too lazy to keep on typing Arrowsmith-Young) was a child with multiple cognitive problems: in the Australian video linked to above she describes a wider range of problems than are identified in the Guardian article. The basic point seems to be, though, that although she had a “phenomenal” memory, she “didn’t understand anything. Meaning never crystallised. Everything was fragmented, disconnected.” For example, she couldn’t grasp the relationship between the hands of a clock and the time. “I was just not attaching meaning to symbols.” In spite of this, by hard work and memory power, she was able to pass school and university courses.
Then she came across two pieces of psychological research. The first was a case study by Alexander Luria of a Russian soldier who had been shot in the head* and suffered damage to the left occipital-temporal-parietal region:

“I recognised somebody describing exactly what I experienced. His expressions were the same: living life in a fog. His difficulties were the same: he couldn’t tell the time from a clock, he couldn’t understand bigger and smaller without drawing pictures, he couldn’t tell the difference between the sentences ‘The boy chases the dog’ and ‘The dog chases the boy.’ I began to see that maybe an area of my brain wasn’t working.” [Luria’s book, The Man With a Shattered World (1972), which describes this case, is still available. There’s a useful, but very basic, summary online.]

and then:

She read about the work of Mark Rosenzweig, an American researcher who found that laboratory rats given a rich and stimulating environment, with play wheels and toys, developed larger brains than those kept in a bare cage. Rosenzweig concluded that the brain continues developing, reshaping itself based on life experiences, rather than being fixed at birth: a concept known as neuroplasticity. Arrowsmith-Young decided that if rats could grow bigger and better brains, so could she. [Some details of Rosenzweig’s work further down]
So she started devising brain stimulation exercises for herself that would work the parts of her brain that weren’t functioning. She drew 100 two-handed clockfaces on cards, each one telling a different time, and wrote the time each told on the back of the card. Then she started trying to tell the time from each, checking on the back each time to see if she was right. She did this eight to 10 hours a day. Gradually, she got faster and more accurate. Then she added a third hand, to make the task more difficult. Then a fourth, for tenths of a second, and a fifth, for days of the week.
“I was experiencing a mental exhaustion like I had never known,” she says, “so I figured something was happening. And by the time I’d done that for three or four months, it really felt like something had shifted, something had fundamentally changed in my brain, allowing me to process and understand information. I watched an edition of 60 Minutes with a friend, and I got it. I read a page of Kierkegaard – because philosophy is obviously very conceptual, so had been impossible for me – and I understood it. I read pages from 10 books, and every single one I understood. I was like, hallelujah! It was like stepping from darkness into light.”
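As an aside, the drill AY describes – random clockfaces on the front of a card, the time on the back, answer and check, over and over – is essentially a flashcard loop. Here’s a toy sketch of that general idea in Python, entirely my own invention for illustration and nothing to do with the actual Arrowsmith materials (the hand positions are represented as angles rather than drawings):

```python
import random

def make_cards(n=100, seed=0):
    """Generate n flashcards: hand angles on the 'front', the time on the 'back'."""
    rng = random.Random(seed)
    cards = []
    for _ in range(n):
        hour, minute = rng.randrange(12), rng.randrange(60)
        # front: angles of the two hands in degrees; back: the time as text
        front = {"hour_angle": hour * 30 + minute * 0.5,
                 "minute_angle": minute * 6}
        back = f"{hour if hour else 12}:{minute:02d}"
        cards.append((front, back))
    return cards

def drill(cards, answer_fn):
    """Run through the deck, scoring answer_fn's guesses against the backs."""
    correct = sum(answer_fn(front) == back for front, back in cards)
    return correct / len(cards)
```

The interesting part of her account, of course, is not the loop itself but the claim that grinding through it eight to ten hours a day changed something general, not just clock-reading skill – which is exactly the transfer question the working-memory literature below bears on.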

After all that (some years ago), AY has moved on to being able to talk “fluently and passionately and with great erudition” about her book and about her program for helping children with cognitive deficits. She has developed mental exercises targeting a range of cognitive functions (The Guardian says 19), and over the years has helped thousands of children diagnosed with ADD or ADHD in 35 schools in the US and Canada.

OK, that’s the story, and it’s very interesting. But a few things worry me.

The first was wondering how someone with no clear idea of cause and effect, and unable to understand a television news programme (she gives the ability to understand such a programme after her exercises as evidence that they had worked), could understand the ideas and implications of Luria’s and Rosenzweig’s work, and then make the conceptual jump from that to the clockface card exercise. I think I need more information to understand how that worked. I guess I should read the book.
The second worrying thing is that I don’t know of any peer-reviewed research to support this. A quick search in Google Scholar shows links to material published on her website, but not much else. I do know of research which suggests that ‘muscle-style’ training of cognitive abilities doesn’t seem to do much good. Melby-Lervåg & Hulme (2012), after a meta-analysis of twenty-three studies of working memory training, conclude in their abstract:

Meta-analyses indicated that the programs produced reliable short-term improvements in working memory skills. For verbal working memory, these near-transfer effects were not sustained at follow-up, whereas for visuospatial working memory, limited evidence suggested that such effects might be maintained. More importantly, there was no convincing evidence of the generalization of working memory training to other skills (nonverbal and verbal ability, inhibitory processes in attention, word decoding, and arithmetic). The authors conclude that memory training programs appear to produce short-term, specific training effects that do not generalize.

The third thing is my generalised cynicism about the spurious convincingness of explanations which depend on brain function. Now, I may be being unfair to AY, but there is evidence for this spurious convincingness as a general effect**. Weisberg, Keil, Goodstein, Rawson, and Gray’s (2008) paper The Seductive Allure of Neuroscience Explanations tried out good and bad explanations for psychological phenomena, and found that adding a bit of neuroscience flannel enhanced credibility, at least for non-experts. Here’s their abstract:

Explanations of psychological phenomena seem to generate more public interest when they contain neuroscientific information. Even irrelevant neuroscience information in an explanation of a psychological phenomenon may interfere with people’s abilities to critically consider the underlying logic of this explanation. We tested this hypothesis by giving naïve adults, students in a neuroscience course, and neuroscience experts brief descriptions of psychological phenomena followed by one of four types of explanation, according to a 2 (good explanation vs. bad explanation) × 2 (without neuroscience vs. with neuroscience) design. Crucially, the neuroscience information was irrelevant to the logic of the explanation, as confirmed by the expert subjects. Subjects in all three groups judged good explanations as more satisfying than bad ones. But subjects in the two nonexpert groups additionally judged that explanations with logically irrelevant neuroscience information were more satisfying than explanations without. The neuroscience information had a particularly striking effect on non-experts’ judgments of bad explanations, masking otherwise salient problems in these explanations.

For comparison, here’s an account of the AY approach from AY’s commercial website:

Recent discoveries in neuroscience have conclusively demonstrated that, by engaging in certain mental tasks or activities, we actually change the structure of our brains–from the cells themselves to the connections between cells. The capability of nerve cells to change is known as neuroplasticity, and Arrowsmith-Young has been putting it into practice for decades. With great inventiveness, after combining two lines of research, Barbara developed unusual cognitive calisthenics that radically increased the functioning of her weakened brain areas to normal and, in some areas, even above normal levels. She drew on her intellectual strengths to determine what types of drills were required to target the specific nature of her learning problems, and she managed to conquer her cognitive deficits.

I’d prefer some empirical evidence for determining “what types of drills were required”, rather than drawing on AY’s “intellectual strengths”, but the main point is that I think the opening statement is only really supportable in a fairly trivial sense: “by engaging in certain mental tasks or activities, we actually change the structure of our brains–from the cells themselves to the connections between cells.” Well, yes: to the extent that we’re cognitively changed by what we do, our brains change. What else could be happening? Those changes can affect our experience qualitatively, even in later life. Some years of struggling with singing in a choir, and trying to cope with big books full of notes, have made me almost able to read music directly and recognise intervals in a way which is experientially quite different from my earlier strictly by-ear experience of music, and I encourage anyone to try it – your brain will work better, and you’ll experience things you didn’t before! – but I don’t see that as a neurological breakthrough. Is the AY statement a neurologically-enhanced not-much-of-an-explanation? Certainly the Rosenzweig*** studies, while important and fascinating, don’t take us into AY territory. You can read the original 1964 Bennett, Diamond, Krech & Rosenzweig paper (it’s always really valuable to read the originals), and a later summary in Rosenzweig and Bennett (1996).
Rosenzweig and Bennett were mainly concerned with increases in brain size and connectivity, and later on with improvements in memory and learning (obvious things to look at in rats). I always took that research as being more of a warning about the damaging effects of deprivation than a demonstration of the enhancing effects of stimulation (though the 1964 paper does distinguish between non-deprivation and extra stimulation). I’m not up to date on this stuff, so I’d be interested to hear of more recent evidence which might suggest changes in more advanced cognitive functioning as a result of changed experience (apart from the non-result of Melby-Lervåg & Hulme, cited above).

Am I being too sceptical here?

*People getting shot in the head is a valuable source for psychological/neurological research. If we ever run out of wars (unfortunately, not likely) we’ll have to make do with motorcyclists (note to my friend John: be careful out there).

**I got this reference from one of Ben Goldacre’s blogposts about Brain Gym. Goldacre is wonderfully scathing, and funny, about Brain Gym, which also offers some neurological explanations which don’t convince me (actually he’s wonderfully funny and scathing about lots of Bad Science – read the book, follow the blog, follow him on Twitter [an interesting example of one way of using Twitter, including crowdsourcing advice about what to eat from your fridge]). (The link Goldacre gives at the end to the Weisberg & al paper doesn’t work, but the ones I give here are OK – in June 2012, anyway.)

***I can’t resist pointing out that Rosenzweig’s grandparents were asylum seekers (the people formerly known as refugees) or economic migrants (as with many valuable contributors to their new host society) – and no, not ‘bogus asylum seekers’ – what’s the point of seeking bogus asylum? Or even not really (bogusly) seeking asylum: ‘Oh, thanks for giving me refugee status, but I don’t really want it: it was just a wind-up, actually.’ Anyone who uses that phrase needs to take a (non-subsidised) course to improve their understanding of English and logic, and then be deported (to whatever planet they came from) if they fail. [Mild trolling here]

Bennett, E. L., Diamond, M. C., Krech, D., & Rosenzweig, M. R. (1964). Chemical and anatomical plasticity of brain. Science, 146, 610–619.

Melby-Lervåg, M., & Hulme, C. (2012). Is Working Memory Training Effective? A Meta-Analytic Review. Developmental Psychology. Advance online publication. doi: 10.1037/a0028228
A short writeup about this paper: No Evidence That Working Memory Training Programs Improve General Cognitive Performance

Rosenzweig, M. R., & Bennett, E. L. (1996). Psychobiology of plasticity: Effects of training and experience on brain and behavior. Behavioural Brain Research, 78, 57–65.

Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E., & Gray, J. R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20(3), 470–477.

Which moral values drive your vote, and where’s your starting point?

After you’ve read this, check out the comment by my colleague Thom Baguley below (if you can’t see it, click on the title of this post to go to its ‘permalink’ version, which shows comments at the bottom). His comment gives a link to a useful debunking post by Andrew Gelman on the Statistical Modeling, Causal Inference, and Social Science blog (no, it’s good and interesting, really).

Interesting piece in this Wednesday’s Guardian* by Jonathan Haidt** giving a psychological line on why “working-class people vote for the political right, even when it appears to be against their own interests”. Haidt suggests that political choice is a moral choice as much as an economic one, and that right-wing parties “often serve up a broader, more satisfying moral menu than the left”.

This comes out of research by the yourmorals group, which describes itself as:

a group of professors and graduate students in social psychology at the University of Virginia, The University of California (Irvine), and the University of Southern California. Our goal is to understand the way our ‘moral minds’ work.

Haidt’s idea is that there are several dimensions of morality:

care/harm; fairness/cheating; liberty/oppression; loyalty/betrayal; authority/subversion; sanctity/degradation

…and different political messages appeal to different aspects. So left-leaning messages often promote caringness, and to some extent fairness, while right-leaning messages promote liberty, loyalty, respect for authority and religion, and to some extent fairness. From this point of view, the right-wingers have the advantage of a wider range of values to promote. [psychologically-explainable Guardian errors: Haidt discusses this by analogy with the range of tastes we can detect: sweet, sour, salt, etc, and the article says that ‘conservatives have a broader moral palate than the liberals’: at least it keeps the metaphor unmixed, I guess. This is an association error, not a Freudian slip]

I said ‘to some extent fairness’ for both sides, because both sides focus on unfairness. On the right it’s the unfairness of spongers and benefit cheats (in the UK, anyway. In the US, this seems to include the unfairness of people who haven’t bought health insurance getting cancer treatment for free). On the left, it’s the unfairness of enormous rewards for people who aren’t seen as being useful to society. Quite a lot of the online discussions/slanging matches about the occupy movement and the 99% vs the 1% show these different orientations clearly.

Haidt claims that a good deal of people’s political choices are influenced by how strongly they feel about these different ‘flavours’ of morality, and the yourmorals group that he is associated with has quite a bit of research showing that people who identify themselves as being on the right and on the left do show different sensitivities to the different aspects. You can check that for yourself on the yourmorals site: after registering (anonymously, but with some demographic information), you can take lots of their tests, including the moral preferences questionnaire. When I did it, I came out higher on the care/harm dimension than 102,000 liberals and much higher than 21,000 conservatives (note that one personality difference between liberals and conservatives is that the latter are less inclined to waste their time with academic tomfoolery like this, by a ratio of 6 to 1), and much lower than either on sanctity/degradation – and I’m probably well to the left of most US liberals (Haidt explains that ‘liberal’ means something different in the US – not spineless, snivelling, selling-out posh-boy tuition-fee raisers – though he puts it more politely. The yourmorals site also introduces the word ‘socialist’ very gently, explaining that in some countries it is a respectable term for some people with left-wing views, presumably hoping that US respondents won’t be put off by it). Go and try it yourself, along with several other interesting measures on the site.

Haidt concludes:

When working-class people vote conservative, as most do in the US, they are not voting against their self-interest; they are voting for their moral interest. They are voting for the party that serves to them a more satisfying moral cuisine. The left in the UK and USA should think hard about their recipe for success in the 21st century.

All this makes sense to me, and fits broadly with traditional psychological approaches to political choice, like work on the authoritarian personality [yes, I know there’s a lot more to it, but this post is long enough already]. In the few political arguments I get into (they never seem to do any good), it often comes down to ‘position X leads to this moral wrong’, to which the other person responds ‘So? That’s not what’s important: it’s this moral wrong which we should be worried about’ – and the differences in the moral wrongs and rights people think significant do seem to correspond with the kind of dimensions Haidt is proposing. Maybe it would be a good idea to try to show how socialist policies do support those other moral dimensions. That might require some thought and ingenuity, or maybe just some lying.

But is there another factor, which you could call something like adaptive level? What’s accepted as a basic, obvious, taken-for-granted level of things like caringness, loyalty, sanctity? Probably everyone agrees that stamping on babies is unacceptable, but Haidt gives the example of cruelty to animals:

For example, how much would someone have to pay you to kick a dog in the head? Nobody wants to do this, but liberals say they would require more money than conservatives to cause harm to an innocent creature.

What about denying care to someone who is ill or injured unless they can pay for it? From my UK perspective, requiring payment for basic health care seems a bit like charging people for oxygen: it’s just unacceptable – and the UK political debate is about how health care – free at the point of delivery – should be organised, and how far it should extend, not about whether some people should be denied health care. The way I write about it shows my bias: it feels more appropriate to write ‘some people denied health care’ than ‘some people given health care’: the second phrase doesn’t seem to carry any information, like ‘some people have bodies’. ‘Some people denied champagne and Rolex watches’ and ‘some people have champagne and Rolexes’ work the other way round.

From this side of the Atlantic, the idea that people should want to block access to free care seems perverse, and beyond moral debate – but in the US, that’s very definitely a debatable issue of freedom and fairness. In the same way, my Finnish friend was surprised that we dared to charge little children for food while they were at school: “Finnish people would find that just unacceptable” (Finland doesn’t charge tuition fees at universities for EU students, either – they have some idea about education being freely available to all – nutcases). It seems reasonable to suggest that moral issues are debated around some fixed, arbitrary starting point, and that this starting point is culturally variable, but then we need an explanation and mechanism for the starting point. Is it just custom and practice? Maybe if I read some more of the papers from this group I’d get some idea about that.

Small methodological point: when I was doing the questionnaires on the yourmorals site, I found that they had comments boxes at the end like:

Was anything on this page unclear, or do you need to explain anything about your answers?

Was anything unclear in this study, or is there something we should know about your answers before we analyze your data?

Isn’t that sensible? I often use any comment box I can find to point out unclear things, or why the answers the questionnaire allows me to give misrepresent my position, because most questionnaires don’t seem to show any interest in how respondents think about things like this. I suspect that means I get identified as some kind of contrarian weirdo whose responses should be junked. Nice to see researchers having the courtesy to ask – and probably improving their measures as a result (though it’s always possible that the analysis says IF ‘textincommentbox’ THEN ‘dumpresponses’).

*I don’t get all my psychology from The Guardian, though it may look that way, but my ‘Psychology & the Media’ option group found that there’s a great deal of psychology discussed in the everyday press, often with enough information to enable you to trace the publications (or at least the press releases) behind it, and this is a good example.

** Haidt will email you copies of quite a few of his papers from this site. He also helpfully tells you that it’s pronounced ‘Height’. Thanks for both of those things, Jonathan.

Scientists can read your thoughts!!!!! Yeah, right

There have been two recent sets of reports on the ‘scientists can read your thoughts’ theme.
The Guardian reports:

Mind-reading program translates brain activity into words

The research paves the way for brain implants that would translate the thoughts of people who have lost power of speech (31 January 2012)

This is about the paper by Pasley & al (2012) in PLoS Biology ‘Reconstructing Speech from Human Auditory Cortex.’
Here’s the original press release (as always, it’s a university press release which produces all the news coverage), which includes a video showing the original stimuli and the reconstructions.

The Guardian story says:

In a series of new experiments, scientists have been able to use a computer to decipher brain activity. So what, huh? Well, the computer can reconstruct those signals into the actual words the participants are thinking about. It can read your mind.
OK, so sometimes the words were difficult to recognise, but that’s not the point: it means that people unable to speak could generate a voice just by thinking in sentences.
“Potentially, the technique could be used to develop an implantable prosthetic device to aid speaking, and for some patients that would be wonderful,” Robert Knight, a senior member of the team and director of the Helen Wills Neuroscience Institute at the University of California, Berkeley, told the Guardian. “Perhaps in 10 years it will be as common as grandmother getting a new hip.”

Well, that would make sense if they were recording brain activity of people who are speaking these words, or even better intending to speak these words – but that’s not what’s happening here. They’re recording the activity of people listening to these words, so if there’s any mind reading going on here, it is reading what people are hearing, not what they’re thinking or intending.

The other similar story concerns the work of Jack Gallant and his team at U. C. Berkeley, published in Current Biology (Nishimoto & al, 2011); the abstract is online.

The Economist says:


It is now possible to scan someone’s brain and get a reasonable idea of what is going through his mind. The second paper of the trio [Nishimoto & al, 2011], published in Current Biology in September, shows that it is now possible to make a surprisingly accurate reconstruction, in full motion and glorious Technicolor, of exactly what is passing through an awake person’s mind.

Well, not really*.
The Discovery News account is more realistic:

What if scientists could peer inside your brain and then reconstruct what you were thinking, playing the images back like a video?
Science and technology are not even remotely at that point yet, but a new study from the University of California Berkeley marks a significant, if blurry, step in that direction.
Gallant wants to be clear about his lab’s research goal. “We’re trying to understand how the brain works,” he said. “We’re not trying to build a brain-decoding device.”

In the study, activity in the brain while the subject watched the target video was matched against activity recorded while they watched a very large number of other random video clips, with ingenious software then reconstructing the target from the clips whose evoked brain activity matched best.
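Stripped of the neuroscience, that matching step is essentially a nearest-neighbour search: find the library clips whose evoked activity looks most like the activity evoked by the target, and build the reconstruction from them. Here’s a toy sketch, with invented three-number ‘activity’ vectors standing in for fMRI data – the real decoder is, of course, far more sophisticated:

```python
def best_matches(target, library, k=3):
    """Return the k library clips whose activity vectors are closest to target."""
    def dist(a, b):
        # squared Euclidean distance between two activity vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return sorted(library, key=lambda clip: dist(clip["activity"], target))[:k]

# invented library of clips, each tagged with the activity it evoked
library = [
    {"name": "clip_a", "activity": [0.9, 0.1, 0.2]},
    {"name": "clip_b", "activity": [0.1, 0.8, 0.7]},
    {"name": "clip_c", "activity": [0.85, 0.2, 0.1]},
]
target_activity = [0.9, 0.15, 0.15]  # activity evoked by the target video
print([c["name"] for c in best_matches(target_activity, library, k=2)])
```

The point of the sketch is that nothing here ‘reads’ the target: it can only ever return blends of clips it has already seen, which is why the reconstructions are blurry composites rather than recovered experiences.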

Here’s a shorter, but more precise, online press account: Mind-Reading Tech Reconstructs Videos From Brain Images, by Dan Nosowitz – though, as is often the case, the headline is not backed up by the information in the article. It’s a very short article, but it is quite clear that what is happening is that Gallant is “attempting to reconstruct a video by reading the brain scans of someone who watched that video–essentially pulling experiences directly from someone’s brain”, and it points out that this is what the researchers “would really prefer we call ‘brain decoding’” rather than ‘mind-reading’.

That’s the point for both of these studies: they’re picking up input signals at some level of decoding, and this isn’t really very different from the kind of event recording in the optic nerve or the visual cortex carried out by people like Hubel and Wiesel all those years ago. H & W were given the Nobel Prize, quite rightly, for their work, and this research takes the analysis deeper into the brain and to a much higher level of complexity, so it is a considerable advance – but it’s not ‘reading our thoughts’. The Gallant paper from U.C. Berkeley puts it nice and clearly: “These results demonstrate that dynamic brain activity measured under naturalistic conditions can be decoded using current fMRI technology.”
The results are really impressive. Here’s a demo video of the video inputs and the computer reconstruction:

That’s a great technical advance, but we already have a ‘reading your thoughts’ example using EEG. This is the ‘readiness potential’ which Libet (1985) used in the well-known study showing that the brain activity marking a decision to act seems to anticipate conscious awareness of that decision. Actually, the readiness potential was discovered a long time ago, first reported by Kornhuber and Deecke in 1965. I first heard of it in a talk by W. Grey Walter in 1968, and Grey Walter had been able to use the readiness potential to give people ‘mind control’ of the world nearly 50 years ago. He had set up a system to detect the readiness potential, and use that signal to do things like switching a light off and on. All you had to do was decide to operate the switch, and the system would pick up your decision and do the action for you. I don’t think there was any differentiation of readiness potentials, so the system could only be set up to do one thing at a time, and probably deciding to do anything would activate it, so that’s not really mind reading, either. I remember Grey Walter saying that the easy way of doing this was to actually reach out for the switch, when the system would turn on the light before you got there, but he did find it possible to activate the system without actually making the movement, just by forming the intention. He said it was a weird sensation. I think Grey Walter is an under-remembered scientist. His EEG work is fascinating, and he also did important early work in robotics. He does have a Wikipedia page.

(You need to be aware that the account I’ve just given is an unsubstantiated memory of a rather informal talk nearly 50 years ago, when I was a young physiologist just beginning to learn about psychology. I’m sure I haven’t made it all up, but my account of what Grey Walter had been able to do may be more complete and coherent than the actual research. From all we know about memory, some changes in that direction are likely.)
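If my memory of the demonstration is roughly right, the control loop amounts to a simple threshold trigger on the EEG signal. A toy sketch, with invented numbers (the readiness potential is a slow negative drift preceding voluntary movement – this is my illustration of the general idea, not Grey Walter’s actual apparatus):

```python
def readiness_trigger(samples, threshold=-5.0):
    """Return the index at which the signal first drifts below threshold,
    i.e. the moment the 'light switch' would fire, or None if it never does.
    Samples and threshold are invented, arbitrary-unit values."""
    for t, v in enumerate(samples):
        if v <= threshold:
            return t
    return None

# invented trace: baseline noise, then a negative drift as the intention forms
trace = [0.3, -0.1, 0.2, -1.5, -3.0, -4.8, -5.2, -6.0]
print(readiness_trigger(trace))  # -> 6
```

A single threshold like this also makes Grey Walter’s limitation obvious: any sufficiently strong intention trips it, so the system can only ever be wired to one action at a time.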

Actually, while following the press stories on the research above, I came across something which does look a bit more like mind reading, and is maybe more encouraging, or more frightening, depending on your point of view.

Here’s the Discovery News story:

A simple slide show could be the next weapon against terrorists. Using a brain-electrode cap and imagery, scientists at Northwestern University can pick the date, location and means of a future terrorist attack from the minds of America’s enemies.

Well, no it can’t, but if you read on there is some interesting stuff happening:

The electrodes measure the P300 brain wave, an involuntary response to stimuli that starts in the temporoparietal junction and spreads across the rest of the brain. When the wave hits the surface of the brain, the electrodes detect the signal. The stronger the reaction of the subject to a particular stimulus, the stronger the P300 brain wave.
Rosenfeld and his co-author, graduate student John Meixner, divided 29 Northwestern University students into two groups. One group planned a vacation while the other group planned a terrorist attack. The students then had electrodes placed on their scalp, and were shown a series of images of various cities, such as Boston and Houston, and various means of attack, along with other related, but irrelevant, images as controls.
As the slide show advanced, the electrodes recorded the P300 waves. When, for instance, the mock terrorists saw an image of the city they planned to attack, the electrodes recorded strong P300 brain waves. The Northwestern scientists then compared the strength of all the brain waves to find out who was planning an attack on which city, when they were planning it and how they meant to carry it out.
The Northwestern scientists correlated the strongest brain waves with “guilty knowledge” every time. Weaker P300 waves were seen when subjects saw images not associated with their planned attack. Scientists also examined P300 waves from the students in the group that was planning vacations, and did not falsely identify any of them as terrorists.
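The core logic of the comparison is easy to illustrate: average the single-trial P300 amplitudes for each stimulus, and the item with the strongest averaged response is the one the subject has ‘guilty knowledge’ of. The toy Python sketch below simulates that logic only; the actual Rosenfeld protocol (the Complex Trial Protocol, with bootstrap statistics) is considerably more sophisticated, and all the numbers and city names here are invented for illustration.

```python
import random

random.seed(42)  # make the toy simulation repeatable

def simulate_p300(is_probe, n_trials=100):
    """Simulate single-trial P300 amplitudes (microvolts).
    Crime-relevant 'probe' items evoke a larger average P300 than
    irrelevant items; single trials are noisy, hence the averaging."""
    base = 8.0 if is_probe else 4.0
    return [random.gauss(base, 3.0) for _ in range(n_trials)]

def guilty_knowledge(amplitudes_by_item):
    """Return the item with the largest averaged P300 - the analogue of
    'the city the mock terrorist planned to attack' - plus all means."""
    means = {item: sum(a) / len(a) for item, a in amplitudes_by_item.items()}
    return max(means, key=means.get), means

# A mock subject who 'planned an attack on Houston'
data = {city: simulate_p300(city == "Houston")
        for city in ["Boston", "Houston", "Chicago"]}
item, means = guilty_knowledge(data)
print(item)  # → Houston
```

The averaging is the whole trick: any single trial is too noisy to classify, but over a hundred trials the probe item stands well clear of the irrelevant ones.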

Here’s an actual paper on the research (Rosenfeld et al., 2008):

If you’ve read my previous posts, you’ll know exactly what I’m going to say here. Brilliant research, doing complicated stuff, with fascinating possibilities, but greatly overhyped by the headlines, and slightly misrepresented by the text, with the clearest remarks about the true scope of the research right at the end of the article. I think the overall result of this is to make the reader cynical about any possibility of progress – “I read about the same thing five years ago, and it never happened: these scientists are always making fanciful claims” – and to underrepresent the complexity (and interest) of the research that is actually going on.

*To be fair to The Economist, the article also describes two other interesting studies which are a little bit nearer to the ‘mind reading’ headline**.

**But to be pedantic (and maybe unfair) no-one uses Technicolor nowadays, and you have to be pretty old to even remember the phrase ‘in glorious Technicolor’.


Grey Walter, W. (1964) Contingent negative variation: An electrical sign of sensorimotor association and expectancy in the human brain. Nature, 203, 380–384.

Libet, B. (1985) Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioral and Brain Sciences, 8, 529–566.

Nishimoto, Shinji, Vu, An T., Naselaris, Thomas, Benjamini, Yuval, Yu, Bin, Gallant, Jack L. (2011) Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology, 21(19), 1641–1646.

Pasley, Brian N., Stephen V. David, Nima Mesgarani, Adeen Flinker, Shihab A. Shamma, Nathan E. Crone, Robert T. Knight, Edward F. Chang (2012) Reconstructing speech from human auditory cortex. PLoS Biology, 10(1), e1001251.

Rosenfeld, J. Peter, Elena Labkovsky, Michael Winograd, Ming A. Lui, Catherine Vandenboom and Erica Chedid (2008) The Complex Trial Protocol (CTP): A new, countermeasure-resistant, accurate, P300-based method for detection of concealed information. Psychophysiology, 45, 906–919.