The British are Incredibly Sad

Or so says Oliver James(*) on this BBC radio show in which he also says things like "I absolutely embrace the credit crunch with both arms".

Oliver James is a British psychologist best known for his theory of "Affluenza". This is his term for unhappiness and mental illness caused, he thinks, by an obsession with money, status and possessions. Affluenza, James thinks, is especially prevalent in English-speaking countries, because we're more into free-market capitalism than the people of mainland Europe. In fact, he regularly makes the claim that we in Britain, the U.S., Australia etc. are today twice as likely to be mentally ill as "the Europeans". This is because rates of mental illness supposedly surged in the English-speaking world due to 1980s Reagan/Thatcher free market policies. Hence why he welcomes the current economic unpleasantness.

Were all of this true, it would be incredibly important. Certainly important enough to justify writing three books about it and seemingly endless articles for the Guardian. But is it true? Well, this is Neuroskeptic, so you can probably guess. Also, bear in mind that James is someone who is on record as thinking that

[The Tears for Fears song] Mad World. With the chilling line "The dreams in which I'm dying are the best I've ever had", in some respects it is up there with TS Eliot's Prufrock as a poetic account of bourgeois despair.
Obviously poetic taste is entirely subjective etc., but honestly.

Anyway, where did James get the twice-as-bad-as-Europe (or, in some articles, three times as bad) idea from? He says the World Health Organization. Presumably he is referring to one of the World Health Organization's World Mental Health Surveys, such as the analysis presented in this JAMA paper.

At first glance, you can see what he means. This paper reports that the % of people who reported suffering from at least one mental illness over the previous year was far higher in the US (26.4%) than in say Italy (8.2%), or Nigeria (4.7%). But on closer inspection, even this data includes some incongruous numbers. Why is Beijing (9.1%) twice as bad as Shanghai (4.3%)? Worse, why does France have a rate of 18.4% while across the border in Germany it's just 9.1%? Are the French twice as materialistic as the Germans? The answer, of course, is that these numbers are more complicated than they appear. In fact, if you believe those figures at face value, you are...well, you're probably Oliver James.

These numbers come from structured interviews, conducted by trained lay researchers, of a random sample of the population. In other words, some guy asked some random people a series of fairly personal questions, reading them off a list, and if they said "Yes" to questions like "Have you ever in your life had a period lasting several days or longer when most of the day you felt sad, empty or depressed?" they might get a tick for "depression". We know this because the interviews used the WHO-CIDI screening questionnaire, the first part of which is here.

As part of my own research, I have been that guy asking the questions (in a slightly different context). At some point I'll write about this in more detail, but suffice to say that it's hard to retrospectively diagnose mental illness in someone you've never met before. The potential for denial, mis-remembering, malingering, forgetting or just plain failure to understand the questions is enormous, although it doesn't come across in the final data, which looks lovely and neat.

The authors of the JAMA paper are well aware of this, which is why they're skeptical of the apparently large cross-national differences. In fact, most of their comment section consists of caveats to that effect. Just a few (edited, emphasis mine - see the full paper for more, it's free):
An important limitation of the WMH surveys is their wide variation in response rate. In addition, some of the surveys had response rates below normally accepted standards [i.e. many people refused to participate]... performance of the WMH-CIDI could be worse in other parts of the world either because the concepts and phrases used to describe mental syndromes are less consonant with cultural concepts than in developed Western countries [almost certainly they are] or because absence of a tradition of free speech and anonymous public opinion surveying causes greater reluctance to admit emotional or substance-abuse problems than in developed Western countries. [again, almost certainly, and Europeans are generally more reserved than Americans in this regard.] ... some patterns in the data (e.g. the much lower estimated rate of alcoholism in Ukraine than expected from administrative data documenting an important role of alcoholism in mortality in that country) raise concerns about differential validity.
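
To see how much the response-rate caveat can matter, here's a minimal sketch (all numbers invented, not taken from the WMH surveys) of how differential non-response shifts an estimated prevalence when people with a disorder are less likely to take part:

```python
import random

random.seed(0)

N = 100_000             # hypothetical population size
TRUE_PREVALENCE = 0.15  # hypothetical "true" 12-month prevalence
P_RESPOND_WELL = 0.70   # hypothetical response rate among the healthy
P_RESPOND_ILL = 0.40    # hypothetical response rate among the ill
                        # (e.g. greater reluctance to discuss emotional problems)

responders_ill = 0
responders_total = 0
for _ in range(N):
    ill = random.random() < TRUE_PREVALENCE
    p_respond = P_RESPOND_ILL if ill else P_RESPOND_WELL
    if random.random() < p_respond:
        responders_total += 1
        responders_ill += ill

observed = responders_ill / responders_total
print(f"True prevalence:     {TRUE_PREVALENCE:.1%}")
print(f"Observed prevalence: {observed:.1%}")
# With these made-up numbers the survey would report roughly 9%, not 15% -
# and the size and direction of the bias depend on refusal patterns that
# almost certainly differ between countries.
```
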
There's another, more fundamental problem with this data. On any meaningful criterion of "mental illness", a society in which 25% of people were mentally ill in any given year would probably collapse. The WHO survey, however, is based on the DSM-IV criteria of mental illness. These are increasingly regarded as very broad; for example, DSM-IV does not distinguish between feeling miserable & down for two weeks because your boyfriend left you, and spending a month in bed hardly eating for no apparent reason. Both are classed as "depression", and hence a "mental illness", although 50 years ago, only the second would have been considered a disease. For someone who styles himself a rebel in the mould of R. D. Laing, it's baffling that James accepts the American Psychiatric Association's dubious criteria.

What other data could we look at? Ideally, we want a measure of mental illness which is meaningful, objective and unambiguous. Well, there aren't any, but suicide rates might be the next best thing - they're nice hard numbers which are difficult to fudge (although in cultures in which suicide is strongly taboo, suicides may be reported as deaths from other causes). Although not everyone who commits suicide is mentally ill, it is fair to say that if Britain really were twice as unhappy as the rest of Europe, we would have a relatively high suicide rate.

Do we? Well, according to Chishti et al. (2003), Suicide Mortality in the European Union, we don't.
In fact suicide rates in the UK are boringly middle of the road. They're higher than in places like Greece and Spain, but well below rates in France, Sweden and Germany. Suicide rates are not a direct measure of rates of mental illness - as noted, not everyone who commits suicide is mentally ill, and the rate of successful suicide depends upon access to lethal means. But does this data look compatible with James's claim that rates of "mental illness" are twice as high in Britain as on "the Continent"? - or indeed with James's implicit assumption that "the Continent" is monolithic?

What's odd is that James clearly knows a bit about suicide, or at least he does now, because just today he wrote a remarkably sensible article about suicide statistics for the Guardian. So he really ought to know better.

Drug sales are another nice, hard number. Of course, medication rates do not equal illness rates - in any field of medicine, but especially psychiatry. Doctors in some countries may be more willing to use drugs, or patients may be more willing to take them. With that in mind, the fact that population-adjusted (source, also here) British sales of antidepressant drugs are twice those of Ireland and Italy, equal to those of Spain, and half those of France, Norway and Sweden does not necessarily mean very much. But it hardly supports James's theory either.
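
"Population-adjusted" just means dividing raw sales by the number of people before comparing countries. A trivial sketch, with entirely made-up figures, of why that adjustment matters:

```python
# Hypothetical illustration of population adjustment - these figures are
# invented, not the actual sales data referred to above.
sales_units = {"Country A": 40_000_000, "Country B": 12_000_000}
population = {"Country A": 60_000_000, "Country B": 9_000_000}

for country in sales_units:
    per_1000 = sales_units[country] / population[country] * 1000
    print(f"{country}: {per_1000:.0f} units sold per 1000 inhabitants")
# Raw sales make Country A look like the heavier consumer; per person,
# Country B actually consumes about twice as much.
```
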

Interestingly, although James holds up Denmark as an example of the kind of happy, "unselfish capitalism" that we should aspire to, the Danes take 50% more antidepressants than we do! (They also have a much higher suicide rate.) True, sales of anxiety drugs and sleeping pills are relatively high in the UK, but still less than Denmark's. Most interestingly, sales of antipsychotics are very low in the UK - roughly the same as in Germany and Italy but less than a quarter of the sales in Ireland and Finland!

So cheer up, Anglos. We're not twice as sad as the French. More likely, we are just more open about discussing our problems in the interests of scientific research. However, the French, to their credit, didn't give the world Oliver James.

[BPSDB]

(*) This is Oliver James, psychologist. Not to be confused with: Oliver James, heartthrob actor; Oliver James, Fleet Foxes song; and Oliver James, Ltd.

The WHO World Mental Health Survey Consortium (2004). Prevalence, Severity, and Unmet Need for Treatment of Mental Disorders in the World Health Organization World Mental Health Surveys. JAMA: The Journal of the American Medical Association, 291(21), 2581-2590. DOI: 10.1001/jama.291.21.2581

Autism, Testosterone and Eugenics

The media's all too often shabby treatment of neuroscience and psychology research doesn't just propagate bad science - it means that the really interesting and important bits go unreported. This is what's just happened with the controversy surrounding a paper from the Autism Research Centre (ARC) at Cambridge University - Bonnie Auyeung et al.'s Fetal Testosterone and Autistic Traits. For research published in a journal with an impact factor of 1.538 (i.e. not good), it's certainly attracted plenty of attention - but for all the wrong reasons.


The Autism Research Centre is headed by the dashing Simon Baron-Cohen, also one of the authors on the paper. He's probably the world's best-known autism researcher, and the author of some excellent books on the subject including the classic Mindblindness and The Essential Difference. Mindblindness, in particular, probably deserves a lot of the credit for interesting a generation of psychologists in autism. A big cheese, in other words. Surely his greatest achievement, however, is being Borat's cousin.

Baron-Cohen is famous for his theory that the characteristic features of autism are exaggerated versions of the allegedly characteristic features of male, as opposed to female, cognition. Namely, autistic people have difficulties understanding the emotions and behaviour of other people ("empathizing"), but may show excellent rote memory and understanding of abstract, mathematical or mechanical systems ("systematizing"). He and his colleagues have also hypothesised that an excess of the well-known masculinizing hormone testosterone could be responsible for the hyper-male brains of autistics, just as testosterone is responsible for the development of masculine traits in boys. Amongst other things this would explain why rates of diagnosed autistic spectrum disorders are several times higher in boys than in girls.

Now, this is one of those wide-ranging theories which serves to drive research, rather than strictly following from the evidence. It's a bold idea, but there is, at the moment, not enough data to confirm or reject it. The simple view that testosterone = maleness = autism is almost certainly wrong, but it's a neat theory, there's clearly something to it, and, as one of the commentators on the paper puts it:

To date, no theory of autism has provided such a connecting thread linking etiology, neuropsychology and neural bases of autism.
Anyway, the paper reports on an association between testosterone levels in the womb and later "autistic traits" in childhood. 235 healthy children were studied; for all of these kids, the levels of testosterone in the womb during pregnancy were known, because their mothers had had amniocentesis, in which a sample of fluid is drawn from the womb. Amniocentesis is not risk-free and it can't be done for research purposes, but the mothers here got amniocentesis for medical reasons and then agreed to take part in research as well. Testosterone levels in the amniotic fluid were measured; notably, this probably represents testosterone produced by the fetus itself, rather than the mother.

The headline finding was that fetal testosterone (fT) levels were correlated with later "autistic traits", as judged by the mothers, who filled out questionnaires about their kid's behaviour at the age of about 8. Here's a nice plot showing the correlation. The vertical axis, "AQ-child total", is the parent's total reported score on the "Autism Quotient" questionnaire. Higher scores are meant to indicate autism-like traits (although see below). You'll also notice that fT levels are much higher in the boy fetuses than in the girl fetuses - not surprisingly. That's it - a statistically significant association, but there is still a lot of scatter on the plot. The correlation was still significant if the very high-scoring children were ignored. A similar pattern emerged using a different autism rating scale, but was less significant - probably because many scores were very low.

So, this was a perfectly decent study with an interesting result, but it's only a correlation, and not an especially strong one. How did this get written up? "New research brings autism screening closer to reality", puffed the Guardian's front page! They suggested that measuring fetal testosterone levels might be a way of testing for autism pre-natally, thus sparking off an entirely formulaic debate about the ethics of selective abortion, the usual denunciations of "eugenics", etc. Long story short - Catholics are against it, the National Autistic Society say it's a dilemma, while a family doctor on Comment is Free is unsure about the "test" because she can't read the article: she doesn't have access to the journal.

Lest it be said that the ethical debate is important in itself, even if the details of the testosterone-based screening test might be inaccurate, bear in mind that "testing for autism" is likely to raise unique issues. Are we talking about a test which could distinguish "low-functioning autism" - which can leave children unable to lead anything like a normal life - from "high-functioning autism", sometimes associated with incredible intellectual achievement? Would the test distinguish classical high-functioning autism from Asperger's? When and if a test is developed, these will be crucial questions. You cannot simply speculate about "a test for autism" in the abstract.

Anyway, after a few days of this nonsense Baron-Cohen rightly protested that the paper had nothing to do with prenatal testing, and that such testing isn't on the horizon yet.
The new research was not about autism screening; the new research has not discovered that a high level of testosterone in prenatal tests is an indicator of autism; autism spectrum disorder has not been linked to high levels of testosterone in the womb; and tests (of autism) in the womb do not allow termination of pregnancies.
Most importantly, there were no autistic kids in the study - all of the children were "normal", although some were rated highly on the autism measures. Moreover, as the plot above shows, any testosterone-based screening test would be very inaccurate. Which is why no experts proposed one.

Just like last time. Back in 2007 the Observer (the Sunday version of the Guardian) ran a front-page article about Simon Baron-Cohen's work on the epidemiology of autism. They said that he'd found that autism rates in Britain were "surging"; they probably aren't, and Baron-Cohen's data didn't show that they were, but despite this the Observer took weeks to clarify the issue (for details of the saga, see Bad Science.) In both cases, some important research about autism from Cambridge ended up on the front page of the newspaper, but the debate which followed completely missed the real point. It would have been better for all concerned if the research had never caught the attention of journalists at all.

The actual study in this case is very interesting, as are the three academic commentaries and a response from the authors published alongside it. I can't cover all of the nuances of the debate, but some of the points of interest include: the question of whether the Autism Quotient (AQ) questionnaire actually measures autistic behaviours, or just male behaviours; the point that it may be testosterone present in baby boys shortly after birth, not in the womb, which is most important; and the interesting case of children suffering from Congenital Adrenal Hyperplasia, a genetic disorder leading to excessive testosterone levels; Baron-Cohen et al. suggest that girls with this disorder show some autism-like traits, but this is controversial. Clearly, this is a crucial point.

Overall, while it's too soon to pass judgement on the extreme male brain theory or the testosterone hypothesis, both must be taken seriously. As for autism prenatal testing, I suspect that this will only come when more of the genetic causes of autism are identified. There is no single "gene for autism"; currently a couple of genes responsible for a small % of autism cases are known: CNTNAP2, for example.

Once we have a good understanding of the many genes which can lead to the many different forms of autistic-spectrum disorders, genetic testing for autism will be possible; I doubt that testosterone levels or anything else will serve as a non-genetic marker, because autism almost certainly has many different causes, and many different associated biochemical abnormalities. Maybe I'm wrong, but even so, if you're worried about hypothetical people aborting hypothetical autistic fetuses, you don't have to worry quite yet. Actual children are dying in Zimbabwe - worry about them.

[BPSDB]

Bonnie Auyeung, Simon Baron-Cohen, Emma Ashwin, Rebecca Knickmeyer, Kevin Taylor, Gerald Hackett (2009). Fetal testosterone and autistic traits. British Journal of Psychology, 100(1), 1-22. DOI: 10.1348/000712608X311731

Biases, Fallacies and other Distractions

One of the pitfalls of debate is the temptation to indulge in tearing down an opponent's arguments. It's fun, if you're stuck behind a keyboard but still feeling the primal urge to bash something's head in with a rock. Yet if you're interested in the truth about something, the only thing that should concern you is the facts, not the arguments that happen to be made about them.

Plenty has been written about arguments and how they can be bad: sins against good sense are called "fallacies" and there are many lists of them. Some of the more popular fallacies have become household names - ad hominem attacks, the appeal to authority, and everyone's favorite, the straw man argument.

Likewise, cognitive psychologists have done much to name and catalogue the various ways in which our minds can deceive us. Under the blanket name of "biases" many of these are well known - there's confirmation bias, cognitive dissonance, rationalization, and so on.

There's a reason why so much has been said about fallacies and biases. They're out there, and they're a problem. When you set your mind to it, you can find them almost anywhere - no matter who you are. This, for example, is written by someone who believes that HIV does not cause AIDS. By most standards, this makes him a kook. And he probably is a kook, about AIDS, but he’s not stupid. He makes some perfectly sensible points about cognitive dissonance and the psychology of science. And here, he offers further words of wisdom:

I have no satisfactory answer to offer, unfortunately, for how AIDStruthers could be brought to useful mutual discussion.
...
Here’s a criterion for whether a discussion is genuinely substantive or not, directed at clarification and increased understanding: no personal comments adorn the to-and-fro. If B appears not to understand what A is saying, then A looks for other ways of presenting the case, A doesn’t simply keep repeating the same assertions spiced with “Why can’t you…?”, and the like. [Added 28 December: Another hallmark of the non-substantive comments is that the commentator not only keeps harping on the same thing but does so by return e-mail, leaving no time to consider what s/he is replying to; see Burun's admission of suffering from that failing.]
...
One lesson from experience is that the aim of Rethinkers cannot be to convince the AIDStruthers. It soon becomes a sheer waste of time to attempt to argue substance with them; a waste of time because you can’t learn anything from them, and they are incapable of learning anything from you. Rethinkers and Skeptics should address the bystanders, onlookers, the unengaged “silent majority”. There seem always to be with us some people who cheerfully continue to believe that the Earth is only about 6,000-10,000 years old, and many other things that most of us judge to be utterly disproved by factual evidence.
That could have come straight from the pen of such pillars of scientific respectability as Carl Sagan or Orac - until you remember that by "Rethinkers" and "Skeptics" he means people who don't believe that HIV causes AIDS, while "AIDStruthers" is his term for those who do, that is, almost every medical and scientific professional.

The lesson here is that you don't have to be right in order to notice that people who disagree with you are irrational, or that much of the opposition to your belief is dogmatic. The sad fact is that stubbornness and a tendency to dogmatism are a part of human nature and it's very hard to escape from them; likewise, it's very hard to make a complex argument without saying something at least technically fallacious (that witty aside? Ad hominem attack!)

The point is that none of this matters. If something is true, then it's true even if everyone who believes it is a dogmatic maniac. So it's certainly true even if the only people you know who believe it are idiots. What's the chance that you've argued with the smartest Christian ever, or the best-informed opponent of homeopathy? Slim - in which case, the fallacies and biases of the people you have argued with certainly don't matter. In an argument, the only thing of importance is what the facts are, and the way to find out is to look at the evidence.

If you're taking the time to name and shame the fallacies in someone's reasoning or to diagnose their biases, then you're not talking about the evidence - you're talking about your opponent(s). Why are you so fascinated by him...? To spend time lamenting the irrationality of your opponents is unhealthy. The only people who have a reason to care about other people’s fallacies and biases are psychologists. Daniel Kahneman got half a Nobel Prize for his work on cognitive biases - it's his thing. But if your thing is HIV/AIDS, or evolution, or vaccines and autism, or whatever, then it's far from clear that you have any legitimate interest in your opponent's flaws. In all likelihood, they are no more flawed than anyone else - or even if they are, their real problem is not that they're making ad hominem attacks (or whatever), but that they're wrong.

So when barely-coherent columnist Peter Hitchens writes in the Daily Mail about wind farms

If visitors from another galaxy really are going round destroying wind turbines, then it is the proof we have been waiting for that aliens are more intelligent than we are.

The swivel-eyed, intolerant cult, which endlessly shrieks – without proof – that global warming is man-made, has produced many sad effects.

The point is not that people who believe that global warming is man-made are not a cult. They're not, but even if they were, it wouldn't matter. The swiveliness of their eyes or the pitch of their voice is not obviously relevant either.

Of course, if you're out to have fun bashing heads, or writing columns for the Daily Mail, then go ahead. Learn the names of as many fallacies and biases as you can (including the Latin names if possible - that's always extra impressive) and go nuts. But if you're serious about establishing or discussing the truth about something, then there is only one set of biases and fallacies you ought to care about – your own.

[BPSDB]

Dorothy Rowe Wronged, also Wrong

(Via Bad Science) Here's the curious story of what happened when clinical psychologist Dorothy Rowe was interviewed for a BBC radio show about religion. She gave a 50 minute interview in which she said that religion was bad. The BBC, in their wisdom, edited this down to 2 minutes of audio which made her sound as if she was saying religion was good. She was annoyed, and complained. The BBC admitted that they'd misrepresented her and apologized. Naughty.

But that's not the point of this post. Because the BBC not only offered Rowe an apology, they also agreed to let her write about what she really believes and put it up on bbc.co.uk. Here is the result. Oh dear. It's, well, it's confused.

"Neuroscience proves the existence of free will" would be an extraordinary media headline, and, perhaps even more extraordinary, it would be true.
No it wouldn't, Rowe - it wouldn't even mean anything. It gets worse from there on in. Read it if you can, but it's pretty bad. Not Bono-bad, but bad, especially in the way that she inserts references to the brain and to neuroscience seemingly at random, which add literally nothing to her argument. Her argument being that we interpret reality, rather than directly perceiving it. Which is true enough, but that idea's been around since the time of ancient Greece, where the cutting edge of neuroscience was the theory that the brain was made of semen. It's philosophy, not neuroscience.

This kind of neuro-fetishism happens a lot nowadays, but what's really weird is that Rowe is one of those psychologists who is convinced that depression (and indeed all mental illness) is not a "brain problem". Even one such as she clearly isn't immune to the lure of neuroscience explanations.

[BPSDB]

Critiquing a Classic: "The Seductive Allure of Neuroscience Explanations"


One of the most blogged-about psychology papers of 2008 was Weisberg et al.'s The Seductive Allure of Neuroscience Explanations.

As most of you probably already know, Weisberg et al. set out to test whether adding an impressive-sounding, but completely irrelevant, sentence about neuroscience to explanations for common aspects of human behaviour made people more likely to accept those explanations as good ones. As they noted in their Introduction:
Although it is hardly mysterious that members of the public should find psychological research fascinating, this fascination seems particularly acute for findings that were obtained using a neuropsychological measure. Indeed, one can hardly open a newspaper’s science section without seeing a report on a neuroscience discovery or on a new application of neuroscience findings to economics, politics, or law. Research on nonneural cognitive psychology does not seem to pique the public’s interest in the same way, even though the two fields are concerned with similar questions.
They found that the pointless neuroscience made people rate bad psychological "explanations" as being better. The bad psychological explanations were simply descriptions of the phenomena in need of explanation (something like "People like dogs because they have a preference for domestic canines"). Without the neuroscience, people could tell that the bad explanations were bad, compared to other, good explanations. The neuroscience blinded them to this. This confusion was equally present in "normal" volunteers and in cognitive neuroscience students, although cognitive neuroscience experts (PhDs and professors) seemed to be immune.

But is this really true?

This kind of research - which claims to provide hard, scientific evidence for the existence of a psychological phenomenon that people commonly believe in anyway, usually some annoyingly irrational human quirk - is dangerous; it should always be read with extra care. The danger is that the results can seem so obviously true ("Well of course!") and so important ("How many times have I complained about this?") that the methodological strengths and weaknesses of the study go unnoticed. People see a peer-reviewed paper which seemingly confirms the existence of one of their pet peeves, and they believe it - becoming even more peeved in the process.(*)

In this case, the peeve is obvious: the popular media certainly seem inordinately keen on neuroimaging studies, and often seem to throw in pictures of brain scans and references to brain regions just to make their story seem more exciting. The number of people who confuse neural localization with explanation is depressing. Those not involved in cognitive neuroscience must find this rather frustrating. Even neuroimagers roll their eyes at it (although some may be secretly glad of it!)

So Weisberg et al. struck a chord with most readers, including most of the potentially skeptical ones - which is exactly why it needs to be read very carefully and critiqued. Personally, having done so, I think that it's an excellent paper, but the data presented only allow fairly modest conclusions to be drawn, so far. The authors have not shown that neuroscience, specifically, is seductive or alluring.

Most fundamentally, the explanations including the dodgy neuroscience differed from the non-neurosciencey explanations in more than just neuroscience. Most obviously, they were longer, which may have made them seem "better" to the untrained, or bored, eye; indeed the authors themselves cite a paper, Kikas (2003), in which the length of explanations altered how people perceived them. Secondly, the explanations with added neuroscience were more "complex" - they included two separate "explanations", a psychological one and a neuroscience one. This complexity, rather than the presence of neuroscience per se, might have contributed to their impressiveness.

Perhaps the authors should have used three conditions - psychology, "double psychology" (with additional psychological explanations or technical terminology), and neuroscience (with additional neuroscience). As it stands, all the authors have strictly shown is that longer, more jargon-filled explanations are rated as better - which is an interesting finding, but is not necessarily specific to neuroscience.
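
To make the suggestion concrete, here's a hedged sketch of how such a three-condition comparison might be analysed. The condition means, sample size and ratings are simulated for illustration only; they are not Weisberg et al.'s data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 80  # hypothetical participants per condition

# Simulated satisfaction ratings on the paper's -3..+3 scale.
# The question: if "double psychology" scores as highly as
# "psychology + neuroscience", the allure is about length/jargon,
# not neuroscience per se. All means below are invented.
psychology        = np.clip(rng.normal(-0.5, 1.0, n), -3, 3)
double_psychology = np.clip(rng.normal( 0.4, 1.0, n), -3, 3)
with_neuroscience = np.clip(rng.normal( 0.5, 1.0, n), -3, 3)

f, p = stats.f_oneway(psychology, double_psychology, with_neuroscience)
print(f"ANOVA across all three conditions: F = {f:.2f}, p = {p:.3g}")

# The critical planned comparison: neuroscience vs. double psychology.
t, p = stats.ttest_ind(with_neuroscience, double_psychology)
print(f"Neuroscience vs. double psychology: t = {t:.2f}, p = {p:.3g}")
# A null result here would suggest that the extra material, not the
# neuroscience, is what makes explanations seem more satisfying.
```
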

In their discussion (and to their credit) the authors fully acknowledge these points (emphasis mine)
Other kinds of information besides neuroscience could have similar effects. We focused the current experiments on neuroscience because it provides a particularly fertile testing ground, due to its current stature both in psychological research and in the popular press. However, we believe that our results are not necessarily limited to neuroscience or even to psychology. Rather, people may be responding to some more general property of the neuroscience information that encouraged them to find the explanations in the With Neuroscience condition more satisfying.
But this is rather a large caveat. If all the authors have shown is that people can be "Blinded with Science" (yes...like the song) in a non-specific manner, that has little to do with neuroscience. The authors go on to discuss various interesting, and plausible, theories about what might make seemingly "scientific" explanations seductive, and why neuroscience might be especially prone to this - but they are, as they acknowledge, just speculations. At this stage, we don't know, and we don't know how important this effect is in the real world, when people are reading newspapers and looking at pictures of brain scans.

Secondly, the group differences - between the "normal people", the neuroscience students, and the neuroscience experts - are hard to interpret. There were 81 normal people, mean age 20, but we don't know who they were or how they were recruited - were they students, internet users, the authors' friends? (10 of them didn't give their age, and for 2 the gender was "unreported" - ?) We don't know whether their level of education, their interests, or values were different from the cognitive neuroscience students in the second group (mean age 20), who may likewise have been different in terms of education, intelligence and beliefs from the expert neuroscientists in the third group (mean age 27). Maybe such personal factors, rather than neuroscience knowledge, explained the group similarities and differences?

Finally, the effects seen in this paper were, on the face of it, small - people rated the explanations on a 7 point scale from -3 (bad) to +3 (excellent), but the mean scores were all between -1 and +1. The dodgy neuroscience added about 1 point on a 7 point scale of satisfactoriness. Is that "a lot" or "a little"? It's impossible to say.
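
One way to judge "a lot or a little" would be to standardize the 1-point difference against the spread of the ratings, which isn't given above. A sketch with assumed numbers (the standard deviation here is pure invention, not the paper's):

```python
# Cohen's d for the "added neuroscience" effect, using made-up numbers.
mean_without = -0.5   # illustrative mean rating of bad explanations, no neuroscience
mean_with = 0.5       # illustrative mean rating of bad explanations, with neuroscience
sd_pooled = 1.5       # ASSUMED pooled standard deviation of the ratings

d = (mean_with - mean_without) / sd_pooled
print(f"Cohen's d = {d:.2f}")
# With an SD of 1.5, a 1-point shift is d ≈ 0.67 - conventionally a
# medium-to-large effect; with an SD of 2.5 it would be d = 0.4, small-to-medium.
# Without the actual variability of the ratings, "1 point" is hard to interpret.
```
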

All of that said - this is still a great paper, and the point of this post is not to criticize or "debunk" Weisberg et al.'s excellent work. If you haven't read their paper, you should read it, in full, right now, and I'm looking forward to further stuff from the same group. What I'm trying to do is to warn against another kind of seductive allure, probably the oldest and most dangerous of all - the allure of that which confirms what we already thought we knew.

(*) Or do they? Or is this just one of my pet peeves? Maybe I need to do an experiment about the allure of psychology papers confirming the allure of psychologists' pet peeves...


Deena Skolnick Weisberg, Frank C. Keil, Joshua Goodstein, Elizabeth Rawson, Jeremy R. Gray (2008). The Seductive Allure of Neuroscience Explanations. Journal of Cognitive Neuroscience, 20(3), 470-477. DOI: 10.1162/jocn.2008.20040

Lessons from the Video Game Brain

See also Lessons from the Placebo Gene. Also, if you like this kind of thing, see my other fMRI-curmudgeonry (1, 2).

The life of a neurocurmudgeon is a hard one, but once in a while, fate smiles upon us. This article in the Daily Telegraph neatly embodies several of the mistakes that people make about the brain, all in one bite-size portion.

The article is about a recent fMRI study published in the Journal of Psychiatric Research. 22 healthy Stanford student volunteers (half of them male) played a "video game" while being scanned. The game wasn't an actual game like Left 4 Dead(*), but rather a kind of very primitive cross between Pong and Risk, designed specifically for the purposes of the experiment:

Balls appeared on one-half of the screen from the side at 40 pixel/s, and 10 balls were constantly on the screen at any given time. One’s own space was defined as the space behind the wall and opposite side to where the balls appeared. The ball disappeared whenever clicked by the subject. Anytime a ball hit the wall before it could be clicked, the ball was removed and the wall moved at 20 pixel/s, making the space narrower. Anytime all the balls were at least 100 pixels apart from the wall ... the wall moved such that the space became wider.
Essentially they had to click on balls to stop them from moving a line. This may not sound like much fun, but the authors' justification for using this task was that it allowed them to have a control condition in which the instructions were the same (click on the balls) but there was no "success" or "failure", because the line defining the "territory" was always fixed. That's actually a pretty good idea. The students did the task 40 times during the scan for 24s at a time, alternating between the two conditions, "no success" (line fixed) and "game with success/failure" (line moves).
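
If it helps to picture the task, here's a rough, non-interactive Python sketch of the dynamics described in the quoted methods. The speeds, ball count and 100-pixel rule come from the paper; the timestep, playfield width and the probability of a "click" are invented purely to make the sketch run.

```python
import random

random.seed(1)

DT = 0.05           # assumed simulation timestep, in seconds
SCREEN_W = 800      # assumed playfield width, in pixels
BALL_SPEED = 40     # balls travel at 40 pixel/s (from the paper)
WALL_SPEED = 20     # the wall moves at 20 pixel/s (from the paper)
N_BALLS = 10        # 10 balls on screen at all times (from the paper)
SAFE_GAP = 100      # wall advances when every ball is >= 100 px away (from the paper)

def simulate(seconds=24, p_click=0.02):
    """Crudely simulate one 24 s "game" block. Balls start at x = 0 and move
    towards the wall; the subject's territory is the region beyond the wall.
    p_click is an assumed per-ball, per-step chance that the subject clicks."""
    wall = SCREEN_W / 2
    balls = [0.0] * N_BALLS
    for _ in range(int(seconds / DT)):
        next_balls = []
        for x in balls:
            x += BALL_SPEED * DT
            if random.random() < p_click:   # clicked: ball disappears, another spawns
                next_balls.append(0.0)
            elif x >= wall:                 # ball reached the wall: territory shrinks
                wall += WALL_SPEED * DT
                next_balls.append(0.0)
            else:
                next_balls.append(x)
        balls = next_balls
        if all(wall - x >= SAFE_GAP for x in balls):
            wall -= WALL_SPEED * DT         # all balls far from the wall: territory grows
        wall = max(0.0, min(float(SCREEN_W), wall))
    return SCREEN_W - wall                  # size of the territory at the end

print(f"Territory held after one block: {simulate():.0f} px")
```
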

The results: While men & women were equally good at clicking balls, men were more successful at gaining "territory" than the women. In both genders, doing the task vs. just resting in the scanner activated various visual and motor-related areas - no surprise. Playing the game vs. doing the control task in which there was no success or failure produced more activation in a handful of areas but only "at a more liberal threshold" i.e. this activation was not statistically reliable. A region-of-interest analysis found activation in the left nucleus accumbens and right orbitofrontal cortex, which are "reward-related" areas. In males, the game-specific activation was greater than in females in the right nucleus accumbens, the orbitofrontal cortex, and the right amygdala.

These areas are indeed "neural circuitries involved in reward and addiction" as the authors put it, but they're also activated whenever you experience anything pleasant or enjoyable, such as drinking water when you're thirsty. Water is not known to be addictive. So whether this study is relevant to video-game "addiction" is anyone's guess. As far as I can tell, all it shows is that men are more interested in simple, repetitive, abstract video games. But that's hardly news: in 2007 there was an International Pac-Man Championship with 30,000 entrants; the top 10 competitors were all male. (If anything in that last sentence surprises you, you haven't spent enough time on the internet.)

Anyway, that's the study. This is what the Telegraph made of it:
Playing on computer consoles activates parts of the male brain which are linked to rewarding feelings and addiction, scans have shown. The more opponents they vanquish and points they score, the more stimulated this region becomes. In contrast, these parts of women's brains are much less likely to be triggered by sessions on the Sony PlayStation, Nintendo Wii or Xbox.
Well, not quite. No opponents were vanquished and no Wiis were played. But so far this is just another fMRI study that attracted the attention of a journalist who knew how to spin a good story. Readers of Neuroskeptic will know this is not uncommon. However, it doesn't end there. Here's the really instructive bit:
Professor Allan Reiss of the Centre for Interdisciplinary Brain Sciences Research at Stanford University, California, who led the research, said that women understood computer games just as well as men but did not have the same neurological drive to win.
"These gender differences may help explain why males are more attracted to, and more likely to become 'hooked' on video games than females," he said.
"I think it's fair to say that males tend to be more intrinsically territorial. It doesn't take a genius to figure out who historically are the conquerors and tyrants of our species – they're the males.
"Most of the computer games that are really popular with males are territory and aggression-type games."
Now this is a theory - men like video games because we're intrinsically drawn to competition, conquest and territory-grabbing. This may or may not be true; personally, in the light of what I know of history and anthropology, I suspect it is, but even if you disagree, you can see that this is an important theory: it makes a big difference whether it's true or not.

However, the fMRI results have nothing to do with this theory. They neither support nor refute it, and nor could they; this experiment is essentially irrelevant to the theory in question. Prof. Allan Reiss is simply stating his personal opinions about human nature - however intelligent & informed these opinions may be. (Just to be clear, it's quite possible that Reiss didn't expect to be quoted in the way he was; he may have, not unreasonably, thought that he was just giving his informal opinion.) The Telegraph's sub-headline?
Men's passion for computer games stems from a deep-rooted urge to conquer, according to research
There are some lessons here.

1. If you want to know about something, study it.

If you want to learn about human behaviour, study human behaviour. Stanley Milgram discovered important things about behaviour; if he had never even heard about the brain, it wouldn't have stopped him from doing that.

Neuroscience can tell us about how behaviour happens. We get thirsty when we haven't drunk water for a while. Neuroscience, and only neuroscience, will tell you how. Some people get depressed or manic. One day, I hope, neuroscience will tell us the complete story of how - maybe mania will turn out to be caused by hyper-stimulation of a certain dopamine receptor - and we'll be able to stop it happening with some pill with a 100% success rate.

However, neuroscience can't tell you what human behaviour is: it cannot describe behaviour, it can only explain it. People knew about thirst and depression and mania long before they knew anything about the brain. More importantly, and more subtly, neuroscience can only explain behaviour in the "how" sense; only rarely can it tell you why behaviour is the way that it is.

If someone is behaving in a certain way because of brain damage or disease, that's one of these rare cases. In that case "damage to area X caused by disease Y" is "why". But in most cases, it's not. To say that men like video games because their reward systems are more sensitive to video games is not a "why" explanation. It's a "how" explanation, and it leaves completely open the question of why the male brain is more sensitive to video games. The answer might be "innate biological differences due to evolution", or it might be "sexist upbringing", or "paternalistic culture", or anything else.

(This is often overlooked in discussions about psychiatry. Some people object to the idea that clinical depression is a neuro-chemical state, pointing out that depression can be caused by stress, rejection and other events in life. This is confused; there is no reason why stress or rejection could not cause a state of low serotonin. By extension, saying that someone has "low serotonin" always leaves open the question of why.)

2. Brains are people too

This leads on to a more subtle point. Some people understand the difference between how and why explanations, but feel that if the "how" is something to do with the brain, the "why" must be to do with the brain too. They look at brain scans showing that people behave in a certain way because their brain is a certain way (e.g. men like games because their reward system is more activated by games), and they think that there must be a "biological" explanation for why this is.

There might be, but there might not be. Brains are alive; they see and hear; they think; they talk; they feel. Your brain does everything you do, because you are "your" brain. The astonishing thing about brains is that they are both material, biological objects, and conscious, living people, at the same time.

Your brain is not your liver, which is only affected by chemical and biological influences, like hormones, toxins, and bacteria. Your liver doesn't care whether you're a Christian or a Muslim, it cares about whether you drink alcohol. Your brain does care about your religion because some pattern of connections in your brain gives you the religion that you have.

Brain scans, by confronting us with the biological, material nature of the brain, make us look for biological, material why explanations. We forget that the brain might be the way it is because of cultural or historical or psychological or sociological or economic factors, because we forget that brains are people. We tend to think of people as being something beyond and above their brains. Ironically, it's this primitive dualism that leads to the most crude materialistic explanations for human behaviour.

3. Beware neuro-fetishists

There's a doctoral thesis in "Science Studies" to be written about how it came to happen, but that we fetishize the brain is obvious. For much of the 20th century, psychology was seen in the same way. Freud joined Nietzsche, Marx and Heidegger in the ranks of Germanic names that literary theorists and lefty intellectuals loved to drop.

Then the bottom fell out of psychoanalysis, Prozac and fMRI arrived and the Decade of the Brain was upon us. Today, neuroscience is the new psychology - or perhaps psychology is becoming a branch of neuroscience. (If I asked you to depict psychology visually, you'd probably draw a brain - if you do a Google image search for "psychology", 10 out of the 21 front page hits depict either a brain or a head; this might not surprise you but it would have seemed odd 50 years ago.) There's a presumption that neuroscience is key to answering both how and why questions about the mind.

Neuroscience is now hot, but what people are mostly interested in are psychological and philosophical questions. People care about The Big Questions like -

"Is there life after death? Do we have free will? Is human nature fixed? Are men smarter/more aggressive/more promiscuous/better drivers than women? Why do people become criminals/geniuses/mad?"

These are good questions - but neuroscience has little to say about them, because they're not questions about the brain. They're questions for philosophers, or geneticists, or psychologists. No brain scan is going to tell you whether men are better drivers than women. It might tell you something about the processes by which we make decisions while driving, but only a neuroscientist is likely to find that interesting.

P.S. It turns out that people were saying similar things about this research back in February. A blogger who writes about research on video games (neat) wrote about it way back then. So why did the Telegraph decide to resurrect the story as if it were new? That's just another one of life's mysteries.

[BPSDB]

(*) Which is so awesome.

F. Hoeft, C. Watson, S. Kesler, K. Bettinger, A. Reiss (2008). Gender differences in the mesocorticolimbic system during computer game-play. Journal of Psychiatric Research, 42(4), 253-258. DOI: 10.1016/j.jpsychires.2007.11.010

 