Aripiprazole, Dopamine, and Well-Being - Science or Selling Point?

Suppose you were a drug company, and you've invented a new drug. It's OK, but it's no better than the competition. How do you convince people to buy it?

You need a selling point - something that sets your product apart. Fortunately, with drugs, you have plenty of options. You could look into the pharmacology - the chemistry of how your drug works in the body - and find something unique there. Then, all you need to do is spin a nice story to explain how the pharmacological properties of your drug make it brilliant.

On an entirely unrelated note, aripiprazole (Abilify) is an antipsychotic marketed in the US by Bristol-Myers Squibb. A Cochrane meta-analysis finds that it's about as good as any other antipsychotic in terms of efficacy and side effects. As good, but no better. However, uniquely, aripiprazole is a D2 receptor partial agonist. Other antipsychotics work by blocking D2 receptors in the brain, switching them off (full antagonism). Aripiprazole also blocks D2 receptors, but it activates them slightly in the process (partial agonism).

Is that a good thing? A paper just published says yes - The relationship between subjective well-being and dopamine D2 receptors in patients treated with a dopamine partial agonist and full antagonist antipsychotics. The research in question was funded, by the way, by Bristol-Myers Squibb. Let's see if it holds up.

The authors got 22 patients with schizophrenia who were taking an established antipsychotic, either olanzapine or risperidone. These are both D2 antagonists. Incidentally, neither of them is made by Bristol-Myers Squibb. 11 of the patients were switched to aripiprazole, while 11 stayed on their original drug. There was no blinding, and no randomization. (The dose of aripiprazole was randomized, although still unblinded, but the assignment to aripiprazole itself wasn't.)
Lo and behold, the patients who switched reported improved "well-being". Because there was no randomization and no blinding, and because the outcome was entirely subjective, this could be entirely explained as a placebo effect (or an experimental demand effect). Especially when you consider that the patients were most likely convinced to take part in the study by being told that aripiprazole would make them feel better than their original drug.

That's not all, though. They also did some brain scanning, using PET to measure D2 receptor occupancy. On average aripiprazole blocked more D2 receptors than the other antipsychotics, which is what you'd expect, as it has a very high affinity for that receptor. But it's a partial agonist, remember - it binds to D2 receptors without switching them "off" entirely.

The paper suggests that this is a good thing because it doesn't make people feel horrible, which is what normally happens when you block almost all of someone's D2 receptors (they're rather important). By switching on the receptors as well as blocking them, it makes you feel OK.
Nice story, and scientifically it's not unreasonable. And as you can see on this plot, in the non-aripiprazole patients (triangles), D2 occupancy in the ventral striatum was negatively correlated with well-being, but in the aripiprazole patients (circles), it wasn't.

Great - except that the range of D2 occupancies in the aripiprazole group is so narrow that no correlation would be apparent even if there was one. The occupancies in the aripiprazole group are all extremely high, 80-95%. There's just no room for a correlation to appear. (Think about it this way - in children, age is strongly correlated with height, but if you only looked at a bunch of 7 year olds, you wouldn't know that.) This is high-school statistics.
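The restriction-of-range problem is easy to demonstrate with a quick simulation. Here's a toy illustration of the statistical point (made-up numbers, not the paper's data), using the age-and-height example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 2000 children aged 2-16, with height strongly driven by age.
age = rng.uniform(2, 16, 2000)
height = 85 + 6 * age + rng.normal(0, 5, age.size)  # toy growth model, in cm

r_full = np.corrcoef(age, height)[0, 1]

# Now restrict the range: look only at the 7-year-olds.
mask = (age > 6.5) & (age < 7.5)
r_restricted = np.corrcoef(age[mask], height[mask])[0, 1]

print(f"full range (ages 2-16): r = {r_full:.2f}")        # strong correlation
print(f"7-year-olds only:       r = {r_restricted:.2f}")  # mostly gone
```

The same logic applies to D2 occupancies squeezed into the 80-95% band: even a genuine occupancy-well-being relationship would be nearly invisible there.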

This scatter-plot is in fact exactly what you'd expect assuming that a) aripiprazole strongly blocks D2 receptors, and makes people feel awful, just like any other strong D2 blocker and b) the placebo effect made some of the aripiprazole group feel (or at least say that they feel) a bit better than they otherwise would.

You'll note also that the aripiprazole group reported feeling no better than the other antipsychotic group, and that the single most miserable patient was on aripiprazole. The paper concludes on an optimistic note -

The present data suggests that aripiprazole may be associated with early and sustained improvement in subjective well-being, notwithstanding the very high D2 occupancy. This may be related to its partial agonist profile at D2 receptors.
I leave it to the reader to evaluate this claim, and to consider how likely we are to progress in our understanding of the brain when so much of the research is funded by organisations with a direct financial interest in certain theories.

[BPSDB]

Mizrahi, R., Mamo, D., Rusjan, P., Graff, A., Houle, S., & Kapur, S. (2009). The relationship between subjective well-being and dopamine D2 receptors in patients treated with a dopamine partial agonist and full antagonist antipsychotics. The International Journal of Neuropsychopharmacology, 12 (05). DOI: 10.1017/S1461145709000327

Antidepressants - No Good In Autism?

Children with autism often show repetitive behaviours, ranging from repeated movements to compulsively collecting or arranging objects and desiring that daily routines are always done in the exact same way. Repetitive behaviour is often considered one of the three core features of autistic disorders (alongside difficulties in social interaction, and difficulties in communication).

SSRI antidepressants are often used to try to treat repetitive behaviours. Unfortunately, they don't work, at least according to a new study - Lack of Efficacy of Citalopram in Children With Autism Spectrum Disorders and High Levels of Repetitive Behavior.

The trial included 149 American children with autism aged from 5 to 17 years old, all of whom had moderate or severe repetitive behaviours. They were randomly assigned to get either citalopram, an SSRI, or placebo, and were followed up for 12 weeks to see if it had any effect on their repetitive behaviours. The dose of citalopram started at 2.5 mg and gradually increased to, in most cases, 20 mg, which is the dose that an adult person with depression would most commonly take - for a kid, this is a high dose.

The results were unequivocal - citalopram had absolutely no benefit over placebo. Zilch. On the other hand, it did cause side effects in some children - gastrointestinal problems like diarrhoea, skin rashes, and, most worryingly, hyperactivity - "increased energy levels", insomnia, "Attention and concentration decreased", and so forth. (Two children in the citalopram group also experienced seizures, but it's not clear that this was related to the drug, as citalopram is not known for causing seizures in adults.)

So, citalopram was not just useless, but actually harmful, in these children. This is the largest trial of an SSRI for repetitive behaviours in autism so far; there have been a few others, including one double-blind study of Prozac finding some benefit, but this is by far the most compelling.

But there's a big question here - why would anyone think that citalopram would work? Citalopram was designed to treat adults with... depression. Hence why it's called an antidepressant. Depression in adults is no more similar to compulsive behaviour in autistic children than it is to a broken leg or heart disease. They're completely different conditions.

The main reason why SSRIs are used to try to treat repetitive behaviour is that they also work quite well against obsessive-compulsive disorder (OCD). People with OCD have repetitive behaviours, "compulsions". They might wash their hands ten times after going to the toilet. Or check that the fridge door is closed and the oven is switched off every time they leave the kitchen. Or count up to one hundred in their head whenever they see the number 13. And so forth.

SSRIs do work against OCD. Does this mean that they ought to also work against the repetitive behaviours in autism? Only if you think all repetitive behaviours are the same, with the same causes.

People with OCD feel compelled to perform their ritualistic behaviours as a way of coping with their "obsessions" - intrusive, unpleasant thoughts that they can't otherwise get out of their heads. Someone might be obsessed with the thought of germs and disease whenever they go to the toilet, and the only way to feel clean is to wash their hands 10 times. They might be obsessed with the idea that their family will die whenever they see the unlucky number 13, unless they "cancel it out" by counting to 100. The repetitive behaviours, in other words, are a consequence of the obsessions, which are unwanted, anxiety-provoking thoughts. SSRIs probably work by making the obsessions seem less troubling, so there is less need for the compulsions.

People with autism are often described as having "obsessions" too, but in the sense of "Things they are very interested in", not "Thoughts they cannot get rid of". Likewise, autistics may show "compulsive behaviours", but not as a way of dealing with obsessions. The words are the same, but the reality is different.

Maybe autistic people just like sameness and routine. That's part of who they are, and it's not something that can be treated with drugs. People with OCD hate having it - they don't like their obsessions or compulsions, they feel stuck with them. The compulsions are a coping mechanism. But in autism, at least most of the time, that's not how it works. An autistic child "compulsively" playing with the same toy over and over, or reading yet another book about their "obsession", dinosaurs, may be perfectly happy. In which case, why give them happy pills? And this is what the authors of the paper eventually suggest -

It may be that the repetitive behavior in children with ASDs is fundamentally different from what is observed among children with obsessive-compulsive disorder in its behavioral picture and in its biologic underpinnings.
King, B. H., Hollander, E., Sikich, L., McCracken, J. T., Scahill, L., Bregman, J. D., Donnelly, C. L., Anagnostou, E., Dukes, K., Sullivan, L., Hirtz, D., & Wagner, A. (2009). Lack of Efficacy of Citalopram in Children With Autism Spectrum Disorders and High Levels of Repetitive Behavior. Arch Gen Psychiatry, 66 (6), 583-590

Genes, Brains and the Perils of Publication

Much of science, and especially neuroscience, consists of the search for "positive results". A positive result is simply a correlation or a causal relationship between one thing and another. It could be an association between a genetic variant and some personality trait. It could be a brain area which gets activated when you think about something.


It's only natural that "positive results" are especially interesting. But "negative" results are still results. If you find that one thing is not correlated with another, you've found a correlation. It just happens to have a value of zero.

For every gene which causes bipolar disorder, say, there will be a hundred which have nothing to do with it. So, if you find a gene that doesn't cause bipolar, that's a finding. It deserves to be treated just as seriously as finding that a gene does cause it. In particular, it deserves to be published.

Sadly, negative results tend not to get published. There are lots of reasons for this and much has been written about it, both on this blog and in the literature, most notably by John Ioannidis (see this and this, for starters). A paper just published in Science offers a perfect example of the problem: Neural Mechanisms of a Genome-Wide Supported Psychosis Variant.

The authors, a German group, report on a genetic variant, rs1344706, which was recently found to be associated with a slightly raised risk of psychotic illness in a genome-wide association study. (Genome-wide studies can and do throw up false positives so rs1344706 might have nothing to do with psychosis - but let's assume that it does.)

They decided to see whether the variant had an effect on the brains of people who have never suffered from psychosis. That's an extremely reasonable idea, because if a certain gene causes an illness, it could well also cause subtle effects in people who don't have the full-blown disease.

So, they took 115 healthy people and used fMRI to measure neural activity while they were doing some simple cognitive tasks, such as the n-back task, a fairly tricky memory test. People with schizophrenia and other psychotic disorders often have difficulties on this test. They also used a test which involves recognizing people's emotions from pictures of their faces.
They found that -

Regional brain activation was not significantly related to genotype...Rs1344706 genotype had no impact on performance.
In other words, the gene didn't do anything. The sample size was large - with 115 people, they had an excellent chance to detect any effect, if there was one, and they didn't. That's a perfectly good finding, a useful contribution to the scientific record. It was reasonable to think that rs1344706 might affect cognitive performance or brain activation in healthy people, and it didn't.
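To put a rough number on "an excellent chance", here's a back-of-envelope power calculation (my own, using the standard Fisher z approximation; not from the paper):

```python
from math import atanh, erf, sqrt

def correlation_power(r, n):
    """Approximate power to detect a true correlation r with n subjects,
    two-tailed test at alpha = 0.05, via the Fisher z approximation."""
    z_crit = 1.96  # two-tailed critical value for alpha = 0.05
    z_effect = atanh(r) * sqrt(n - 3)
    # Probability that the observed z statistic exceeds the critical value
    return 0.5 * (1 + erf((z_effect - z_crit) / sqrt(2)))

# With 115 subjects, even a modest true correlation of r = 0.3
# would be detected roughly 90% of the time.
print(f"power at r = 0.3, n = 115: {correlation_power(0.3, 115):.2f}")
```

So a sample of 115 is ample for any genotype effect big enough to be interesting; a null result here is informative, not just inconclusive.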
But that's not what the paper is about. These perfectly good negative findings were relegated to just a couple of sentences - I've just quoted almost every word they say about them - and the rest of the article concerns a positive result.

The positive result is that the variant was associated with differences in functional connectivity. Functional connectivity is the correlation between activity in different parts of the brain; if one part of the brain tends to light up at the same time as another part, they are said to be functionally connected.
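In its simplest form, functional connectivity really is just a correlation coefficient between two regions' time series. Here's a toy sketch with simulated signals (the paper's actual pipeline is of course more involved, with preprocessing and seed-based maps):

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake "BOLD" time series for two regions across 200 scan volumes.
shared = rng.normal(0, 1, 200)             # a common driving signal
region_a = shared + rng.normal(0, 1, 200)  # e.g. left DLPFC
region_b = shared + rng.normal(0, 1, 200)  # e.g. right DLPFC

# Functional connectivity: the correlation between the two time series.
fc = np.corrcoef(region_a, region_b)[0, 1]
print(f"functional connectivity: r = {fc:.2f}")
```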
In risk-allele carriers, connectivity both within DLPFC (same side) and to contralateral DLPFC was reduced. Conversely, the hippocampal formation was uncoupled from DLPFC in non–risk-allele homozygotes but showed dose-dependent increased connectivity in risk-allele carriers. Lastly, the risk allele predicted extensive increases of connectivity from amygdala including to hippocampus, orbitofrontal cortex, and medial prefrontal cortex.
And they conclude, optimistically:
...our findings establish dysconnectivity as a core neurogenetic mechanism, where reduced DLPFC connectivity could contribute to disturbed executive function and increased coupling with HF to deficient interactions between prefrontal and limbic structures ... Lastly, our findings validate the intermediate phenotype strategy in psychiatry by showing that mechanisms underlying genetic findings supported by genome-wide association are highly penetrant in brain, agree with the pathophysiology of overt disease, and mirror candidate gene effects. Confirming a century-old conjecture by combining genetics with imaging, we find that altered connectivity emerges as part of the core neurogenetic architecture of schizophrenia and possibly bipolar disorder, identifying novel potential therapeutic targets.
I have no wish to criticize these findings as such. But the way in which this paper is written is striking. The negative results are passed over as quickly as possible. This despite the fact that they are very clear and easy to interpret - the rs1344706 variant has no effect on cognitive task performance or neural activation. It is not a cognition gene, at least not in healthy volunteers.

By contrast, the genetic association with connectivity is modest (see the graphs above - there is a lot of overlap), and very difficult to interpret, since it is clearly not associated with any kind of actual differences in behaviour.

And yet this positive result got the experiment published in no less a journal than Science! The negative results alone would have struggled to get accepted anywhere, and would probably have ended up either unpublished, or published in some rubbish minor journal and never read. It's no wonder the authors decided to write their paper in the way they did. They were just doing the smart thing. And they are perfectly respectable scientists - Andreas Meyer-Lindenberg, the senior author, has done some excellent work in this and other fields.

The fault here is with a system which all but forces researchers to search for "positive results" at all costs.

[BPSDB]

Esslinger, C., Walter, H., Kirsch, P., Erk, S., Schnell, K., Arnold, C., Haddad, L., Mier, D., Opitz von Boberfeld, C., Raab, K., Witt, S., Rietschel, M., Cichon, S., & Meyer-Lindenberg, A. (2009). Neural Mechanisms of a Genome-Wide Supported Psychosis Variant. Science, 324 (5927), 605-605. DOI: 10.1126/science.1167768

Help! There's an Epidemic of Anxiety! (Part II)

In my last-post-but-one I slammed the claim that the British are suffering from an epidemic of anxiety disorders. I declared it a myth pushed by the Mental Health Foundation and echoed uncritically by British newspapers (although The Economist has since run a kind-of skeptical piece on it.) But I also promised that there are important lessons to be learned here. So, here we go:

The Mental Health Foundation produced a report, In The Face of Fear, which contains various interesting thoughts about the role of fear in public debates. Here's just one:

Individually we experience both rational and irrational fears that drive our behaviour and fear also drives communities and social policies... Excessive fear poses an enormous burden on our society directly through anxiety related illness, which can be physical as well as mental, and indirectly through inappropriate behaviours such as excessive supervision of children or failure to invest. It also paralyses long term rational planning to deal with key future threats such as global warming by diverting attention to more immediate but less important fears.
This is true. Everyone should be scared of global warming. Most people aren't. They're scared of... well, it varies. Cervical cancer was scary a few weeks ago, before that it was the crisis in child protection services, right now it's the Mexican swine flu crisis - not to mention the economic crisis, the knife crime "crisis" - and that's just England.

I'm not saying that we shouldn't care about these things. I'm worried about Mexican swine flu, and so should you be. Especially you, Simon "The Armchair Virologist" Jenkins. But in the face of crisis after crisis after crisis, it becomes hard to take the really crucial crises, such as global warming, seriously. There's a temptation to see every apparent crisis as just another piece of overblown nonsense in need of "debunking", as Ben Goldacre has just discovered. One could call this "crisis fatigue", but that's not exactly right. We're too fond of crises. There are just too many of them.

This is why the MHF felt the need to be so "creative" with the data. As I explained in Part I of this post, the best available figures show that the prevalence of anxiety disorders in Britain has remained boringly level since at least 2000. The MHF simply ignored those numbers in order to make it look as though we're currently facing an epidemic of anxiety. A crisis.

I wish they hadn't. But I don't really blame them for what they did. They did it because they knew that if they didn't, no-one would care about anything they had to say. In an ideal world they would have said: Although British anxiety and depression levels are probably not rising, and although they're not as high as in some countries, they're still higher than in other countries, so we can and should try harder to reduce them. That's the truth. But the truth doesn't involve a crisis, so it wouldn't have made the headlines, or if it did, no-one would have cared. Thus it is that a report warning (inter alia) about the dangers of scaremongering ended up becoming a prime example of scaremongering.

This is the point where, conventionally, one blames "the media" for only publishing "sensationalist" stories in order to "sell papers". Well, that's all true. But the media don't behave that way just for fun. A sensational story is a good story. People want sensationalist stories. Nothing wrong with that, as such. And there's nothing wrong with caring more about a crisis than about a mere problem. A crisis, by definition, is something that deserves urgent attention.

But the result of this is that today, in order to get attention, a problem has to be a crisis - something which is bad and getting worse, fast. Just being a problem in need of a solution isn't enough. There are too many problems - no-one can possibly care about them all. Whereas if something is a crisis, it might just get a little attention. Hence why the MHF had to do what they did. They needed a crisis, so they created one.

If I were a humanities graduate, I would now start explaining how it's all the fault of our postmodern, "post-historical" condition in which there are no grand narratives or central moral authorities to tell us what to care about, leaving every political or moral cause (and organization) to fend for itself in a Darwinian (or market) struggle for attention (and money) in which the only way to survive is to adopt the language of panic, crisis, and emergency, thereby devaluing that very discourse in a cultural tragedy-of-the-commons. But I'm a science graduate, so I wouldn't dream of doing that.

[BPSDB]

More Brain Voodoo, and This Time, It's Not Just fMRI

Ed Vul et al recently created a splash with their paper, Puzzlingly high correlations in fMRI studies of emotion, personality and social cognition (better known by its previous title, Voodoo Correlations in Social Neuroscience.) Vul et al accused a large proportion of the published studies in a certain field of neuroimaging of committing a statistical mistake. The problem, which they call the "non-independence error", may well have made the results of these experiments seem much more impressive than they should have been. Although there was no suggestion that the error was anything other than an honest mistake, the accusations still sparked a heated and ongoing debate. I did my best to explain the issue in layman's terms in a previous post.

Now, like the aftershock following an earthquake, a second paper has appeared, from a different set of authors, making essentially the same accusations. But this time, they've cast their net even more widely. Vul et al focused on only a small sub-set of experiments using fMRI to examine correlations between brain activity and personality traits. But they implied that the problem went far beyond this niche field. The new paper extends the argument to encompass papers from across much of modern neuroscience.

The article, Circular analysis in systems neuroscience: the dangers of double dipping, appears in the extremely prestigious Nature Neuroscience journal. The lead author, Dr. Nikolaus Kriegeskorte, is a postdoc in the Section on Functional Imaging Methods at the National Institutes of Health (NIH).

Kriegeskorte et al's essential point is the same as Vul et al's. They call the error in question "circular analysis" or "double-dipping", but it is the same thing as Vul et al's "non-independent analysis". As they put it, the error could occur whenever

data are first analyzed to select a subset and then the subset is reanalyzed to obtain the results.
and it will be a problem whenever the selection criteria in the first step are not independent of the reanalysis criteria in the second step. If the two sets of criteria are independent, there is no problem.


Suppose that I have some eggs. I want to know whether any of the eggs are rotten. So I put all the eggs in some water, because I know that rotten eggs float. Some of the eggs do float, so I suspect that they're rotten. But then I decide that I also want to know the average weight of my eggs. So I take a handful of eggs within easy reach - the ones that happen to be floating - and weigh them.

Obviously, I've made a mistake. I've selected the eggs that weigh the least (the rotten ones) and then weighed them. They're not representative of all my eggs. Obviously, they will be lighter than the average. Obviously. But in the case of neuroscience data analysis, the same mistake may be much less obvious. And the worst thing about the error is that it makes data look better, i.e. more worth publishing:
Distortions arising from selection tend to make results look more consistent with the selection criteria, which often reflect the hypothesis being tested. Circularity is therefore the error that beautifies results, rendering them more attractive to authors, reviewers and editors, and thus more competitive for publication. These implicit incentives may create a preference for circular practices so long as the community condones them.
To try to establish how prevalent the error is, Kriegeskorte et al reviewed all of the 134 fMRI papers published in the highly regarded journals Science, Nature, Nature Neuroscience, Neuron and the Journal of Neuroscience during 2008. Of these, they say, 42% contained at least one non-independent analysis, and another 14% may have done. That leaves 44% which were definitely "clean". Unfortunately, unlike Vul et al who did a similar review, they don't list the "good" and the "bad" papers.

They then go on to present the results of two simulated fMRI experiments in which seemingly exciting results emerge out of pure random noise, all because of the non-independence error. (One of these simulations concerns the use of pattern-classification algorithms to "read minds" from neural activity, a technique which I previously discussed). As they go on to point out, these are extreme cases - in real life situations, the error might only have a small impact. But the point, and it's an extremely important one, is that the error can creep in without being detected if you're not very careful. In both of their examples, the non-independence error is quite subtle and at first glance the methodology is fine. It's only on closer examination that the problem becomes apparent. The price of freedom from the error is eternal vigilance.
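The flavour of those simulations can be captured in a few lines (my own toy version, not the authors' code): generate pure noise, select the "best" voxels, then - the error - report statistics from the very data used to select them.

```python
import numpy as np

rng = np.random.default_rng(2)

n_subjects, n_voxels = 20, 5000
brain = rng.normal(size=(n_subjects, n_voxels))  # pure noise "activations"
score = rng.normal(size=n_subjects)              # an unrelated behavioural score

# Correlate every voxel with the score, then pick the ten "best" voxels.
r = np.array([np.corrcoef(brain[:, v], score)[0, 1] for v in range(n_voxels)])
selected = np.argsort(-np.abs(r))[:10]

# Circular: report the correlation in the voxels chosen for being correlated.
r_circular = np.abs(r[selected]).mean()

# Independent: test the same voxels in a fresh dataset - the effect vanishes.
brain2 = rng.normal(size=(n_subjects, n_voxels))
score2 = rng.normal(size=n_subjects)
r_independent = np.abs(
    [np.corrcoef(brain2[:, v], score2)[0, 1] for v in selected]
).mean()

print(f"circular estimate:    r = {r_circular:.2f}")    # looks publishable
print(f"independent estimate: r = {r_independent:.2f}")  # it was noise all along
```

With only 20 subjects and 5000 voxels of noise, the selected voxels show a strikingly high average correlation with the score, purely because they were selected for it; in independent data, the same voxels show essentially nothing.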

But it would be wrong to think that this is a problem with fMRI alone, or even neuroimaging alone. Any neuroscience experiment in which a large amount of data is collected and only some of it makes it into the final analysis is equally at risk. For example, many neuroscientists use electrodes to record the electrical activity in the brain. It's increasingly common to use not just one electrode but a whole array of them to record activity from more than one brain cell at once. This is a very powerful technique, but it raises the risk of the non-independence error, because there is a temptation to only analyze the data from those electrodes where there is the "right signal", as the authors point out:
In single-cell recording, for example, it is common to select neurons according to some criterion (for example, visual responsiveness or selectivity) before applying further analyses to the selected subset. If the selection is based on the same dataset as is used for selective analysis, biases will arise for any statistic not inherently independent of the selection criterion.
In fact, Kriegeskorte et al praise fMRI for being, in some ways, rather good at avoiding the problem:
To its great credit, neuroimaging has developed rigorous methods for statistical mapping from its beginning. Note that mapping the whole measurement volume avoids selection altogether; we can analyze and report results for all locations equally, while accounting for the multiple tests performed across locations.
With any luck, the publication of this paper and Vul's so close together will force the neuroscience community to seriously confront this error and related statistical weaknesses in modern neuroscience data analysis. Neuroscience can only emerge stronger from the debate.

Kriegeskorte, N., Simmons, W., Bellgowan, P., & Baker, C. (2009). Circular analysis in systems neuroscience: the dangers of double dipping. Nature Neuroscience. DOI: 10.1038/nn.2303

Help! There's an Epidemic of Anxiety! (Part I)

All British journalists are psychotic. Pathologically obsessed with "mental health issues", and suffering from grandiose delusions of their competence to discuss them, these demented maniacs...

Sorry. I got a bit carried away there. But you'll forgive me, because I was just following the example of seemingly everyone in the British media these past couple of weeks. If you believe the headlines, we're in the grip of an epidemic of anxiety:

BBC: UK society 'increasingly fearful'
The Telegraph: Britons 'living in fear' as record numbers suffer from anxiety

The Independent: Britain is becoming a more fearful place – and the economy is paying the price. The Indie also ran a comment by Janet Street-Porter - "The main reason people feel anxious is loneliness.", thanks Janet, qualifications: none, career path: fashion journalist - and a piece by a clinically anxious person - "I reckon a root cause of my anxiety is the modern notion that we can do away with risk by anticipating every imaginable danger."
It all started with a report by the Mental Health Foundation called In The Face Of Fear. The Mental Health Foundation are a perfectly decent charity organization, although they have a prior history of endorsing slightly dodgy research. One of their previous reports, Feeding Minds: The Impact of Food on Mental Health, presented a simplistic and overblown account of the effects of nutrition upon mood and drew heavily on the "work" of Patrick Holford, vitamin pill peddler and well-documented crank. Parts of the present report are, unfortunately, dodgy as well, as you'll see below.

In The Face of Fear is actually quite a thought-provoking piece of writing, but you wouldn't know that from reading the newspapers. The headlines are all about the supposed surge in anxiety amongst the British population. This, however, is the dodgiest part of the report. Firstly, the report's authors surveyed 2246 British adults in January 2009. 37% said that they get frightened or anxious more often than they used to, 28% disagreed, and 33% neither agreed nor disagreed.

That's it. That's the finding. It's really not very impressive, because quite apart from anything else, it relies upon the respondent's ability to remember how anxious they were in the past. You just can't trust people to do hard stuff like that. I know exactly what I'm worried about today - I can't remember very well what I worried about ten years ago - so I must be more worried today! Of course, this could also work in reverse, and people might forget their past lack of anxiety and wrongly say that they are less anxious today.

The survey also found that 77% of people said that "people in general" are more anxious than they used to be, while just 3% disagreed. But remember that only (at most) 37 out of those 77% said that they themselves were actually more anxious. Hmm. So the real finding here seems to be that there is a widespread perception that other people are becoming more anxious, though it's anyone's guess whether this is in fact true. The report itself does note that
more than twice as many of us agree that people in general and the world itself are becoming more frightened and frightening as agree that they themselves are more frightened and anxious
This was rather too subtle for the newspapers, though, who reported... that people are becoming more anxious.

In The Face of Fear also cites a government study on the mental health of the British population, the Adult Psychiatric Morbidity Survey. Their use of this data, however, is selective to the point of deception. This was a household survey of a weighted sample of the British population. That section of the population who live in houses and don't mind being interviewed about their mental health, that is. Diagnoses were made on the basis of the CIS-R interview, which scores each person on a number of symptoms (including "worry", "fatigue", and "depressive ideas"). Each person is then given a total score; a total score of 12 or more is (arbitrarily) designated to indicate a "neurotic disorder".

This was done in 1993, 2000 and 2007. The 2007 report notes that overall, levels of neurotic disorders increased between 1993 and 2000, but then stayed level in 2007. In terms of anxiety disorders, there was a very small increase in "generalized anxiety disorder" (from 4.4% to 4.7%), which mostly happened between 1993 and 2000; there was an increase in phobias, from 2.2% in 1993 to 2.6% in 2007, but rates peaked at 2.8% in 2000; and "mixed anxiety and depressive disorder" increased from 7.5% in 1993 to 9.4% in 2000 to 9.7% in 2007.

What to make of that? It's hard to know, but it's clear that any worsening in anxiety levels occurred some time between 1993 and 2000. Mysteriously, while the Mental Health Foundation report cites the 1993 and the 2007 figures, and makes much of the increase, it simply ignores the 2000 figures, which show that any increase has long since stopped. It's history, not current events. Back in 2000, you might recall, the twin towers were still standing, The Simpsons was still funny, and Who Let The Dogs Out was top of the charts.

Overall, the evidence that people in Britain are actually feeling more and more anxious is extremely thin. In fact, I would say that it's a myth. It's a very popular myth, however: 77% of the population believe it. Why? Well, the fact that the Mental Health Foundation seem determined to make the data fit that story can't be helping matters. The newspapers, not to be outdone, focussed entirely on the scariest and most pessimistic aspects of the report.

A poor show all round, but - as always on Neuroskeptic - there are some important lessons here about how we think about threats, social change, and "crisis". Stay tuned for the good stuff next post.

[BPSDB]

The Voodoo Strikes Back

Just when you thought it was safe to compute a correlation between a behavioural measure and a cluster mean BOLD change...

The fMRI voodoo correlations controversy isn't over. Ed Vul and colleagues have just responded to their critics in a new article (pdf). The critics appear to have scored at least one victory, however, since the original paper has now been renamed. So it's goodbye to "Voodoo Correlations in Social Neuroscience" - now it's "Puzzlingly high correlations in fMRI studies of emotion, personality and social cognition" by Vul et al. 2009. Not quite as catchy, but then, that's the point...

Just in case you need reminding of the story so far: A couple of months ago, MIT grad student Ed Vul and co-authors released a pre-publication manuscript, then titled Voodoo Correlations in Social Neuroscience. This paper reviewed the findings of a number of fMRI studies which reported linear correlations between regional brain activity and some kind of measure of personality. Vul et al. argued that many (but by no means all) of these correlations were in fact erroneous, with the reported correlations being much higher than the true ones. Vul et al. alleged that the problem arose due to a flaw in the statistical analysis used, the "non-independence error". For my non-technical explanation of the issue, see my previous post, or go read the original paper (it really doesn't require much knowledge of statistics).
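If you want to see the non-independence error in action, here's a minimal simulation sketch in Python (the numbers are illustrative and not taken from any of the accused papers): generate pure noise "voxels" with no real brain-behaviour relationship, select the voxels that happen to correlate with behaviour, and then report the correlation of their mean signal with that same behaviour.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 10000

# Pure noise: there is NO true brain-behaviour relationship here.
behaviour = rng.normal(size=n_subjects)
voxels = rng.normal(size=(n_subjects, n_voxels))

# Step 1: correlate every voxel with the behavioural measure.
r = np.array([np.corrcoef(voxels[:, v], behaviour)[0, 1]
              for v in range(n_voxels)])

# Step 2 (the non-independence error): select voxels BY their
# correlation with behaviour, then report the correlation of
# their mean signal with that very same behaviour.
selected_mean = voxels[:, r > 0.5].mean(axis=1)
circular_r = np.corrcoef(selected_mean, behaviour)[0, 1]
print(f"circular correlation: {circular_r:.2f}")
```

With enough voxels, some sample correlations exceed 0.5 by chance alone, and averaging the selected voxels cancels out the noise that doesn't align with behaviour, so the reported correlation comes out impressively high despite the true correlation being exactly zero. The fix, as the critics and Vul et al. agree, is to select voxels using data independent of the data used to estimate the correlation.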

Vul's paper attracted a lot of praise and also a lot of criticism, both in the blogosphere and in the academic literature. Many complained that it was sensationalistic and anti-fMRI. Others embraced it for the same reasons. My view was that while the paper's style was certainly journalistic, and while many of those who praised the paper did so for the wrong reasons, the core argument was both valid and important. While not representing a radical challenge to social neuroscience or fMRI in general, Vul et al. draw attention to a widespread and potentially serious technical issue with the analysis of fMRI data, one which all neuroscientists should be aware of.

That's still my opinion. Vul et al.'s response to their critics is a clearly worded and convincing defence. Interestingly, their defence is in many ways just a clarification of the argument. This is appropriate, because I think the argument is pretty much just common sense once it is correctly understood. As far as I can see the only valid defence against it is to say that a particular paper did not in fact commit the error - while not disputing that the error itself is a problem. Vul et al. say that to their knowledge no accused papers have turned out to be innocent - although I'm sure we haven't heard the last of that.

Vul et al. also now make explicit something which wasn't very clear in their original paper, namely that the original paper made accusations of two completely separate errors. One, the non-independence error, is common but probably less serious than the second, the "Forman error", which is pretty much fatal. Fortunately, so far, only two papers are known to have fallen prey to the Forman error - although there could be more. Go read the article for more details on what could be Vul's next bombshell...

Edward Vul, Christine Harris, Piotr Winkielman, & Harold Pashler (2009). Reply to comments on "Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition". Perspectives on Psychological Science.
