Update: See also Lessons from the Video Game Brain
The Journal of Neuroscience has published a Swedish study which, according to New Scientist (and the rest), is something of a breakthrough:
First 'Placebo Gene' Discovered

I rather like the idea of a dummy gene made of sugar, or perhaps a gene for being Brian Molko, but what they're referring to is a gene, TPH2, which allegedly determines susceptibility to the placebo effect. Interesting, if true. Genetic Future was skeptical of the study because of its small sample size. It is small, but I'm not too concerned about that, because there are, unfortunately, other serious problems with this study and with the reporting on it. I should say at the outset, however, that most of what I'm about to criticize is depressingly common in the neuroimaging literature. The authors of this study have done some good work in the past and are, I'm sure, no worse than most researchers. With that in mind...
The study included 25 people diagnosed with Social Anxiety Disorder (SAD). Some people see the SAD diagnosis as a drug company ploy to sell pills (mainly antidepressants) to people who are just shy. I disagree. Either way, these were people who complained of severe anxiety in social situations. The 25 patients were all given placebo pill treatment for 8 weeks.
Before and after the treatment they each got an [H₂¹⁵O] PET scan, which measures regional cerebral blood flow (rCBF) in the brain, something that is generally assumed to correlate with neural activity. It's a bit like fMRI, although the physics are different. During the scans the subjects had to make a brief speech in front of 6 to 8 people. This was intended to make them anxious, as it would do. The patients' self-reported social anxiety in everyday situations was also assessed every 2 weeks by questionnaires and clinical interviews.
The patients were then split into two groups based upon their final status: "placebo responders" were those who ended up with a CGI (Clinical Global Impression) improvement rating of 1 or 2 - meaning that they reported that their anxiety had got a lot better - and "placebo nonresponders" were those who didn't. (You may take issue with this terminology - if so, well done, and keep reading). Brain activation during the public speaking task was compared between these two groups. The authors also looked at two genetic variants, 5HTTLPR and TPH2. Both are involved in serotonin signalling and both have been associated (in some studies) with vulnerability to anxiety and depression.
The results: the placebo responders reported less anxiety following treatment - unsurprisingly, because this is why they were classed as responders. On the PET scans, the placebo responders showed reduced amygdala activity during the second public speaking task compared to the first one; the non-responders showed no change. This is consistent with the popular and fairly sensible idea that the amygdala is active during the experience of emotion, especially fear and anxiety. In fact, though, this effect was marginal, and it was only significant under a region-of-interest analysis, i.e. when they specifically looked at the data from the amygdala; in a more conservative whole-brain analysis they found nothing (or rather they did, but they wrote it off as uninteresting, as cognitive neuroscientists generally do when they see blobs in the cerebellum and the motor cortex):
PET data: whole-brain analyses
Exploratory analyses did not reveal significantly different treatment-induced patterns of change in responders versus nonresponders. Significant within-group alterations outside the amygdala region were noted only in nonresponders, who had increased (pre < post) rCBF in the right cerebellum ... and in a cluster encompassing the right primary motor and somatosensory cortices...

As for the famous "placebo gene", they found that two genetic variants, 5HTTLPR ll and TPH2 GG, were associated with a bigger drop in amygdala activity from before treatment to after treatment. TPH2 GG was also associated with the improvement in anxiety over the 8 weeks:
In a logistic regression analysis, the TPH2 polymorphism emerged as the only significant variable that could reliably predict clinical placebo response (CGI-I) on day 56, homozygosity for the G allele being associated with better outcome. Eight of the nine placebo responders (89%), for whom TPH2 gene data were available, were GG homozygotes.

You could call this a gene correlating with the "placebo effect", although you'd probably be wrong (see below). There are a number of important lessons to take home here.
1. Dr Placebo, I presume? - Be careful what you call the placebo effect
This study couldn't have discovered a "placebo gene", even if there is one. It didn't measure the placebo effect at all.
You'll recall that the patients in this study were assessed before and after 8 weeks of placebo treatment (sugar pills). Any changes occurring during these 8 weeks might be due to a true "placebo effect" - improvement caused by the patient's belief in the power of the treatment. This is the possibility that gets some people rather excited: it's mind over matter, man! This is why the word "placebo" is often preceded by words like "Amazing", "Mysterious", or even "Magical" - as if Placebo were the stage-name of a 19th century conjuror. (As opposed to the stage name of androgynous pop-goth Brian Molko ... I've already done that one.)
But, as they often do, more prosaic explanations suggest themselves. Most boringly, the patients might have just got better. Time is the great healer, etc., and two months is quite a long time. Maybe one of the patients hooked up with a cute guy and it did wonders for their self-confidence. Maybe the patients volunteered for the study when they did because their anxiety was especially bad, and by the time of the second scan it had returned to normal (regression towards the mean). Maybe the study itself made a difference, by getting the patients talking about their anxiety with sympathetic professionals. Maybe the patients didn't actually feel any better at all, but just said they did because that's what they thought they were expected to say. I could go on all day.
Most likely, in my opinion, the patients were just less anxious having their second PET scan, once they had survived the first one. PET scans are no fun: you get a catheter inserted into your arm, through which you're injected with a radioactive tracer compound. Meanwhile, your head is fixed in place within a big white box covered in hazard signs. It's not hard to see that you'd probably be much more anxious the first time around than the second.
So, calling the change from baseline to 8 weeks a "placebo response", and calling the people who got better "placebo responders", is misleading (at least, it has misled every commentator on this study so far). The only way to measure the true placebo effect is to compare placebo-treated people with people who get no treatment at all. This wasn't done in this study. It rarely is. This is something which confuses an awful lot of people. When people talk about the placebo effect, they're very often referring to the change in the placebo group, which as we've seen is not the same thing at all, and has nothing even vaguely magical or mysterious about it. (For example, some armchair psychiatrists like to say that since patients in the placebo group in antidepressant drug trials often show large improvements, sugar pills must be helpful in depression.) That said, there was another study in the same issue of the same journal which did measure an actual placebo effect.
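To make the distinction concrete, here's a toy simulation (in Python, with numbers I've simply made up - it's a sketch of the logic, not a model of the actual study). Suppose patients enrol when their anxiety happens to be at its worst, everyone drifts back towards their own average over the following weeks, and the true placebo effect is set to exactly zero. The "placebo group" still improves - and the only way to see that the sugar pills had nothing to do with it is to compare against a no-treatment group.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n_pool, placebo_effect=0.0):
    """Toy model: anxiety score = stable trait level + week-to-week noise.
    People volunteer when a (noisy) baseline measurement is high, i.e. when
    they happen to be going through a bad patch."""
    trait = rng.normal(60, 10, size=n_pool)            # each person's long-run average anxiety
    baseline = trait + rng.normal(0, 8, size=n_pool)   # score at enrolment
    enrolled = baseline > 65                           # only those feeling worst right now sign up
    week8 = (trait[enrolled] - placebo_effect          # the true placebo effect (zero here)
             + rng.normal(0, 8, size=enrolled.sum()))  # plus ordinary week-to-week noise
    return baseline[enrolled], week8

pre_pl, post_pl = simulate_group(2000, placebo_effect=0.0)  # "placebo group": sugar pills, zero true effect
pre_nt, post_nt = simulate_group(2000)                      # "no-treatment group": nothing at all

print("Mean change, placebo group:     ", round(float((post_pl - pre_pl).mean()), 1))
print("Mean change, no-treatment group:", round(float((post_nt - pre_nt).mean()), 1))
print("Placebo minus no-treatment:     ", round(float((post_pl - pre_pl).mean()
                                                      - (post_nt - pre_nt).mean()), 1))
```

Both groups "get better" by about the same amount, purely through regression towards the mean; the placebo-minus-no-treatment difference, which is the only number that deserves to be called a placebo effect, comes out at roughly zero. Set placebo_effect to something non-zero and that last line is what picks it up.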
2. Beware Post Hoc-us Pocus
From the way it's been reported, you would probably assume that this was a study designed to investigate the placebo effect. However, in the paper we read:
Patients were taken from two previously unpublished RCTs that evaluated changes in regional cerebral blood flow after 56 d of pharmacological treatment by means of positron emission tomography. ... The clinical PET trials ... included a total of 108 patients with SAD. There were three treatment arms in the first study and six arms in the second. ... Only the pooled placebo data are included herein, whereas additional data on psychoactive drug treatment will be reported separately.

Personally, I find this odd. Why have so many groups if you're interested in just one of them? Even if the data from the drug groups are published, it's unusual to report on some aspect of the placebo data in a separate paper before writing up the main results of an RCT. To me it seems likely that when this study was designed, no-one intended to search for genes associated with the placebo effect. I suspect that the analysis the authors report here was post-hoc: having looked at the data, they went looking for any interesting effects in it.
To be clear, there's no proof that this is what happened here, but anyone who has worked in science will know that it does happen, and to my jaded eyes it seems probable that this is a case of it. For one thing, if this was a study intended to investigate the placebo effect, it was poorly designed (see above).
There's nothing wrong with post-hoc findings. If scientists only ever found what they set out to look for, science wouldn't have got very far. However, unless they are clearly reported as post-hoc, the problem of the Texas Sharpshooter arises - findings can appear more significant than they really are. In this case, the TPH2 gene was only a significant predictor of "placebo response" with p=0.04, which is marginal at the best of times.
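To get a feel for why a post-hoc p=0.04 is so unimpressive, here's a back-of-the-envelope simulation (again in Python, with entirely made-up data - this is not a reanalysis of the paper). Take a placebo group of roughly this size in which no genotype has any real effect on anything, test a handful of gene-outcome combinations, and count how often at least one of them comes out "significant" at p < 0.05.

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(1)

n_patients = 25      # roughly the size of the placebo group here
n_looks = 10         # two genes x several outcome measures soon adds up
n_studies = 2000     # simulated null "studies"
hits = 0

for _ in range(n_studies):
    responder = rng.random(n_patients) < 0.4       # ~40% "respond", entirely at random
    p_values = []
    for _ in range(n_looks):
        genotype = rng.random(n_patients) < 0.5    # a genotype with no effect on anything
        table = [[np.sum(genotype & responder), np.sum(genotype & ~responder)],
                 [np.sum(~genotype & responder), np.sum(~genotype & ~responder)]]
        p_values.append(fisher_exact(table)[1])    # two-sided Fisher's exact test p-value
    if min(p_values) < 0.05:
        hits += 1

print(f"Proportion of null 'studies' with at least one p < 0.05: {hits / n_studies:.2f}")
```

None of this proves that the TPH2 result is a false positive, of course - only that a single marginal p-value, pulled out of several genes and outcome measures after the fact, is weak evidence on its own. That's the Texas Sharpshooter in statistical dress.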
The reason researchers feel the need to do this kind of thing is because of the premium the scientific community (and hence scientific publishing) places on getting "positive results". Plus, no-one wants to PET scan over 100 people (they're incredibly expensive) and report that nothing interesting happened. However, this doesn't make it right (rant continues...)
3. Science Journalism Is Dysfunctional
Sorry to go on about this, but really it is. New Scientist's write-up of this study was, relatively speaking, quite good - they did at least include some caveats ("The gene might not play a role in our response to treatment for all conditions, and the experiment involved only a small number of people.") They did make a couple of factual errors, though, such as saying that "8 of the 10 responders had two copies [of the TPH2 G allele], while none of the non-responders did" - in fact 8 of the 15 non-responders did - but anyway.
The main point is that they didn't pick up on the fact that this experiment didn't measure the placebo effect at all, which makes their whole article misleading. (The newspapers generally did an even worse job.) I was able to write this post because I had nothing else on this weekend, and because reading papers like this is a major part of my day job. At the risk of sounding arrogant, I'm pretty good at this kind of thing. That's why I write about it, and not about other stuff. And that's why I no longer read science journalism (well, except to blog about how rubbish it is).
It would be wrong to blame the journalist who wrote the article for this. I'm sure they did the best they could in the time available. I'm sure that I couldn't have done any better. The problem is that they didn't have enough time, and probably didn't have enough specialist knowledge, to read the study critically. It's not their fault, it's not even New Scientist's fault; it's the fault of the whole idea of science journalism, which involves getting non-experts to write, very fast, about complicated issues and to make them comprehensible and interesting to lay readers even when they're manifestly not. I used to want to be a science journalist, until I realised that that was the job description.
T. Furmark, L. Appel, S. Henningsson, F. Ahs, V. Faria, C. Linnman, A. Pissiota, O. Frans, M. Bani, P. Bettica, E. M. Pich, E. Jacobsson, K. Wahlstedt, L. Oreland, B. Langstrom, E. Eriksson, M. Fredrikson (2008). A Link between Serotonin-Related Gene Polymorphisms, Amygdala Activity, and Placebo-Induced Relief from Social Anxiety. Journal of Neuroscience, 28(49), 13066-13074. DOI: 10.1523/JNEUROSCI.2534-08.2008