
Blue Morning

Recently, I wrote about diurnal mood variation: the way in which depression often waxes and wanes over the course of the day. Mornings are generally the worst.

A related phenomenon is late insomnia, or "early morning waking".

But this phrase is rather an understatement. Everyone's woken up early. Maybe you had a flight to catch. Or you were drunk and threw up. Or you just needed a pee. That's early morning waking, but not the depressive kind. When you're depressed, the waking up is the least of your problems.

Suddenly, you are awake, more awake than you've ever been. And you know something terrible has happened, or is about to happen, or that you've done something terribly wrong. It feels like a Eureka moment. You can be a level-headed person, not given to jumping to conclusions, but you will be convinced of this.

In a panic attack, you think you're going to die. Your heart is beating too fast, your breathing's too deep: your body is exploding, and you can feel it all too closely. With this, you think you should die, or even, in some sense, that you already have. It feels cold: you can no longer feel the warmth of your own body.

The moment passes; the terrible truth that you were so certain of five minutes ago becomes a little doubtful. Maybe it's not quite so bad. At this point, the wakefulness goes too, and you become, well, as tired as you ought to be at 3 am. You try to go back to sleep. If you're lucky, you succeed. If not, you lie awake until morning in a state of miserable contemplation.

While it's happening, you think that you're going to feel this way forever; bizarrely, you think you always have felt this way. In fact, this is the darkest hour.

*

Why does this happen? There has been almost no research on early morning waking, presumably because it's so hard to study. To observe it, you would have to get your depressed patients to spend all night in your brain scanner (or, if you prefer, on your analyst's couch), and even then, it doesn't happen every night.

But here's my theory: the key is the biology of sleep. There are many stages of sleep; to a very rough approximation, there's dreaming REM sleep and dreamless slow-wave sleep. Now, REM sleep tends to happen during the second half of the night - the early morning.

During REM sleep, the brain is, in many respects, awake. This is presumably what allows us to have conscious dreams. Whereas in slow-wave sleep, the brain really is offline; slow waves are also seen in the brains of people in comas, or under deep anaesthesia.

When we're awake, the brain is awash with modulatory neurotransmitters, such as serotonin, norepinephrine, and acetylcholine. During REM, acetylcholine is present, while in slow-wave sleep it's not; indeed acetylcholine may well be what stops slow waves and "wakes up" the cortex.

But unlike during waking, serotonin and norepinephrine neurons are entirely inactive during REM sleep - and only during REM sleep. This fact is surprisingly little-known, but it seems to me that it explains an awful lot.

For one thing, it explains why drugs which increase serotonin levels, such as SSRI antidepressants, inhibit REM sleep. Indeed, high doses of MAOi antidepressants prevent REM entirely (without any noticeable ill-effects, suggesting REM is dispensable). SSRIs only partially suppress it.

Ironically, SSRIs can make dreams more vivid and colourful. I've been told by sleep scientists that this is because they delay the onset of REM so the dreams are "shifted" later into the night making you more likely to remember them when you wake up. But there could be more to it than that.

The fact that REM is a serotonin-free zone also explains wet dreams. Serotonin is well known to suppress ejaculation; that's why SSRIs delay orgasm, one of their least popular side effects, although it's useful for treating premature ejaculation: every cloud has a silver lining.

So, having said all that: could this also explain the terror of early-morning waking? Suppose that, for whatever reason, you woke up during REM sleep, but your serotonin cells didn't wake up quickly enough, leaving you awake, but with no serotonin (a situation which never normally occurs, remember). How would that feel?

Using a technique called acute tryptophan depletion (ATD), you can lower someone's serotonin levels. In most people, this doesn't do very much, but in some people with a history of depression, it causes them to relapse. Here's what happened to one patient after ATD:
[her] previous episodes of clinical depression were associated with the loss of important friendships. [She] had, while depressed, been preoccupied with fears that she would never be able to sustain a relationship. She had not had such fears since then.

She had been fully recovered and had not taken any medication for over a year. About 2 h after drinking the tryptophan-free mixture she experienced a sudden onset of sadness, despair, and uncontrollable crying. She feared that a current important relationship would end.
We don't know why tryptophan depletion does this to some people, or why it doesn't affect everyone the same way, and it's pure speculation that early morning waking has anything to do with this. But having said that, the pieces do seem to fit.

The Hunt for the Prozac Gene

One of the difficulties doctors face when prescribing antidepressants is that they're unpredictable.

One person might do well on a certain drug, but the next person might get no benefit from the exact same pills. Finding the right drug for each patient is often a matter of trying different ones until one works.

So a genetic test to work out whether a certain drug will help a particular person would be really useful. Not to mention really profitable for whoever patented it. Three recent papers, published in three major journals, all claim to have found genes that predict antidepressant response. Great! The problem is, they were different genes.

First up, American team Binder et al looked at about 200 variants in 10 genes involved in the corticosteroid stress response pathway. They found one, in a gene called CRHBP, that was significantly associated with poor response to the popular SSRI antidepressant citalopram (Celexa), using the large STAR*D project data set. But this was only true of African-Americans and Latinos, not whites.

Garriock et al used the exact same dataset, but they did a genome-wide association study (GWAS), which looks at variants across the whole genome, unlike Binder et al who focussed on a small number of specific candidate genes. Sadly no variants were statistically significantly correlated with response to citalopram, although in a GWAS, the threshold for genome-wide significance is very high due to multiple comparisons correction. Some were close to being significant, but they weren't obviously related to CRHBP, and most weren't anything to do with the brain.
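To make the multiple-comparisons point concrete, here's a minimal sketch (my own illustration, not from any of the papers) of a Bonferroni-style correction, the simplest way to see why genome-wide significance thresholds are so much more punishing than candidate-gene thresholds:

```python
# A minimal sketch of a Bonferroni-style correction: the per-test
# significance threshold is the nominal alpha divided by the number
# of independent tests.
def bonferroni_threshold(alpha, n_tests):
    """Per-test p-value needed to keep the family-wise error rate at alpha."""
    return alpha / n_tests

# Candidate-gene study (a few hundred variants, as in Binder et al):
candidate = bonferroni_threshold(0.05, 200)          # 0.05/200 = 2.5e-4
# Genome-wide study (~10^6 variants, as in Garriock et al):
genome_wide = bonferroni_threshold(0.05, 1_000_000)  # 0.05/10^6 = 5e-8
```

A variant that would sail past significance in a 200-test candidate study can thus be thousands of times short of the genome-wide bar.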

Uher et al did another GWAS of response to escitalopram and nortriptyline in a different sample, the European GENDEP study. Escitalopram is extremely similar to citalopram, the drug in the STAR*D studies; nortriptyline however is very different. They found one genome-wide significant hit. A variant in a gene called UST was associated with response to nortriptyline, but not escitalopram. No variants were associated with response to escitalopram, although one in the gene IL11 was close. There were some other nearly-significant results, but they didn't overlap with either of the STAR*D studies.

Finally, one of the STAR*D studies found a variant significantly linked to tolerability (side effects) of citalopram. GENDEP didn't look at this.

*

The UST link with nortriptyline is the strongest finding here, but for citalopram / escitalopram, no consistent pharmacogenetic results emerged at all. What does this mean? Well, it's possible that there just aren't any genes for citalopram response, but that seems unlikely. Even if you believe that antidepressants only work as placebos, you'd expect there would be genes that alter placebo responses, or at the very least, that affect side-effects and hence the active placebo improvement.

The thing is that the "antidepressant response" in these studies isn't really that: it's a mix of many factors. We know that a lot of the improvement would have happened even with placebo pills, so much of it isn't a pharmacological effect. There are probably genes associated with placebo improvement, but they might not be the same ones that are associated with drug improvement, and a gene might even have opposite effects that cancel out (better drug effect, worse placebo effect). Some of the recorded improvement won't even be real improvement at all, just people saying they feel better because they know they're expected to.

If I were looking for the genes for SSRI response, not that I plan to, here's what I'd do. To stack the odds in my favour, I'd forget people with a moderate or partial response, and focus on those who either do really well, or those who get no benefit at all, with a certain drug. I'd also want to exclude people who respond really well, but not due to the specific effects of the drug.

That would be hard, but one angle would be to only include people whose improvement is specifically reversed by acute tryptophan depletion, which reduces serotonin levels, thus counteracting SSRIs. This would be a hard study to do, though not impossible. (In fact there are dozens of patients on record who meet my criteria, and their blood samples are probably still sitting in freezers in labs around the world... maybe someone should dig them out.)

Still, even if you did find some genes that way, would they be useful? We'd have had to go to such lengths to find them that they're not going to help doctors decide what to do with the average patient who comes through the door with depression. That's true, but they might just help us to work out who will respond to SSRIs, as opposed to other drugs.

Binder EB, Owens MJ, Liu W, Deveau TC, Rush AJ, Trivedi MH, Fava M, Bradley B, Ressler KJ, & Nemeroff CB (2010). Association of polymorphisms in genes regulating the corticotropin-releasing factor system with antidepressant treatment response. Archives of General Psychiatry, 67(4), 369-79. PMID: 20368512

Uher, R., Perroud, N., Ng, M., Hauser, J., Henigsberg, N., Maier, W., Mors, O., Placentino, A., Rietschel, M., Souery, D., Zagar, T., Czerski, P., Jerman, B., Larsen, E., Schulze, T., Zobel, A., Cohen-Woods, S., Pirlo, K., Butler, A., Muglia, P., Barnes, M., Lathrop, M., Farmer, A., Breen, G., Aitchison, K., Craig, I., Lewis, C., & McGuffin, P. (2010). Genome-Wide Pharmacogenetics of Antidepressant Response in the GENDEP Project. American Journal of Psychiatry. DOI: 10.1176/appi.ajp.2009.09070932

Garriock, H., Kraft, J., Shyn, S., Peters, E., Yokoyama, J., Jenkins, G., Reinalda, M., Slager, S., McGrath, P., & Hamilton, S. (2010). A Genomewide Association Study of Citalopram Response in Major Depressive Disorder. Biological Psychiatry, 67(2), 133-138. DOI: 10.1016/j.biopsych.2009.08.029

Life Without Serotonin

Via Dormivigilia, I came across a fascinating paper about a man who suffered from a severe lack of monoamine neurotransmitters (dopamine, serotonin etc.) as a result of a genetic mutation: Sleep and Rhythm Consequences of a Genetically Induced Loss of Serotonin


Neuroskeptic readers will be familiar with monoamines. They're psychiatrists' favourite neurotransmitters, and are hence very popular amongst psych drug manufacturers. In particular, it's widely believed that serotonin is the brain's "happy chemical" and that clinical depression is caused by low serotonin while antidepressants work by boosting it.

Critics charge that there is no evidence for any of this. My own opinion is that it's complicated, but that while there's certainly no simple relation between serotonin, antidepressants and mood, they are linked in some way. It's all rather mysterious, but then, the functions of serotonin in general are; despite 50 years of research, it's probably the least understood neurotransmitter.

The new paper adds to the mystery, but also provides some important new data. Leu-Semenescu et al report on the case of a 28-year-old man, with consanguineous parents, who suffers from a rare genetic disorder, sepiapterin reductase deficiency (SRD). SRD patients lack an enzyme which is involved, indirectly, in the production of the monoamines serotonin and dopamine, and also of melatonin and noradrenaline, which are produced from these two. SRD causes a severe (but not total) deficiency of these neurotransmitters.

The most obvious symptoms of SRD are related to the lack of dopamine, and include poor coordination and weakness, very similar to Parkinson's Disease. An interesting feature of SRD is that these symptoms are mild in the morning, worsen during the day, and improve with sleep. Such diurnal variation is also a hallmark of severe depression, although in depression it's usually the other way around (better in the evening).

The patient reported on in this paper suffered Parkinsonian symptoms from birth, until he was diagnosed with dystonia at age 5 and started on L-dopa to boost his dopamine levels. This immediately and dramatically reversed the problems.

But his serotonin synthesis was still impaired, although doctors didn't realize this until age 27. As a result, Leu-Semenescu et al say, he suffered from a range of other, non-dopamine-related symptoms. These included increased appetite - he ate constantly, and was moderately obese - mild cognitive impairment, and disrupted sleep:

The patient reported sleep problems since childhood. He would sleep 1 or 2 times every day since childhood and was awake during more than 2 hours most nights since adolescence. At the time of the first interview, the night sleep was irregular with a sleep onset at 22:00 and offset between 02:00 and 03:00. He often needed 1 or 2 spontaneous, long (2- to 5-h) naps during the daytime.
After doctors did a genetic test and diagnosed SRD, they treated him with 5HTP, a precursor to serotonin. The patient's sleep cycle immediately normalized, his appetite was reduced and his concentration and cognitive function improved (although that may have been because he was less tired). Here's his before and after hypnogram:

Disruptions in sleep cycle and appetite are likewise common in clinical depression. The direction of the changes in depression varies: loss of appetite is common in the most severe "melancholic" depression, while increased appetite is seen in many other people.

For sleep, both daytime sleepiness and night-time insomnia, especially waking up too early, can occur in depression. The most interesting parallel here is that people with depression often show a faster onset of REM (dreaming) sleep, which was also seen in this patient before 5HTP treatment. However, it's not clear what was due to serotonin and what was due to melatonin because melatonin is known to regulate sleep.

Overall, though, the biggest finding here was a non-finding: this patient wasn't depressed, despite having much reduced serotonin levels. This is further evidence that serotonin isn't the "happy chemical" in any simple sense.

On the other hand, the similarities between his symptoms and some of the symptoms of depression suggest that serotonin is doing something in that disorder. This fits with existing evidence from tryptophan depletion studies showing that low serotonin doesn't cause depression in most people, but does re-activate symptoms in people with a history of the disease. As I said, it's complicated...

Leu-Semenescu, S., et al. (2010). Sleep and Rhythm Consequences of a Genetically Induced Loss of Serotonin. Sleep, 33(3), 307-314.

Predicting Antidepressant Response with EEG

One of the limitations of antidepressants is that they don't always work. Worse, they fail in an unpredictable way. Some people benefit from some drugs and others don't, but there's no way of knowing in advance what will happen in any particular case - or of telling which pill is right for which person.

As a result, drug treatment for depression generally involves starting with a cheap medication with relatively mild side-effects, and if that fails, moving onto a series of other drugs until one helps. But since it can take several weeks for any new drug to work, this can be a frustrating process for patients and doctors alike.

Some means of predicting the antidepressant response would thus be very useful. Many have been proposed, but none have entered widespread clinical use. Now, a pair of papers (1, 2) from UCLA's Andrew Leuchter et al make the case for prediction using quantitative EEG (QEEG).

EEG, electroencephalography, is a crude but effective way of recording electrical activity in the brain via electrodes attached to the head. "Quantitative" EEG just means using EEG to precisely measure the level of certain kinds of activity in the brain.

Leuchter et al's system is straightforward: it uses six electrodes on the front of the head. The patient simply relaxes with their eyes closed for a few minutes while neural activity is recorded.

This procedure is performed twice, once just before antidepressant treatment begins and then again a week later. The claim is that by examining the changes in the EEG signal after one week of drug treatment, the eventual benefit of the drug can be predicted. It's not an implausible idea, and if it did work, it would be rather helpful. But does it?

Leuchter et al say: yes! The first paper reports that in 73 depressed patients who were given the antidepressant escitalopram 10mg/day, QEEG changes after one week predicted clinical improvement six weeks later. Specifically, people who got substantially better at seven weeks had a higher "Antidepressant Treatment Response Index" (ATR) at one week than people who didn't: 59.0 ± 10.2 vs 49.8 ± 7.8, which is highly significant (p < 0.001).
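For a sense of scale, those summary statistics can be converted into a standardized effect size. This is a back-of-the-envelope sketch of my own, assuming equal group sizes (the paper's exact group ns aren't reproduced here):

```python
import math

def cohens_d(mean1, sd1, mean2, sd2):
    """Cohen's d using a pooled SD; equal group sizes are assumed,
    which is an assumption here, not a detail from the paper."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (mean1 - mean2) / pooled_sd

# ATR at one week: responders 59.0 +/- 10.2, non-responders 49.8 +/- 7.8
d = cohens_d(59.0, 10.2, 49.8, 7.8)
print(round(d, 2))  # 1.01 -- a large effect by the usual conventions
```

An effect of that size still leaves the two groups substantially overlapping, which is why a significant group difference doesn't guarantee a clinically useful individual predictor.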

In the companion paper, the authors examined patients who started on escitalopram and then either kept taking it or switched to a different antidepressant, bupropion. They found that patients who had a high ATR after a week of escitalopram tended to do well if they stayed on it, while patients who had a low ATR to escitalopram did better when they switched to the other drug.

These are interesting results, and they follow from ten years of previous work (mostly, but not exclusively, from the same group) on the topic. Because the current study didn't include a placebo group, we can't say that the QEEG predicts antidepressant response as such, only that it predicts improvement in depression symptoms. But even this is pretty exciting, if it really works.

In order to verify that it does, other researchers need to replicate this experiment. But they may find this a little difficult. What is the Antidepressant Treatment Response Index used in this study? It's derived from an analysis of the EEG signal, and we're told that you get it from this formula:

Some of the terms here are common parameters that any EEG expert will understand. But "A", "B", and "C" are not. They're constants, which are not given in the paper. They're secret numbers. Without knowing what those numbers are, no-one can calculate the "ATR" even if they have an EEG machine.

Why keep them secret? Well...

"Financial support of this project was provided by Aspect Medical Systems. Aspect participated in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation and review of the manuscript."
Aspect is a large medical electronics company who developed the system used here. Presumably, they want to patent it (or already have). We're told that
"To facilitate independent replication of the work reported here, Aspect intends to make available a limited number of investigational systems for academic researchers. Please contact Scott Greenwald, Ph.D... for further information."
All very nice of them, but if they'd told us the three magic numbers, academics could start trying to independently replicate these results tomorrow. As it is, anyone who wants to do so will have to get Aspect's blessing, which, with the best will in the world, means they will not be entirely "independent".

[BPSDB]


Leuchter AF, Cook IA, Gilmer WS, Marangell LB, Burgoyne KS, Howland RH, Trivedi MH, Zisook S, Jain R, Fava M, Iosifescu D, & Greenwald S (2009). Effectiveness of a quantitative electroencephalographic biomarker for predicting differential response or remission with escitalopram and bupropion in major depressive disorder. Psychiatry Research. PMID: 19709754

Leuchter AF, Cook IA, Marangell LB, Gilmer WS, Burgoyne KS, Howland RH, Trivedi MH, Zisook S, Jain R, McCracken JT, Fava M, Iosifescu D, & Greenwald S (2009). Comparative effectiveness of biomarkers and clinical indicators for predicting outcomes of SSRI treatment in Major Depressive Disorder: Results of the BRITE-MD study. Psychiatry research PMID: 19712979

St John's Wort - The Perfect Antidepressant, If You're German

The herb St John's Wort is as effective as antidepressants while having milder side effects, according to a recent Cochrane review, St John's wort for major depression.

Professor Edzard Ernst, a well-known enemy of complementary and alternative medicine, wrote a favorable review of this study in which he comments that given the questions around the safety and effectiveness of antidepressants, it is a mystery why St John's Wort is not used more widely.

When Edzard Ernst says a herb works, you should take notice. But is St John's Wort (Hypericum perforatum) really the perfect antidepressant? Curiously, it seems to depend on whether you're German or not.

The Cochrane review included 29 randomized, double-blind trials with a total of 5500 patients. The authors only included trials where all patients met DSM-IV or ICD-10 criteria for "major depression". 18 trials compared St John's Wort extract to placebo pills, and 19 compared it to conventional antidepressants. (Some trials did both.)

The analysis concluded that overall, St John's Wort was significantly more effective than placebo. The magnitude of the benefit was similar to that seen with conventional antidepressants in other trials (around 3 HAMD points). However, this was only true when studies from German-speaking countries were examined.

Out of the 11 Germanic trials, 8 found that St John's Wort was significantly better than placebo and the other 3 were all very close. None of the 8 non-Germanic trials found it to be effective and only one was close.


Edzard Ernst, by the way, is German. So were the authors of this review. I'm not.

The picture was a bit clearer when St John's Wort was directly compared to conventional antidepressants: it was almost exactly as effective. It was only significantly worse in one small study. This was true in both Germanic and non-Germanic studies, and was true whether older tricyclics or newer SSRIs were considered.

Perhaps the most convincing result was that St John's Wort was well tolerated. Patients did not drop out of the trials because of side-effects any more often than when they were taking placebo (OR=0.92), and were much less likely to drop out versus patients given antidepressants (OR=0.41). Reported side effects were also very few. (It can be dangerous when combined with certain antidepressants and other medications however.)
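For readers unused to odds ratios: an OR below 1 means the event (here, dropping out) was less common in the first group. A quick sketch with invented counts (not the review's actual data) shows how an OR like 0.41 comes about:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
                 event   no event
    group 1        a        b
    group 2        c        d
    """
    return (a / b) / (c / d)

# Invented counts, purely for illustration:
# 20 of 500 herb patients dropped out vs 45 of 500 antidepressant patients.
or_value = odds_ratio(20, 480, 45, 455)
print(round(or_value, 2))  # 0.42 -- dropout odds well under half those on antidepressants
```

An OR of 0.92 for the placebo comparison, by contrast, is close to 1: essentially no difference in dropout rates.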

So, what does this mean? If you look at it optimistically, it's wonderful news. St John's Wort, a natural plant product, is as good as any antidepressant against depression, and has far fewer side effects, maybe no side effects at all. It should be the first-line treatment for depression, especially because it's cheap (no patents).

But from another perspective this review raises more questions than answers. Why did St John's Wort perform so differently in German vs. non-German studies? The authors admit that:

Our finding that studies from German-speaking countries yielded more favourable results than trials performed elsewhere is difficult to interpret. ... However, the consistency and extent of the observed association suggest that there are important differences in trials performed in different countries.
The obvious, cynical explanation is that there are lots of German trials finding that St John's Wort didn't work, but they haven't been published because St John's Wort is very popular in German-speaking countries and people don't want to hear bad news about it. The authors downplay the possibility of such publication bias:
We cannot rule out, but doubt, that selective publication of overoptimistic results in small trials strongly influences our findings.
But we really have no way of knowing.

The more interesting explanation is that St John's Wort really does work better in German trials because German investigators tend to recruit the kind of patients who respond well to St John's Wort. The present review found that trials including patients with "more severe" depression found slightly less benefit of St John's Wort vs placebo, which is the opposite of what is usually seen in antidepressant trials, where severity correlates with response. The authors also note that it's been suggested that so-called "atypical depression" symptoms - like eating too much, sleeping a lot, and anxiety - respond especially well to St John's Wort.

So it could be that for some patients St John's Wort works well, but until studies examine this in detail, we won't know. One thing, however, is certain - the evidence in favor of Hypericum is strong enough to warrant more scientific interest than it currently gets. In most English-speaking psychopharmacology circles, it's regarded as a flaky curiosity.

The case of St John's Wort also highlights the weaknesses of our current diagnostic systems for depression. According to DSM-IV someone who feels miserable, cries a lot and comfort-eats icecream has the same disorder - "major depression" - as someone who is unable to eat or sleep with severe melancholic symptoms. The concept is so broad as to encompass a huge range of problems, and doctors in different cultures may apply the word "depression" very differently.

[BPSDB]

Ernst, E. (2009). Review: St John's wort superior to placebo and similar to antidepressants for major depression but with fewer side effects. Evidence-Based Mental Health, 12(3), 78. DOI: 10.1136/ebmh.12.3.78

Klaus Linde, Michael M Berner, Levente Kriston (2008). St John's wort for major depression Cochrane Database of Systematic Reviews (4)

In Science, Popularity Means Inaccuracy

Who's more likely to start digging prematurely: one guy with a metal-detector looking for an old nail, or a field full of people with metal-detectors searching for buried treasure?

In any area of science, there will be some things which are more popular than others - maybe a certain gene, a protein, or a part of the brain. It's only natural and proper that some things get a lot of attention if they seem to be scientifically important. But Thomas Pfeiffer and Robert Hoffmann warn in a PLoS One paper that popularity can lead to inaccuracy - Large-Scale Assessment of the Effect of Popularity on the Reliability of Research.

They note two reasons for this. Firstly, popular topics tend to attract interest and money. This means that scientists have much to gain by publishing "positive results" as this allows them to get in on the action -

In highly competitive fields there might be stronger incentives to “manufacture” positive results by, for example, modifying data or statistical tests until formal statistical significance is obtained. This leads to inflated error rates for individual findings... We refer to this mechanism as “inflated error effect”.
Secondly, in fields where there is a lot of research being done, the chance that someone will, just by chance, come up with a positive finding increases -
The second effect results from multiple independent testing of the same hypotheses by competing research groups. The more often a hypothesis is tested, the more likely a positive result is obtained and published even if the hypothesis is false. ... We refer to this mechanism as “multiple testing effect”.
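The multiple testing effect is easy to quantify. If k groups independently test the same false hypothesis at significance level alpha, the chance that at least one of them obtains a spurious "positive" is 1 - (1 - alpha)^k. A minimal sketch:

```python
# If k independent groups each test a false hypothesis at level alpha,
# the probability that at least one gets a spurious positive result
# (and hence a publishable "finding") is 1 - (1 - alpha)^k.
def prob_false_positive(alpha, k):
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20, 50):
    print(k, round(prob_false_positive(0.05, k), 2))
# With 20 independent tests, the chance of at least one false positive
# somewhere in the literature is already about 64%.
```

And since positive results are much more likely to be published than negative ones, the literature on a popular hypothesis can fill up with these spurious hits.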
But does this happen in real life? The authors say yes, based on a review of research into protein-protein interactions in yeast. (Happily, you don't need to be a yeast expert to follow the argument.)

There are two ways of trying to find out whether two proteins interact with each other inside cells. You could do a small-scale experiment specifically looking for one particular interaction: say, Protein B with Protein X. Or you can do "high-throughput" screening of lots of proteins to see which ones interact: Does Protein A interact with B, C, D, E... Does Protein B interact with A, C, D, E... etc.

There have been tens of thousands of small-scale experiments into yeast proteins, and more recently, a few high-throughput studies. The authors looked at the small-scale studies and found that the more popular a certain protein was, the less likely it was that reported interactions involving it would be confirmed by high-throughput experiments.

The second and the third of the above graphs show the effect. Increasing popularity leads to a falling % of confirmed results. The first graph shows that interactions which were replicated by lots of small-scale experiments tended to be confirmed, which is what you'd expect.

Pfeiffer and Hoffmann note that high-throughput studies have issues of their own, so using them as a yardstick to judge the truth of other results is a little problematic. However, they say that the overall trend remains valid.

This is an interesting paper which provides some welcome empirical support to the theoretical argument that popularity could lead to unreliability. Unfortunately, the problem is by no means confined to yeast. Any area of science in which researchers engage in a search for publishable "positive results" is vulnerable to the dangers of publication bias, data cherry-picking, and so forth. Even obscure topics are vulnerable but when researchers are falling over themselves to jump on the latest scientific bandwagon, the problems multiply exponentially.

A recent example may be the "depression gene", 5HTTLPR. Since a landmark paper in 2003 linked it to clinical depression, there has been an explosion of research into this genetic variant. Literally hundreds of papers appeared - it is by far the most studied gene in psychiatric genetics. But a lot of this research came from scientists with little experience or interest in genes. It's easy and cheap to collect a DNA sample and genotype it. People started routinely looking at 5HTTLPR whenever they did any research on depression - or anything related.

But wait - a recent meta-analysis reported that the gene is not in fact linked to depression at all. If that's true (it could well be), how did so many hundreds of papers appear which did find an effect? Pfeiffer and Hoffmann's paper provides a convincing explanation.

Link - Orac also blogged this paper and put a characteristic CAM angle on it.

Pfeiffer, T., & Hoffmann, R. (2009). Large-Scale Assessment of the Effect of Popularity on the Reliability of Research. PLoS ONE, 4(6). DOI: 10.1371/journal.pone.0005996

A Very Optimistic Genetics Paper


Saturday saw the Guardian on fine form with a classic piece of bad neuro-journalism which made it all the way onto the front page:

Psychologists find gene that helps you look on the bright side of life
Those unfortunate enough to lack the 'brightside gene' are more likely to suffer from mental health problems such as depression
What the research actually found was nothing to do with looking on the bright side of anything, and was nothing to do with depression either. In fact, it suggests that the gene in question doesn't cause mental health problems. So the headlines are a little misleading, then.

The study comes from Elaine Fox and colleagues from the University of Essex.* They took 111 people, presumably students, and got them to do a "dot-probe" task. Performance on this task was related to genotype at the 5HTTLPR polymorphism, a variant in the gene which encodes the serotonin transporter protein. Serotonin is "the brain's main feelgood chemical", as the Guardian put it... except it isn't, although it does have something to do with mood.

What's a "dot-probe" task? It's a test which has become popular amongst all kinds of psychologists over the past 10 years or so, having first been used in 1986 by Colin MacLeod et al. The task involves pressing a button whenever a "probe" - a little dot - appears on a screen. The goal is to press the button as quickly as possible, as soon as the dot appears.

The twist is that as well as the dots, there are other things on the screen. In the 1986 version of the test these were words, while in this experiment they were colour pictures. Some of the images were pleasant: smiling faces, flowers, and other nice things. Some were unpleasant - scary dogs, bloody injuries, etc. And some were neutral objects, like furniture.

Pairs of these pictures appeared on the screen for a short time (half a second) immediately before each dot appeared, one on the left of the screen and one on the right. The key is that the dot appeared in the same place as one of the pictures.

The task operates under the assumption that if the viewer's attention is grabbed by one of the pictures, they are likely to be faster to respond to seeing the dot when it appears in the same place as that picture, because they will already be focused on that area of the screen. If, for example, people are on average faster to detect the dot when it appears in the same place as the nice pictures as opposed to the horrible ones, this is described as indicating a "positive attentional bias" i.e. an unconscious tendency to pay attention to pleasant pictures.
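To make that logic concrete, here is a minimal sketch of how a bias score is typically derived from dot-probe data. The function and the reaction times are entirely made up for illustration - nothing here comes from the actual study:

```python
from statistics import mean

def attentional_bias(trials):
    """Compute a simple attentional bias score from dot-probe trials.

    Each trial is a (condition, reaction_time_ms) pair, where condition
    records which picture the dot replaced. A positive score means the
    person was faster when the dot replaced the pleasant picture, i.e.
    their attention was already on the pleasant stimulus.
    """
    rt_pleasant = [rt for cond, rt in trials if cond == "pleasant"]
    rt_unpleasant = [rt for cond, rt in trials if cond == "unpleasant"]
    return mean(rt_unpleasant) - mean(rt_pleasant)

# Hypothetical data: this person responds faster when the dot
# appears where the pleasant picture was.
trials = [("pleasant", 420), ("pleasant", 410),
          ("unpleasant", 470), ("unpleasant", 460)]
print(attentional_bias(trials))  # 50.0 ms bias towards pleasant pictures
```

In a real experiment the score would be averaged over hundreds of trials per person, since single reaction times are very noisy.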

Unfortunately, now that you know what a dot-probe task is, you can't take part in any psychology experiment which uses one, because once you know how it's supposed to work there's no point in doing it. Sorry. But on the bright side, you now officially know more about psychology than The Economist, whose write-up of this experiment managed to be even worse than the Guardian's. They not only sensationalized the results, but also misunderstood the whole point of the dot-probe task - it's not about "distraction", it's about selective attention-grabbing.

Anyway, that's the task, and the study found that carriers of two "long" variants of the 5HTTLPR gene showed a strong attention bias towards nice pictures and away from nasty ones, while other people showed no biases. Statistically, the result was highly significant, so let's assume it's true. What does it mean? You could take it to mean that carriers of two long variants were more optimistic in that they tend to pay attention to the good stuff. On the other hand you could equally well say they're so squeamish and wussy that they can't bear to look at the bad stuff and have to avert their eyes from it.

And what's this got to do with depression? Well, to cut a very long story short, the gene in question has previously been linked to depression and also to personality traits such as "neuroticism" - being anxious, worried and generally miserable (see this paper). But in this study they found no such association with neuroticism - despite the fact that it was a report of this association which got everyone interested in the 5HTTLPR variant in the first place, back in 1996! Brilliantly, they spin their negative finding as a good thing -
The fact that our genotype groups were matched on a range of self-report measures, including neuroticism can be seen as a major strength.
Hope springs eternal. Overall, while this paper is a fine contribution to the psychology literature on the dot-probe task (and the results genuinely do seem to be very significant - there's probably something going on here), it's got nothing to do with optimism and little to do with anything that the average newspaper reader cares about. Luckily, we have journalists to make science interesting on the cheap and on the quick - at the cost of accuracy. There's a lot of really interesting, really thought-provoking popular science writing to be done about the dot-probe, and about the 5HTTLPR gene. But none of it has yet made it into the British papers.

[BPSDB]

*Fox, my PubMed search reveals, also does work on so-called "electromagnetic sensitivity". The upshot of her work is that lots of people sincerely believe that signals from mobile phones and other sources make them feel unwell, but actually, it's all the placebo effect. Now that really is something that everyone should find fascinating - much more so than this study, anyway.

Fox, E., Ridgewell, A., & Ashwin, C. (2009). Looking on the bright side: biased attention and the human serotonin transporter gene. Proceedings of the Royal Society B.

Lessons from the Placebo Gene

Update: See also Lessons from the Video Game Brain



The Journal of Neuroscience has published a Swedish study which, according to New Scientist (and the rest) is something of a breakthrough:

First 'Placebo Gene' Discovered
I rather like the idea of a dummy gene made of sugar, or perhaps a gene for being Brian Molko, but what they're referring to is a gene, TPH2, which allegedly determines susceptibility to the placebo effect. Interesting, if true. Genetic Future was skeptical of the study because of its small sample size. It is small, but I'm not too concerned about that because there are, unfortunately, other serious problems with this study and the reporting on it. I should say at the outset, however, that most of what I'm about to criticize is depressingly common in the neuroimaging literature. The authors of this study have done some good work in the past and are, I'm sure, no worse than most researchers. With that in mind...



The study included 25 people diagnosed with Social Anxiety Disorder (SAD). Some people see the SAD diagnosis as a drug company ploy to sell pills (mainly antidepressants) to people who are just shy. I disagree. Either way, these were people who complained of severe anxiety in social situations. The 25 patients were all given placebo pill treatment for 8 weeks.



Before and after the treatment they each got an [H2 15O] PET scan, which measures regional cerebral blood flow (rCBF) in the brain, something that is generally assumed to correlate with neural activity. It's a bit like fMRI, although the physics are different. During the scans the subjects had to make a brief speech in front of 6 to 8 people. This was intended to make them anxious, as it would do. The patients' self-reported social anxiety in everyday situations was also assessed every 2 weeks by questionnaires and clinical interviews.



The patients were then split into two groups based upon their final status: "placebo responders" were those who ended up with a "CGI" rating of 1 or 2 - meaning that they reported that their anxiety had got a lot better - and "placebo nonresponders" who didn't. (You may take issue with this terminology - if so, well done, and keep reading). Brain activation during the public speaking task was compared between these two groups. The authors also looked at two genes, 5HTTLPR and TPH2. Both are involved in serotonin signalling and both have been associated (in some studies) with vulnerability to anxiety and depression.



The results: The placebo responders reported less anxiety following treatment - unsurprisingly, because this is why they were classed as responders. On the PET scans, the placebo responders showed reduced amygdala activity during the second public speaking task compared to the first one; the non-responders showed no change. This is consistent with the popular and fairly sensible idea that the amygdala is active during the experience of emotion, especially fear and anxiety. In fact, though, this effect was marginal: it was only significant under a region-of-interest analysis, i.e. when they specifically looked at the data from the amygdala. In a more conservative whole-brain analysis they found nothing (or rather they did, but they wrote it off as uninteresting, as cognitive neuroscientists generally do when they see blobs in the cerebellum and the motor cortex):

PET data: whole-brain analyses

Exploratory analyses did not reveal significantly different treatment-induced patterns of change in responders versus nonresponders. Significant within-group alterations outside the amygdala region were noted only in nonresponders, who had increased (pre < post) rCBF in the right cerebellum ... and in a cluster encompassing the right primary motor and somatosensory cortices...
As for the famous "placebo gene", they found that two genetic variants, 5HTTLPR ll and TPH2 GG, were associated with a bigger drop in amygdala activity from before treatment to after treatment. TPH2 GG was also associated with the improvement in anxiety over the 8 weeks.
In a logistic regression analysis, the TPH2 polymorphism emerged as the only significant variable that could reliably predict clinical placebo response (CGI-I) on day 56, homozygosity for the G allele being associated with better outcome. Eight of the nine placebo responders (89%), for whom TPH2 gene data were available, were GG homozygotes.
You could call this a gene correlating with the "placebo effect", although you'd probably be wrong (see below). There are a number of important lessons to take home here.
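With a single binary predictor like genotype, a logistic regression of this kind boils down to the odds ratio from a 2x2 table. Here's a rough sketch using counts reconstructed from the post (8 of 9 responders and 8 of 15 non-responders were GG homozygotes); the authors' actual model may well have included other covariates, so treat this as an illustration of the arithmetic, not a reproduction of their analysis:

```python
import math

# Counts as reported in the post (hypothetically reconstructed):
# responders: 8 GG, 1 non-GG; non-responders: 8 GG, 7 non-GG.
gg_resp, non_gg_resp = 8, 1
gg_nonresp, non_gg_nonresp = 8, 7

# With one binary predictor, the logistic regression coefficient is
# just the log of the 2x2 odds ratio for GG vs non-GG.
odds_ratio = (gg_resp * non_gg_nonresp) / (non_gg_resp * gg_nonresp)
beta = math.log(odds_ratio)
print(odds_ratio, round(beta, 2))  # 7.0 1.95
```

An odds ratio of 7 sounds big, but with only 24 genotyped patients the confidence interval around it would be very wide - which is why a p-value of 0.04 on a post-hoc test should not impress anyone too much.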



1. Dr Placebo, I presume? - Be careful what you call the placebo effect



This study couldn't have discovered a "placebo gene", even if there is one. It didn't measure the placebo effect at all.



You'll recall that the patients in this study were assessed before and after 8 weeks of placebo treatment (sugar pills). Any changes occurring during these 8 weeks might be due to a true "placebo effect" - improvement caused by the patient's belief in the power of the treatment. This is the possibility that gets some people rather excited: it's mind over matter, man! This is why the word "placebo" is often preceded by words like "Amazing", "Mysterious", or even "Magical" - as if Placebo were the stage-name of a 19th century conjuror. (As opposed to the stage name of androgynous pop-goth Brian Molko ... I've already done that one.)



But, as they often do, more prosaic explanations suggest themselves. Most boringly, the patients might have just got better. Time is the great healer, etc., and two months is quite a long time. Maybe one of the patients hooked up with a cute guy and it did wonders for their self-confidence. Maybe the reason why the patients volunteered for the study when they did was because their anxiety was especially bad, and by the time of the second scan it had returned to normal (regression towards the mean). Maybe the study itself made a difference, by getting the patients talking about their anxiety with sympathetic professionals. Maybe the patients didn't actually feel any better at all, but just said they did because that's what they thought they were expected to say. I could go on all day.
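Regression towards the mean is easy to demonstrate with a made-up simulation. Give each "patient" a stable trait anxiety plus day-to-day noise, recruit the ones who score worst on the day they enter the trial, and re-measure them later - they "improve" without any treatment at all. Everything below is hypothetical:

```python
import random

random.seed(0)

def score(trait):
    """One noisy measurement of a stable underlying trait."""
    return trait + random.gauss(0, 10)

# 10,000 hypothetical people with stable trait anxiety around 50.
traits = [random.gauss(50, 5) for _ in range(10000)]

# Recruit the 500 who look most anxious at baseline - partly because
# their trait is high, partly because their noise happened to be high.
baseline = [(score(t), t) for t in traits]
worst = sorted(baseline, reverse=True)[:500]

# Re-measure the same people later: the noise doesn't repeat itself.
followup = [score(t) for _, t in worst]

mean_baseline = sum(s for s, _ in worst) / len(worst)
mean_followup = sum(followup) / len(followup)
print(mean_baseline > mean_followup)  # True: apparent improvement, no treatment
```

The selected group's follow-up scores drift back towards their true trait levels, which is exactly what an uncontrolled before-and-after design cannot distinguish from a treatment effect.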



Most likely, in my opinion, the patients were just less anxious having their second PET scan, once they had survived the first one. PET scans are no fun: you get a catheter inserted into your arm, through which you're injected with a radioactive tracer compound. Meanwhile, your head is fixed in place within a big white box covered in hazard signs. It's not hard to see that you'd probably be much more anxious on your first scan than the second time around.



So, calling the change from baseline to 8 weeks a "placebo response", and calling the people who got better "placebo responders", is misleading (at least it misled every commentator on this study so far). The only way to measure the true placebo effect is to compare placebo-treated people with people who get no treatment at all. This wasn't done in this study. It rarely is. This is something which confuses an awful lot of people. When people talk about the placebo effect, they're very often referring to the change in the placebo group, which as we've seen is not the same thing at all, and has nothing even vaguely magical or mysterious about it. (For example, some armchair psychiatrists like to say that since patients in the placebo group in antidepressant drug trials often show large improvements, sugar pills must be helpful in depression.) That said, there was another study in the same issue of the same journal which did measure an actual placebo effect.
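The distinction is just arithmetic, and some made-up numbers make it obvious. Both groups improve over 8 weeks for non-specific reasons (time, regression to the mean, getting used to the scanner); only the difference between them is the placebo effect:

```python
# Hypothetical anxiety scores (higher = more anxious), before and
# after 8 weeks. None of these numbers come from the study.
placebo_before, placebo_after = 60.0, 45.0
untreated_before, untreated_after = 60.0, 50.0

# What this study measured: improvement within the placebo group.
change_in_placebo_group = placebo_before - placebo_after  # 15.0

# The true placebo effect: improvement over and above what happens
# with no treatment at all.
true_placebo_effect = ((placebo_before - placebo_after)
                       - (untreated_before - untreated_after))  # 5.0
print(change_in_placebo_group, true_placebo_effect)
```

In this toy example two-thirds of the "placebo response" is nothing of the sort - and without an untreated group, there is no way to know which two-thirds.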



2. Beware Post Hoc-us Pocus



From the way it's been reported, you would probably assume that this was a study designed to investigate the placebo effect. However, in the paper we read:

Patients were taken from two previously unpublished RCTs that evaluated changes in regional cerebral blood flow after 56 d of pharmacological treatment by means of positron emission tomography. ... The clinical PET trials ... included a total of 108 patients with SAD. There were three treatment arms in the first study and six arms in the second. ... Only the pooled placebo data are included herein, whereas additional data on psychoactive drug treatment will be reported separately.
Personally, I find this odd. Why have so many groups if you're interested in just one of them? Even if the data from the drug groups are published, it's unusual to report on some aspect of the placebo data in a separate paper before writing up the main results of an RCT. To me it seems likely that when this study was designed, no-one intended to search for genes associated with the placebo effect. I suspect that the analysis the authors report here was post-hoc; having looked at the data, they looked around for any interesting effects in it.



To be clear, there's no proof that this is what happened here, but anyone who has worked in science will know that it does happen, and to my jaded eyes it seems probable that this is a case of it. For one thing, if this was a study intended to investigate the placebo effect, it was poorly designed (see above).



There's nothing wrong with post-hoc findings. If scientists only ever found what they set out to look for, science wouldn't have got very far. However, unless they are clearly reported as post-hoc the problem of the Texas Sharpshooter arises - findings may appear to be more significant than they otherwise would. In this case, the TPH2 gene was only a significant predictor of "placebo response" with p=0.04, which is marginal at the best of times.
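The Texas Sharpshooter problem can be simulated directly: test a handful of candidate "genes" against pure noise, report only the best p-value as if it were the single test you'd planned, and a nominal p < 0.05 threshold fires far more often than 5% of the time. This sketch is hypothetical and has nothing to do with the actual study's analysis:

```python
import random

random.seed(1)

def fake_p_value():
    """Under the null hypothesis, p-values are uniform on [0, 1]."""
    return random.random()

# Simulate 1,000 studies, each quietly testing 20 candidate effects
# and reporting only the smallest p-value.
n_studies, n_tests, hits = 1000, 20, 0
for _ in range(n_studies):
    best_p = min(fake_p_value() for _ in range(n_tests))
    if best_p < 0.05:
        hits += 1

# Expected false-positive rate is 1 - 0.95**20, about 0.64, not 0.05.
print(hits / n_studies)
```

This is why a post-hoc p = 0.04 is so much weaker than the same number from a single pre-planned test: the implicit number of shots fired at the barn wall is unknown.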



The reason researchers feel the need to do this kind of thing is because of the premium the scientific community (and hence scientific publishing) places on getting "positive results". Plus, no-one wants to PET scan over 100 people (they're incredibly expensive) and report that nothing interesting happened. However, this doesn't make it right (rant continues...)



3. Science Journalism Is Dysfunctional



Sorry to go on about this, but really it is. New Scientist's write-up of this study was, relatively speaking, quite good - they did at least include some caveats ("The gene might not play a role in our response to treatment for all conditions, and the experiment involved only a small number of people."). They did have a couple of factual errors, though, such as saying that "8 of the 10 responders had two copies [of the TPH2 G allele], while none of the non-responders did" - actually 8 of the 15 non-responders did - but anyway.



The main point is that they didn't pick up on the fact that this experiment didn't measure the placebo effect at all, which makes their whole article misleading. (The newspapers generally did an even worse job.) I was able to write this post because I had nothing else on this weekend and reading papers like this is a major part of my day job. Ego aside, I'm pretty good at this kind of thing. That's why I write about it, and not about other stuff. And that's why I no longer read science journalism (well, except to blog about how rubbish it is.)



It would be wrong to blame the journalist who wrote the article for this. I'm sure they did the best they could in the time available. I'm sure that I couldn't have done any better. The problem is that they didn't have enough time, and probably didn't have enough specialist knowledge, to read the study critically. It's not their fault, it's not even New Scientist's fault, it's the fault of the whole idea of science journalism, which involves getting non-experts to write, very fast, about complicated issues and make them comprehensible and interesting to the laymen even if they're manifestly not. I used to want to be a science journalist, until I realised that that was the job description.



Furmark, T., Appel, L., Henningsson, S., Ahs, F., Faria, V., Linnman, C., Pissiota, A., Frans, O., Bani, M., Bettica, P., Pich, E. M., Jacobsson, E., Wahlstedt, K., Oreland, L., Langstrom, B., Eriksson, E., & Fredrikson, M. (2008). A Link between Serotonin-Related Gene Polymorphisms, Amygdala Activity, and Placebo-Induced Relief from Social Anxiety. Journal of Neuroscience, 28(49), 13066-13074. DOI: 10.1523/JNEUROSCI.2534-08.2008

 