More Antidepressant Debates

Six months ago, I asked What's The Best Antidepressant?, and I discussed a paper by Andrea Cipriani et al. The paper claimed that of the modern antidepressants, escitalopram (Lexapro) and sertraline (Zoloft) offer the best combination of effectiveness and mild side effects, and that sertraline has the advantage of being much cheaper.

The Cipriani paper was a meta-analysis of trials comparing one drug against another. With a total of over 25,000 patients, it boasted an impressively large dataset, but I advised caution. Their method of crunching the numbers (indirect comparisons) was complex, and rested on a lot of assumptions.

I wasn't the only skeptic. The Cipriani paper has attracted plenty of comments in the medical literature, and they make for some fascinating reading. Indeed, they amount to a crash course in the controversies surrounding antidepressants today - a whole debate in microcosm. So here's the microcosm, in a nutshell:


In The Lancet, the original paper was accompanied by glowing praise from one Sagar Parikh:
Free of any potential funding bias... Now, the clinician can identify the four best treatments... A new gold standard of reliable information has been compiled for patients to review.
But critical comments swiftly appeared in the Lancet's letters pages. While not accusing Cipriani and colleagues themselves of bias or conflicts-of-interest, Tom Jefferson noted that way back in 2003, David Healy drew attention to:
documents that a communications agency acting on behalf of the makers of sertraline were forced to make available by a US court. Among them was a register of completed sertraline studies awaiting to be assigned to authors. This practice (rent-a-key-opinion-leader) is of unknown prevalence but it undermines any attempt at reviewing the evidence in a meaningful way.
This is what's known as medical ghostwriting, and it is indeed a scandal. However, by itself, ghostwriting doesn't distort evidence as such. It's what's published - or not published - that counts. Almost all antidepressant trials are run and funded by drug companies. All too often, they just don't publish data showing their products in an unfavourable light. The fearsome John Ioannidis - known for writing papers with titles like Why most published research findings are false - pulled no punches in reminding readers of this, in his letter:
Among placebo controlled antidepressant trials registered with the US FDA, most negative results are unpublished or published as positive. Take sertraline, which Cipriani and colleagues recommend as the best ... of five FDA-registered trials, the only positive trial was published, one negative trial was published as positive, and three negative trials were unpublished. Head-to-head comparisons can suffer worse bias, since regulatory registration is uncommon. Meta-analysis of published plus industry-furnished data could spuriously suggest that the best drugs are those with the most shamelessly biased data ...
Ioannidis also noted that Cipriani et al did not include placebo-controlled trials in their analysis. He helpfully provided a table showing that if you do include these trials, the ranking of antidepressants is very different.

Of course, Ioannidis was not saying that the drug-vs-placebo data is better than the drug-vs-drug trials. After all, he had just declared it to be biased. But neither is it necessarily worse, and there's no good reason not to consider it.

Cipriani et al's response to their critics was a little light on detail. In response to concerns of industrial publication bias, they said that:
we contacted the original authors and pharmaceutical companies to obtain further data or to confirm reported figures.
But of course the pharmaceutical companies were under no obligation to play ball; they could simply have chosen not to reveal embarrassing data. Rather more reassuring is the fact that the original paper did look for correlations between the drug company sponsoring each trial and that trial's results, and found none. Rather cheekily, Cipriani et al then went on to suggest that they were the ones who were sticking it to Big Pharma:
The standard thinking has become that most antidepressants are of similar average efficacy and tolerability ... In some ways, this is a comfortable position for industry and its hired academic opinion leaders—it sets a low threshold for the introduction of new agents which can initially be marketed on the basis of small differences in specific adverse effects rather than on clear advantages in terms of overall average efficacy and acceptability.
They certainly have a point here. If aspiring antidepressants had to be proven better than existing ones in order to be sold, instead of just as good, there would probably have been no new antidepressants since Prozac in 1990. (And Prozac is only "better" than the drugs available in 1960 in that it's safer and has fewer side effects; it's no more effective.)

But this is not really relevant to whether the Cipriani analysis is valid. And in The Lancet letters, the authors did not address some of the criticisms at all, such as Ioannidis's point about including placebo-controlled trials. They did point out that their raw data is available online for anyone to play around with.

The debate continued in the pages of Evidence Based Mental Health. In 2008, Gerald Gartlehner and Bradley Gaynes conducted a rather similar meta-analysis, but they reached very different conclusions. They declared that all post-1990 antidepressants are equally effective (or ineffective).

In their comments on the Cipriani paper, Gartlehner and Gaynes say that they were just more cautious in interpreting the results of a complex and problematic statistical process:
Ranking sertraline and escitalopram higher than other drugs conveys a precision and existence of clinically important differences that is not reflected in the body of evidence. ...for sertraline and escitalopram the range of probabilities actually extends from the first to the eighth rank for both efficacy and acceptability... the validity of results of indirect comparisons depends on various assumptions, some of which are unverifiable ... We simply took underlying uncertainties into greater consideration and interpreted findings more cautiously than Cipriani and colleagues.
They also accuse Cipriani et al of various technical shortcomings - and in a meta-analysis, such 'technicalities' can often greatly skew the results:
they included studies with very different populations such as frail elderly, patients with accompanying anxiety and inpatients as well as outpatients ... the effect measure of choice was odds ratios rather than relative risks. Odds ratios have mathematical advantages that statisticians value. Practitioners, however, frequently overestimate their clinical importance...
Cipriani et al respond to some of these technical criticisms, while admitting that their analysis has limitations. But, they say, even an imperfect ranking of antidepressants is better than none at all:
We have a choice. We may either make the best use of the available randomised evidence or we essentially ignore it. We believe that it is better to have a set of criteria based on the available evidence than to have no criteria at all... We believe that, despite the likely biases of the included trials, and the limitations of our approach, our analysis makes the best use of the randomised evidence, providing clinicians with evidence based criteria that can be used to guide treatment choices.
What are we to make of all this? Here's my two cents. It's implausible that all antidepressants are truly equally effective, because they affect the brain in different ways. The pharmacological differences between SSRIs such as Prozac, Zoloft and Lexapro are minimal at best, but mirtazapine and reboxetine, say, target entirely different systems. They work differently, so it would be odd if they all worked equally well.

The search phrase that most often leads people to this blog is "best antidepressant". People really want to know which antidepressant is most likely to help them. In truth, everyone responds differently to every drug, so there is no one best treatment. But Cipriani et al are quite right that even a roughly correct ranking could help improve the treatment of people with depression, even if the differences are tiny. If Drug X helps 1% more people than Drug Y on average, that's a lot of people when 30 million Americans take antidepressants every year.
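To put that in perspective, here's a back-of-the-envelope calculation. The 30 million figure comes from the paragraph above; the 1% difference between drugs is a purely hypothetical example:

```python
# Back-of-the-envelope: if Drug X helps 1% more people than Drug Y,
# how many extra people is that per year? The 30 million figure is
# from the text; the 1% difference is hypothetical.
users_per_year = 30_000_000   # Americans taking antidepressants yearly
extra_response_rate = 0.01    # Drug X helps 1% more people than Drug Y

extra_people_helped = round(users_per_year * extra_response_rate)
print(extra_people_helped)  # 300000
```

Three hundred thousand extra people helped each year is nothing to sniff at, even if the per-person difference would be barely detectable in any single trial.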

So, what is the best antidepressant, on average? I don't know. But maybe it's escitalopram or sertraline. Stranger things have happened.

Ioannidis JP (2009). Ranking antidepressants. Lancet, 373 (9677) PMID: 19465221

Gartlehner, G., & Gaynes, B. (2009). Are all antidepressants equal? Evidence-Based Mental Health, 12 (4), 98-100 DOI: 10.1136/ebmh.12.4.98

Deep Brain Stimulation for Depressed Rats

Deep-brain stimulation (DBS) is probably the most exciting emerging treatment in psychiatry. DBS is the use of high-frequency electrical current to alter the function of specific areas of the brain. Originally developed for Parkinson's disease, over the past five years DBS has been used experimentally in severe clinical depression, OCD, Tourette's syndrome, alcoholism, and more.

Reports of the effects have frequently been remarkable, but there have been few scientifically rigorous studies, and the number of psychiatric patients treated to date is just dozens. So the true usefulness of the technique is unclear. How DBS works is also a mystery. Even the most basic questions - such as whether high-frequency stimulation switches the brain "on" or "off" - are still being debated.

Recent data from rodents sheds some important light on the issue: Antidepressant-Like Effects of Medial Prefrontal Cortex Deep Brain Stimulation in Rats. The authors took rats and implanted DBS electrodes in the infralimbic cortex. This area is part of the ventromedial prefrontal cortex (vmPFC), and it's believed to be the rat equivalent of the human region BA25, the subgenual cingulate cortex, which is the most common target for DBS in depression. The current settings (100 microA, 130 Hz, 90 microsec) were chosen to be similar to the ones used in humans.

In a standard rat model of depression, the forced-swim test, infralimbic DBS exerted antidepressant-like effects. DBS was as effective as imipramine, a potent antidepressant, at reducing "depression-like" behaviour, namely immobility.

This is not all that surprising. Almost everything which treats depression in humans also reduces immobility in this test (along with a few things which don't treat it). Much more interesting is what did and did not block the effects of DBS in these rats.

First off, DBS worked even when the rat's infralimbic cortex had been destroyed by the toxin ibotenic acid. This strongly suggests that DBS does not work simply by activating the infralimbic cortex, even though this is where the electrodes were implanted.

Crucially, infralimbic lesions did not have an antidepressant effect per se, which also rules out the theory that DBS works by inactivating this region. (Infralimbic lesions produced by other methods did have a mild antidepressant effect, but it was smaller than the effect of DBS. This may still be important, however.)

What did block the effects of DBS was the depletion of serotonin (5HT). Serotonin is known to its friends as the brain's "happy chemical", although it's a bit more complicated than that. Most antidepressants target serotonin. And rats whose serotonin systems had been lesioned got no benefit from DBS in this study.

So this suggests that DBS might work by affecting serotonin, and indeed, DBS turned out to greatly increase serotonin release, even in a distant part of the brain (the hippocampus). Interestingly this lasted for nearly two hours after the electrodes were switched off.

Depletion of another neurotransmitter, noradrenaline, did not alter the effects of DBS.

Overall, it seems that infralimbic DBS works by increasing serotonin release, but that this is not because it activates or inactivates the infralimbic cortex itself. Rather, nearby structures must be involved. The most likely explanation is that DBS affects nearby white-matter tracts carrying signals between other areas of the brain; the infralimbic cortex might just happen to be "by the roadside". Many researchers believe that this is how DBS works in humans, but this is the first hard evidence for this.

Of course, evidence from rats is never all that hard when it comes to human mental illness. We need to know whether the same thing is true in people. As luck would have it, you can temporarily reduce human serotonin levels with a technique called acute tryptophan depletion. This reverses the effects of antidepressants in many people. If this rat data is right, it should also temporarily reverse the benefits of DBS. Someone should do this experiment as soon as possible - I'd like to do it myself, but I'm British, and all the DBS research happens in America. Bah, humbug, old bean.

There are a couple of other things to note here. In other behavioural tests, infralimbic DBS also had antidepressant-like effects: it seemed to reduce anxiety, and it made rats more resistant to the stress of electrical shocks (although only slightly). Finally, DBS in another region, the striatum, had no antidepressant effect at all. That's a bit odd, because DBS of the striatum does seem to treat depression in humans - but the part of the striatum targeted here, the caudate-putamen, is quite separate from the one targeted in human depression, the nucleus accumbens.

Hamani, C., Diwan, M., Macedo, C., Brandão, M., Shumake, J., Gonzalez-Lima, F., Raymond, R., Lozano, A., Fletcher, P., & Nobrega, J. (2009). Antidepressant-Like Effects of Medial Prefrontal Cortex Deep Brain Stimulation in Rats. Biological Psychiatry DOI: 10.1016/j.biopsych.2009.08.025

Antidepressant Sales Rise as Depression Falls

Antidepressant sales are rising in most Western countries, and they have been for at least a decade. Recently, we learned that the proportion of Americans taking antidepressants in any given year nearly doubled from 1996 to 2005.

The situation has been thought to be similar in the UK. But a hot-off-the-press paper in the British Medical Journal reveals some surprising facts about the issue: Explaining the rise in antidepressant prescribing.

The authors examined medical records from 1.7 million British patients in primary care (general practice, i.e. family doctors). They found that antidepressant sales rose strongly between 1993 and 2005, not because more people were taking these drugs, but entirely because of an increase in the duration of treatment amongst antidepressant users. It's not that more people are taking them; it's that people are taking them for longer.

In fact, the number of people being diagnosed with depression and prescribed antidepressants has actually fallen over time. The rate of diagnosed depression remained steady from 1993 to about 2001, and then fell markedly, by about a third, up to 2005. This trend was seen in both men and women, but there were age differences. In 18-30 year olds, there was a gradual increase in diagnoses before the decrease. (Note that these graphs show the number of people getting their first ever diagnosis of depression in each year.)
The likelihood of being given antidepressants for a diagnosis of depression stayed roughly constant, at about 75-80%, across the years. However, the average duration of treatment increased over time.

The change doesn't look like much, but remember that even a small change in the number of long-term users translates into a large effect on the total number of sales, because each long-term user takes a lot of pills. The authors conclude:

Antidepressant prescribing nearly doubled during the study period—the average number of prescriptions issued per patient increased from 2.8 in 1993 to 5.6 in 2004. ... the rise in antidepressant prescribing is mainly explained by small changes in the proportion of patients receiving long term treatment.
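To see how this arithmetic works, here's a sketch with invented numbers (not the paper's actual figures). The average number of prescriptions per patient is a weighted mix of short-term and long-term users, so a modest shift in the long-term share moves the average a lot:

```python
# Sketch with invented numbers (not the paper's figures): how a shift
# in the share of long-term users can nearly double the average
# number of prescriptions issued per patient.

def scripts_per_patient(long_term_share, short_scripts=2, long_scripts=12):
    """Average yearly prescriptions per patient for a given user mix."""
    return (1 - long_term_share) * short_scripts + long_term_share * long_scripts

print(round(scripts_per_patient(0.10), 2))  # 3.0 scripts/patient/year
print(round(scripts_per_patient(0.35), 2))  # 5.5 - nearly double
```

With 10% of patients on long-term treatment, the average is 3 prescriptions per patient per year; raise the long-term share to 35% and it jumps to 5.5, close to the 2.8 to 5.6 rise reported, without a single extra patient being treated.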
Wow. I didn't see that coming, I'll admit. A lot of people, myself included, had assumed that rising antidepressant use was caused by people becoming more willing to seek treatment for depression. Or maybe that doctors were becoming more eager to prescribe drugs. Others believed that rates of clinical depression were rising.

There's no evidence for any of these theories in this British data-set. The recent fall in clinical depression diagnoses, following an increase in young people over the course of the 1990s, is especially surprising. It conflicts with the only British population survey of mental health, the APMS, which found that rates of depression and mixed anxiety/depression increased between 1993 and 2000 in most age groups (least of all in the young), and changed little from 2000 to 2007. I trust this new data more, because population surveys almost certainly overestimate mental illness.

How does this result compare to elsewhere? In the USA, the average number of antidepressant prescriptions per patient per year rose from "5.60 in 1996 to 6.93 in 2005", according to a recent estimate. In the British study, yearly "prescriptions issued per patient increased from 2.8 in 1993 to 5.6 in 2004." So there's a major trans-Atlantic difference: in Britain, the length of use increased greatly, while in the US it rose only slightly, but from a higher baseline.

Finally, why has this happened? We can only speculate. Maybe doctors have become more keen on long-term treatment to prevent depressive relapse. Or maybe users have become more willing to take antidepressants long-term. Modern drugs generally have milder side effects than older ones, so this makes sense, although some people would say that this is just further proof that modern antidepressants are "addictive"...

Moore M, Yuen HM, Dunn N, Mullee MA, Maskell J, & Kendrick T (2009). Explaining the rise in antidepressant prescribing: a descriptive study using the general practice research database. BMJ (Clinical research ed.), 339 PMID: 19833707

Deconstructing the Placebo

Last month, Wired announced that Placebos Are Getting More Effective. Drugmakers Are Desperate to Know Why.

The article's a good read, and the basic story is true, at least in the case of psychiatric drugs. In clinical trials, people taking placebos do seem to get better more often now than in the past (paper). This is a big problem for Big Pharma, because it means that experimental new drugs often fail to perform better than placebo, i.e. they don't work. Wired have just noticed this, but it has been discussed in the academic literature for several years.

Why is this? No-one knows. There have been many suggestions - maybe people "believe in" the benefits of drugs more nowadays, so the placebo effect is greater; maybe clinical trials are recruiting people with milder illnesses that respond better to placebo, or just get better on their own. But we really don't have any clear idea.

What if the confusion is because of the very concept of the "placebo"? Earlier this year, the BMJ ran a short opinion piece called It’s time to put the placebo out of our misery. Robin Nunn wants us to "stop thinking in terms of placebo...The placebo construct conceals more than it clarifies."

His central argument is an analogy. If we knew nothing about humour and observed a comedian telling jokes to an audience, we might decide there was a mysterious "audience effect" at work, and busy ourselves studying it...
Imagine that you are a visitor from another world. You observe a human audience for the first time. You notice a man making vocal sounds. He is watched by an audience. Suddenly they burst into smiles and laughter. Then they’re quiet. This cycle of quietness then laughter then quietness happens several times.

What is this strange audience effect? Not all of the man’s sounds generate an audience effect, and not every audience member reacts. You deem some members of the audience to be “audience responders,” those who are particularly influenced by the audience effect. What makes them react? A theory of the audience effect could be spun into an entire literature analogous to the literature on the placebo effect.
But what we should be doing is examining the details of jokes and of laughter -
We could learn more about what makes audiences laugh by returning to fundamentals. What is laughter? Why is “fart” funnier than “flatulence”? Why are some people just not funny no matter how many jokes they try?
And this is what we should be doing with the "placebo effect" as well -
Suppose there is no such unicorn as a placebo. Then what? Just replace the thought of placebo with something more fundamental. For those who use placebo as treatment, ask what is going on. Are you using the trappings of expertise, the white coat and diploma? Are you making your patients believe because they believe in you?
Nunn's piece is a polemic, and he seems to conclude by calling for a "post-placebo era" in which there will be no more placebo-controlled trials (although it's not clear what he means by this). This is going too far. But his analogy with humour is an important one, because it forces us to analyse the placebo in detail.

"The placebo effect" has become a vague catch-all term for anything that seems to happen to people when you give them a sugar pill. Of course, lots of things could happen. They could feel better just because of the passage of time. Or they could realize that they're supposed to feel better and say they feel better, even if they don't.

The "true" placebo effect refers to improvement (or worsening) of symptoms driven purely by the psychological expectation of such. But even this is something of a catch-all term. Many things could drive this improvement. Suppose you give someone a placebo pill that you claim will make them more intelligent, and they believe it.

Believing themselves to be smarter, they start doing smart things like crosswords, math puzzles, reading hard books (or even reading Neuroskeptic), etc. But the placebo itself was just a nudge in the right direction. Anything which provided that nudge would also have worked - and the nudge itself can't take all the credit.

The strongest meaning of the "placebo effect" is a direct effect of belief upon symptoms. You give someone a sugar pill or injection, and they immediately feel less pain, or whatever. But even this effect encompasses two kinds of things. It's one thing if the original symptoms have a "real" medical cause, like a broken leg. But it's another thing if the original symptoms are themselves partially or wholly driven by psychological factors, i.e. if they are "psychosomatic".

If a placebo treats a "psychosomatic" disease, then that's not because the placebo has some mysterious, mind-over-matter "placebo effect". All the mystery, rather, lies with the psychosomatic disease. But this is a crucial distinction.

People seem more willing to accept the mind-over-matter powers of "the placebo" than they are to accept the existence of psychosomatic illness. As if only doctors with sugar pills possess the power of suggestion. If a simple pill can convince someone that they are cured, surely the modern world in all its complexity could convince people that they're ill.


Nunn, R. (2009). It's time to put the placebo out of our misery. BMJ, 338 (apr20 2) DOI: 10.1136/bmj.b1568

Placebos Have Side Effects Too

The placebo is the most talked-about treatment in medicine.

Everyone's heard of the "placebo effect", by which pills containing no drugs at all, just chalk and sugar, often seem to make people feel better. But if the mere expectation of improvement can produce improvement, then the expectation of unpleasant consequences, such as side effects, should make people feel worse. This is sometimes called the "nocebo" effect.

Two recently published papers tried to measure it. They looked at people who took part in randomized controlled trials of various drugs, and who were given placebos. Because different drugs have different known side effects, if the nocebo effect is real, the side effects reported by the placebo group should depend on the drug they think they might be taking. As the authors of one of the papers put it:

In a typical clinical trial, the subjects know they can receive either the active medication or the placebo and, accordingly, they are informed about the possible adverse events they may experience during the trial. ... Therefore, informing subjects about the possible adverse events they may experience, may have a significant impact on their expectations and experiences of negative effects.
Accordingly, Rief et al compared the side effects reported in the placebo groups of a large number of antidepressant drug trials. At the same time a separate group of researchers, Amanzio et al, did the same thing for trials of migraine drugs, which is a nice coincidence.

Both papers found that reported side effects do indeed depend on the drug being studied. In the antidepressant paper, people who believed they might be on tricyclic antidepressants (TCAs) reported many more "side effects" than those in trials of SSRIs. These included dry mouth, drowsiness, constipation, and sexual problems. This makes sense, because TCAs do have worse side effects than SSRIs.

Likewise, for the migraine trials, the placebo groups in trials of anticonvulsants reported more symptoms associated with those drugs, such as dizziness and sleepiness. Placebo groups in trials of NSAIDs (like aspirin) were more likely to report upset stomachs and so forth. Finally, in trials of triptans, which have very mild side effects, the placebo group reported few problems.

It's also interesting to compare the two papers. None of the migraine trial placebo patients reported experiencing sexual problems, while many of the antidepressant placebo patients did. Some antidepressants can cause sexual problems, while migraine drugs generally don't.

So, was the "nocebo effect" really making people feel worse? It could well have been, although there are other interpretations. People might just be more willing to report symptoms that they believe are drug side effects. Researchers might be more likely to write them down. And different kinds of people end up in trials of different drugs: some people might be more likely to report certain symptoms. Just as with placebos, we shouldn't rush to ascribe incredible mind-over-matter powers to the "force of suggestion" when there are more prosaic explanations.

Nevertheless, there's an important lesson here. Anecdotal evidence about a drug's side effects shouldn't be accepted at face value, any more than anecdotes about its benefits. Drugs do, of course, cause adverse effects. But some drugs have worse reputations than they deserve in this regard. In such cases, nocebo effects might account for some of the reported problems...

Rief W, Nestoriuc Y, von Lilienfeld-Toal A, Dogan I, Schreiber F, Hofmann SG, Barsky AJ, & Avorn J (2009). Differences in Adverse Effect Reporting in Placebo Groups in SSRI and Tricyclic Antidepressant Trials: A Systematic Review and Meta-Analysis. Drug Safety, 32 (11), 1041-56 PMID: 19810776

Amanzio M, Corazzini LL, Vase L, & Benedetti F (2009). A systematic review of adverse events in placebo groups of anti-migraine clinical trials. Pain PMID: 19781854


"Statistically, airplane travel is safer than driving..." "Statistically, you're more likely to be struck by lightning than to..." "Statistically, the benefits outweigh the risks..."

What does statistically mean in sentences like this? Strictly speaking, nothing at all. If airplane travel is safer than driving, then that's just a fact. (It is true on an hour-by-hour basis). There's no statistically about it. A fact can't be somehow statistically true, but not really true. Indeed, if anything, it's the opposite: if there are statistics proving something, it's more likely to be true than if there aren't any.

But we often treat the word statistically as a qualifier, something that makes a statement less than really true. This is because, psychologically, statistical truth is often different to, and less real than, other kinds of truth. As everyone knows, Joseph Stalin said that one death is a tragedy, but a million deaths is a statistic. Actually, Stalin didn't say that, but it's true. And if someone has a fear of flying, then all the statistics in the world probably won't change that. Emotions are innumerate.


Another reason why statistics feel less than real is that, by their very nature, they sometimes seem to conflict with everyday life. Statistics show that regular smoking, for example, greatly raises your risk of suffering from lung cancer, emphysema, heart disease and other serious illnesses. But smoking doesn't guarantee that you will get any of them; the risk is not 100%, so there will always be people who smoke a pack a day for fifty years and suffer no ill effects.

In fact, this is exactly what the statistics predict, but you still hear people referring to their grandfather who smoked like a chimney and lived to 95, as if this somehow cast doubt on the statistics. Statistically, global temperatures are rising, which predicts that some places will be unusually cold (although more will be unusually warm), but people still think that the fact that it's a bit chilly this year casts doubt on the fact of global warming.


Some people admit that they "don't believe in statistics". And even if we don't go that far, we're often a little skeptical. There are lies, damn lies, and statistics, we say. Someone wrote a book called How To Lie With Statistics. Few of us have read it, but we've all heard of it.

Sometimes, this is no more than an excuse to ignore evidence we don't like. It's not about all statistics, just the inconvenient ones. But there's also, I think, a genuine distrust of statistics per se. Partially, this reflects distrust towards the government and "officialdom", because most statistics nowadays come from official sources. But it's also because psychologically, statistical truth is just less real than other kinds of truth, as mentioned above.


I hope it's clear that I do believe in statistics, and so should you, all of them, all the time, unless there is a good reason to doubt a particular one. I've previously written about my doubts concerning mental health statistics, because there are specific reasons to think that these are flawed.

But in general, statistics are the best way we have of knowing important stuff. It is indeed possible to lie with statistics, but it's much easier to lie without them: there are more people in France than in China; most people live to be at least 110 years old; Africa is richer than Europe. None of those are true - and statistics are how we know that.


A Vaccine For White Line Fever?

A study claims that it's possible to immunize against cocaine: Cocaine Vaccine for the Treatment of Cocaine Dependence in Methadone-Maintained Patients. But does it work? And will it be useful?
The idea of an anti-drug vaccine is not new; as DrugMonkey explains in his post on this paper, monkeys were being given experimental anti-morphine vaccines as long ago as the 1970s. This particular vaccine has been under development for years, but this is the first randomized controlled trial to investigate whether it helps addicts to use less of the drug.

Martell et al, a Yale-based group, recruited 115 patients. They all used both cocaine and opiates, and were given methadone treatment to try to reduce their opiate use. The reason why the authors chose to focus on these patients is that the methadone keeps people coming back for more and makes them less likely to drop out of the study, or as they put it, "retention in methadone maintenance programs is substantially better than in primary cocaine treatment programs. We also offered subjects $15 per week to enhance retention."

The vaccine consists of a bacterial protein (cholera toxin B-subunit) chemically linked to a cocaine-like molecule, succinylnorcocaine. Like all vaccines, it works by provoking an immune response. The bacterial protein triggers the production of antibodies, proteins which recognize and bind to specific targets.

In this case, the antibodies bind cocaine (anti-cocaine IgG) because of the succinylnorcocaine in the vaccine. Once a molecule of cocaine is bound to the antibody, it's effectively out of commission, as it cannot enter the brain. So, the vaccine should reduce or abolish the effects of the drug. The control group were given a dummy placebo vaccine.

The results? Biologically speaking, the vaccine worked, but in some people more than others. Out of the 55 subjects who were given the active vaccine, all but one produced anti-cocaine IgG. However, the amount of antibodies produced varied widely. Also, the response was short-lived. The vaccine was given 5 times over the first 12 weeks, but antibody levels did not peak until week 16, after which they fell rapidly.
And the key question - did it reduce cocaine use? Well, sort of. The authors measured drug use in terms of the proportion of urine samples which were cocaine-free. In the active vaccine group, the proportion of drug-free urine samples was higher over weeks 9 to 16, when antibody levels were high, and this was statistically significant (treatment × time interaction: Z=2.4, P=.01). As expected, the benefit was greater in the people who made lots of antibodies (≥43 μg/mL) (treatment × time interaction: Z=4.8, P<.001). But the effect was pretty small:

The bottom line was about 10% more urine samples testing negative, and even that was only true in the minority (38%) of people who responded well to the vaccine! Not very impressive, but on the other hand, the number of drug-free urine tests is a very crude measure of cocaine use. It doesn't tell us how much coke the patients used at a time, or how many times they used it per day.

Also, bear in mind that if it works, this vaccine might increase cocaine use in some people, at least at first. By binding and inactivating some of the cocaine in the bloodstream, the vaccine would mean you'd need to take more of the drug in order to feel the effects. It's curious that the authors relied on just one crude outcome measure and didn't ask the patients to describe the effects in more detail.
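As an aside on the statistics: the authors modelled a treatment × time interaction over repeated urine samples, which is the right approach because multiple urines from the same patient are correlated. As a much cruder illustration of what "about 10% more negative samples" would look like statistically, here's a naive two-proportion z-test on entirely made-up counts (the sample sizes and percentages below are hypothetical, not taken from the paper):

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions,
    using the pooled standard error (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # erfc(|z|/sqrt(2)) gives the two-sided tail probability
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical: 30% cocaine-free urines in the vaccine group vs 20%
# in the placebo group, 500 samples collected in each arm
z, p = two_proportion_ztest(150, 500, 100, 500)
```

Note that this sketch treats every urine sample as independent, which it isn't - samples from one patient are correlated - so a naive test like this would overstate significance relative to the interaction model the authors actually used.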

So, these are some interesting results, but the vaccine clearly needs a lot of work before it becomes clinically useful, as the authors admit - "Attaining high (≥43 μg/mL) IgG anticocaine antibody levels was associated with significantly reduced cocaine use, but only 38% of the vaccinated subjects attained these IgG levels and they had only 2 months of adequate cocaine blockade. Thus, we need improved vaccines and boosters." Quite an admission given that this study was partially funded by Celtic Pharmaceuticals, who make the vaccine.

It's also questionable whether any vaccine will be truly beneficial in treating cocaine addiction. Such a vaccine would be a way of reducing the temptation to use cocaine. In this sense, it would be just like naltrexone for heroin addicts, which blocks the effects of the drug, or disulfiram (Antabuse) for alcoholics, which makes drinking alcohol cause horrible side effects. Essentially, these treatments are ways of artificially boosting your "self-control", and they work.

We've had naltrexone and disulfiram for many years. They're cheap and safe. But we still have heroin addicts and alcoholics. This is not to say that these treatments are never helpful - some people find them very useful. But they haven't eradicated addiction, because addiction is not something that can be cured with a pill or an injection.

Addiction is a pattern of behaviour, and medications might help people to break free of it, but the causes of addiction are social, economic and psychological as well as biological. People turn to drugs and alcohol when there's nowhere else to turn, and unfortunately, there's no vaccine against that.

Martell BA, Orson FM, Poling J, Mitchell E, Rossen RD, Gardner T, & Kosten TR (2009). Cocaine vaccine for the treatment of cocaine dependence in methadone-maintained patients: a randomized, double-blind, placebo-controlled efficacy trial. Archives of General Psychiatry, 66 (10), 1116-23. PMID: 19805702

Is Freud Back in Fashion? No.

Freudian psychoanalysis is the key to treating depression, especially the post-natal kind (depression after childbirth). That's according to a Guardian article by popular British psychologist and author Oliver James. He says that recent research has proven Freud right about the mind, and that psychoanalysis works better than other treatments, like cognitive-behavioural therapy (CBT).

Neuroskeptic readers have encountered James before. He's the person who thinks that Britain is the most mentally-ill country in Europe. I disagree, but that's at least a debatable point. This time around, James's claims are just plain wrong.

So, some corrections. We've got a lot to cover, so I'll keep it brief:

"10% [of new mothers] develop a full-blown depression...which therapy should you opt for? [antidepressants] rule out breastfeeding" - No, they don't. Breast-feeding mothers are able to use antidepressants when necessary, according to the British medical guidelines and others:

Limited data on effects of SSRI exposure via breast milk on weight gain and infant development are encouraging. If a woman has been successfully treated with a SSRI in pregnancy and needs to continue therapy after delivery, there is no need to change the drug, provided the infant is full term, healthy and can be adequately monitored...
James's statement is a dangerous mistake, which could lead to new mothers worrying unduly, or even stopping their medication.

"People given chalk pills but told they are antidepressants are almost as likely to claim to feel better as people given the real thing."
- This is broadly true, although it's a bit more complicated than that. More importantly, it refers to trials in general adult clinical depression, not post-natal depression, which might be completely different.

There's actually only one trial comparing an antidepressant to chalk placebo pills in post-natal depression. The antidepressant, Prozac, worked remarkably well, much better than in most general adult trials. This was a small study, and we really need more research, but it's encouraging.

"Regarding the talking therapies, in one study depressed new mothers were randomly assigned to eight sessions of CBT, counselling, or to psychodynamic psychotherapy. Eighteen weeks later, the ones given dynamic therapy were most likely to have recovered (71%, versus 57% for CBT, 54% counselling)."

This is cherry-picking. In the trial in question the dynamic (psychoanalytic) therapy was slightly better than the other two when depression was assessed in one way, which is what James quotes. The difference was not statistically significant. And using another depression measurement scale, it was no better at all. Take a look, it's hardly impressive:

Plus, after 18 weeks, none of the three psychotherapies was any better than the control, which consisted of doing precisely nothing at all.

"Studies done in the last 15 years have largely confirmed Freud's basic theories. Dreams have been proven to contain meaning." - Nope. Freud believed that dreams exist to fulfil our fantasies, often although not always sexual ones. We dream about what we'd like to do. Except we don't actually dream about it, because we'd find much of it shameful, so our minds hide the true meaning behind layers of metaphor and so forth. "Steep inclines, ladders and stairs, and going up or down them, are symbolic representations of the sexual act..."

If you believe that, good for you, and some people still do, but there has been no research over the past 15 years supporting it (although this is quite interesting). Indeed, there was never any real research, just anecdotes.

"Early childhood experience has been shown to be a major determinant of adult character." Nope. The big story of the past decade is that, contra Freud, "shared environment", i.e. family life and child-rearing, makes almost no contribution to adult personality, which is instead determined by a combination of genes and "individual environment" unrelated to family background. One could argue about the merits of this research, but to say that modern psychology is moving towards a Freudian view is absurd. The opposite is true.

"And it is now accepted by almost all psychologists that we do have an unconscious and that it can contain material that has been repressed because it is unacceptable to the conscious mind." Nope. Some psychologists do still believe in "repressed memory" theory, but it's highly controversial. Many consider it a dangerous myth associated with "recovered memory therapy" which has led to false accusations of sexual abuse, Satanic rituals, etc. Again, they may be wrong, but to assert that "almost all" psychologists accept it is bizarre.

"Although slow to be tested, the clinical technique [of Freudian psychoanalysis] has now also been demonstrated to work. The strongest evidence for its superiority over cognitive, short-term treatments was published last year..."

First off, the trial referred to was not about post-natal depression, and it didn't test cognitive therapy at all. It compared long-term psychodynamic therapy, vs. short-term psychodynamic therapy, vs. "solution-focused therapy" in the treatment of various chronic emotional problems. No CBT was harmed in the making of this study.

After 1 year, long-term dynamic therapy was the worst of the three. At 2 years, they were the same. At 3 years, long-term dynamic therapy was the best, although all of these differences were small. Short-term dynamic therapy was no better than solution-focused therapy, which is rather a point against psychoanalysis, since solution-focused therapy is firmly non-Freudian. And amusingly, the "short-term" dynamic therapy was actually twice as long as the dynamic therapy in the first study discussed above, which James praised! (20 weekly sessions vs 10). (Edit 23.10.09)


James ends by slagging off CBT and its practitioners, and suggesting that we need a "Campaign for Real Therapy", i.e. not CBT, something he has suggested before. This is the key to understanding why James wrote his muddled piece.

The British government is currently pouring hundreds of millions into the IAPT campaign which aims to "implement National Institute for Health and Clinical Excellence (NICE) guidelines for people suffering from depression and anxiety disorders". NICE guidelines essentially only recommend CBT, so this is effectively a campaign to massively expand CBT services. CBT is widely seen as the only psychotherapy which has been proven to work, in Britain and increasingly elsewhere too.

Oliver James, like quite a lot of people, doesn't like this. And in that, he has a point. There are serious debates to be had over whether CBT is really better than other therapies, and whether we really need lots more of it. There are also serious debates to be had over whether antidepressants are really effective and whether they are over-used. But these are all extremely complex questions. There are no easy answers, no short cuts, no panaceas, and James's brand of sectarian polemic is exactly what we don't need.

