The Fall of Freud

The works of Sigmund Freud were enormously influential in 20th century psychiatry, but they've now been reduced to little more than a fringe belief system. Armed with the latest version of my PubMed history script, and inspired by this classic gnxp post on the death of Marxism, postmodernism, and other stupid academic fads, I decided to see how this happened.
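
(The script itself isn't posted here, but here's a minimal sketch of the sort of query it runs, using Biopython's Entrez interface to PubMed - the email address, year range, and search terms are placeholders rather than my exact queries.)

    from Bio import Entrez  # Biopython's wrapper around NCBI's E-utilities

    Entrez.email = "you@example.com"  # placeholder - NCBI asks for a contact address

    def pubmed_count(term, year):
        """Number of PubMed records matching `term` published in `year`."""
        handle = Entrez.esearch(db="pubmed", term=f"{term} AND {year}[pdat]")
        result = Entrez.read(handle)
        handle.close()
        return int(result["Count"])

    # Compare the trajectories of a few search terms over the decades
    for term in ("psychoanalytic", "schizophrenia", "antidepressant"):
        counts = {year: pubmed_count(term, year) for year in range(1960, 2010, 10)}
        print(term, counts)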

The number of published scientific papers related to Freud-y search terms like psychoanalytic has flat-lined for the past 50 years. That represents a serious collapse of influence, given the enormous expansion in the amount of research published over this time.

Since 1960 the number of papers on schizophrenia has risen by a factor of 10, and anxiety by a factor of 80. The peak of Freud's fame was 1968, when almost as many papers referenced psychoanalytic (721) as schizophrenia (989), and it was more than half as popular as antidepressants (1372). Today it's just 10% of either. Proportionally speaking, psychoanalysis has gone out not with a bang but a whimper.

The rise of Cognitive Behavioral Therapy (CBT), however, is even more dramatic. From being almost unheard of until the late 1980s, it overtook psychoanalytic in 1993, and it's now more popular than antipsychotics and close on the heels of antidepressants.

What's going to happen in the future? If there is to be a struggle for influence, it looks set to be fought between CBT and biological psychiatry, if only because they're pretty much the only games left in town. Yet one of the reasons behind CBT's widespread appeal is that it hasn't thus far overtly challenged biology: it has adopted the methods of medicine (clinical trials etc.), and has presented itself as useful alongside medication rather than instead of it.

One of the few exceptions was Richard Bentall's book Madness Explained (2003) in which he criticized psychiatry and presented a cognitive-behavioural alternative to orthodox biological theories of schizophrenia and bipolar disorder. Bentall remains on the radical wing of the CBT community but in the coming decades this kind of thing may become more common. Only time will tell...

When One Neurotransmitter Is Not Enough

Important news from San Francisco neuroscientists Stuber et al: Dopaminergic Terminals in the Nucleus Accumbens But Not the Dorsal Striatum Corelease Glutamate.

The finding's right there in the title: dopamine is a neurotransmitter, and so is glutamate. Stuber et al found (in mice) that many of the cells that release dopamine also simultaneously release glutamate - specifically, almost all of the dopaminergic cells that project to the nucleus accumbens, involved in pleasure and motivation, also release glutamate. By contrast, none of the dopaminergic neurons projecting to the nearby dorsal striatum, involved in movement regulation, do this.

Previous work had provided suggestive evidence for some degree of glutamate/dopamine co-release, but this is the first hard evidence, and the fact that basically all of the dopamine input to the nucleus accumbens is also glutamate input is especially striking.

This is important because it overturns the idea that neurons only release one neurotransmitter each. In fact, it's been clear for a while that this isn't strictly true: there are various little-understood peptide transmitters or "neurohormones" that are known to be co-released, but their function is obscure in most cases.

Dopamine and glutamate on the other hand are both extremely well studied neurotransmitters in their own right. Glutamate's the single most common transmitter in the brain while dopamine is famous for its role in motor control, motivation, Parkinson's disease, mental illness and the action of recreational drugs, just for starters.

What exactly the glutamate does in the nucleus accumbens is completely mysterious at present but future work will no doubt shed light on this. More generally, this paper is a reminder of the fact that our knowledge of the brain is still in its infancy...

Stuber, G., Hnasko, T., Britt, J., Edwards, R., & Bonci, A. (2010). Dopaminergic Terminals in the Nucleus Accumbens But Not the Dorsal Striatum Corelease Glutamate. Journal of Neuroscience, 30(24), 8229-8233. DOI: 10.1523/JNEUROSCI.1754-10.2010

The A Team Sets fMRI to Rights

Remember the voodoo correlations and double-dipping controversies that rocked the world of fMRI last year? Well, the guys responsible have teamed up and written a new paper together. They are...

The paper is Everything you never wanted to know about circular analysis, but were afraid to ask. Our all-star team of voodoo-hunters - including Ed "Hannibal" Vul (now styled Professor Vul), Nikolaus "Howling Mad" Kriegeskorte, and Russell "B. A." Poldrack - provide a good overview of the various issues and offer their opinions on how the field should move forward.

The fuss concerns a statistical trap that it's easy for neuroimaging researchers, and certain other scientists, to fall into. Suppose you have a large set of data - like a scan of the brain, which is a set of perhaps 40,000 little cubes called voxels - and you search it for data points where there is a statistically significant effect of some kind.

Because you're searching in so many places, you set the threshold for significance very high in order to avoid getting lots of false positives. That's fine in itself, but a problem arises if you find some significant effects and then take those significant data points and use them as a measure of the size of the effects - because you have specifically selected your data points on the basis that they show the very biggest effects out of all your data. This is called the non-independence error, and it can make small effects seem much bigger.
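
A toy simulation (mine, not from the paper) makes the trap concrete: every voxel below carries the same small true effect, but if you estimate the effect size only in the voxels that survive a stringent selection threshold, the estimate comes out hugely inflated.

    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects, n_voxels = 20, 40_000
    true_effect = 0.2  # the same small true effect in every voxel

    # Per-subject effect estimates at each voxel: signal plus noise
    data = true_effect + rng.standard_normal((n_subjects, n_voxels))

    # Step 1: search the whole brain with a stringent threshold (to limit false positives)
    t = data.mean(axis=0) / (data.std(axis=0, ddof=1) / np.sqrt(n_subjects))
    selected = t > 4.0

    # Step 2 (the non-independence error): estimate the effect size
    # using only the voxels that survived selection
    print("true effect:    ", true_effect)
    print("voxels selected:", int(selected.sum()))
    print("apparent effect:", round(float(data[:, selected].mean()), 2))  # far bigger than 0.2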

The latest paper offers little that's new in terms of theory, but it's a good read and it's interesting to get the authors' expert opinion on some hot topics. Here's what they have to say about the question of whether it's acceptable to present results that suffer from the non-independence error just to "illustrate" your statistically valid findings:

Q: Are visualizations of non-independent data helpful to illustrate the claims of a paper?

A: Although helpful for exploration and story telling, circular data plots are misleading when presented as though they constitute empirical evidence unaffected by selection. Disclaimers and graphical indications of circularity should accompany such visualizations.
Now an awful lot of people - and I confess that I've been among them - do this without the appropriate disclaimers. Indeed, it is routine. Why? Because it can be a useful illustration: although the size of the effects appears to be inflated in such graphs, on a qualitative level they provide a useful impression of the direction and nature of the effects.

But the A Team are right. Such figures are misleading - they mislead about the size of the effect, even if only inadvertently. We should use disclaimers, or ideally, avoid using misleading graphs. Of course, this is a self-appointed committee: no-one has to listen to them. We really should though, because what they're saying is common sense once you understand the issues.

It's really not that scary - as I said on this blog at the outset, this is not going to bring the whole of fMRI crashing down and end everyone's careers; it's a technical issue, but it is a serious one, and we have no excuse for not dealing with it.

Kriegeskorte, N., Lindquist, M., Nichols, T., Poldrack, R., & Vul, E. (2010). Everything you never wanted to know about circular analysis, but were afraid to ask. Journal of Cerebral Blood Flow & Metabolism. DOI: 10.1038/jcbfm.2010.86

Carlat's Unhinged

Well, he's not. Actually, I haven't met him, so it's always possible. But what he has certainly done is write a book called Unhinged: The Trouble with Psychiatry.

Daniel Carlat's best known online for the Carlat Psychiatry Blog and in the real world for the Carlat Psychiatry Report. Unhinged is his first book for a general audience, though he's previously written several technical works aimed at doctors. It comes hot on the heels of a number of other recent books offering more or less critical perspectives on modern psychiatry, notably these ones.

Unhinged offers a sweeping overview of the whole field. If you're looking for a detailed examination of the problems around, say, psychiatric diagnosis, you'd do well to read Crazy Like Us as well. But as an overview it's a very readable and comprehensive one, and Carlat covers many topics that readers of his blog, or indeed of this one, would expect: the medicalization of normal behaviour, over-diagnosis, the controversy over pediatric psychopharmacology, brain imaging and the scientific state of biological psychiatry, etc.

Carlat is unique amongst authors of this mini-genre, however, in that he is himself a practising psychiatrist, and moreover, an American one. This is important, because almost everyone agrees that to the extent that there is a problem with psychiatry, American psychiatry has it worst of all: it's the country that gave us the notorious DSM-IV, where drugs are advertised direct-to-the-consumer, where children are diagnosed with bipolar and given antipsychotics, etc.

So Carlat is well placed to report from the heart of darkness and he doesn't disappoint, as he vividly reveals how dizzying sums of drug company money sway prescribing decisions and even create diseases out of thin air. His confessional account of his own time as a paid "representative" for the antidepressant Effexor (also discussed in the NYT), and of his dealings with other reps - the Paxil guy, the Cymbalta woman - has to be read to be believed. We're left with the inescapable conclusion that psychiatry, at least in America, is institutionally corrupt.

Conflict of interest is a tricky thing though. Everyone in academia and medicine has mentors, collaborators, people who work in the office next door. The social pressure against saying or publishing anything that explicitly or implicitly criticizes someone else is powerful. Of course, there are rivalries and controversies, but they're firmly the exception.

The rule is: don't rock the boat. And given that in psychiatry, all but a few of the leading figures have at least some links to industry, that means everyone's in the same boat with Pharma, even the people who don't, personally, accept drug company money. I think this is often overlooked in all the excitement over individual scandals.

For all this, Carlat is fairly conservative in his view of psychiatric drugs. They work, he says, a lot of the time, but they're rarely the whole answer. Most people need therapy, too. His conclusion is that psychiatrists need to spend more time getting to know their patients, instead of just handing out pills and then doing a 15 minute "med check" - a great way of making money when you're getting paid per patient (4 patients per hour: ker-ching!), but probably not a great way of treating people.

In other words, psychiatrists need to be psychotherapists as well as psychopharmacologists. It's not enough to just refer people to someone else for the therapy: in order to treat mental illness you need one person with the skills to address both the biological and the psychological aspects of the patient's problems. Plus, patients often find it frustrating being bounced back and forth between professionals, and it's a recipe for confusion ("My psychiatrist says this but my therapist says...")

This leads Carlat to the controversial conclusion that psychiatrists should no longer have a monopoly on prescribing medications. He supports the idea of (appropriately trained) prescribing psychologists, an idea which has taken off in a few US states but which is hotly debated.

As he puts it, for a psychiatrist, the years in medical school spent delivering babies and dissecting kidneys are rarely useful. So there's no reason why a therapist can't learn the necessary elements of psychopharmacology - which drugs do what, how to avoid dangerous drug interactions - in, say, one or two years.

Such a person would be at least as good as a psychiatrist at providing integrated pills-and-therapy care. In fact, he says, an even better option would be to design an entirely new type of training program to create such "integrated" mental health professionals from the ground up - neither doctors nor therapists but something combining the best aspects of both.

There does seem to be a paradox here, however: Carlat has just spent 200 pages explaining how drug companies distort the evidence and bribe doctors in order to push their latest pills at people, many of whom either don't need medication or would do equally well with older, much cheaper drugs. Now he's saying that more people should be licensed to prescribe the same pills? Whose side is he on?

In fact, Carlat's position is perfectly coherent: his concern is to give patients the best possible care, which is, he thinks, combined medication and therapy. So he is not "anti" or "pro-medication" in any simple sense. But still, if psychiatry has been corrupted by drug company money, what's to stop the exact same thing happening to psychologists as soon as they get the ability to prescribe?

I think the answer to this can only be that we must first cut the problem off at its source by legislation. We simply shouldn't allow drug companies the freedom to manipulate opinion in the way that they do. It's not inevitable: we can regulate them. The US leads the world in some areas: since 2007, all clinical trials conducted in the country must be pre-registered, and the results made available on a public website, clinicaltrials.gov.

The benefits, in terms of keeping drug manufacturers honest, are far too many to explain here. Other places, like the European Union, are just starting to follow suit. But America suffers from a split personality in this regard. It's also one of the only countries to allow direct-to-consumer drug advertising, for example. Until the US gets serious about restraining Pharma influence in all its forms, giving more people prescribing rights might only aggravate the problem.

Flibbin Heck

It's not been a great day for Germany. First, they lost to Serbia in the footy. Then German pharmaceutical company Boehringer Ingelheim suffered an equally vexing setback after their allegedly libido-boosting new drug, flibanserin, failed to get approval to be sold in the US.

The FDA panel's unanimous decision was no surprise to anyone who read their briefing report, which came out a few days ago (here), as it was pretty scathing about the strength of the evidence that Boehringer submitted in support of the drug's efficacy. Take this bit (from page 38):

Although the two North American trials that used the flibanserin 100 mg dose showed a statistically significant difference between flibanserin and placebo for the endpoint of Sexually Satisfying Events, they both failed to demonstrate a statistically significant improvement on the co-primary endpoint of sexual desire. Therefore, neither study met the agreed-upon criteria for success in establishing the efficacy of flibanserin for the treatment of Hypoactive Sexual Desire Disorder (HSDD).

At issue and a major concern of the Division are the following findings:
  1. The trials did not show a statistically significant difference for the co-primary endpoint, the eDiary sexual desire score.
  2. The Applicant’s request to use the FSFI [a questionnaire] desire items as the alternative instrument to evaluate the co-primary endpoint of sexual desire is not statistically justified and, in fact, was not supported by exploratory data from Study 511.77, which also failed to demonstrate a statistically significant treatment benefit on desire using the FSFI desire items.
  3. The responder rates on the important efficacy endpoints for the flibanserin-treated subjects, intended to demonstrate the clinical meaningfulness, are only 3-15% greater than those in the placebo arm.
  4. There were many significant medical and medication exclusion criteria for the efficacy trials, so it is not clear whether the safety and efficacy data from these trials are generalizable to the target population for the drug.
Ouch. Basically, the FDA concluded that as an aphrodisiac it doesn't work very well, if at all, and hence it can't be considered an efficacious treatment. For more background on flibanserin see my old post here and more recently Petra Boynton's excellent coverage.

But what was flibanserin supposed to treat in the first place? Something called "hypoactive sexual desire disorder" (HSDD). What is hypoactive sexual desi...oh, hang on. I think I can work it out. It's a disorder where you have hypoactive sexual desire. The clue is in the name.

The truth of course is that it's more than a clue: HSDD is nothing more than its name. And in fact, the "disorder" bit is entirely superfluous, and the "hypoactive" is needlessly technical. HSDD is simply a description for low sexual desire.

As such, it is wrong to say that it doesn't exist - clearly some people do have low sexual desire, and some of them (though not all) would prefer to have more. But giving it a fancy name and calling it a disorder is entirely misleading: it gives the impression of depth (i.e. that this is some kind of medical illness) when in fact it is simply describing a surface phenomenon, like saying "I'm bored" or "I'm tired".

Psychiatry - or more specifically the DSM-IV textbook of the American Psychiatric Association - is chock full of these the-clue-is-in-the-name disorders. Essentially, if the symptoms of the condition are simply summarized in the name, it's almost certainly of this type. You have "Generalized Anxiety Disorder" if you're... generally anxious. According to the upcoming DSM-5, your kid will have "Temper Regulation Disorder with Dysphoria" if... oh, guess.

Not all psychiatric disorders are like this though. The word "Schizophrenia" is just a name: it describes a cluster of quite diverse symptoms that are not contained in the name (and indeed if you take the name literally you would end up with entirely the wrong idea.) Likewise for "Bipolar Disorder" and "Depression".

These are names for groups of symptoms which tend to go together and saying that someone has "Depression" tells you several different things about them - e.g. that they have low mood, certain kinds of sleep disturbance and appetite disturbance, etc. In fact not everyone shows all of these all the time, but most people show most of them.

The point is that to diagnose someone with, say, schizophrenia, on the basis that, say, they believe an alien is controlling their thoughts through a radio in their head, is to assert something about them; it might be a correct diagnosis, or it might be wrong e.g. they could in fact be bipolar, or it could be a culturally based belief, or they might even be right.

But if you "diagnose" someone with HSDD, you cannot be wrong - assuming they have told you that they have low sexual desire, which is the only possible reason you would make that "diagnosis". HSDD is just a re-description of their complaint. Yet it also smuggles in the implication that behind the complaint is a medical problem which could be treated with drugs.

Now maybe that's right. Maybe it isn't. We just don't know. It doesn't appear to be treatable with flibanserin. But then, maybe that's because it's not a medical issue at all in most cases.

Everybody Expects the Placebo Inquisition

An unexpected gem from last year's Journal of the American Psychoanalytic Association: Mind over medicine.

Surprisingly, it has nothing to do with psychoanalysis. Rutherford and colleagues performed a meta-analysis of lots of clinical trials of antidepressants. Neuroskeptic readers will be all too familiar with these. But they did an interesting thing with the data: they compared the benefits of antidepressants in trials with a placebo condition, vs. trials with no placebo arm, such as trials comparing one drug to another drug.

Why do that comparison? Because the placebo effect is likely to be stronger in trials with no placebo condition. If you volunteer for a placebo controlled trial, you'll know that you've got (say) a 50-50 chance of getting inactive sugar pills. You'll probably be uncertain whether or not you'll get better, maybe even quite worried. On the other hand if you're in a trial where you definitely will get a real drug, you can rest assured that you'll feel better - and that in itself might make your depression improve.

The paper only presents very preliminary results, but they say that:

Our group at Columbia has completed preliminary work involving metaanalyses of randomized controlled trials comparing antidepressant medications to a placebo or active comparator in geriatric outpatients with Major Depressive Disorder (Sneed et al. 2006). In placebo controlled trials, the medication response rate was 48% and the remission rate 33%, compared to a response rate of 62% and remission rate of 43% in the comparator trials (p < .05). The effect size for the comparison of response rate to medications in the comparator and placebo controlled trials was large (Cohen’s d = 1.2).
They only looked at trials in elderly patients, but the same probably applies to everyone else.

Why does this matter? The authors suggest one very important implication. There are quite a few trials nowadays comparing the effects of psychotherapy, medication, neither, or both. How it works is that everyone gets pills, 50% of them real drugs and 50% placebos; also, half the people get psychotherapy while the others remain on the waiting list.

These trials often find that medication plus psychotherapy is better than just medication alone. This has led to the idea that therapy and drugs should be combined in clinical practice, a message which goes down really well, because it gives both psychopharmacologists and therapists the feeling that they have an important job to do. An example of this kind of trial is the influential TADS from 2004, finding that Prozac and therapy both work in depressed teens, and combining them is best. Everyone's a winner.

But as Rutherford et al. point out, there's a problem with this reasoning. The people who only get antidepressants don't know that they're getting any treatment, because they might be getting placebo. But the people who get antidepressants and therapy know that they're getting at least one real treatment (therapy). This is likely to improve their outcome through an expectation effect. (In fact, for some reason, in TADS, the people on combination treatment were told that they were getting both - they specifically knew they would never get dummy pills - which will have made this even worse.)

Now you could say that this doesn't matter: TADS and similar studies show that therapy and medication is better than just medication, and it's purely academic whether that's "just a placebo effect". But the key point is that in real life people always get medication knowing that it's real - so, like the therapy plus medication people in the trials, they get the benefit of the certainty that they are getting a real treatment. In the trials the medication-only group don't know that, but in real life they do - so the benefits of adding psychotherapy might be less, or even zero, in real life.
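
Here's a toy model of how big a difference that could make - all the numbers are invented purely for illustration. In it, therapy itself does nothing at all, yet "medication plus therapy" still comfortably beats blinded "medication only" thanks to the expectation boost alone.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200  # hypothetical patients per arm

    drug_effect = 5.0         # assumed true pharmacological benefit
    therapy_effect = 0.0      # suppose, for argument's sake, therapy adds nothing
    expectation_effect = 4.0  # assumed boost from knowing you're on a real treatment

    def arm(drug, therapy, knows_real_treatment):
        """Simulated improvement scores for one trial arm."""
        return (drug * drug_effect
                + therapy * therapy_effect
                + knows_real_treatment * expectation_effect
                + rng.normal(0, 3, n))

    med_only = arm(1, 0, 0)      # blinded: might be placebo, so no certainty boost
    med_plus_cbt = arm(1, 1, 1)  # knows the therapy, at least, is real

    print("medication only:     ", round(med_only.mean(), 1))      # ~5
    print("medication + therapy:", round(med_plus_cbt.mean(), 1))  # ~9, despite useless therapy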

The authors of the TADS study did acknowledge this in their original paper, but only very briefly - here's all they say about it:
Blinding patients in the placebo and fluoxetine alone groups but not in the CBT alone group (participants knew they would not be receiving fluoxetine) and the fluoxetine combined with CBT group (participants knew that they would be receiving fluoxetine) may have interacted with expectancy effects regarding improvement and acceptability of treatment assignment.
Yet this limitation means that, strictly speaking, all TADS showed is that Prozac works in this group. It doesn't prove that adding (very expensive) therapy benefits anyone in the real world. This is not to say that psychotherapy doesn't work, of course - maybe it does - but the point is that therapy-plus-medication trials might be better off without a placebo arm.

Rutherford, B., Roose, S., & Sneed, J. (2009). Mind Over Medicine: The Influence of Expectations on Antidepressant Response. Journal of the American Psychoanalytic Association, 57(2), 456-460. DOI: 10.1177/00030651090570020909

Monoamine Shock

Electroconvulsive therapy (ECT) is a crude but effective treatment for depression. It consists of applying a brief alternating current to the brain in order to induce a generalized seizure, which usually lasts for less than half a minute.

ECT is typically given three times per week, and a dozen sessions are enough to produce a dramatic improvement in depression in most cases. However, how it works is entirely mysterious. There are plenty of theories. An important little study from Duke University psychiatrists Cassidy et al (including Bernard Carroll) has just ruled out one of them.

Monoamines are a class of neurotransmitters: serotonin, dopamine and noradrenaline. They're involved in various aspects of mood, although the picture is very complicated, and almost all antidepressant drugs target one or more monoamines. Could ECT act by increasing monoamine levels? It's as good a theory as any, and there's some evidence for it.

However, Cassidy et al's data suggest it's not the case. They took 9 volunteers who had been severely depressed but had recently responded well to a course of ECT. They gave them combined serotonin depletion, using the tryptophan depletion method, and dopamine/noradrenaline depletion, using the drug AMPT. As a placebo comparison, they used diphenhydramine, aka Benadryl, a mildly sedative antihistamine; this is because AMPT is a sedative, and they wanted to control for active placebo effects. Few psychopharmacology studies are so well controlled.

These depletion techniques, given separately, are known to cause temporary relapses in about 50% of people who've responded to antidepressants targeting the corresponding monoamine, and also in some people who used to be depressed but are no longer taking medication. If monoamines are involved in the response to ECT, depleting all of them at once should definitely cause relapse.

What happened? Nothing. No-one experienced even a partial return of their depression with either the real or the placebo treatments. These depletions don't put levels of the neurotransmitters down to zero, but Cassidy et al. used the same doses that have caused dramatic relapses in susceptible people.

This strongly suggests that monoamines are not required for the clinical response to ECT, at least not in any straightforward more-is-better way. Given that ECT works faster than antidepressants and more often (the controversial side effects are the main reason it's used only as a last resort), this is a blow for the monoamine hypothesis of depression... like I said, it is complicated.

And how does ECT work? We still don't know. This study narrows down the possibilities.

Cassidy, F., Weiner, R., Cooper, T., & Carroll, B. (2010). Combined catecholamine and indoleamine depletion following response to ECT. The British Journal of Psychiatry, 196(6), 493-494. DOI: 10.1192/bjp.bp.109.070573

Serial Killers

Much of Britain is currently following the trial of Stephen Griffiths, or as he'd like you to refer to him, the Crossbow Cannibal.

Serial killers are always newsworthy, and Griffiths has killed at least three women in cold blood. (He did use a crossbow, but I think the newspapers made up the cannibalism.) But it's Griffiths's interests that have really got people's attention.

It turns out that before he became a serial killer, he was a man obsessed with... serial killers. His Amazon wish list was full of books about murder. He has a degree in psychology, and he was working on his PhD, in Criminology. Guess what his research was about.

Griffiths is therefore a kind of real life Hannibal Lecter or Dexter, an expert in murderers who is himself one. He's also a good example of the fact that, unlike on TV, real life serial killers are never cool and sophisticated, nor even charmingly eccentric, just weird and pathetic. Not to mention lazy, given that he was still working on his PhD after 6 years...

Yet there is an interesting question: was Griffiths a good criminologist? Does he have a unique insight into serial killers? We'll probably never know, at least not until (or if) the police release some of his writings. But it seems to me that he might have done.

When the average person hears about the crimes of someone like Griffiths, we are not just shocked but confused - it seems incomprehensible. I can understand why someone would want to rob me for my wallet, because I like money too. I can understand how one guy might kill another in a drunken fight, because I've been drunk too. Of course this doesn't mean I condone either crime, but they don't leave me scratching my head; I can see how it happens.

I cannot begin to understand why Griffiths did what he did. My understanding of humanity doesn't cover him. But he is human, so all that really means is that my understanding is limited. Someone must be able to understand people like Griffiths; it can't be impossible. But it may be that the only way to understand a serial killer is to be one.

The same may be true of less dramatic mental disorders. Karl Jaspers believed that the hallmark of severe mental illness is symptoms that are impossible to understand: they just exist. I've experienced depression; I've also read an awful lot about it and published academic papers on it. My own illness taught me much more about depression than my reading. Maybe I've been reading the wrong things. I don't think so.

SSRIs and Suicide

Prozac and suicide: what's going on?

Many people think that SSRI antidepressants do indeed cause suicide, and in recent years this idea has gained a huge amount of attention. My opinion is that, well, it's all rather complicated...

At first glance, it seems as though it should be easy to discover the truth. SSRIs are some of the most studied drugs in the world. We have data from several hundred randomized placebo-controlled trials, totaling tens of thousands of patients. Let's just look and see whether people given SSRIs are more likely to die by suicide than people given placebo.

Unfortunately, that doesn't really work. Actual suicides are extremely rare in antidepressant trials. This is partly because most trials only last 4 to 6 weeks, but also because anyone showing evidence of suicidal tendencies is excluded from the studies at the outset. There just aren't enough suicides to be able to study.

What you can do is to look at attempted suicide, and at "suicidality", meaning suicidal thoughts and self-harming behaviours. Suicidality is more common than actual suicide, so it's easier to research. Here's the bad news: the evidence from a huge number of trials is that compared to placebo, antidepressants do raise the risk of suffering suicidality(1) and of suicide attempts(1) (from 1.1 per 1000 to 2.7 per 1000), when given to people with psychiatric disorders.
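
To put those attempt rates in perspective, here's a quick back-of-the-envelope calculation (my arithmetic, not the papers'):

    # Suicide attempt rates quoted above: 1.1 vs 2.7 per 1000 patients
    placebo_rate = 1.1 / 1000
    drug_rate = 2.7 / 1000

    risk_difference = drug_rate - placebo_rate  # absolute extra risk per patient
    relative_risk = drug_rate / placebo_rate
    number_needed_to_harm = 1 / risk_difference

    print(f"absolute risk difference: {risk_difference:.4f}")        # 0.0016
    print(f"relative risk:            {relative_risk:.1f}")          # ~2.5
    print(f"number needed to harm:    {number_needed_to_harm:.0f}")  # ~625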

There's no good evidence that SSRIs are any worse or any better than other antidepressants, or that any one SSRI stands out as particularly bad(1,2). The risk seems to be worst in younger people: compared to placebo, SSRIs raised suicidality in people below age 25, had no effect in most adults, and lowered it in the oldest age groups(1). This is why SSRIs (and all other antidepressants) now carry a "black box" in the USA, warning about the risk of suicide in young people.

*

This is very troubling. Hang on though. I mentioned that suicidality is an exclusion criterion from pretty much all antidepressant trials. This is for ethical as well as practical reasons: it's considered unethical to give a suicidal person an experimental drug, and it's really impractical to have patients dying during your trial.

Indeed the recorded rate of suicidality in these trials is incredibly tiny: only 0.5% of the psychiatric patients experienced any suicidal ideation or behaviour at all(1). The other 99.5% never so much as thought about it, apparently. If that were representative of the real world it would be great; unfortunately it isn't. Yet what this all means is that antidepressants could not possibly reduce suicidality in these trials, because there's just nothing there to reduce. Even if, in the real world, they prevent loads of suicides, these trials wouldn't show it.

How do you investigate the effects of drugs "in the real world"? By observational studies - instead of recruiting people for a trial, you just look to see what happens to people who are prescribed a certain drug by their doctor. Observational studies have strengths and weaknesses. They're not placebo controlled, but they can be much larger than trials, and they can study the full spectrum of patients.

Observational studies have found very little evidence suggesting that antidepressants cause suicide. Most strikingly, since 1990 when SSRIs were introduced, antidepressant sales have increased enormously, and the suicide rate has fallen steadily; this is true of all Western countries.

More detailed analyses of antidepressant sales vs. suicide rates across time and location have generally found either no effect, or a small protective effect, of antidepressant sales(1,2,3, many others). In the past few years, concern over suicidality has led to a fall in antidepressant use in adolescents in many countries, but there is no evidence that this has reduced the adolescent suicide rate(1,2).

Another observational approach is to see whether people who have actually died by suicide were taking SSRIs at the time of death. Australian psychiatrists Dudley et al have just published a review of the evidence on this question, and they found that out of a total of 574 adolescent suicide victims from the USA, Britain, and Scandinavia, only 9 (1.5%) were taking an SSRI when they died. In other words, the vast majority of youth suicides occur in non-SSRI users. This sets a very low upper limit on the number of suicides that could be caused by SSRIs.


*

So what does all this mean? As I said, it's very controversial, but here's my take, with the standard caveat that I'm just some guy on the internet.

The evidence from randomized controlled trials is clear: SSRIs can cause suicidality, including suicide attempts, in some people, especially people below age 25. The chance of this happening is below 1% according to the trials, but this is still worrying given that lots of people take antidepressants. However, the use of antidepressants on a truly massive scale has not led to any rise in the suicide rate in any age group. This implies that overall, antidepressants prevent at least as many suicides as they cause.

My conclusion is that the clinical trials are not much use when it comes to knowing what will happen to any individual patient. The evidence is that antidepressants could worsen suicidality, or they could reduce it. This is hardly a satisfactory conclusion for people who want neat and tidy answers, but there aren't many of those in psychiatry. For patients, the implication is, boringly, that we should follow the instructions on the packet - be vigilant for suicidality, but don't stop taking them except on a doctor's orders.

Dudley, M., Goldney, R., & Hadzi-Pavlovic, D. (2010). Are adolescents dying by suicide taking SSRI antidepressants? A review of observational studies. Australasian Psychiatry, 18(3), 242-245. DOI: 10.3109/10398561003681319

 