
Who Gets Autism?

According to a major new report from Australia, the social and family factors associated with autism are linked to a lower risk of intellectual disability - and vice versa. But why?


The paper is from Leonard et al and it's published in PLoS ONE, so it's open access if you want to take a peek. The authors used a database system in the state of Western Australia which allowed them to find out what happened to all of the babies born between 1984 and 1999 who were still alive as of 2005. There were 400,000 of them.

The records included information on children diagnosed with either an autism spectrum disorder (ASD), intellectual disability aka mental retardation (ID), or both. They decided to only look at singleton births i.e. not twins or triplets.

In total, 1,179 of the kids had a diagnosis of ASD. That's 0.3% or about 1 in 350, much lower than more recent estimates, but these more recent studies used very different methods. Just over 60% of these also had ID, which corresponds well to previous estimates.

There were about 4,500 cases of ID without ASD in the sample, a rate of just over 1%; the great majority of these (90%) had mild-to-moderate ID. They excluded an additional 800 kids with ID associated with a "known biomedical condition" like Down's Syndrome.
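The headline rates quoted above are simple to verify. A quick sketch, using the round figures from the paper (the exact denominator isn't given, so 400,000 is an approximation):

```python
births = 400_000   # singleton live births, WA, 1984-1999 (round figure)
asd = 1_179        # children with an ASD diagnosis
id_no_asd = 4_500  # ID without ASD (approximate)

asd_rate = asd / births
print(f"ASD: {asd_rate:.2%}, about 1 in {round(births / asd)}")  # ~0.29%, 1 in 339

id_rate = id_no_asd / births
print(f"ID without ASD: {id_rate:.2%}")  # just over 1%
```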

So what did they find? Well, a whole bunch, and it's all interesting. Bullet point time.

  • Between 1984 and 1999, rates of ID without ASD fell and rates of ASD rose, although there was a curious sudden fall in the rate of ASD without ID just before the end of the study. In 1984, "mild-moderate ID" without autism was by far the most common diagnosis, with 10 times the rate of anything else. By 1999, it was exactly level with ASD+ID, and ASD without ID was close behind. Here's the graph; note the logarithmic scale:

  • Boys had a much higher rate of autism than girls, especially when it came to autism without ID. This has been known for a long time.
  • Second- and third- born children had a higher rate of ID, and a lower rate of ASD, compared to firstborns.
  • Older mothers had children with more autism - both autism with and without ID, but the trend was bigger for autism with ID. But they had less ID. For fathers, the trend was the same and the effect was even bigger. Older parents are more likely to have autistic children but less likely to have kids with ID.

  • Richer parents had a strongly reduced likelihood of ID. Rates of ASD with ID were completely flat across income groups, but rates of ASD without ID were raised in the richer groups, though the trend was not linear (the middle groups were highest) and the effect was small.
To summarize: the risk factors for autism were in most cases the exact opposite of those for ID. The more “advantaged” parental traits like being richer, and being older, were associated with more autism, but less ID. And as time went on, diagnosed rates of ASD rose while rates of ID fell (though only slightly for severe ID).

Why is this? The simplest explanation would be that there are many children out there for whom it's not easy to determine whether they have ASD or ID. Which diagnosis any such child gets would then depend on cultural and sociological factors - broadly speaking, whether clinicians are willing to give (and parents willing to accept) one or the other.

The authors note that autism has become a less stigmatized condition in Australia recently. Nowadays, they say, a diagnosis of ASD may be preferable to a diagnosis of "just" "plain old" ID, in terms of access to financial support amongst other things. However, it is also harder to get a diagnosis of ASD, as it requires you to go through a more extensive and complex series of assessments.

Clearly some parents will be better able to achieve this than others. In other countries, like South Korea, autism is still one of the most stigmatized conditions of childhood, and we'd expect that there, the trend would be reversed.

The authors also note the theory that autism rates are rising because of some kind of environmental toxin causing brain damage, like mercury or vaccinations. However, as they point out, this would probably cause more of all neurological/behavioural disorders, including ID; at the least it wouldn't reduce the rates of any.

These data clearly show that rates of ID fell almost exactly in parallel with rates of ASD rising, in Western Australia over this 15 year period. What will the vaccine-vexed folks over at Age of Autism make of this study, one wonders?

Leonard H, Glasson E, Nassar N, Whitehouse A, Bebbington A, Bourke J, Jacoby P, Dixon G, Malacova E, Bower C, & Stanley F (2011). Autism and intellectual disability are differentially related to sociodemographic background at birth. PLoS ONE, 6 (3). PMID: 21479223

BBC: Something Happened, For Some Reason

According to the BBC, the British recession and spending cuts are making us all depressed.


They found that between 2006 and 2010, prescriptions for SSRI antidepressants rose by 43%. They attribute this to a rise in the rates of depression caused by the financial crisis. OK there are a few caveats, but this is the clear message of an article titled Money woes 'linked to rise in depression'. To get this data they used the Freedom of Information Act.

What they don't do is to provide any of the raw data. So we just have to take their word for it. Maybe someone ought to use the Freedom of Information Act to make them tell us? This is important, because while I'll take the BBC's word about the SSRI rise of 43%, they also say that rates of other antidepressants rose - but they don't say which ones, by how much, or anything else. They don't say how many fell, or stayed flat.

Given which it's impossible to know what to make of this. Here are some alternative explanations:

  • This just represents the continuation of the well-known trend, seen in the USA and Europe as well as the UK, for increasing antidepressant use. This is my personal best guess and Ben Goldacre points out that rates rose 36% during the boom years of 2000-2005.
  • Depression has not got more common, it's just that it's more likely to be treated. This overlaps with the first theory. Support for this comes from the fact that suicide rates haven't risen - at least not by anywhere near 40%.
  • Mental illness is no more likely to be treated, but it's more likely to be treated with antidepressants, as opposed to other drugs. There was, and is, a move to get people off drugs like benzodiazepines, and onto antidepressants. However I suspect this process is largely complete now.
  • Total antidepressant use isn't rising, but SSRI use is, because doctors increasingly prescribe SSRIs as opposed to other drugs. This was another Ben Goldacre suggestion and it is surely a factor, although again, I suspect that this process was largely complete by 2007.
  • People are more likely to be taking multiple different antidepressants, which would manifest as a rise in prescriptions, even if the total number of users stayed constant. Add-on treatment with mirtazapine and others is becoming more popular.
  • People are staying on antidepressants for longer meaning more prescriptions. This might not even mean that they're staying ill for longer, it might just mean that doctors are getting better at convincing people to keep taking them by e.g. prescribing drugs with milder side effects, or by referring people for psychotherapy which could increase use by keeping people "in the system" and taking their medication. This is very likely. I previously blogged about a paper showing that in 1993 to 2005, antidepressant prescriptions rose although rates of depression fell, because of a small rise in the number of people taking them for very long periods.
  • Mental illness rates are rising, but it's not depression: it's anxiety, or something else. Entirely plausible since we know that many people taking antidepressants, in the USA, have no diagnosable depression and even no diagnosable psychiatric disorder at all.
  • People are relying on the NHS to prescribe them drugs, as opposed to private doctors, because they can't afford to go private. Private medicine in the UK is only a small sector so this is unlikely to account for much but it's the kind of thing you need to think about.
  • Rates of depression have risen, but it's nothing to do with the economy, it's something else which happened between 2007 and 2010: the Premiership of Gordon Brown? The assassination of Benazir Bhutto? The discovery of a 2,100 year old Japanese melon?
Personally, my money's on the melon.

Neurology vs Psychiatry

Neurology and psychiatry are related fields - if for no other reason, because neurological disorders can often manifest as, and get misdiagnosed as, psychiatric ones.

But what's the borderline between neurology and psychiatry? What makes one disease "neurological" and another "mental"? Are some psychiatric disorders more "neurological" than others?

It's a rather philosophical question and you could discuss it for as long as you wanted. Rather than doing that I thought I'd have a look to see which disorders are, at the moment, considered to fall into each category.

To do this I did a quick search of the archives of two journals: Neurology, which is the world's leading journal of... well, guess, and the American Journal of Psychiatry. I looked to see how many papers from the past 20 years had either a Title or an Abstract which referred to various different diseases. You can see the results above. Note that the total number of papers varied, obviously, and I've only plotted the proportions.

Some interesting results. Schizophrenia, which is probably considered "the most neurological" psychiatric disorder, is in fact the least talked about in Neurology. Depression is top amongst the "core" psychiatric ones.

Autism occupies a middle ground, discussed by psychiatrists at 70% and neurologists at 30%. That didn't surprise me, but what did was that ADHD is almost as neurological as autism. Mental retardation is also intermediate, though it's 30:70 in favour of neurology. Whether autism is really less neurological than mental retardation, is a good question.

Then out of the disorders with a known neuropathology, Alzheimer's disease, Huntington's disease and "dementia" (which overlaps with Alzheimer's) are a bit psychiatric while stuff like headache and epilepsy is almost 100% neurological. Why this is, is not entirely clear, since both dementia and epilepsy are caused by neurological damage, and they can both cause "psychiatric" symptoms.

I suspect the difference is that it's just much harder to treat Alzheimer's, Huntington's and dementia. With epilepsy or meningitis, neurologists have a very good chance of controlling the symptoms and few patients will be left with ongoing psychiatric problems. But with the neurodegenerative disorders, neurologists can't really do much, leaving a large pool of people for psychiatrists to study.

Someone once said that neurologists take all of the curable diseases and leave psychiatrists with the ones they can't help. These figures suggest that there may be some truth in this.

"1 Boring Old Man" Blog Isn't

Just wanted to let everyone know about a blog called 1 boring old man, which is a very poor name as it isn't boring at all.


I don't know if it's written by an old man or not, one can only assume so, but whoever writes it, it has got a lot of extremely good stuff about psychiatry and psychiatric drugs. Fans of Daniel Carlat's blog or even former readers of the now seemingly defunct Furious Seasons will find it extremely interesting.

It's actually been going since 2005, but for some reason I've only just found out about it (many thanks to regular Neuroskeptic commentator Bernard Carroll).

A Stroke Of Good Fortune Cures OCD?

A 45 year old female teacher had a history of severe obsessive-compulsive disorder, along with other problems including ADHD. Her daughter, and many other people in her family, had suffered the same problems and in a few cases had Tourette's Syndrome. But all that changed - when she suffered a stroke. This is according to a brief case report from Drs. Diamond and Ondo of Texas:

[she] had a long history of constant intrusive and obsessive thoughts that interrupted her daily activities and sleep. She had constant unfounded fears that something bad would happen to her family and had persistent violent thoughts of using knives to harm family members. She would check the door locks up to 15 times a day. In addition to her OCD symptoms, she had ... inattention, poor concentration, and difficulty sitting still.
She had never been treated for the OCD, despite how it interfered with her life, because she feared losing her job as a teacher if she sought psychiatric help. But then...
Nine months before approaching us, she developed the acute onset of paresthesia [weird sensations] and weakness in the left upper extremity and face, associated with slurred speech. Initially, she was unable to lift her arm against gravity.
These are classic signs of a stroke, but it was a very mild one, because the symptoms only lasted a few minutes and were pretty much gone even before she arrived at the emergency room. She made a full recovery. More than a full recovery in fact:
Within weeks of her stroke, she realized that her obsessive and intrusive thoughts, fears, rituals, and impulsive behavior had completely resolved. In addition, there was some improvement in her temperament. There was no improvement in attention or concentration. Owing to her improvement in neuropsychiatric symptoms, she strongly felt that her stroke was beneficial. These benefits have persisted for 24 months.
Most medical case reports concern patients who died, or got really sick, in a particularly interesting fashion, but this one has a happy ending. Strokes can be devastating, of course, although people also make full recoveries - it all depends on the severity of the stroke, and whether they get prompt treatment.

There have been a few other cases of brain damage which brought unexpectedly beneficial effects. In Vietnam veterans, for example, people with damage to the vmPFC due to combat trauma seemed to be protected from depression.

Whether the stroke really cured her, or whether it was some kind of psychological "placebo" effect, we'll never know. It's hard to see why a stroke would have a placebo effect, but on the other hand, an MRI scan revealed that the stroke occurred in an area of the brain - the right frontoparietal cortex - which is fairly low down on the list of "OCD-ish" areas.

The authors make some vague comments about "modulation of the cortical–subcortical circuits" but this is really the neuroscientific equivalent of saying "We guess it did something", because the entire brain is made of cortical-subcortical circuits, given that the cortex is at the top and everything else is, by definition, the sub-cortex. It's quite possible. But we really can't tell.

Diamond A, & Ondo WG (2011). Resolution of Severe Obsessive-Compulsive Disorder After a Small Unilateral Nondominant Frontoparietal Infarct. The International Journal of Neuroscience. PMID: 21426244

Depressed or Bereaved? (Part 2)

In Part 1, I discussed a paper by Jerome Wakefield examining the issue of where to draw the line between normal grief and clinical depression.


The line moved in the American Psychiatric Association's DSM diagnostic system when the previous DSM-III edition was replaced by the current DSM-IV. Specifically, the "bereavement exclusion" was made narrower.

The bereavement exclusion says that you shouldn't diagnose depression in someone whose "depressive" symptoms are a result of grief - unless they're particularly severe or prolonged, in which case you should. DSM-IV lowered the bar for "severe" and "prolonged", thus making grief more likely to be classed as depression. Wakefield argued that the change made things worse.

But DSM-V is on its way soon. The draft was put up online in 2010, and it turns out that depression is to have no bereavement exclusion at all. Grief can be diagnosed as depression in exactly the same way as depressive symptoms which come out of the blue.

The draft itself offered just one sentence by way of justification for this. However, big cheese psychiatrist Kenneth S. Kendler recently posted a brief note defending the decision. Wakefield has just published a rather longer paper in response.

Wakefield starts off with a bit of scholarly kung-fu. Kendler says that the precursors to the modern DSM, the 1972 Feighner and 1975 RDC criteria, didn't have a bereavement clause for depression either. But they did - albeit not in the criteria themselves, but in the accompanying how-to manuals; the criteria themselves weren't meant to be self-contained, unlike the DSM. Ouch! And so on.

Kendler's sole substantive argument against the exclusion is that it is "not logically defensible" to exclude depression induced by bereavement, if we don't have a similar provision for depression following other severe loss or traumatic events, like becoming unemployed or being diagnosed with cancer.

Wakefield responds that, yes, he has long made exactly that point, and that in his view we should take the context into account, rather than just looking at the symptoms, in grief and many other cases. However, as he points out, it is better to do this for one class of events (bereavement), than for none at all. He quotes Emerson's famous warning that "A foolish consistency is the hobgoblin of little minds". It's better to be partly right, than consistently wrong.

Personally, I'm sympathetic to Wakefield's argument that the bereavement exclusion should be extended to cover non-bereavement events, but I'm also concerned that this could lead to underdiagnosis if it relied too much on self-report.

The problem is that depression usually feels like it's been caused by something that's happened, but this doesn't mean it was; one of the most insidious features of depression is that it makes things seem much worse than they actually are, so it seems like the depression is an appropriate reaction to real difficulties, when to anyone else, or to yourself looking back on it after recovery, it was completely out of proportion. So it's a tricky one.

Anyway, back to bereavement; Kendler curiously ends up by agreeing that there ought to be a bereavement clause - in practice. He says that just because someone meets criteria for depression does not mean we have to treat them:

...diagnosis in psychiatry as in the rest of medicine provides the possibility but by no means the requirement that treatment be initiated ... a good psychiatrist, on seeing an individual with major depression after bereavement, would start with a diagnostic evaluation.

If the criteria for major depression are met, then he or she would then have the opportunity to assess whether a conservative watch and wait approach is indicated or whether, because of suicidal ideation, major role impairment or a substantial clinical worsening the benefits of treatment outweigh the limitations.
The final sentence is lifted almost word for word from the current bereavement clause, so this seems to be an admission that the exclusion is, after all, valid, as part of the clinical decision-making process, rather than the diagnostic system.

OK, but as Wakefield points out, why misdiagnose people if you can help it? It seems to be tempting fate. Kendler says that a "good psychiatrist" wouldn't treat normal, uncomplicated bereavement as depression. But what about the bad ones? Why on earth would you deliberately make your system such that good psychiatrists would ignore it?

More importantly, scrapping the bereavement criterion would render the whole concept of Major Depression meaningless. Almost everyone suffers grief at some point in their lives. Already, 40% of people meet criteria for depression by age 32, and that's with a bereavement exclusion.

Scrap it and, I don't know, 80% will meet criteria by that age - so the criteria will be useless as a guide to identifying the people who actually have depression as opposed to the ones who have just suffered grief. We're already not far off that point, but this would really take the biscuit.

Wakefield JC (2011). Should Uncomplicated Bereavement-Related Depression Be Reclassified as a Disorder in the DSM-5? The Journal of Nervous and Mental Disease, 199 (3), 203-8. PMID: 21346493

Black Bile and Black Dogs

Depression is black. That's been the view of Western culture ever since the ancient Greeks, with their concept of "melan cholia" (μελαγχολία) - black bile. The idea was that psychological states were associated with particular bodily fluids; melancholy was associated with the "black bile" of the spleen, as opposed to the go-getting, passionate "yellow bile" of the gall-bladder.

What this "black bile" (melan chole) actually was is rather mysterious. The gall bladder does indeed produce bile, a digestive juice which is greenish-yellow, but the spleen doesn't secrete anything as such. It itself is a dark greyish-purple, which might have given rise to the idea that it contained something black. Here's another theory.

The other color associated with depression is blue, of course, as in The Blues. However, when picturing depression-blue, I think most people generally see it as something rather close to black. It's the sky at twilight, not a bright summer's day, right? It's not a happy blue.

Winston Churchill famously referred to his depression as his Black Dog. There's a rather nice correspondence here with Chinese, though I doubt Churchill knew it. The Chinese character for black is 黑, and one of the characters for dog is 犬.
Write these as two separate characters and it says, well, black dog (badly). But there's another character, 默, which consists of "black" and "dog" combined:

This means silence; quiet; speechless; mute.

This is as good a one-word description of depression as any. Churchill's metaphor has always struck me as slightly misleading in one sense (although it's excellent in others): depression is not a thing; not even a black one. It is a lack, of motivation, energy, joy, imagination; you don't wake up and feel depressed, you wake up depressed and feel terrible, but the depression is hidden, only evident in retrospect, just as you don't tend to notice how quiet it is until a noise breaks the silence.

Neural Correlates of 80s Hip Hop

A ground-breaking new study reveals the neurological basis of seminal East Coast hip-hop pioneers Run-D.M.C.

The study is Diffusion tensor imaging of the hippocampus and verbal memory performance: The RUN DMC Study, and it actually has nothing to do with hip-hop, but it does have one of the best study acronyms I have ever seen.

RUN DMC stands for the "Radboud University Nijmegen Diffusion tensor and Magnetic resonance imaging Cohort study".

Or maybe it does relate to rapping. Because the paper is about verbal memory, and if there's one thing a rapper needs, it's a good memory for words, otherwise they'd forget their lyrics and... OK no, it doesn't relate to hip-hop.

It is however a very nice piece of research. They took no fewer than 503 elderly people - making this by far the single biggest neuroimaging study I have ever read. They used diffusion tensor imaging (DTI), an extremely clever technique which measures the integrity of white-matter pathways, and correlated white-matter quality with verbal memory function.

The theory behind the study is that in elderly people, white matter often shows degeneration. This is thought to be caused by vascular disease - problems with the blood flow to the brain, such as cerebral small-vessel disease, which means, essentially, a series of mild strokes. These often go unnoticed at the time, but they build up to cause brain damage, specifically white-matter disruption.

The symptoms of this are extremely varied and can range from cognitive and memory impairment, to depression, to motor problems (clumsiness), all depending on where in the brain it happens.

All of the people in this study had cerebral small-vessel disease as defined on the basis of symptoms and the presence of visible white matter lesions on the basic MRI scan. The authors found that the integrity of the white matter tracts in the area of the hippocampus, as measured with DTI, correlated with performance on a simple word learning task:


The healthier the hippocampal white matter, the better people did on the task. This makes sense as the hippocampus is a well known memory centre. This is only a correlation, and doesn't prove that the hippocampal damage caused the memory problems, but it seems entirely plausible. The authors controlled for things like age, gender, and the size of the hippocampus, as far as possible.

Should we all be worried about our white matter when we get older? Quite possibly - but luckily, the risk factors for vascular disease are quite well understood, and many of them are things you can change by having a healthy lifestyle.

Smoking is bad news, as are hypertension (high blood pressure), obesity, and high cholesterol. Diabetes is also a risk factor. So you should quit smoking, eat well, and ensure that you're getting tested and if necessary treated for hypertension and diabetes. All of which, of course, is a good idea from the point of view of general health as well.




van Norden AG, de Laat KF, Fick I, van Uden IW, van Oudheusden LJ, Gons RA, Norris DG, Zwiers MP, Kessels RP, & de Leeuw FE (2011). Diffusion tensor imaging of the hippocampus and verbal memory performance: The RUN DMC Study. Human Brain Mapping. PMID: 21391278

Depressed Or Bereaved? (Part 1)

Part 2 is now out here.

My cat died on Tuesday. She may have been a manipulative psychopath, but she was a likeable one. She was 18. On that note, here's a paper about bereavement.

It's been recognized since forever that clinical depression is similar, in many ways, to the experience of grief. Freud wrote about it in 1917, and it was an ancient idea even then. So psychiatrists have long thought that symptoms which would indicate depression in someone who wasn't bereaved can be quite normal and healthy as a response to the loss of a loved one. You can't go around diagnosing depression purely on the basis of the symptoms, out of context.

On the other hand, sometimes grief does become pathological - it triggers depression. So equally, you can't just decide to never diagnose depression in the bereaved. How do you tell the difference between "normal" and "complicated" grief, though? This is where opinions differ.

Jerome Wakefield (of Loss of Sadness fame) and colleagues compared two methods. They looked at the NCS survey of the American population, and took everyone who'd suffered a possible depressive episode following bereavement. There were 156 of these.

They then divided these cases into "complicated" grief (depression) vs "uncomplicated" grief, first using the older DSM-III-R criteria, and then with the current DSM-IV ones. Both have a bereavement exclusion for the depression criteria - don't diagnose depression if it's bereavement - but both also have criteria for "complicated" grief, which does count as depression: exclusions to the exclusion.

The systems differ in two major ways: the older criteria were ambiguous but at the time, they were generally interpreted to mean that you needed to have two features out of a possible five; prolonged duration was one of the list and anything over 12 months was considered "prolonged". In DSM-IV, however, you only need one criterion, and anything over 2 months is prolonged.
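The difference between the two rule sets can be sketched schematically. This is a toy model of the description above, not the actual DSM text - the real criteria involve clinical judgment, and the feature list is simplified away here:

```python
def complicated_dsm3r(other_features: int, duration_months: float) -> bool:
    """DSM-III-R, as generally interpreted: two features needed in total,
    with duration counting as one feature only if over 12 months."""
    prolonged = duration_months > 12
    return other_features + int(prolonged) >= 2

def complicated_dsm4(other_features: int, duration_months: float) -> bool:
    """DSM-IV: a single feature suffices, and over 2 months is 'prolonged'."""
    prolonged = duration_months > 2
    return other_features + int(prolonged) >= 1

# Grief lasting 6 months with no other worrying features:
print(complicated_dsm3r(0, 6))  # False - still "uncomplicated"
print(complicated_dsm4(0, 6))   # True - now counts as depression
```

Which is why DSM-IV flags so many more episodes: duration alone, at anything over two months, is enough.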

What happened? DSM-IV classified many more cases as complicated than the older criteria - 80% vs 45%. No surprise there, because the DSM-IV criteria are obviously a lot broader. But which was better? In order to evaluate them, they compared the "complicated" vs "normal" episodes on six hallmarks of clinical depression - melancholic features, seeking medical treatment, etc.

They found that "complicated" cases were more severe under both criteria but the difference was much more clear cut using DSM-III-R.

Wakefield et al are not saying that the DSM-III-R criteria were perfect. However, it was better at identifying the severe cases than the DSM-IV, which is worrying because DSM-IV was meant to be an improvement on the old system.

Hang on though. DSM-V is coming soon. Are they planning to put things back to how they were, or invent an even better system? No. They're planning to, er, get rid of the bereavement criteria altogether and treat bereavement just like non-bereavement. Seriously. In other words they are planning to diagnose depression purely on the basis of the symptoms, out of context.

Which is so crazy that Wakefield has written another paper all about it (he's been busy recently), which I'm going to cover in an upcoming post. So stay tuned.

Wakefield JC, Schmitz MF, & Baer JC (2011). Did narrowing the major depression bereavement exclusion from DSM-III-R to DSM-IV increase validity? The Journal of Nervous and Mental Disease, 199 (2), 66-73. PMID: 21278534

Paxil: The Whole Truth?

Paroxetine, aka Paxil aka Seroxat, is an SSRI antidepressant.

Like other SSRIs, its reputation has see-sawed over time. Hailed as miracle drugs in the 1990s and promoted for everything from depression to "separation anxiety" in dogs, they fell from grace over the past decade.

First, concerns emerged over withdrawal symptoms and suicidality especially in young people. Then more recently their antidepressant efficacy came into serious question. Paroxetine has arguably the worst image of all SSRIs, although whether it's much different to the rest is unclear.

Now a new paper claims to provide a definitive assessment of the safety and efficacy of paroxetine in adults (age 18+). The lead authors are from GlaxoSmithKline, who invented paroxetine. So it's no surprise that the text paints GSK and their product in a favourable light, but the data warrant a close look and the results are rather interesting - and complicated.

They took all of the placebo-controlled trials of paroxetine for any psychiatric disorder - because it wasn't just trialled in depression, but also in PTSD, anxiety, and more. They excluded studies with fewer than 30 people; this makes sense, though it's somewhat arbitrary - why not 40 or 20? Anyway, they ended up with 61 trials.

First they looked at suicide. In a nutshell paroxetine increased suicidal "behaviour or ideation" in younger patients (age 25 or below) relative to placebo, whether or not they were being treated for depression. In older patients, it only increased suicidality in the depression trials, and the effect was smaller. I've put a red dot where paroxetine was worse than placebo; this doesn't mean the effect was "statistically significant", but the numbers are so small that this is fairly meaningless. Just look at the numbers.

This is not very new. It's been accepted for a while that broadly the same applies when you look at trials of other antidepressants. Whether this causes extra suicides in the real world is a big question.

When it comes to efficacy, however, we find some rather startling info that's not been presented together in one article before, to my knowledge. Here's a graph showing the effect of paroxetine over-and-above placebo in all the different disorders, expressed as a proportion of the improvement seen in the placebo group.

Now I should point out that I just made this measure up. It's not ideal. If the placebo response is very small, then a tiny drug effect will seem large by comparison, even if what this really means is that neither drug nor placebo do any good.

However, the flip side of that coin is that it controls for the fact that rating scales for different disorders might just be more likely to show change than others. The d score is a more widely used standardized measure of effect size - though it has its own shortcomings - and I'd like to know those values, but the data they provide don't allow us to easily calculate it. You could do it from the GSK database, but it would take ages.
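For reference, the d score here is the standardized mean difference (Cohen's d); computing it needs the group means and a pooled standard deviation, which is why it can't easily be recovered from summary data:

```latex
d = \frac{\bar{x}_{\text{drug}} - \bar{x}_{\text{placebo}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
```

Without the per-group standard deviations $s_1, s_2$, the ratio-to-placebo measure above is about the best one can do from the published numbers.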

Anyway, as you can see, paroxetine was better, relative to placebo, against PTSD, PMDD, obsessive-compulsive disorder, and social anxiety than it was against depression measured with the "gold-standard" HAMD scale! In fact, the only thing it was worse against was Generalized Anxiety Disorder. Using the alternative MADRS depression scale, the antidepressant effect was bigger, but still small compared to OCD and social anxiety.

This is rather remarkable. Everyone calls paroxetine "an antidepressant", yet at least in one important sense it works better against OCD and social anxiety than it does against depression!

In fact, is paroxetine an antidepressant at all? It works better on MADRS and very poorly on the HAMD; is this because the HAMD is a better scale of depression, and the MADRS actually measures anxiety or OCD symptoms?

That's a lovely neat theory... but in fact the HAMD-17 has two questions about anxiety, scoring 0-4 points each, so you can score up to 8 (or 12 if you count "hypochondriasis", which is basically health anxiety, so you probably should), out of a total maximum of 52. The MADRS has one anxiety item with a maximum score of 6, out of a total of 60. So the HAMD is more "anxious" than the MADRS.
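The scale arithmetic can be checked directly (item counts and maxima as stated above):

```python
# HAMD-17: two anxiety items scored 0-4 each, plus "hypochondriasis"
# (also 0-4) if you count it; total maximum score 52.
hamd_anxiety = 2 * 4                        # 8 points
hamd_anxiety_with_hypo = hamd_anxiety + 4   # 12 points
hamd_total = 52

# MADRS: one anxiety item with a maximum of 6; total maximum 60.
madrs_anxiety = 6
madrs_total = 60

print(round(hamd_anxiety / hamd_total, 3))            # 0.154
print(round(hamd_anxiety_with_hypo / hamd_total, 3))  # 0.231
print(round(madrs_anxiety / madrs_total, 3))          # 0.1
# Either way, anxiety makes up a larger share of the HAMD than of the MADRS.
```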

This is more than just a curiosity. Paroxetine's antidepressant effect was tiny in those aged 25 or under on the HAMD - the treatment effect was just 9% of the placebo effect - but on the MADRS in the same age group, the benefit was 35%! So what is the HAMD measuring, and why is it different to the MADRS?

Honestly, it's hard to tell because the Hamilton scale is so messy. It measures depression and the other distressing symptoms which commonly go along with it. The idea, I think, was that it was meant to be a scale of the patient's overall clinical severity - how seriously they were suffering - rather than a measure of depression per se.

Which is fine. Except that most modern trials carefully exclude anyone with "comorbid" symptoms like anxiety, and on the other hand, recruit people with symptoms quite different to the depressed inpatients that Dr Max Hamilton would have seen when he invented the scale in 1960.

Yet 50 years later the HAMD-17, unmodified, is still the standard scale. It's been repeatedly shown to be multi-factorial (it doesn't measure one thing), no-one even agrees on how to interpret it, and a "new" scale, the HAMD-6 - which consists of simply chucking out 11 questions and keeping the 6 that actually measure depression - has been shown to be better. Yet everyone still uses the HAMD-17 because everyone else does.

Link: I recently covered a dodgy paper about paroxetine in adolescents with depression; it wasn't included in this analysis because this was about adults.

Carpenter DJ, Fong R, Kraus JE, Davies JT, Moore C, & Thase ME (2011). Meta-analysis of efficacy and treatment-emergent suicidality in adults by psychiatric indication and age subgroup following initiation of paroxetine therapy: a complete set of randomized placebo-controlled trials. The Journal of Clinical Psychiatry. PMID: 21367354

Amy Bishop, Neuroscientist Turned Killer

Across at Wired, Amy Wallace has a long but riveting article about Amy Bishop, the neuroscience professor who shot her colleagues at the University of Alabama last year, killing three.

It's a fascinating article because of the picture it paints of a killer and it's well worth the time to read. Yet it doesn't really answer the question posed in the title: "What Made This University Scientist Snap?"

Wallace notes the theory that Bishop snapped because she was denied tenure at the University, a serious blow to anyone's career, and especially to someone who apparently believed she was destined for great things. However, she points out that the timing doesn't fit: Bishop was denied tenure several months before the shooting. And she shot at some of the faculty who voted in her favor, ruling out a simple "revenge" motive.

But even if Bishop had snapped the day after she found out about the tenure decision, what would that explain? Thousands of people are denied tenure every year. This has been going on for decades. No-one except Bishop has ever decided to pick up a gun in response.

Bishop had always displayed a streak of senseless violence; in 1986, she killed her 18-year-old brother with a shotgun in her own kitchen. She was 21. The death was ruled an accident, but probably wasn't. It's not clear what it was, though: Bishop had no clear motive.

Amy had said something that upset her father. That morning they’d squabbled, and at about 11:30 am, Sam, a film professor at Northeastern University, left the family’s Victorian home to go shopping... Amy, 21, was in her bedroom upstairs. She was worried about “robbers,” she would later tell the police. So she loaded her father’s 12-gauge pump-action shotgun and accidentally discharged a round in her room. The blast struck a lamp and a mirror and blew a hole in the wall...

The gun, a Mossberg model 500A, holds multiple rounds and must be pumped after each discharge to chamber another shell. Bishop had loaded the gun with number-four lead shot. After firing the round into the wall, she could have put the weapon aside. Instead, she took it downstairs and walked into the kitchen. At some point, she pumped the gun, chambering another round.

...[her mother] told police she was at the sink and Seth was by the stove when Amy appeared. “I have a shell in the gun, and I don’t know how to unload it,” Judy told police her daughter said. Judy continued, “I told Amy not to point the gun at anybody. Amy turned toward her brother and the gun fired, hitting him.”

Years later Bishop, possibly with the help of her husband, sent a letter-bomb to a researcher who'd sacked her, Paul Rosenberg. Rosenberg avoided setting off the suspicious package and police disarmed it; Bishop was questioned, but never charged.

Wallace argues that Bishop's "eccentricity", or instability, was fairly evident to those who knew her but that in the environment of science, it went unquestioned because science is full of eccentrics.

I'm not sure this holds up. It's certainly true that science has more than its fair share of oddballs. The "mad scientist" trope is a stereotype, but it has its basis in fact, and has done at least since Newton; many say that you can't be a great scientist and be entirely 'normal'.

But the problem with this, as a theory for why Bishop wasn't spotted sooner, is that she was spotted sooner - as unhinged, albeit not as a potential killer - by a number of people. Rosenberg sacked her, in 1993, on the grounds that her work was inadequate, and said that "Bishop just didn’t seem stable". And in 2009, part of the reason Bishop was denied tenure in Alabama was that one of her assessors referred to her as "crazy", more than once; she filed a complaint on that basis.

Bishop also published a bizarre paper in 2009 written by herself, her husband, and her three children, of "Cherokee Lab Systems", a company which was apparently nothing more than a fancy name for their house. There may be a lot of eccentrics in science, but that's really weird.

So I think that all of these attempts at an explanation fall short. Amy Bishop is a black swan; she is the first American professor to do what she did. Hundreds of thousands of scientists have been through the same academic system and only one ended up shooting their colleagues. If there is an explanation, it lies within Bishop herself.

Whether she was suffering from a diagnosable mental illness is unclear. Her lawyer has said so, but he would; it's her only defence. Maybe we'll learn more at the trial.

H/T: David Dobbs for linking to this.

The Mystery of "Whoonga"


According to a disturbing BBC news story, South African drug addicts are stealing medication from HIV+ people and using it to get high:

'Whoonga' threat to South African HIV patients

"Whoonga" is, allegedly, the street name for efavirenz (aka Stocrin), one of the most popular antiretroviral drugs. The pills are apparantly crushed, mixed with marijuana, and smoked for its hallucinogenic effects.

This is not, in fact, a new story; Scientific American covered it 18 months ago, and the BBC themselves did in 2008 (although they didn't name efavirenz).

Edit 16:00: In fact, the picture is even messier than I first thought. Some sources, e.g. Wikipedia and the articles it links to, mostly from South Africa, suggest that "whoonga" is actually a 'brand' of heroin, and that the antiretrovirals may not be the main ingredient, if they're an ingredient at all. If this is true, then the BBC article is misleading. Edit: see the Comments for more on this...

Why would an antiviral drug get you high? This is where things get rather mysterious. Efavirenz is known to enter the brain, unlike most other HIV drugs, and psychiatric side-effects including anxiety, depression, altered dreams, and even hallucinations are common in efavirenz use, especially with high doses (1,2,3), but they're usually mild and temporary. But what's the mechanism?

No-one knows, basically. Blank et al found that efavirenz causes a positive result on urine screening for benzodiazepines (like Valium). This makes sense given the chemical structure:
Efavirenz is not a benzodiazepine, because it doesn't have the defining diazepine ring (the one with two Ns). However, as you can see, it has a lot in common with certain benzos such as oxazepam and lorazepam.

However, while this might well explain why it confuses urine tests, it doesn't by itself go far to explaining the reported psychoactive effects. Oxazepam and lorazepam don't cause hallucinations or psychosis, and they reduce anxiety, rather than causing it.

They also found that efavirenz caused a false positive for THC, the active ingredient in marijuana; this was probably caused by the glucuronide metabolite. Could this metabolite have marijuana-like effects? No-one knows at present.

Beyond that there's been little research on the effects of efavirenz in the brain. This 2010 paper reviewed the literature and found almost nothing. There were some suggestions that it might affect inflammatory cytokines or creatine kinase, but these are not obvious candidates for the reported effects.

Could the liver be responsible, rather than the brain? Interestingly, the 2010 paper says that efavirenz inhibits three liver enzymes: CYPs 2C9, 2C19, and 3A4. All three are involved in the breakdown of THC, so, in theory, efavirenz might boost the effects of marijuana by this mechanism - but that wouldn't explain the psychiatric side effects seen in people who are taking the drug for HIV and don't smoke weed.

Drugs that cause hallucinations generally either agonize 5HT2A receptors or block NMDA receptors. Off the top of my head, I can't see any similarities between efavirenz and drugs that target those systems, like LSD (5HT2A) or ketamine and PCP (NMDA), but I'm no chemist, and anyway, structural similarity is not always a good guide to what drugs do.

If I were interested in working out what's going on with efavirenz, I'd start by looking at GABA, the neurotransmitter that's the target of benzos. Maybe the almost-a-benzodiazepine-but-not-quite structure means that it causes some unusual effects on GABA receptors? No-one knows at present. Then I'd move on to 5HT2A and NMDA receptors.

Finally, it's always possible that the users are just getting stoned on cannabis and mistakenly thinking that the efavirenz is making it better through the placebo effect. Stranger things have happened. If so, it would make the whole situation even more tragic than it already is.

Cavalcante GI, Capistrano VL, Cavalcante FS, Vasconcelos SM, Macêdo DS, Sousa FC, Woods DJ, & Fonteles MM (2010). Implications of efavirenz for neuropsychiatry: a review. The International Journal of Neuroscience, 120 (12), 739-45. PMID: 20964556

The Web of Morgellons

A fascinating new paper: Morgellons Disease, or Antipsychotic-Responsive Delusional Parasitosis, in an HIV Patient: Beliefs in The Age of the Internet

“Mr. A” was a 43-year-old man...His most pressing medical complaint was worrisome fatigue. He was not depressed...had no formal psychiatric history, no family psychiatric history, and he was a successful businessman.

He was referred to the psychiatry department by his primary-care physician (PCP) because of a 2-year-long complaint of pruritus [itching] accompanied by the belief of being infested with parasites. Numerous visits to the infectious disease clinic and an extensive medical work-up...had not uncovered any medical disorder, to the patient’s great frustration.

Although no parasites were ever trapped, Mr. A caused skin damage by probing for them and by applying topical solutions such as hydrogen peroxide to “bring them to the surface.” After reading about Morgellons disease on the Internet, he “recalled” extruding particles from his skin, including “dirt” and “fuzz.”

During the initial consultation visit with the psychiatrist, Mr. A was apprehensive but cautiously optimistic that a medication could help. The psychiatrist had been forewarned by the PCP that the patient had discovered a website describing Morgellons and “latched onto” this diagnosis.

However, it was notable that the patient allowed the possibility (“30%”) that he was suffering from delusions (and not Morgellons), mostly because he trusted his PCP, “who has taken very good care of me for many years.”

The patient agreed to a risperidone [an antipsychotic] trial of up to 2 mg per day. [i.e. a lowish dose]. Within weeks, his preoccupation with being infested lessened significantly... Although not 100% convinced that he might not have Morgellons disease, he is no longer pruritic and is no longer damaging his skin or trying to trap insects. He remains greatly improved 1 year later.
(Mr A. had also been HIV+ for 20 years, but he still had good immune function and the HIV may have had nothing to do with the case.)

"Morgellons" is, according to people who say they suffer from it, a mysterious disease characterised by the feeling of parasites or insects moving underneath the skin, accompanied by skin lesions out of which emerge strange, brightly-coloured fibres or threads. Other symptoms include fatigue, aches and pains, and difficulty concentrating.

According to almost all doctors, there are no parasites, the lesions are caused by the patient's own scratching or attempts to dig out the non-existent critters, and the fibres come from clothes, carpets, or other textiles which the patient has somehow inserted into their own skin. It may seem unbelievable that someone could do this "unconsciously", but stranger things have happened.

As the authors of this paper, Freudenreich et al, say, Morgellons is a disease of the internet age. It was "discovered" in 2002 by a Mary Leitao, with Patient Zero being her own 2-year-old son. Since then its fame, and the reported number of cases, have grown steadily - especially in California.

Delusional parasitosis is the opposite of Morgellons: doctors believe in it, but the people who have it, don't. It's seen in some mental disorders and is also quite common in abusers of certain drugs like methamphetamine. It feels like there are bugs beneath your skin. There aren't, but the belief that there are is very powerful.

This then is the raw material in most cases; what the concept of "Morgellons" adds is a theory, a social context and a set of expectations that helps make sense of the otherwise baffling symptoms. And as we know, expectations, whether positive or negative, tend to become experiences. The diagnosis doesn't create the symptoms out of nowhere but rather takes them and reshapes them into a coherent pattern.

As Freudenreich et al note, doctors may be tempted to argue with the patient - you don't have Morgellons, there's no such thing, it's absurd - but the whole point is that mainstream medicine couldn't explain the symptoms, which is why the patient turned to less orthodox ideas.

Remember the extensive tests that came up negative "to the patient’s great frustration." And remember that "delusional parasitosis" is not an explanation, just a description, of the symptoms. To diagnose someone with that is saying "We've no idea why but you've imagined this". True, maybe, but not very palatable.

Rather, they say, doctors should just suggest that maybe there's something else going on, and should prescribe a treatment on that basis. Not rejecting the patient's beliefs but saying, maybe you're right, but in my experience this treatment makes people with your condition feel better, and that's why you're here, right?

Whether the pills worked purely as a placebo or whether there was a direct pharmacological effect, we'll never know. Probably it was a bit of both. It's not clear that it's important, really. The patient improved, and it's unlikely that it would have worked as well if they'd been given in a negative atmosphere of coercion or rejection - if indeed he'd agreed to take them at all.

Morgellons is a classic case of a disease that consists of an underlying experience filtered through the lens of a socially-transmitted interpretation. But every disease is that, to a degree. Even the most rigorously "medical" conditions like cancer also come with a set of expectations and a social meaning; psychiatric disorders certainly do.

I guess Morgellons is too new to be a textbook case yet - but it should be. Everyone with an interest in the mind, everyone who treats diseases, and everyone who's ever been ill - everyone really - ought to be familiar with it because while it's an extreme case, it's not unique. "All life is here" in those tangled little fibres.

Freudenreich O, Kontos N, Tranulis C, & Cather C (2010). Morgellons disease, or antipsychotic-responsive delusional parasitosis, in an HIV patient: beliefs in the age of the Internet. Psychosomatics, 51 (6), 453-7. PMID: 21051675

WMDs vs MDD

Weapons of Mass Destruction. Nuclear, chemical and biological weapons. They're really nasty, right?

Well, some of them are. Nuclear weapons are Very Destructive Indeed. Even a tiny one, detonated in the middle of a major city, would probably kill hundreds of thousands. A medium-sized nuke could kill millions. The biggest would wipe a small country off the map in one go.

Chemical and biological weapons, on the other hand, while hardly nice, are just not on the same scale.

Sure, there are nightmare scenarios - a genetically engineered supervirus that kills a billion people - but they're hypothetical. If someone does design such a virus, then we can worry. As it is, biological weapons have never proven very useful. The 2001 US anthrax letters killed 5 people. Jared Loughner killed 6 with a gun he bought from a chain store.

Chemical weapons are little better. They were used heavily in WW1 and the Iran-Iraq War against military targets and killed many but never achieved a decisive victory, and the vast majority of deaths in these wars were caused by plain old bullets and bombs. Iraq's use of chemical weapons against Kurds in Halabja killed perhaps 5,000 - but this was a full-scale assault by an advanced air force, lasting several hours, on a defenceless population.

When a state-of-the-art nerve agent was used in the Tokyo subway attack, after much preparation by the cult responsible, who had professional chemists and advanced labs, 13 people died. In London on the 7th July 2005, terrorists killed 52 people with explosives made from haircare products.

Nuclear weapons aside, the best way to cause mass destruction is just to make an explosion, the bigger the better; yet conventional explosives, no matter how big, are not "WMDs", while chemical and biological weapons are.

So it seems to me that the term and the concept of "WMDs" is fundamentally unhelpful. It lumps together the apocalyptically powerful with the much less destructive. If you have to discuss everything except guns and explosives in one category, terms like "Unconventional weapons" are better as they avoid the misleading implication that all of these weapons are very, and equivalently, deadly; but grouping them together at all is risky.

That's WMDs. But there are plenty of other unhelpful concepts out there, some of which I've discussed previously. Take the concept of "major depressive disorder", for example. At least as the term is currently used, it lumps together extremely serious cases requiring hospitalization with mild "symptoms" which 40% of people experience by age 32.

Boy Without A Cerebellum...Has No Cerebellum

A reader pointed me to this piece:

Boy Without a Cerebellum Baffles Doctors
Argh. This is going to be a bit awkward. So I'll just say at the outset that I have nothing against kids struggling with serious illnesses and I wish them all the best.


The article's about Chase Britton, a boy who apparently lacks two important parts of the brain: the cerebellum and the pons. Despite this, the article says, Chase is a lovely kid and is determined to be as active as possible.

As I said, I am all in favor of this. However, the article runs into trouble when it starts to argue that "doctors are baffled" by this:

When he was 1 year old, doctors did an MRI, expecting to find he had a mild case of cerebral palsy. Instead, they discovered he was completely missing his cerebellum -- the part of the brain that controls motor skills, balance and emotions.

"That's when the doctor called and didn't know what to say to us," Britton said in a telephone interview. "No one had ever seen it before. And then we'd go to the neurologists and they'd say, 'That's impossible.' 'He has the MRI of a vegetable,' one of the doctors said to us."

Chase is not a vegetable, leaving doctors bewildered and experts rethinking what they thought they knew about the human brain.

They don't say which doctor made the "vegetable" comment but whoever it was deserves to be hit over the head with a large marrow because it's just not true. The cerebellum is more or less a kind of sidekick for the rest of the brain. Although it actually contains more brain cells than the rest of the brain put together (they're really small ones), it's not required for any of our basic functions such as sensation or movement.

Without it, you can still move, because movement commands are initiated in the motor cortex. Such movement is clumsy and awkward (ataxia), because the cerebellum helps to coordinate things like posture and gait, getting the timing exactly right to allow you to move smoothly. Like how your mouse makes it easy and intuitive to move the cursor around the screen.

Imagine if you had no mouse and had to move the cursor with a pair of big rusty iron levers to go left and right, up and down. It would be annoying, but eventually, maybe, you could learn to compensate.

From the footage of Chase alongside the article, it's clear that he has problems with coordination, although he's gradually learning to move despite them.

Lacking a pons, however, is another kettle of fish. The pons is part of your brainstem, and it controls, amongst other things, breathing. In fact, you (or rather your body) can survive perfectly well if the whole of your brain above the pons is removed; only the brainstem is required for vital functions.

So it seems very unlikely that Chase actually lacks a pons. The article claims that scans show that "There is only fluid where the cerebellum and pons should be" but as Steven Novella points out in his post on the case, the pons might be so shrunken that it's not easily visible - at least not in the place it normally is - yet functional remnants could remain.

As for the idea that the case is bafflingly unique, it's not, really. There are no fewer than six known types of pontocerebellar hypoplasia, caused by different genes; Novella points to a case series of children whose cerebellums seemed to develop normally in the womb, but then degenerated when they were born prematurely - which Chase was.

The article has had well over a thousand comments and has attracted lots of links from religious websites amongst others. The case seems, if you believe the article, to mean that the brain isn't all that important, almost as if there was some kind of immaterial soul at work instead... or at the very least suggesting that the brain is much more "plastic" and changeable than neuroscientists suppose.

Unfortunately, the heroic efforts that Chase has been required to make to cope with his disability suggest otherwise and as I've written before, while neuroplasticity is certainly real it has its limits.

 