Depression Treatment Increased From 1998 to 2007

A paper just out reports on the changing patterns of treatment for depression in the USA, over the period from 1998 to 2007.

The headline news is that it increased: the overall rate of people treated for some form of "depression" rose from 2.37% to 2.88% per year. That's a relative increase of 21%, which is not trivial, but it's much smaller than the rise over the previous decade: the rate was just 0.73% back in 1987.
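If you want to check that figure, the arithmetic is trivial (the two rates are as reported in the paper):

```python
# Rates of depression treatment per year, as reported by Marcus & Olfson
rate_1998 = 2.37  # percent of Americans treated in 1998
rate_2007 = 2.88  # percent treated in 2007

relative_increase = (rate_2007 - rate_1998) / rate_1998 * 100
print(f"Relative increase: {relative_increase:.1f}%")  # prints 21.5%, i.e. the ~21% figure
```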

But the increase was concentrated in some groups of people.

  • Americans over 50 accounted for the bulk of the rise. Their use went up by about 50%, while rates in younger people stayed almost steady. In '98 the peak age band was 35-49, now it's 50-64, with almost 5% of those people getting treated in any given year.
  • Men's rates of treatment went up by over 40% while women's only increased by 10%. Women are still more likely to get treated for depression than men, though, with a ratio of 1.7 women for each 1 man. But that ratio is a lot closer than it used to be.
  • Black people's rates increased hugely, by 120%. Rates in black people now stand at 2.2%, which is close behind whites at 3.2%. Hispanics are now the least-treated major ethnic group at 1.9%; in previous studies, blacks were the least treated. (There were no data on Asians or other groups.)
So the increase wasn't an across-the-board rise, as we saw from '87 to '98. Rather, the '98-'07 increase was more of a "catching up" by people who've historically had low levels of treatment, closing in on the level of the historically highest group: middle-aged white women.

In terms of what treatments people got, out of everyone treated for depression, 80% got some kind of drugs, and that didn't change much. But use of psychotherapy declined a bit from 54% to 43% (some people got both).

What's also interesting is that the same authors reported last year that, over pretty much the same time period ('96 to '05), the number of Americans who used antidepressants in any given year sky-rocketed from 5% to 10% - that is to say, much faster than the rate of depression treatment rose! And the data are comparable, because they came from the same national MEPS surveys.

In other words, the decade must have seen antidepressants increasingly being used to treat stuff other than depression. What stuff? Well, all kinds of things. SSRIs are popular in everything from anxiety and OCD to premature ejaculation. Several of the "other new" drugs, like mirtazapine and trazodone, are very good at putting you to sleep (rather too good, some users would say...)

Marcus SC, & Olfson M (2010). National trends in the treatment for depression from 1998 to 2007. Archives of General Psychiatry, 67 (12), 1265-73. PMID: 21135326

XMRV - Innocent on All Counts?

A bombshell has just gone off in the continuing debate over XMRV, the virus that may or may not cause chronic fatigue syndrome. Actually, 4 bombshells.

A set of papers out today in Retrovirology (1,2,3,4) claim that many previous studies claiming to have found the virus haven't actually been detecting XMRV at all.

Here's the rub. XMRV is a retrovirus, a class of bugs that includes HIV. Retroviruses are composed of RNA, but they can insert themselves into the genetic material of host cells as DNA. This is how they reproduce: once their DNA is part of the host cell's chromosomes, that cell ends up making more copies of the virus.

But there are lots of retroviruses out there, and there used to be yet others that are now extinct. So bits of retroviral DNA are scattered throughout the genomes of animals. These are called endogenous retroviruses (ERVs).

XMRV is extremely similar to certain ERVs found in the DNA of mice. And mice are the most popular laboratory mammals in the world. So you can see the potential problem: laboratories all over the world are full of mice, but mouse DNA might show up as "XMRV" DNA on PCR tests.

Wary virologists take precautions against this by checking specifically for mouse DNA. But most mouse-contamination tests are targeted at mouse mitochondrial DNA (mtDNA). In theory, a test for mouse mtDNA is all you need, because mtDNA is found in all mouse cells. In theory.

Now the four papers (or are they the Four Horsemen?) argue, in a nutshell, that mouse DNA shows up as "XMRV" on most of the popular tests that have been used in the past, that mouse contamination is very common - even some of the test kits are affected! - and that tests for mouse mtDNA are not good enough to detect the problem.

  • Hue et al say that "Taqman PCR primers previously described as XMRV-specific can amplify common murine ERV sequences from mouse suggesting that mouse DNA can contaminate patient samples and confound specific XMRV detection." They go on to show that some human samples previously reported as infected with XMRV are actually infected with a hybrid of XMRV and a mouse ERV, which we know can't infect humans.
  • Sato et al report that PCR testing kits from Invitrogen, a leading biotech company, are contaminated with mouse genes including an ERV almost identical to XMRV, and that this shows up as a false positive using commonly used PCR primers "specific to XMRV".
  • Oakes et al say that in 112 CFS patients and 36 healthy controls, they detected "XMRV" in some samples, but all of these samples were likely contaminated with mouse DNA because "all samples that tested positive for XMRV and/or MLV DNA were also positive for the highly abundant IAP long terminal repeat [found only in mice] and most were positive for murine mitochondrial cytochrome oxidase sequences [found only in mice]".
  • Robinson et al agree with Oakes et al: they found "XMRV" in some human samples, in this case prostate cancer cells, but they then found that all of the "infected" samples were contaminated with mouse DNA. They recommend that in future, samples should be tested for mouse genes such as the IAP long terminal repeat or cytochrome oxidase, and that researchers should not rely on tests for mouse mtDNA.
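The screening logic those last two papers recommend boils down to a simple decision rule. Here's a toy sketch of it (my own illustration, not any paper's actual pipeline):

```python
# Toy sketch (not any paper's actual pipeline) of the screen recommended by
# Oakes et al and Robinson et al: an apparent "XMRV" PCR positive is only
# trusted if the sample is negative for mouse-specific markers, and the IAP
# long terminal repeat is preferred over mouse mtDNA as that marker.

def classify_sample(xmrv_pcr_positive, iap_positive, mouse_mtdna_positive):
    if not xmrv_pcr_positive:
        return "XMRV-negative"
    if iap_positive or mouse_mtdna_positive:
        return "likely mouse contamination"
    return "candidate XMRV-positive"

# The failure mode the four papers describe: a sample can pass the mtDNA
# check and still carry mouse DNA, which the IAP test catches.
print(classify_sample(True, iap_positive=True, mouse_mtdna_positive=False))
# prints: likely mouse contamination
```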
They're all open-access so everyone can take a peek. For another overview see this summary published alongside them in Retrovirology.

I lack the technical knowledge to evaluate these claims; no doubt plenty of people will be rushing to do that before long. (Update: the excellent virologyblog has a more technical discussion of these studies.) But there are a couple of things to bear in mind.

Firstly, these papers cast doubt on tests using PCR to detect XMRV DNA. However, they don't have anything to say about studies which have looked for antibodies against XMRV in human blood, at least not directly. There haven't been many of these, but the paper which started the whole story, Lombardi et al (2009), did look for, and found, anti-XMRV immunity, and also used various other methods to support the idea that XMRV is present in humans. So this isn't an "instant knock-out" of the XMRV theory, although it's certainly a serious blow.

Secondly, if the 'mouse theory' is true, it has serious implications for the idea that XMRV causes chronic fatigue syndrome and also for the older idea that it's linked to prostate cancer. But it still leaves a mystery: why were the samples from CFS or prostate cancer patients more likely to be contaminated with mouse DNA than the samples from healthy controls?

Robert A Smith (2010). Contamination of clinical specimens with MLV-encoding nucleic acids: implications for XMRV and other candidate human retroviruses. Retrovirology. DOI: 10.1186/1742-4690-7-112

The Almond of Horror

Remember the 90s, when No Fear stuff was cool, and when people still said "cool"?

Well, a new paper has brought No Fear back, by reporting on a woman who has no fear - due to brain damage. The article, The Human Amygdala and the Induction and Experience of Fear, is brought to you by a list of neuroscientists including big names such as Antonio Damasio (of Phineas Gage fame).

The basic story is nice and simple. There's a woman, SM, who lacks a part of the brain called the amygdala. They found that she can't feel fear. Therefore, it's reasonable to assume that the amygdala's required for fear. But there's a bit more to it than that...

The amygdala is a small nugget of the brain nestled in the medial temporal lobe. The name comes from the Greek for "almond" because apparently it looks like one, though I can't say I've noticed the resemblance myself.

What does it do? Good question. There are two main schools of thought. Some think that the amygdala is responsible for the emotion of fear, while others argue that its role is much broader and that it's responsible for measuring the "salience" or importance of stimuli, which covers fear but also much else.

That's where this new paper comes in, with the patient SM. She's not a new patient: she's been studied for years, and many papers have been published about her. I wonder if her acronym doesn't stand for "Scientific Motherlode"?

She's one of the very few living cases of Urbach-Wiethe disease, an extremely rare genetic disorder which causes selective degeneration of the amygdala as well as other symptoms such as skin problems.

Previous studies on SM mostly focussed on specific aspects of her neurological function e.g. memory, perception and so on. However there have been a few studies of her "everyday" experiences and personality. Thus we learned that:

Two experienced clinical psychologists conducted "blind" interviews of SM (the psychologists were not provided any background information)... Both reached the conclusion that SM expressed a normal range of affect and emotion... However, they both noted that SM was remarkably dispassionate when relating highly emotional and traumatic life experiences... To the psychologists, SM came across as a "survivor", as being "resilient" and even "heroic".
These observations were based on interviews under normal conditions; what would happen if you actually went out of your way to try and scare her? So they did.

First, they took her to an exotic pet store and got her to meet various snakes and spiders. She was perfectly happy picking up the various critters and had to be prevented from getting too closely acquainted with the more dangerous ones.

What's fascinating is that before she went to the store, she claimed to hate snakes and spiders! Why? Before she developed Urbach-Wiethe disease, she had a normal childhood up to about the age of 10. Presumably she used to be afraid of them, and just never updated this belief, a great example of how our own narratives about our feelings can clash with our real feelings.

They subsequently confirmed that SM was fearless by taking her to a "haunted asylum" (check it out, even the website is scary) and showing her various horror movie clips, as well as through interviews with her and her son. They also describe an incredible incident from several years ago: SM was walking home late at night when she saw
A man, whom SM described as looking “drugged-out.” As she walked past the park, the man called out and motioned for her to come over. SM made her way to the park bench. As she got within arm’s reach of the man, he suddenly stood up, pulled her down to the bench by her shirt, stuck a knife to her throat, and exclaimed, “I’m going to cut you, bitch!”

SM claims that she remained calm, did not panic, and did not feel afraid. In the distance she could hear the church choir singing. She looked at the man and confidently replied, “If you’re going to kill me, you’re gonna have to go through my God’s angels first.” The man suddenly let her go. SM reports “walking” back to her home. On the following day, she walked past the same park again. There were no signs of avoidance behavior and no feelings of fear.
All this suggests that the amygdala has a key role in the experience of fear, as opposed to other emotions: there is no evidence to suggest that SM lacks the ability to experience happiness or sadness in the same way.

So this is an interesting contribution to the debate on the role of the amygdala, although we really need someone to do equally detailed studies on other Urbach-Wiethe patients to make sure that it's not just that SM happens to be unusually brave for some other reason. What's doubly interesting, though, is that Ralph Adolphs, one of the authors, has previously argued against the view of the amygdala as a "fear center".

Links: I've previously written about the psychology of horror movies and I've reviewed quite a lot of them too.

Justin S. Feinstein, Ralph Adolphs, Antonio Damasio, & Daniel Tranel (2010). The Human Amygdala and the Induction and Experience of Fear. Current Biology.

The Scanner's Prayer

MRI scanners have revolutionized medicine and provided neuroscientists with some incredible tools for exploring the brain.

But that doesn't mean they're fun to use. They can be annoying, unpredictable beings, and you never know whether they're going to bless you with nice results or curse you with cancelled scans and noisy data.

So for the benefit of everyone who has to work with MRI, here is a devotional litany which might just keep your scanner from getting wrathful at the crucial moment. Say this before each scan. Just remember, the magnet is always on and it can read your mind, so make sure you really mean it, and refrain from scientific sins...


Our scanner, which art from Siemens,
Hallowed be thy coils.
Thy data come;
Thy scans be done;
In grey matter as it is in white matter.
Give us this day our daily blobs.
And forgive us our trespasses,
As we forgive them that trespass onto our scan slots.
And lead us not into the magnet room carrying a pair of scissors,
But deliver us from volunteers who can’t keep their heads still.
For thine is the magnet,
The gradients,
And the headcoil,
For ever and ever (at least until we can afford a 7T).

(Apologies to Christians).

The Time Travelling Brain

What's the difference between walking down the street yesterday, and walking down the street tomorrow?

It's nothing to do with the walking, or the street: that's the same. The "when" seems to be something external to the what, how, and where of the situation. But this creates a problem for neuroscientists.

We think we know how the brain could store the concept of "walking down the street" (or "walking" and "street"). Very roughly, simple sensory impressions are thought to get built up into more and more complex combinations, and this happens as you move away from the brain's primary visual cortex (V1) and down the so-called ventral visual stream.

In area V1, cells respond mostly to nothing more complex than position and the orientations of straight lines: / or \ or _ , etc. Whereas once you get to the temporal lobe, far down the stream, you have cells that respond to Jennifer Aniston. In between are progressively more complex collections of features.

Even if the details are wrong, the fact that complex objects are composed of simpler parts, and ultimately raw sensations, means that our ability to process complex scenes doesn't seem too mysterious, given that we have senses.

But the fact that we can take any given scene, and effortlessly think of it as either "past", "present", or "future", is puzzling under this view because, as I said, the scene itself is the same in all cases. And it's not as if we have a sense devoted to time: the only time we're ever directly aware of, is "right now".

Swedish neuroscientists Nyberg et al used fMRI to measure brain activity associated with "mental time travel": Consciousness of subjective time in the brain. They scanned volunteers and asked them to imagine walking between two points in 4 different situations: past, present, future, or remembered (as opposed to imagined in the past). This short walk was one which they'd really done, many times.

What happened?

Compared to a control task of doing mental arithmetic, both remembering and imagining the walk activated numerous brain areas, and there was very strong overlap between the two conditions. No big surprise there.

The crucial contrast was between remembering, past imagining and future imagining, vs. imagining in the present. This revealed a rather cute little blob:

This small nugget of the left parietal cortex represents an area where the brain is more active when thinking about times other than the present, relative to thinking about the same thing, but right now. They note that this area "partly overlaps a left angular region shown to be recruited during both past and future thinking and with parietal regions implicated in self-projection in past, present, or future time."

So what? This is a nice study, but like most fMRI it doesn't tell us what this area is actually doing. To know that, we'd need to know what would happen to someone if that area were damaged. Would they be unable to imagine any time except the present? Would they think their memories were happening right now? Maybe you could use rTMS to temporarily inactivate it - if you could find volunteers willing to lose their sense of time for a while...

Nyberg L, Kim AS, Habib R, Levine B, & Tulving E (2010). Consciousness of subjective time in the brain. Proceedings of the National Academy of Sciences of the United States of America. PMID: 21135219

Wikileaks: A Conversation

"Wikileaks is great. It lets people leak stuff."

"Hang on, so you're saying that no-one could leak stuff before? They invented it?"

"Well, no, but they brought leaking to the masses. Sure, people could post documents to the press before, but now anyone in the world can access the leaks!"

"Great, but isn't that just the internet that did that? If it weren't for Wikileaks, people could just upload their leaks to a blog. Or email them to 50 newspapers. Or put them on the torrents. Or start their own site. If it's good, it would go viral, and be impossible to take down. Just like Wikileaks, with all their mirrors, except even more secure, because there'd be literally no-one to arrest or cut off funding to."

"OK, but Wikileaks is a brand. It's not about the technical stuff - it's the message. Like one of their wallpapers says, they're synonymous with free speech."

"So you think it's a good thing that one organization has become synonymous with the whole process of leaking? With the whole concept of openness? What will happen to the idea of free speech, then, if that brand image suddenly gets tarnished - like, say, if their founder and figurehead gets convicted of a serious crime, or..."

"He's innocent! Justice for Julian!"

"Quite possibly, but why do you care? Is he a personal friend?"

"It's an attack on free speech!"

"So you agree that one man has become synonymous with free speech? Doesn't that bother you?"

"Erm... well. Look, fundamentally, we need Wikileaks. Before, there was no centralized system for leaking. Anyone could do it. It was a mess! Wikileaks put everything in one place, and put a committee of experts in a position to decide what was worth leaking and what wasn't. It brought much-needed efficiency and respectability to the idea of leaking. Before Wikileaks, it was anarchy. They're like... the government."


Edit: See also The Last Psychiatrist's take.

Meditation vs. Medication for Depression

What's the best way to overcome depression? Antidepressant drugs, or Buddhist meditation?

A new trial has examined this question: Segal et al. The short answer is that 8 weeks of mindfulness meditation training was just as good as prolonged antidepressant treatment over 18 months. But like all clinical trials, there are some catches.

Right mindfulness, sammā-sati, is the 7th step on the Buddha's Noble Eightfold Path of enlightenment. In its modern therapeutic form, however, it's a secular practice: you don't have to be a Buddhist to meditate here (but it presumably helps).

Mindfulness meditation is also branded nowadays as mindfulness-based cognitive therapy (MBCT), although how much it has in common with regular CBT is debatable. The technique is derived from the Buddhist tradition.

The essence of mindfulness is deceptively simple: you try to become a detached observer of your own feelings and thoughts. Rather than just getting angry, you notice the feelings of anger, without letting them take over. As I've written before, while this might sound easy, we're not always aware of our own feelings.

MBCT has attracted a lot of attention as a possible way of helping people with depression achieve relapse prevention. The idea is that if you can train people to become aware of depressive thoughts and feelings if they start to reappear, they'll be able to avoid being sucked into the cycle of depression.

The 160 patients in this trial were initially treated with antidepressants, starting with an SSRI, and if that didn't work, moving onto venlafaxine (up to 375 mg, as necessary, which is a serious dose) or mirtazapine for people who couldn't take the side effects. This is a sensible treatment regime, not one relying on low doses and doubtful drugs, as in many other antidepressant trials.

About half of the patients both stayed in the trial and achieved remission. After 5 months of sustained treatment, these 84 patients were randomized into 3 groups: continuation of their antidepressant, placebo pills, or mindfulness. The people who ended up on placebo had their antidepressants gradually replaced by sugar pills over a number of weeks, to avoid withdrawal effects.

Here's what happened:

People on placebo did very badly, with only 20% remaining well 18 months later. People who either stayed on the drugs, or who got the mindfulness training, did a lot better, with 70% staying well, and there were no differences between the two.

However here's the catch. This was only true of a sub-set of the patients, the ones who had an "unstable remission", meaning that when they were originally treated with drugs, their symptoms went up and down a bit. The "stable remission" people showed no benefits of either treatment, with the ones on placebo doing slightly better, if anything.

Overall, though, this is a decent study, and shows that, for some people, mindfulness can be helpful. A skeptic could complain that mindfulness was no better than medication, but it might have two advantages: cost, and side effects, though this would depend on the medication you were talking about (some are a lot more expensive, and more prone to side-effects, than others.) The mindfulness meditation also wasn't double-blind, so the benefits may have been placebo effects, but that could be said of almost any trial of psychotherapy.

I also wonder whether you'd do even better if you became all mindful and stayed on medication: this study had no combined-treatment group, unfortunately, but this is something to look into...

Segal ZV, Bieling P, Young T, Macqueen G, Cooke R, Martin L, Bloch R, & Levitan RD (2010). Antidepressant Monotherapy vs Sequential Pharmacotherapy and Mindfulness-Based Cognitive Therapy, or Placebo, for Relapse Prophylaxis in Recurrent Depression. Archives of General Psychiatry, 67 (12), 1256-64. PMID: 21135325

Delusions of Gender

Note: This book quotes me approvingly, so this is not quite a disinterested review.

Cordelia Fine's Delusions of Gender is an engaging, entertaining and powerfully argued reply to the many authors - who range from the scientifically respectable to the less so - who've recently claimed to have shown biological sex differences in brain, mind and behaviour.

Fine makes a strong case that the sex differences we see, in everything from behaviour to school achievements in mathematics, could be caused by the society in which we live, rather than by biology. Modern culture, she says, while obviously less sexist than in the past, still contains deeply entrenched assumptions about how boys and girls ought to behave, what they ought to do and what they're good at, and these - consciously or unconsciously - shape the way we are.

Some of Fine's targets are obviously bonkers, like Vicky Tuck, but for me, the most interesting chapters were those dealing in detail with experiments which have been held up as the strongest examples of sex differences, such as the Cambridge study claiming that newborn boys and girls differ in how much they prefer looking at faces as opposed to mechanical mobiles.

But Delusions is not, in Steven Pinker's phrase, saying we ought to return to "Blank Slatism", and it doesn't try to convince you that every single sex difference definitely is purely cultural. It's more modest, and hence, much more believable: simply a reminder that the debate is still an open one.

Fine makes a convincing case (well, it convinced me) that the various scientific findings, mostly from the past 10 years, that seem to prove biological differences, are not, on the whole, very strong, and that even if we do accept their validity, they don't rule out a role for culture as well.

This latter point is, I think, especially important. Take, for example, the fact that in every country on record, men roughly between the ages of 16-30 are responsible for the vast majority of violent crimes. This surely reflects biology somehow; whether it's the fact that young men are physically the strongest people, or whether it's more psychological, is by the by.

But this doesn't mean that young men are always violent. In some countries, like Japan, violent crime is extremely rare; in others, it's many times more common; and during wars or other periods of disorder, it becomes the norm. Young men are always, relatively speaking, the most violent, but the absolute rate of violence varies hugely, and that has nothing to do with gender. It's not that violent places have more men than peaceful ones.

Gender, in other words, doesn't explain violence in any useful way - even though there surely are gender differences. The same goes for everything else: men and women may well have, for biological reasons, certain tendencies or advantages, but that doesn't automatically explain (and it doesn't justify) all of the sex differences we see today; it's only ever a partial explanation, with culture being the other part.

Online Comments: It's Not You, It's Them

Last week I was at a discussion about New Media, and someone mentioned that they'd been put off from writing content online because of a comment on one of their articles accusing them of being "stupid".

I found this surprising - not the comment, but that anyone would take it so personally. It's the internet. You will get called names. Everyone does. It doesn't mean there's anything wrong with you.

I suspect this is a generational issue. People who 'grew up online' know, as Penny Arcade explained, that

The sad fact is that there are millions of people whose idea of fun is to find people they disagree with, and mock them. And they're right, it can be fun - why else do you think people like Jon Stewart are so popular? - but that's all it is, entertainment. If you're on the receiving end, don't take it seriously.

If you write something online, and a lot of people read it, you will get slammed. Someone, somewhere, will disagree with you and they'll tell you so, in no uncertain terms. This is true whatever you write about, but some topics are like a big red rag to the herds of bulls out there.

Just to name a few, if you say anything vaguely related to climate change, religion, health, the economy, feminism or race, you might as well be holding a placard with a big arrow pointing down at you and "Sling Mud Here" on it.

The point is - it's them, not you. They are not interested in you, they don't know you, it's not you. True, they might tailor their insults a bit; if you're a young woman you might be, say, a "stupid girl" where a man would merely get called an "idiot". But this doesn't mean that the attacks are a reflection on you in any way. You just happen to be the one in the line of fire.

What do you do about this? Nothing.

Trying to enter into a serious debate is pointless. Insulting them back can be fun, just remember that if you find it fun, you've become one of them: "he who stares too long into the abyss...", etc. Complaining to the moderators might help, but unless the site has a rock solid zero-tolerance-for-fuckwads policy, probably not. Where the blight has taken root, like Comment is Free, I'd not waste your time complaining. Just ignore it and carry on.

The most important thing is not to take it personally. Do not get offended. Do not care. Because no-one else cares. Especially the people who wrote the comments. They presumably care about whatever "issue" prompted their attack, but they don't care about you. If anything, you should be pleased, because on the internet, the only stuff that doesn't attract stupid comments is the stuff that no-one reads.

I've heard these attacks referred to as "policing" existing hierarchies or "silencing" certain types of people. This seems to me to be granting them far more respect than they deserve. With the actual police, if you break the rules, they will physically arrest you. They have power. Internet trolls don't: if they succeed in policing or silencing anybody, it's because their targets let them boss them around. They're nobody; they're not your problem.

If you can't help being offended by such comments, don't read them, but ideally you shouldn't need to resort to that. For one thing, it means you miss the sensible comments (and there's always a few). But fundamentally, you shouldn't need to do this, because you really shouldn't care what some anonymous joker from the depths of the internet thinks about you.

Autism and Old Fathers

A new study has provided the strongest evidence yet that the rate of autism in children rises with the father's age: Advancing paternal age and risk of autism. But questions remain.

The association between old fathers and autism has been known for many years, and the most popular explanation has been genetic: sperm from older men are more likely to have accumulated DNA damage, which might lead to autism.

As I've said before, this might explain some other puzzling things such as the fact that autism is more common in the wealthy; it might even explain any recent increases in the prevalence of autism, if people nowadays are waiting longer to have kids.

But there are other possibilities. It might be that the fathers of autistic people tend to have mild autistic symptoms themselves (which they do), and this makes them likely to delay having children, because they're socially anxious and so take longer to get married, or whatever. It's not implausible.

The new study aimed to control for this, by looking at parents who had two or more children, at least one of them with autism, and at least one without it. Even within such families, the autistic children tended to have older fathers when they were born - that is to say, they were born later. See the graphs below for details. This seems to rule out explanations based on the characteristics of the parents.

However, there's another objection, the "experienced parent" theory. Maybe if parents have already had one neurotypical child, they're better at spotting the symptoms of autism in subsequent children, by comparison with the first one.

The authors tried to account for this as well, by controlling for the birth-order ("parity") of the kids. They also controlled for the mother's age amongst several other factors such as year of birth and history of mental illness in the parents. The results were still highly significant: older fathers meant a higher risk of autism. As if that wasn't enough, they also did a meta-analysis of all the previous studies and confirmed the same thing.

So overall, this is a very strong study, but there's a catch. The study population included over a million children (1,075,588) born in Sweden between 1983 and 1992. Of these, there was a total of 883 diagnosed cases of autism. That's a rate of 0.08%. In other words, although older fathers raised the risk of autism by quite a lot relatively speaking, the absolute rate was still tiny.
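That 0.08% comes straight from the paper's own numbers:

```python
cases = 883          # diagnosed autism cases in the cohort
cohort = 1_075_588   # children born in Sweden, 1983-1992

rate = cases / cohort * 100
print(f"Rate: {rate:.2f}%")  # prints 0.08%
```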

The most recent estimates of autism prevalence in Britain have put the figure at somewhere between 1% and 2% (e.g. Baird et al, 2006, and Baron-Cohen et al, 2009), with American studies, using slightly different methods, generally coming in just below 1%. So the Swedish figure is less than a tenth of modern estimates. Whether this reflects different diagnostic criteria, national differences, or increased prevalence over time is debatable, but it does raise the question of whether these findings still apply today.

The only way to know for sure would be to do a randomized controlled trial - get half your volunteer men to wait 10 years before having children - but I don't think that's going to happen any time soon...

Hultman CM, Sandin S, Levine SZ, Lichtenstein P, & Reichenberg A (2010). Advancing paternal age and risk of autism: new evidence from a population-based study and a meta-analysis of epidemiological studies. Molecular Psychiatry PMID: 21116277

How To Fool A Lie Detector Brain Scan

Can fMRI scans be used to detect deception?

It would be nice, although a little scary, if they could. And there have been several reports of successful trials under laboratory conditions. However, a new paper in NeuroImage reveals an easy way of tricking the technology: Lying In The Scanner.

The authors used a variant of the "guilty knowledge test", which was originally developed for use with EEG. Essentially, you show the subject a series of pictures or other stimuli, one of which is somehow special; maybe it's a picture of the murder weapon or something else which a guilty person would recognise, but an innocent one would not.

You then try to work out whether the subject's brain responds differently to the special target stimulus as opposed to all the other irrelevant ones. In this study, the stimuli were dates, and for the "guilty" volunteers, the "murder weapon" was their own birthday, a date which obviously has a lot of significance for them. For the "innocent" people, all the dates were random.

What happened? The scans were extremely good at telling the "guilty" from the "innocent" people, achieving 100% accuracy with no false positives or false negatives. The image above shows the activation associated with the target stimulus (birthdays) over and above the control stimuli. In two separate groups of volunteers, the blobs were extremely similar. So the technique does work in principle, which is nice.

But the countermeasures fooled it entirely, reducing accuracy to well below random chance. And the countermeasures were very simple: before the scan, subjects were taught to associate an action, a tiny movement of one of their fingers or toes, with some of the "irrelevant" dates. This, of course, made these dates personally relevant, just like the really relevant stimuli, so there was no difference between them, making the "guilty" appear "innocent".
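As a toy illustration (this is not the paper's actual analysis, and all the numbers are made up), the logic of the guilty knowledge test, and why the countermeasure breaks it, can be sketched like this:

```python
# Toy sketch of guilty-knowledge logic: flag someone as "guilty" if their
# response to the special stimulus stands out from the irrelevant ones.

def looks_guilty(target_response, irrelevant_responses, threshold=2.0):
    """Return True if the target response exceeds the mean irrelevant
    response by more than `threshold` standard deviations."""
    mean_irr = sum(irrelevant_responses) / len(irrelevant_responses)
    spread = (sum((r - mean_irr) ** 2 for r in irrelevant_responses)
              / len(irrelevant_responses)) ** 0.5
    return (target_response - mean_irr) > threshold * max(spread, 1e-9)

# A "guilty" subject: their own birthday evokes a much larger response.
print(looks_guilty(5.0, [1.0, 1.2, 0.9, 1.1]))   # True

# The countermeasure: covert finger movements make some "irrelevant"
# dates evoke large responses too, so the target no longer stands out.
print(looks_guilty(5.0, [1.0, 4.8, 0.9, 5.1]))   # False
```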

Maybe it'll be possible in the future to tell the difference between brain responses to genuinely significant stimuli as opposed to artificial ones, or at least, to work out whether or not someone is using this trick. Presumably, if there's a neural signature for guilty knowledge, there's also one for trying to game the system. But as it stands, this is yet more evidence that lie detection using fMRI is by no means ready for use in the real world just yet...

Ganis G, Rosenfeld JP, Meixner J, Kievit RA, & Schendan HE (2010). Lying in the scanner: Covert countermeasures disrupt deception detection by functional magnetic resonance imaging. NeuroImage PMID: 21111834

Exercise and Depression: It's Complicated

Some ideas seem so nice, so inoffensive and so harmless, that it seems a shame to criticize them.

Take the idea that exercise is a useful treatment for depression. It's got something for everyone.

For doctors, it's attractive because it means they can recommend exercise - which is free, quick, and easy, at least for them - instead of spending the time and money on drugs or therapy. Governments like it for the same reason, and because it's another way of improving the nation's fitness. For people who don't much like psychiatry, exercise offers a lovely alternative to psych drugs - why take those nasty antidepressants if exercise will do just as well? And so on.

But this doesn't mean it's true. And a large observational study from Norway has just cast doubt on it: Physical activity and common mental disorders.

The authors took a large community sample of Norwegian people, the HUNT-2 study, which was done between 1995 and 1997. Over 90,000 people were invited to take part and full data were available from over 40,000.

They found an association between taking part in physical exercise as a leisure activity and lower self-reported symptoms of depression. It didn't matter whether the activity was intense or mild, and it didn't really matter how often you did it: so long as you did it, you got the benefit.

Crucially, however, the same was not true of physical exercise which was part of your job. That didn't help at all, and indeed the most strenuous jobs were associated with more depression (but less anxiety, strangely).

How does this fit with the very popular idea that exercise helps in depression? Well, many randomized trials have indeed shown exercise to be better than no exercise for depression, but the problem is that these trials are never really placebo controlled. You can usually tell whether or not you're going jogging in the park every morning.

So the direct effects of exercise per se are hard to distinguish from the social and psychological meaning of "exercise". Knowing that you're starting a program of exercise could make you feel better: you're taking positive action to improve your life, you're not helpless in the face of your problems. By contrast, doing heavy work as part of your job, while physiologically beneficial, is unlikely to be so much fun.

This doesn't mean that telling people to get more exercise isn't a good idea, but if the meaning of exercise is more important than the physiology, that has some big implications for how it ought to be used.

It's good news for people who just can't take part in strenuous physical exercise because of physical illness or disability, something which is quite common in mental health. It suggests that these people could still get the benefits attributed to exercise even if they did less demanding forms of meaningful activity.

But it's bad news for doctors tempted to default to "get out and go jogging" whenever they see a potentially depressed person. Because if it's the meaning of exercise that counts, and you recommend exercise in a way which sounds like you're dismissing their problems, the meaning will be anything but helpful.

In clinical trials of exercise, the exercise program has, almost by definition, a positive value: it's the whole point of the trial. And the participants just wouldn't have volunteered for the trial if they didn't, on some level, think it would make them feel better.

But not everyone thinks that way. If you go to your doctor looking to get medication, or psychotherapy, or something like that, and you're told that all you need to do is go and get more exercise, it would be easy to see that as a brush-off, especially if it's done unsympathetically. The point is, if exercise doesn't feel like a positive step, it probably won't be one.

Harvey SB, Hotopf M, Overland S, & Mykletun A (2010). Physical activity and common mental disorders. The British Journal of Psychiatry, 197, 357-64 PMID: 21037212

The Town That Went Mad

Pont St. Esprit is a small town in southern France. In 1951 it became famous as the site of one of the most mysterious medical outbreaks of modern times.

As Drs Gabbai, Lisbonne and Pourquier wrote to the British Medical Journal, 15 days after the "incident":

The first symptoms appeared after a latent period of 6 to 48 hours. In this first phase, the symptoms were generalized, and consisted in a depressive state with anguish and slight agitation.

After some hours the symptoms became more clearly defined, and most of the patients presented with digestive disturbances... Disturbances of the autonomic nervous system accompanied the digestive disorders-gusts of warmth, followed by the impression of "cold waves", with intense sweating crises. We also noted frequent excessive salivation.

The patients were pale and often showed a regular bradycardia (40 to 50 beats a minute), with weakness of the pulse. The heart sounds were rather muffled; the extremities were cold... Thereafter a constant symptom appeared - insomnia lasting several days... A state of giddiness persisted, accompanied by abundant sweating and a disagreeable odour. The special odour struck the patient and his attendants.

In most patients, these symptoms, including the total insomnia, persisted for several days. In some of the patients, these symptoms progressed to full-blown psychosis:

Logorrhoea [speaking a lot], psychomotor agitation, and absolute insomnia always presaged the appearance of mental disorders. Towards evening visual hallucinations appeared, recalling those of alcoholism. The particular themes were visions of animals and of flames. All these visions were fleeting and variable.

In many of the patients they were followed by dreamy delirium. The delirium seemed to be systematized, with animal hallucinations and self-accusation, and it was sometimes mystical or macabre. In some cases terrifying visions were followed by fugues, and two patients even threw themselves out of windows... Every attempt at restraint increased the agitation.

In severe cases muscular spasms appeared, recalling those of tetanus, but seeming to be less sustained and less painful... The duration of these periods of delirium was very varied. They lasted several hours in some patients, in others they still persist.

In total, about 150 people suffered some symptoms. About 25 severe cases developed the "delirium". Four people died "in muscular spasm and in a state of cardiovascular collapse"; three of these were old and in poor health, but one was a healthy 25-year-old man.

At first, the cause was assumed to be ergotism - poisoning caused by chemicals produced by a fungus which can infect grain crops. Contaminated bread was, therefore, thought to be responsible. Ergotism produces symptoms similar to those reported at Pont St. Esprit, including hallucinations, because some of the toxins are chemically related to LSD.

However, there have been other theories. Some (including Albert Hofmann, the inventor of LSD) attribute the poisoning to pesticides containing mercury, or to the flour bleaching agent nitrogen trichloride.

More recently, journalist Hank Albarelli claimed that it was in fact a CIA experiment to test out the effects of LSD as a chemical weapon, though this is disputed. What really happened is, in other words, still a mystery.

Link: The Crazies (2010) is a movie about a remarkably similar outbreak of mass insanity in a small town.

Gabbai, Lisbonne, & Pourquier (1951). Ergot poisoning at Pont St. Esprit. British Medical Journal, 2 (4732), 650-1 PMID: 14869677

Massive Magnets Reveal More Sex In the Brain

"Is that a 7 Tesla magnet in your pocket, or are you just pleased to see me?"

German neuroscientists Metzger et al report on the results of a study using the latest, ultra-high-field Magnetic Resonance Imaging to measure brain activity in response to sexually arousing stimuli.

Most fMRI studies are done using MRI scanners with a field strength of either 1.5 Tesla or, most commonly nowadays, 3.0 Tesla. However, a few especially forward-thinking, by which I mean wealthy, research centres have started investing in 7 Tesla scanners.

Stronger magnetic fields mean that the scanner is able to pick up smaller differences in brain activation, with a better temporal and spatial resolution, although it's not all good news because some of the artefacts that can spoil MRI images also get worse with higher fields. 7 Tesla magnets are also so incredibly powerful that you literally have to tread carefully around them: move too fast through the field, and you can suffer dizziness, vertigo, and visual disturbances...

Anyway, Metzger et al's paper is one of the first 7 Tesla fMRI studies and it's pretty cool. They managed to achieve a spatial resolution of 1.7 x 1.7 x 3 mm, about three times better than most studies, and a temporal resolution of 1 second, two to three times better than usual.
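To put those numbers in perspective, here's the voxel-volume arithmetic, assuming a typical conventional voxel of 3 x 3 x 3 mm (a common choice, though studies vary):

```python
# Voxel volumes in cubic millimetres.
high_field = 1.7 * 1.7 * 3.0   # Metzger et al's 7 Tesla voxels: ~8.7 mm^3
typical = 3.0 * 3.0 * 3.0      # an assumed conventional voxel: 27 mm^3

print(round(typical / high_field, 1))  # 3.1 -- roughly "three times better"
```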

They showed heterosexual male subjects a range of pictures, some of them pornographic, others "emotional" but non-sexual. They found that anticipating seeing a picture, as opposed to actually viewing one, activated different areas of the cortex and also of the thalamus (see image above). Sexual arousal was also correlated with activity in a third thalamic area.

This fits with previous work, so it's not too surprising, but it shows the power of 7 Tesla fMRI: as you can see, the thalamus is a small structure, and conventional fMRI struggles to localize activity to particular subdivisions of it. But we know that the thalamus is a hotbed of activity, because almost all the information that goes to and from the rest of the brain passes through it. Until now, fMRI researchers have tended to treat the thalamus as a no-man's land, but with any luck, 7 Tesla scanners will start to change that. For those who can afford them...

Metzger CD, Eckert U, Steiner J, Sartorius A, Buchmann JE, Stadler J, Tempelmann C, Speck O, Bogerts B, Abler B, & Walter M (2010). High field FMRI reveals thalamocortical integration of segregated cognitive and emotional processing in mediodorsal and intralaminar thalamic nuclei. Frontiers in Neuroanatomy, 4 PMID: 21088699

England Rules the (Brain) Waves

Yes, England has finally won something. After a poor showing in the 2010 World Cup, the Eurovision Song Contest, and the global economic crisis, we're officially #1 in neuroscience. Which clearly is the most important measure of a nation's success.

According to data collated and released recently, each English neuroscience paper from the past 10 years has been cited, on average, 24.53 times, making us the most cited country in the world relative to the total number of papers published (source here). We're second only to the USA in terms of overall citations.

(In this table, "Rank" refers to total number of citations).

Why is this? I suspect it owes a lot to the fact that England has produced many of the technical papers which everyone refers to (although few people have ever read). Take the paper Dynamic Causal Modelling by Karl Friston et al from London. It's been cited 649 times since 2003, because it's the standard reference for the increasingly popular fMRI technique of the same name.

Or take Ashburner and Friston's Voxel-Based Morphometry—The Methods, cited over 2000 times in the past 10 years, which introduced a method for measuring the size of different brain regions. Or take...most of Karl Friston's papers, actually. He's the single biggest contributor to the way in which modern neuroimaging is done.

The Tree of Science

How do you know whether a scientific idea is a good one or not?

The only sure way is to study it in detail and know all the technical ins and outs. But good ideas and bad ideas behave differently over time, and this can provide clues as to which ones are solid; useful if you're a non-expert trying to evaluate a field, or a junior researcher looking for a career.

Today's ideas are the basis for tomorrow's experiments. A good idea will lead to experiments which provide interesting results, generating new ideas, which will lead to more experiments, and so on.

Before long, it will be taken for granted that it's true, because so many successful studies assumed it was. The mark of a really good idea is not that it's always being tested and found to be true; it's that it's an unstated assumption of studies which could only work if it were true. Good ideas grow onwards and upwards, in an expanding tree, with each exciting new discovery becoming the boring background of the next generation.

Astronomers don't go around testing whether light travels at a finite speed as opposed to an infinite one; rather, if it were infinite, their whole set-up would fail.

Bad ideas generate experiments too, but they don't work out. The assumptions are wrong. You try to explain why something happens, and you find that it doesn't happen at all. Or you come up with an "explanation", but next time, someone comes along and finds evidence suggesting the "true" explanation is the exact opposite.

Unfortunately, some bad ideas stick around, for political or historical reasons or just because people are lazy. What tends to happen is that these ideas are, ironically, more "productive" than good ideas: they are always giving rise to new hypotheses. It's just that these lines of research peter out eventually, meaning that new ones have to take their place.

As an example of a bad idea, take the theory that "vaccines cause autism". This hypothesis is, in itself, impossible to test: it's too vague. Which vaccines? How do they cause autism? What kind of autism? In which people? How often?

The basic idea that some vaccines, somewhere, somehow, cause some autism, has been very productive. It's given rise to a great many testable ideas. But every one which has been tested has proven false.

First there was the idea that the MMR vaccine causes autism, linked to a "leaky gut" or "autistic enterocolitis". It doesn't, and it's not linked to that. Then along came the idea that actually it's mercury preservatives in vaccines that cause autism. They don't. No problem - maybe it's aluminium? Or maybe it's just the Hep B vaccine? And so on.

At every turn, it's back to square one after a few years, and a new idea is proposed. "We know this is true; now we just need to work out why and how...". Except that turns out to be tricky. Hmm. Maybe, if you keep ending up back at square one, you ought to find a new square to start from.

Genes To Brains To Minds To... Murder?

A group of Italian psychiatrists claim to explain How Neuroscience and Behavioral Genetics Improve Psychiatric Assessment: Report on a Violent Murder Case.

The paper presents the horrific case of a 24-year-old woman from Switzerland who smothered her newborn son to death immediately after giving birth in her boyfriend's apartment. After her arrest, she claimed to have no memory of the event. She had a history of multiple drug abuse, including heroin, from the age of 13.

Forensic psychiatrists were asked to assess her case and try to answer the question of whether "there was substantial evidence that the defendant had an irresistible impulse to commit the crime." The paper doesn't discuss the outcome of the trial, but the authors say that in their opinion she exhibits a pattern of "pathological impulsivity, antisocial tendencies, lack of planning...causally linked to the crime, thus providing the basis for an insanity defense."

But that's not all. In the paper, the authors bring neuroscience and genetics into the case in an attempt to provide

a more “objective description” of the defendant’s mental disease by providing evidence that the disease has “hard” biological bases. This is particularly important given that psychiatric symptoms may be easily faked as they are mostly based on the defendant’s verbal report.

So they scanned her brain, and did DNA tests for 5 genes which have been previously linked to mental illness, impulsivity, or violent behaviour. What happened? Apparently her brain has "reduced gray matter volume in the left prefrontal cortex" - but that was compared to just 6 healthy control women. You really can't do this kind of analysis on a single subject, anyway.

As for her genes, well, she had genes. On the famous and much-debated 5HTTLPR polymorphism, for example, her genotype was long/short; while it's true that short is generally considered the "bad" genotype, something like 40% of white people, and an even higher proportion of East Asians, carry it. The situation was similar for the other four genes (STin2 (SLC6A4), rs4680 (COMT), MAOA-uVNTR, DRD4-2/11, for gene geeks).

I've previously posted about cases in which a well-defined disorder of the brain led to criminal behaviour. There was the man who became obsessed with child pornography following surgical removal of a tumour in his right temporal lobe. There are the people who show "sociopathic" behaviour following fronto-temporal degeneration.

However, this woman's brain was basically "normal", at least as far as a basic MRI scan could determine. All the pieces were there. Her genotype was also normal, in that lots of normal people carry the same genes; it's not (as far as we know) that she has a rare genetic mutation like Brunner syndrome, in which an important gene is entirely missing. So I don't think neurobiology has much to add to this sad story.


We're willing to excuse perpetrators when there's a straightforward "biological cause" for their criminal behaviour: it's not their fault, they're ill. In all other cases, we assign blame: biology is a valid excuse, but nothing else is.

There seems to be a basic difference between the way in which we think about "biological" as opposed to "environmental" causes of behaviour. This is related, I think, to the Seductive Allure of Neuroscience Explanations and our fascination with brain scans that "prove that something is in the brain". But when you start to think about it, it becomes less and less clear that this distinction works.

A person's family, social and economic background is the strongest known predictor of criminality. Guys from stable, affluent families rarely mug people; some men from poor, single-parent backgrounds do. But muggers don't choose to be born into that life any more than the child-porn addict chose to have brain cancer.

Indeed, the mugger's situation is a more direct cause of his behaviour than a brain tumour. It's not hard to see how a mugger becomes, specifically, a mugger: because they've grown up with role-models who do that; because their friends do it or at least condone it; because it's the easiest way for them to make money.

But it's less obvious how brain damage by itself could cause someone to seek child porn. There's no child porn nucleus in the brain. Presumably, what it does is to remove the person's capacity for self-control, so they can't stop themselves from doing it.

This fits with the fact that people who show criminal behaviour after brain lesions often start to eat and have (non-criminal) sex uncontrollably as well. But that raises the question of why they want to do it in the first place. Were they, in some sense, a pedophile all along? If so, can we blame them for that?

Rigoni D, Pellegrini S, Mariotti V, Cozza A, Mechelli A, Ferrara SD, Pietrini P, & Sartori G (2010). How neuroscience and behavioral genetics improve psychiatric assessment: report on a violent murder case. Frontiers in Behavioral Neuroscience, 4 PMID: 21031162

Blue Morning

Recently, I wrote about diurnal mood variation: the way in which depression often waxes and wanes over the course of the day. Mornings are generally the worst.

A related phenomenon is late insomnia, or "early morning waking".

But this phrase is rather an understatement. Everyone's woken up early. Maybe you had a flight to catch. Or you were drunk and threw up. Or you just needed a pee. That's early morning waking, but not the depressive kind. When you're depressed, the waking up is the least of your problems.

Suddenly, you are awake, more awake than you've ever been. And you know something terrible has happened, or is about to happen, or that you've done something terribly wrong. It feels like a Eureka moment. You can be a level-headed person, not given to jumping to conclusions, but you will be convinced of this.

In a panic attack, you think you're going to die. Your heart is beating too fast, your breathing's too deep: your body is exploding, you can feel it too closely. With this, you think you should die or even, in some sense, already have. It feels cold: you can no longer feel the warmth of your own body.

The moment passes; the terrible truth that you were so certain of five minutes ago becomes a little doubtful. Maybe it's not quite so bad. At this point, the wakefulness goes too, and you become, well, as tired as you ought to be at 3 am. You try to go back to sleep. If you're lucky, you succeed. If not, you lie awake until morning in a state of miserable contemplation.

While it's happening, you think that you're going to feel this way forever; bizarrely, you think you always have felt this way. In fact, this is the darkest hour.


Why does this happen? There has been almost no research on early morning waking, presumably because it's so hard to study. To observe it, you would have to get your depressed patients to spend all night in your brain scanner (or, if you prefer, on your analyst's couch), and even then, it doesn't happen every night.

But here's my theory: the key is the biology of sleep. There are many stages of sleep; at a very rough approximation there's dreaming REM, and dreamless slow-wave. Now, REM sleep tends to happen during the second half of the night - the early morning.

During REM sleep, the brain is, in many respects, awake. This is presumably what allows us to have conscious dreams. Whereas in slow-wave sleep, the brain really is offline; slow waves are also seen in the brains of people in comas, or under deep anaesthesia.

When we're awake, the brain is awash with modulatory neurotransmitters, such as serotonin, norepinephrine, and acetylcholine. During REM, acetylcholine is present, while in slow-wave sleep it's not; indeed acetylcholine may well be what stops slow waves and "wakes up" the cortex.

But unlike during waking, serotonin and norepinephrine neurons are entirely inactive during REM sleep - and only during REM sleep. This fact is surprisingly little-known, but it seems to me that it explains an awful lot.

For one thing, it explains why drugs which increase serotonin levels, such as SSRI antidepressants, inhibit REM sleep. Indeed, high doses of MAOi antidepressants prevent REM entirely (without any noticeable ill-effects, suggesting REM is dispensable). SSRIs only partially suppress it.

Ironically, SSRIs can make dreams more vivid and colourful. I've been told by sleep scientists that this is because they delay the onset of REM so the dreams are "shifted" later into the night making you more likely to remember them when you wake up. But there could be more to it than that.

The fact that REM is a serotonin-free zone also explains wet dreams. Serotonin is well known to suppress ejaculation; that's why SSRIs delay orgasm, one of their least popular side effects, although it's useful for treating premature ejaculation: every cloud has a silver lining.

So, having said all that: could this also explain the terror of early-morning waking? Suppose that, for whatever reason, you woke up during REM sleep, but your serotonin cells didn't wake up quick enough, leaving you awake, but with no serotonin (a situation which never normally occurs, remember). How would that feel?

Using a technique called acute tryptophan depletion (ATD), you can lower someone's serotonin levels. In most people, this doesn't do very much, but in some people with a history of depression, it causes them to relapse. Here's what happened to one patient after ATD:
[Her] previous episodes of clinical depression were associated with the loss of important friendships. [She] had, while depressed, been preoccupied with fears that she would never be able to sustain a relationship. She had not had such fears since then.

She had been fully recovered and had not taken any medication for over a year. About 2 h after drinking the tryptophan-free mixture she experienced a sudden onset of sadness, despair, and uncontrollable crying. She feared that a current important relationship would end.

We don't know why tryptophan depletion does this to some people, or why it doesn't affect everyone the same way, and it's pure speculation that early morning waking has anything to do with this. But having said that, the pieces do seem to fit.
