A Tale of Two Genes


An unusually gripping genetics paper from Biological Psychiatry: Pagnamenta et al. The authors describe a family in which two of the three children were diagnosed with autism. In 2009, they detected a previously unknown copy number variant in the two affected brothers: a 594 kb deletion knocking out two genes, DOCK4 and IMMP2L.
Yet this mutation was also carried by their non-autistic mother and sister, suggesting that it wasn't responsible for the autism. The mother's side of the family, however, has a history of dyslexia or undiagnosed "reading difficulties"; all 8 relatives with the mutation "performed poorly on reading assessment".
Further investigation revealed that the affected boys also carried a second, entirely separate, novel deletion, affecting the gene CNTNAP5. Their mother and sister did not. This mutation came from their father, who was not diagnosed with autism but apparently had "various autistic traits".
Perhaps it was the combination of the two mutations that caused autism in the two affected boys. The mother's family had a mutation that caused dyslexia; the father's side had one that caused some autistic traits, but was not, by itself, enough to cause the full disorder.
However, things aren't so clear. There were cases of diagnosed autism spectrum disorders in the father's family, although few details are given and DNA was only available from one of the father's relatives. So it may be that the autism was all down to CNTNAP5, and that this mutation simply has variable penetrance, causing "full-blown" autism in some people and merely traits in others (like the father).
To try to confirm whether these two mutations really do cause dyslexia and autism, they searched for them in several hundred unrelated autism and dyslexia patients, as well as in healthy controls. They detected the DOCK4 deletion in 1 out of 600 dyslexics (and in his dyslexic father, but not his unaffected sister), but not in 2000 controls. Three different CNTNAP5 mutations were found in the affected kids from 3 out of 143 autism families, although one of them was also found in over 1000 controls.
This is how psychiatric genetics is shaping up: someone finds a rare mutation in one family, they follow it up, and it's only carried by one out of several hundred other cases. So there are almost certainly hundreds of genes "for" disorders like autism, and it only takes a mutation in one (or two) to cause autism.
Here's another recent example: they found PTCHD1 variants in a full 1% of autism cases. It seems to me that autism, for example, is one of the things that happens when something goes wrong during brain development. Hundreds of genes act in synchrony to build a brain; it only takes one playing out of tune to mess things up, and autism is one common result.
Mental retardation and epilepsy are the other main ones, and we know that there are dozens or hundreds of different forms of these conditions each caused by a different gene or genes. The million dollar question is what it is that makes the autistic brain autistic, as opposed to, say, epileptic.
The "rare variants" model has some interesting implications. The father in the Pagnamenta et al. study had never been diagnosed with anything. He had what the authors call "autistic traits", but presumably he and everyone just thought of those as part of who he was - and they could have been anything from shyness, to preferring routine over novelty, to being good at crosswords.
Had he not carried the CNTNAP5 mutation, he'd have been a completely different person. He might well have been drawn to a very different career, he'd probably never have married the woman he did, etc.
Of course, that doesn't mean that it's "the gene for being him"; all of his other 23,000 genes, and his environment, came together to make him who he was. But the point is that these differences don't just pile up on top of each other; they interact. One little change can change everything.
Link: BishopBlog on why behavioural genetics is more complicated than some people want you to think.

Pagnamenta, A., Bacchelli, E., de Jonge, M., Mirza, G., Scerri, T., Minopoli, F., Chiocchetti, A., Ludwig, K., Hoffmann, P., & Paracchini, S. (2010). Characterization of a Family with Rare Deletions in CNTNAP5 and DOCK4 Suggests Novel Risk Loci for Autism and Dyslexia. Biological Psychiatry, 68 (4), 320-328. DOI: 10.1016/j.biopsych.2010.02.002



Stopping Antidepressants: Not So Fast


People who quit antidepressants slowly, by gradually decreasing the dose, are much less likely to suffer a relapse, according to Baldessarini et al. in the American Journal of Psychiatry. They describe a large sample of 400 patients from Sardinia, Italy, who had responded well to antidepressants and then stopped taking them. The antidepressants had been prescribed for either depression or panic attacks.
People who quit suddenly (over 1-7 days) were more likely to relapse, and relapsed sooner, than those who stopped gradually (over a period of 2 weeks or more). This graph shows what % of the patients in each group remained well at each time point (in days since their final pill). As you can see, the two lines separate early, and then remain apart by about the same distance (20%) for the whole 12 months.
What this means is that rapid discontinuation didn't just accelerate relapses that were "going to happen anyway". It actually caused more relapses - about 1 in 5 "extra" people. These "extra" relapses all happened in the first 3 months, because after that the slopes of the two lines are identical.
On the other hand, they rarely happened immediately - it's not as if people relapsed within days of their last pill. The pattern was broadly similar for older antidepressants (tricyclics) and newer ones (SSRIs).
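To make the shape of that graph concrete, here's a minimal sketch of how such "proportion still well" curves are built - with made-up numbers, assuming Python with NumPy and matplotlib, and assuming complete follow-up with no censoring (the study's actual analysis is more sophisticated than this):

```python
# Hypothetical sketch, NOT the study's data or methods: for each group,
# plot the fraction of patients still relapse-free at each day since
# their last pill, assuming everyone is followed for a full year.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

def fraction_well(relapse_days, n_total, days):
    """Fraction of n_total patients who have not yet relapsed by each day."""
    relapse_days = np.asarray(relapse_days)
    return [(n_total - np.sum(relapse_days <= d)) / n_total for d in days]

days = np.arange(0, 366)

# made-up relapse times: rapid discontinuers relapse more often, and earlier
rapid_relapses = rng.exponential(scale=90, size=55)     # 55 relapsers out of 100
gradual_relapses = rng.exponential(scale=120, size=35)  # 35 relapsers out of 100

plt.step(days, fraction_well(rapid_relapses, 100, days), label="rapid (1-7 days)")
plt.step(days, fraction_well(gradual_relapses, 100, days), label="gradual (2+ weeks)")
plt.xlabel("days since last pill")
plt.ylabel("proportion still well")
plt.legend()
plt.show()
```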
The authors note that these data throw up important questions about "relapse prevention" trials comparing people who stay on antidepressants vs. those who are switched - abruptly - to placebo. People who stay on the drug usually do better, but is this because the drug works, or because the people on placebo were withdrawn too fast?
This was an observational study, not an experiment. There was no randomization. People quit antidepressants for various "personal or clinical reasons"; 80% of the time it was their own decision, and only 20% of the time was it due to their doctor's advice.
So it's possible that there was some underlying difference between the two groups that could explain the gap. Regression analysis revealed that the results weren't due to differences in dose, duration of treatment, diagnosis, age, etc., but you can't measure every possible confound.
Only randomized controlled trials could provide a final answer, but there's little chance of anyone doing one. Drug companies are unlikely to fund a study about how to stop using their products. So we have only observational data to go on. These data fit in with previous studies showing that there's a similar story when it comes to quitting lithium and antipsychotics. Gradual is better.
But that's common sense. Tapering medications slowly is a good idea in general, because it gives your system more time to adapt. Of course, sometimes there are overriding medical reasons to quit quickly, but such cases aside, I'd always want to come off anything as gradually as possible.

Baldessarini RJ, Tondo L, Ghiani C, & Lepri B (2010). Illness risk following rapid versus gradual discontinuation of antidepressants. The American Journal of Psychiatry, 167 (8), 934-41. PMID: 20478876

Shotgun Psychiatry


There's a paradox at the heart of modern psychiatry, according to an important new paper by Dr Charles E. Dean, Psychopharmacology: A house divided. It's a long and slightly rambling article, but Dean's central point is pretty simple. The medical/biological model of psychiatry assumes that there are such things as psychiatric diseases. Something biological goes wrong, presumably in the brain, and this causes certain symptoms. Different pathologies cause different symptoms - in other words, there is specificity in the relationship between brain dysfunction and mental illness.
Psychiatric diagnosis rests on this assumption. Diagnosis makes sense if, and only if, we can use a given patient's symptoms to infer what kind of underlying illness they have (schizophrenia, bipolar disorder, depression). This is why we have DSM-IV, which consists of a long list of disorders and the symptoms they cause. Soon we'll have DSM-V.
The medical model has been criticized and defended at great length, but Dean doesn't do either. He simply notes that modern psychiatry has in practice mostly abandoned the medical model, and the irony is, it's done this because of medicines.
If there are distinct psychiatric disorders, there ought to be drugs that treat them specifically. So if depression is a brain disease, say, and schizophrenia is another, there ought to be drugs that only work on depression, and have no effect on schizophrenia (or even make it worse.) And vice versa.
But, increasingly, psychiatric drugs are being prescribed for multiple different disorders. Antidepressants are used in depression, but also all kinds of anxiety disorders (panic, social anxiety, general anxiety), obsessive-compulsive disorder, PTSD, and more. Antipsychotics are also used in mania and hypomania, in kids with behaviour problems, and increasingly in depression, leading some to complain that the term "antipsychotics" is misleading. And so on.
So, Dean argues, in clinical practice, psychiatrists don't respect the medical model - yet that model is their theoretical justification for using psychiatric drugs in the first place.
He looks in detail at one particularly curious case: the use of atypical antipsychotics in depression. Atypicals, like quetiapine (Seroquel) and olanzapine (Zyprexa), were originally developed to treat schizophrenia and other psychotic states. They are reasonably effective, though most of them are no more so than older "typical" antipsychotics.
Recently, atypicals have become very popular for other indications, most of all mood disorders: mania and depression. Their use in mania is perhaps not so surprising, because severe mania has much in common with psychosis. Their use in depression, however, throws up many paradoxes (above and beyond how one drug could treat both mania and its exact opposite, depression.)
Antipsychotics block dopamine D2 receptors. Psychosis is generally considered to be a disorder of "too much dopamine", so that makes sense. The dopamine hypothesis of psychosis and antipsychotic action is 50 years old, and still the best explanation going.
But depression is widely considered to involve too little dopamine, and there is lots of evidence that almost all antidepressants (indirectly) increase dopamine release. Wouldn't that mean antidepressants could cause psychosis? (They don't.) And why, Dean asks, would atypicals, which block dopamine, help treat depression?
Maybe it's because they also act on other systems? On top of being D2 antagonists, atypicals are also serotonin 5HT2A/C receptor blockers. Long-term use of antidepressants reduces 5HT2 receptor levels, and some antidepressants are themselves 5HT2 antagonists, so this fits. However, it creates a paradox for the many people who believe that 5HT2 antagonism is also important for the antipsychotic effect of atypicals - if that were true, antidepressants should be antipsychotics too (they're not). And so on.
There may be perfectly sensible answers. Maybe atypicals treat depression by some mechanism that we don't understand yet, a mechanism which is not inconsistent with their also treating psychosis. The point is that there are many such questions standing in need of answers, yet psychopharmacologists almost never address them. Dean concludes:
it seems increasingly obvious that clinicians are actually operating from a dimensional paradigm, and not from the classic paradigm based on specificity of disease or drug... the disjunction between those paradigms and our approach to treatment needs to be recognized and investigated... Bench scientists need to be more familiar with current clinical studies, and stop using outmoded clinical research as a basis for drawing conclusions about the relevance of neurochemical processes to drug efficacy. Bench and clinical scientists need to fully address the question of whether the molecular/cellular/anatomical findings, even if interesting and novel, have anything to do with clinical outcome.


You're (Brain Is) So Immature


How mature are you? Have you ever wanted to find out, with a 5 minute brain scan? Of course you have. And now you can, thanks to a new Science paper, Prediction of Individual Brain Maturity Using fMRI. This is another clever application of the support vector machine (SVM) method, which I've written about previously, most recently regarding "the brain scan to diagnose autism". An SVM is a machine learning algorithm: give it a bunch of data, and it'll find patterns in it.
In this case, the input data was brain scans from children, teenagers and adults, and the corresponding ages of each brain. The pattern the SVM was asked to find was the relationship between age and some complex set of parameters about the brain.
The scan was resting state functional connectivity fMRI. This measures the degree to which different areas of the brain tend to activate or deactivate together while you're just lying there (hence "resting"). A high connectivity between two regions means that they're probably "talking to each other", although not necessarily directly.
It worked fairly well: out of 238 people aged 7 to 30, the SVM was able to "predict" age pretty nicely on the basis of the resting state scan. This graph shows chronological age against predicted brain age (or "fcMI", as they call it). The correlation is strong: r2 = 0.55.
The authors then tested it on two other large datasets: one was resting state, but conducted on a less powerful scanner (1.5T vs 3.0T) (n=195), and the other was not designed as a resting state scan at all, but did happen to include some resting state-like data (n=186). Despite the fact that these data were, therefore, very different to the original dataset, the SVM was able to predict age with r2 over 0.5 as well.
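For readers who like to see the machinery, here's a minimal sketch of this kind of analysis - assuming Python with scikit-learn and synthetic stand-in data, not the authors' actual features, kernel, or validation scheme. The idea is simply to fit a support vector regression from connectivity features to chronological age, then see how well it predicts the ages of subjects it has never seen:

```python
# Minimal sketch (not the paper's pipeline): predict age from
# functional-connectivity features with support vector regression.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

n_subjects, n_connections = 238, 200    # features = pairwise region-to-region correlations
age = rng.uniform(7, 30, n_subjects)    # chronological age in years

# synthetic "connectivity" features that partly track age, plus noise
X = np.outer(age, rng.normal(size=n_connections)) + 10 * rng.normal(size=(n_subjects, n_connections))

X_train, X_test, y_train, y_test = train_test_split(X, age, test_size=0.3, random_state=0)

svr = SVR(kernel="rbf", C=10.0)
svr.fit(X_train, y_train)                # learn the age-connectivity relationship
predicted_age = svr.predict(X_test)      # "brain age" for held-out subjects

print(f"r^2 on held-out subjects: {r2_score(y_test, predicted_age):.2f}")
```

The same trained model can then be applied to entirely separate datasets, which is essentially what the authors did with their two replication samples.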
What use would this be? Well, good question. It would be all too easy to, say, find a scan of your colleague's brain, run it through the Mature-O-Meter, and announce with glee that they have a neurological age of 12, which explains a lot. For example.
However, while this would be funny, it wouldn't necessarily tell you anything about them. We already know everyone's neurological age. It's... their age. Your brain is as old as you are. These data raise the interesting possibility that people with a higher Maturity Index, for their age, are actually more "mature" people, whatever that means. But that might not be true at all. We'll have to wait and see.
How does this help us to understand the brain? An SVM is an incredibly powerful mathematical tool for detecting non-linear correlations in complex data. But just running an SVM on some data doesn't mean we've learned anything: only the SVM has. It's a machine learning algorithm; that's what it does. There's a risk that we'll get "science without understanding", as I wrote a while back.
In fact the authors did make a start on this, and the results were pretty neat. They found that as the brain matures, long-range functional connections within the brain become stronger, but short-range interactions between neighbours get weaker - and this local disconnection with age is the most reliable change.

It's like how when you're a kid, you play with the kids next door, but when you grow up you spend all your time on the internet talking to people thousands of miles away, and never speak to your neighbours. Kind of.
Link: Also blogged about here.


"Koran Burning"


Koran protests sweep Afghanistan... Thousands of protesters have taken to the streets across Afghanistan... Three people were shot when a protest near a Nato base in the north-east of the country turned violent.

Wow. That's a lot of fuss about, literally, nothing - the Koran burning hasn't happened. So what are they angry about? The "Koran Burning" - the mere idea of it. That has happened, of course - it's been all over the news.
Why? Well, obviously, it's a big deal. People are getting shot protesting about it in Afghanistan. It's news, so of course the media want to talk about it. But all they're talking about is themselves: the news is that everyone is talking about the news which is that everyone is talking about...
A week ago no-one had heard of Pastor Jones. The only way he could become newsworthy is if he did something important. But what he was proposing to do was not, in itself, important: he was going to burn a Koran in front of a handful of like-minded people.
No-one would have cared about that, because the only people who'd have known about it would have been the participants. Muslims wouldn't have cared, because they would never have heard about it. "Someone You've Never Heard Of Does Something" - not much of a headline.
But as soon as it became news, it was news. Once he'd appeared on CNN, say, every other news outlet was naturally going to cover the story because by then people did care. If something's on CNN, it's news, by definition. Clever, eh?
What's odd is that Jones actually announced his plans way back in July; no-one took much notice at the time. Google Trends shows that interest began to build only in late August, peaking on August 22nd, but then falling off almost to zero.
What triggered the first peak? It seems to have been the decision of the local fire department to deny a permit for the holy book bonfire, on August 18th. (There were just 6 English-language news hits between the 1st and the 17th.)
It all kicked off when the Associated Press reported the fire department's decision on August 18th, and the story was quickly followed up by everyone else; the AP credited it to the local paper The Gainesville Sun, which had covered it the same day.
But in their original article, the Sun wrote that Pastor Jones had already made "international headlines" over the event. Indeed there were a number of articles about it in late July following Jones's original Facebook announcement. But interest then disappeared - there was virtually nothing about it in the first half of August, remember.
So there was, it seems, nothing inevitable about this story going global. It had a chance to become a big deal in late July - and it didn't. It had another shot in mid-August, and it got a bit of press that time, but then it all petered out.
Only this week has the story become massive. US commander in Afghanistan General Petraeus spoke out on September 6th - ironically, just before the story finally exploded, since, as the Google Trends data show, searches were basically zero up until September 7th, when they went through the roof.
So the "Koran Burning" story had three chances to become front-page global news and it only succeeded on the third try. Why? The easy answer is that it's an immediate issue now, because the burning is planned for 11th September - tomorrow. But I wonder if that's one of those post hoc explanations that makes whatever random stuff that happened seem inevitable in retrospect.
The whole story is newsworthy only because it's news, remember. The more attention it gets, the more it attracts. Presumably, therefore, there's a certain critical mass, the famous Tipping Point, after which it's unstoppable. This happened around September 6th, and not in late July or mid August.
But there's a random factor: any given news outlet that might run the story might decide not to; maybe it doesn't have space because something more important happened, or because the Religion correspondent was off sick that day, etc. Whether a story reaches the critical mass is down to luck, in other words.
The decision of a single journalist on the 5th or the 6th might well have been what finally tipped it.

Autistic Toddlers Like Screensavers


Young children with autism prefer looking at geometric patterns over looking at other people. At least, some of them do. That's according to a new study - Preference for Geometric Patterns Early in Life As a Risk Factor for Autism. Pierce et al. took 110 toddlers (aged 14 to 42 months). Some of them had autism, some had "developmental delay" but not autism, and some were normally developing.
The kids were shown a one-minute video clip. One half of the screen showed some kids doing yoga, while the other was a set of ever-changing complex patterns. A bit like a screensaver or a kaleidoscope. Eye-tracking apparatus was used to determine which side of the screen each child was looking at.
What happened? Both the healthy control children and the developmentally delayed children showed a strong preference for the "social" stimuli - the yoga kids. However, the toddlers with an autism spectrum disorder showed a much wider range of preferences: 40% of them preferred the geometric patterns. Age wasn't a factor.

This makes intuitive sense, because one of the classic features of autism is a fascination with moving shapes such as wheels, fans, and so on. The authors conclude that
A preference for geometric patterns early in life may be a novel and easily detectable early signature of infants and toddlers at risk for autism.

But only a minority of the autism group showed this preference, remember. As you can see from the plot above, they spanned the whole range - and over half behaved entirely normally.
There was no difference between the "social" and "geometrical" halves of the autism group on measures of autism symptoms or IQ, so it wasn't just that only "more severe" autism was associated with an abnormal preference.
They re-tested many of the kids a couple of weeks later, and found a strong correlation between their preference on both occasions, suggesting that it is a real fondness for one over the other - rather than just random eye-wandering.
So this is an interesting result, but it's not clear that it would be of much use for diagnosis.


The Horror, The Horror


You're watching a horror movie. The characters are going about their lives, blissfully unaware that something horrifying is about to happen. You the viewer know that things are going to end badly, though, because you know it's a horror movie.
Someone opens a closet - a bloody corpse could fall out! Or they're drinking a glass of water - which could be infected with a virus! Or they're talking to some guy - who's probably a serial killer! And so on.
The effect of this - and a good director can get a lot of mileage from it - is that scenes which would otherwise be entirely mundane are experienced as scary, purely because you know that something scary is going to happen, so you see potential horror in every innocent little thing. An expectation of what's going to happen leads you to interpret events in a certain way, and this creates certain emotions.
In a medical context, that would be called a placebo effect. Or a nocebo effect when expectations make people feel worse rather than better.
The horror movie analogy is useful, because it shows that placebo effects don't just happen to other people. We all like to think that if we were given a placebo treatment, we wouldn't be fooled. Unlike all those silly, suggestible, placebo responders, we'd stay as sick as ever until we got a proper cure.
I wouldn't be so sure. We're always interpreting the world around us, and interpreting our own thoughts and feelings, on the basis of our expectations and beliefs about what's going on. We don't suddenly stop doing this when it comes to health.
Suppose you have the flu. You feel terrible, and you're out of aspirin. You don't think you'll be able to make that meeting this afternoon, so you phone in sick.
Now, clearly, flu is a real disease, and it really does make you feel ill. But how do you know that you wouldn't be able to handle the meeting? Unless you have an extensive history of getting the flu in all its various forms, this is an interpretation, a best guess as to what you'll feel in the future, and it might be too pessimistic.
Maybe, if you tried, you'd get on OK. Maybe if you had some aspirin that would reassure you enough to give it a go. And just maybe it would still have worked even if those "aspirins" were just sugar pills...
Link: See my previous posts I Feel X, Therefore Y and How Blind is Double Blind?
