A Tale of Two Genes

An unusually gripping genetics paper from Biological Psychiatry: Pagnamenta et al.
The authors discuss a family where two out of the three children were diagnosed with autism. In 2009, they detected a previously unknown copy number variant mutation in the two affected brothers: a 594 kb deletion knocking out two genes, called DOCK4 and IMMP2L.
Yet this mutation was also carried by their non-autistic mother and sister, suggesting that it wasn't responsible for the autism. The mother's side of the family, however, have a history of dyslexia or undiagnosed "reading difficulties"; all of the 8 relatives with the mutation "performed poorly on reading assessment".
Further investigation revealed that the affected boys also carried a second, entirely separate, novel deletion, affecting the gene CNTNAP5. Their mother and sister did not. This mutation came from their father, who was not diagnosed with autism but apparently had "various autistic traits".
Perhaps it was the combination of the two mutations that caused autism in the two affected boys. The mother's family had a mutation that caused dyslexia; the father's side had one that caused some symptoms of autism, but was not, by itself, enough to cause the disorder.
However, things aren't so clear. There were cases of diagnosed autism spectrum disorders in the father's family, although few details are given and DNA was only available from one of the father's relatives. So it may have been that the autism was all about the CNTNAP5, and this mutation just has a variable penetrance, causing "full-blown" autism in some people and merely traits in others (like the father).
In order to try to confirm whether these two mutations do indeed cause dyslexia and autism, they searched for them in several hundred unrelated autism and dyslexia patients as well as healthy controls. They detected a DOCK4 deletion in 1 out of 600 dyslexics (and in his dyslexic father, but not his unaffected sister), but not in 2000 controls. 3 different CNTNAP5 mutations were found in the affected kids from 3 out of 143 autism families, although one of them was also found in over 1000 controls.
This is how psychiatric genetics is shaping up: someone finds a rare mutation in one family, they follow it up, and it's only carried by one out of several hundred other cases. So there are almost certainly hundreds of genes "for" disorders like autism, and it only takes a mutation in one (or two) to cause autism.
Here's another recent example: they found PTCHD1 variants in a full 1% of autism cases. It seems to me that autism, for example, is one of the things that happens when something goes wrong during brain development. Hundreds of genes act in synchrony to build a brain; it only takes one playing out of tune to mess things up, and autism is one common result.
Mental retardation and epilepsy are the other main ones, and we know that there are dozens or hundreds of different forms of these conditions each caused by a different gene or genes. The million dollar question is what it is that makes the autistic brain autistic, as opposed to, say, epileptic.
The "rare variants" model has some interesting implications. The father in the Pagnamenta et al. study had never been diagnosed with anything. He had what the authors call "autistic traits", but presumably he and everyone just thought of those as part of who he was - and they could have been anything from shyness, to preferring routine over novelty, to being good at crosswords.
Had he not carried the CNTNAP5 mutation, he'd have been a completely different person. He might well have been drawn to a very different career, he'd probably never have married the woman he did, etc.
Of course, that doesn't mean that it's "the gene for being him"; all of his other 23,000 genes, and his environment, came together to make him who he was. But the point is that these differences don't just pile up on top of each other; they interact. One little change can change everything.
Link: BishopBlog on why behavioural genetics is more complicated than some people want you to think.
Pagnamenta, A., Bacchelli, E., de Jonge, M., Mirza, G., Scerri, T., Minopoli, F., Chiocchetti, A., Ludwig, K., Hoffmann, P., & Paracchini, S. (2010). Characterization of a Family with Rare Deletions in CNTNAP5 and DOCK4 Suggests Novel Risk Loci for Autism and Dyslexia. Biological Psychiatry, 68 (4), 320-328. DOI: 10.1016/j.biopsych.2010.02.002
Stopping Antidepressants: Not So Fast
People who quit antidepressants slowly, by gradually decreasing the dose, are much less likely to suffer a relapse, according to Baldessarini et al. in the American Journal of Psychiatry.
They describe a large sample (400) of patients from Sardinia, Italy, who had responded well to antidepressants, and then stopped taking them. The antidepressants had been prescribed for either depression, or panic attacks.
People who quit suddenly (over 1-7 days) were more likely to relapse, and relapsed sooner, than the ones who stopped gradually (over a period of 2 weeks or more).
This graph shows what % of the patients in each group remained well at each time point (in terms of days since their final pill.) As you can see, the two lines separate early, and then remain apart by about the same distance (20%) for the whole 12 months.
What this means is that rapid discontinuation didn't just accelerate relapses that were "going to happen anyway". It actually caused more relapses - about 1 in 5 "extra" people. These "extra" relapses all happened in the first 3 months, because after that, the slope of the lines is identical.
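To make the arithmetic behind that argument concrete, here's a minimal sketch with made-up numbers - mine, not the paper's:

```python
# Made-up figures (not the paper's data): fraction of patients still well
# at each follow-up point, by how fast they stopped the antidepressant.
months = [0, 1, 3, 6, 12]
gradual_well = [1.00, 0.90, 0.80, 0.70, 0.60]  # tapered over 2 weeks or more
rapid_well = [1.00, 0.75, 0.60, 0.50, 0.40]    # stopped over 1-7 days

for m, g, r in zip(months, gradual_well, rapid_well):
    print(f"month {m:2d}: still well {g:.0%} vs {r:.0%}, gap {g - r:.0%}")
# The gap opens to ~20% within the first 3 months and then stays constant,
# i.e. the later slopes are identical: rapid discontinuation adds roughly
# 1 in 5 "extra" relapses early on, rather than merely bringing forward
# relapses that were going to happen anyway.
```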
On the other hand, they rarely happened immediately - it's not as if people relapsed within days of their last pill. The pattern was broadly similar for older antidepressants (tricyclics) and newer ones (SSRIs).
The authors note that these data throw up important questions about "relapse prevention" trials comparing people who stay on antidepressants vs. those who are switched - abruptly - to placebo. People who stay on the drug usually do better, but is this because the drug works, or because the people on placebo were withdrawn too fast?
This was an observational study, not an experiment. There was no randomization. People quit antidepressants for various "personal or clinical reasons"; 80% of the time it was their own decision, and only 20% of the time was it due to their doctor's advice.
So it's possible that there was some underlying difference between the two groups that could explain the different outcomes. Regression analysis revealed that the results weren't due to differences in dose, duration of treatment, diagnosis, age, and so on - but you can't measure every possible confound.
Only randomized controlled trials could provide a final answer, but there's little chance of anyone doing one. Drug companies are unlikely to fund a study about how to stop using their products. So we have only observational data to go on. These data fit in with previous studies showing that there's a similar story when it comes to quitting lithium and antipsychotics. Gradual is better.
But that's common sense. Tapering medications slowly is a good idea in general, because it gives your system more time to adapt. Of course, sometimes there are overriding medical reasons to quit quickly, but apart from in such cases, I'd always want to come off anything as gradually as possible.
Baldessarini RJ, Tondo L, Ghiani C, & Lepri B (2010). Illness risk following rapid versus gradual discontinuation of antidepressants. The American journal of psychiatry, 167 (8), 934-41 PMID: 20478876
Shotgun Psychiatry
There's a paradox at the heart of modern psychiatry, according to an important new paper by Dr Charles E. Dean, Psychopharmacology: A house divided.
It's a long and slightly rambling article, but Dean's central point is pretty simple. The medical/biological model of psychiatry assumes that there are such things as psychiatric diseases. Something biological goes wrong, presumably in the brain, and this causes certain symptoms. Different pathologies cause different symptoms - in other words, there is specificity in the relationship between brain dysfunction and mental illness.
Psychiatric diagnosis rests on this assumption. If and only if we can use a given patient's symptoms to infer what kind of underlying illness they have (schizophrenia, bipolar disorder, depression), diagnosis makes sense. This is why we have DSM-IV which consists of a long list of disorders, and the symptoms they cause. Soon we'll have DSM-V.
The medical model has been criticized and defended at great length, but Dean doesn't do either. He simply notes that modern psychiatry has in practice mostly abandoned the medical model, and the irony is, it's done this because of medicines.
If there are distinct psychiatric disorders, there ought to be drugs that treat them specifically. So if depression is a brain disease, say, and schizophrenia is another, there ought to be drugs that only work on depression, and have no effect on schizophrenia (or even make it worse.) And vice versa.
But, increasingly, psychiatric drugs are being prescribed for multiple different disorders. Antidepressants are used in depression, but also all kinds of anxiety disorders (panic, social anxiety, general anxiety), obsessive-compulsive disorder, PTSD, and more. Antipsychotics are also used in mania and hypomania, in kids with behaviour problems, and increasingly in depression, leading some to complain that the term "antipsychotics" is misleading. And so on.
So, Dean argues, in clinical practice, psychiatrists don't respect the medical model - yet that model is their theoretical justification for using psychiatric drugs in the first place.
He looks in detail at one particularly curious case: the use of atypical antipsychotics in depression. Atypicals, like quetiapine (Seroquel) and olanzapine (Zyprexa), were originally developed to treat schizophrenia and other psychotic states. They are reasonably effective, though most of them are no more so than older "typical" antipsychotics.
Recently, atypicals have become very popular for other indications, most of all mood disorders: mania and depression. Their use in mania is perhaps not so surprising, because severe mania has much in common with psychosis. Their use in depression, however, throws up many paradoxes (above and beyond how one drug could treat both mania and its exact opposite, depression.)
Antipsychotics block dopamine D2 receptors. Psychosis is generally considered to be a disorder of "too much dopamine", so that makes sense. The dopamine hypothesis of psychosis and antipsychotic action is 50 years old, and still the best explanation going.
But depression is widely considered to involve too little dopamine, and there is lots of evidence that almost all antidepressants (indirectly) increase dopamine release. Wouldn't that mean that antidepressants could cause psychosis? (They don't.) And why, Dean asks, would atypicals, which block dopamine, help treat depression?
Maybe it's because they also act on other systems? On top of being D2 antagonists, atypicals are also serotonin 5HT2A/C receptor blockers. Long-term use of antidepressants reduces 5HT2 levels, and some antidepressants are also 5HT2 antagonists, so this fits. However, it creates a paradox for the many people who believe that 5HT2 antagonism is important for the antipsychotic effect of atypicals as well - if that were true, antidepressants should be antipsychotics as well (they're not.) And so on.
There may be perfectly sensible answers. Maybe atypicals treat depression by some mechanism that we don't understand yet, a mechanism which is not inconsistent with their also treating psychosis. The point is that there are many such questions standing in need of answers, yet psychopharmacologists almost never address them. Dean concludes:
it seems increasingly obvious that clinicians are actually operating from a dimensional paradigm, and not from the classic paradigm based on specificity of disease or drug... the disjunction between those paradigms and our approach to treatment needs to be recognized and investigated... Bench scientists need to be more familiar with current clinical studies, and stop using outmoded clinical research as a basis for drawing conclusions about the relevance of neurochemical processes to drug efficacy. Bench and clinical scientists need to fully address the question of whether the molecular/cellular/anatomical findings, even if interesting and novel, have anything to do with clinical outcome.

Dean CE (2010). Psychopharmacology: A house divided. Progress in neuro-psychopharmacology & biological psychiatry. PMID: 20828593
You're (Brain Is) So Immature
How mature are you? Have you ever wanted to find out, with a 5 minute brain scan? Of course you have. And now you can, thanks to a new Science paper, Prediction of Individual Brain Maturity Using fMRI.
This is another clever application of the support vector machine (SVM) method, which I've written about previously, most recently regarding "the brain scan to diagnose autism". An SVM is a machine learning algorithm: give it a bunch of data, and it'll find patterns in it.
In this case, the input data was brain scans from children, teenagers and adults, and the corresponding ages of each brain. The pattern the SVM was asked to find was the relationship between age and some complex set of parameters about the brain.
The scan was resting state functional connectivity fMRI. This measures the degree to which different areas of the brain tend to activate or deactivate together while you're just lying there (hence "resting"). A high connectivity between two regions means that they're probably "talking to each other", although not necessarily directly.
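To give a flavour of how this works, here's a rough sketch of the general recipe in Python - not the authors' actual pipeline. The ~160-region parcellation loosely follows the paper, but the fake data, the linear kernel, and the 5-fold cross-validation are purely illustrative assumptions:

```python
# A minimal sketch of the general approach (illustrative assumptions, not the
# authors' code or data): turn each resting-state scan into a functional
# connectivity vector, then fit a support vector regressor mapping it to age.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

def connectivity_features(timeseries):
    """timeseries: (n_timepoints, n_regions) array of regional signals.
    Returns the upper triangle of the region-by-region correlation matrix."""
    corr = np.corrcoef(timeseries.T)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

# Fake data standing in for 238 subjects aged 7-30.
n_subjects, n_timepoints, n_regions = 238, 150, 160
ages = rng.uniform(7, 30, n_subjects)
X = np.array([connectivity_features(rng.standard_normal((n_timepoints, n_regions)))
              for _ in range(n_subjects)])

# Cross-validated "brain age" predictions. With real scans, predicted age
# tracks chronological age (r^2 of about 0.55 in the paper); with random
# noise like this it obviously won't.
svr = SVR(kernel="linear", C=1.0)
predicted_age = cross_val_predict(svr, X, ages, cv=5)
r2 = np.corrcoef(ages, predicted_age)[0, 1] ** 2
print(f"r^2 between real and predicted age: {r2:.2f}")
```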
It worked fairly well:
Out of 238 people aged 7 to 30, the SVM was able to "predict" age pretty nicely on the basis of the resting state scan. This graph shows chronological age against predicted brain age (or "fcMI" as they call it). The correlation is strong: r2=0.55.
The authors then tested it on two other large datasets: one was resting state, but conducted on a less powerful scanner (1.5T vs 3.0T) (n=195), and the other was not designed as a resting state scan at all, but did happen to include some resting state-like data (n=186). Despite the fact that these data were, therefore, very different to the original dataset, the SVM was able to predict age with r2 over 0.5 as well.
What use would this be? Well, good question. It would be all too easy to, say, find a scan of your colleague's brain, run it through the Mature-O-Meter, and announce with glee that they have a neurological age of 12, which explains a lot. For example.
However, while this would be funny, it wouldn't necessarily tell you anything about them. We already know everyone's neurological age. It's... their age. Your brain is as old as you are. These data raise the interesting possibility that people with a higher Maturity Index, for their age, are actually more "mature" people, whatever that means. But that might not be true at all. We'll have to wait and see.
How does this help us to understand the brain? An SVM is an incredibly powerful mathematical tool for detecting non-linear correlations in complex data. But just running an SVM on some data doesn't mean we've learned anything: only the SVM has. It's a machine learning algorithm; that's what it does. There's a risk that we'll get "science without understanding", as I wrote a while back.
In fact, the authors did make a start on this, and the results were pretty neat. They found that as the brain matures, long-range functional connections within the brain become stronger, but short-range interactions between neighbours get weaker - and this local disconnection with age is the most reliable change.
You can see this on the pic above: long connections get stronger (orange) while short ones get weaker (green), in general. This is true all across the brain.
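If you wanted to see how a summary like that could be computed, here's an entirely illustrative sketch: split connections into "short-range" and "long-range" by the distance between region coordinates and compare the means. The 60 mm cutoff and the stand-in data are my assumptions, not the authors' analysis:

```python
# Illustrative only: stand-in coordinates and connectivity values, plus an
# arbitrary distance cutoff, to show the shape of a short- vs long-range summary.
import numpy as np

rng = np.random.default_rng(1)
coords = rng.uniform(-70, 70, (160, 3))          # stand-in region coordinates (mm)
conn = rng.uniform(-0.2, 0.8, 160 * 159 // 2)    # stand-in connectivity values

diff = coords[:, None, :] - coords[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))
dist = dist[np.triu_indices_from(dist, k=1)]     # one distance per connection

cutoff_mm = 60.0                                 # arbitrary split, for illustration
short_mean = conn[dist < cutoff_mm].mean()
long_mean = conn[dist >= cutoff_mm].mean()
print(f"mean connectivity: short-range {short_mean:.2f}, long-range {long_mean:.2f}")
# With real scans you'd compare these numbers across ages: in the paper, the
# long-range mean rises with age while the short-range mean falls.
```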
It's like how when you're a kid, you play with the kids next door, but when you grow up you spend all your time on the internet talking to people thousands of miles away, and never speak to your neighbours. Kind of.
Link: Also blogged about here.
Dosenbach NU, Nardos B, Cohen AL, Fair DA, Power JD, Church JA, Nelson SM, Wig GS, Vogel AC, Lessov-Schlaggar CN, Barnes KA, Dubis JW, Feczko E, Coalson RS, Pruett JR Jr, Barch DM, Petersen SE, & Schlaggar BL (2010). Prediction of individual brain maturity using fMRI. Science (New York, N.Y.), 329 (5997), 1358-61 PMID: 20829489
"Koran Burning"
Koran protests sweep Afghanistan... Thousands of protesters have taken to the streets across Afghanistan... Three people were shot when a protest near a Nato base in the north-east of the country turned violent.

Wow. That's a lot of fuss about, literally, nothing - the Koran burning hasn't happened. So what are they angry about? The "Koran Burning" - the mere idea of it. That has happened, of course - it's been all over the news.
Why? Well, obviously, it's a big deal. People are getting shot protesting about it in Afghanistan. It's news, so of course the media want to talk about it. But all they're talking about is themselves: the news is that everyone is talking about the news which is that everyone is talking about...
A week ago no-one had heard of Pastor Jones. The only way he could become newsworthy is if he did something important. But what he was proposing to do was not, in itself, important: he was going to burn a Koran in front of a handful of like-minded people.
No-one would have cared about that, because the only people who'd have known about it would have been the participants. Muslims wouldn't have cared, because they would never have heard about it. "Someone You've Never Heard Of Does Something" - not much of a headline.
But as soon as it became news, it was news. Once he'd appeared on CNN, say, every other news outlet was naturally going to cover the story because by then people did care. If something's on CNN, it's news, by definition. Clever, eh?
What's odd is that Jones actually announced his plans way back in July; no-one took much notice at the time. Google Trends shows that interest began to build only in late August, peaking on August 22nd, but then falling off almost to zero.
What triggered the first peak? It seems to have been the decision of the local fire department to deny a permit for the holy book bonfire, on August 18th. (There were just 6 English-language news hits between the 1st and the 17th.)
It all kicked off when the Associated Press reported on the fire department's decision on August 18th, and was quickly followed up by everyone else; the AP credit the story to the local paper The Gainesville Sun, which covered it on the same day.
But in their original article, the Sun wrote that Pastor Jones had already made "international headlines" over the event. Indeed there were a number of articles about it in late July following Jones's original Facebook announcement. But interest then disappeared - there was virtually nothing about it in the first half of August, remember.
So there was, it seems, nothing inevitable about this story going global. It had a chance to become a big deal in late July - and it didn't. It had another shot in mid-August, and it got a bit of press that time, but then it all petered out.
Only this week has the story become massive. The US commander in Afghanistan, General Petraeus, spoke out on September 6th - ironically, just before the story finally exploded: as you can see on the Google Trends graph above, searches were basically zero up until September 7th, when they went through the roof.
So the "Koran Burning" story had three chances to become front-page global news and it only succeeded on the third try. Why? The easy answer is that it's an immediate issue now, because the burning is planned for 11th September - tomorrow. But I wonder if that's one of those post hoc explanations that makes whatever random stuff that happened seem inevitable in retrospect.
The whole story is newsworthy only because it's news, remember. The more attention it gets, the more it attracts. Presumably, therefore, there's a certain critical mass, the famous Tipping Point, after which it's unstoppable. This happened around September 6th, and not in late July or mid August.
But there's a random factor: any given news outlet that might run the story might decide not to; maybe it doesn't have space because something more important happened, or because the Religion correspondent was off sick that day, etc. Whether a story reaches the critical mass is down to luck, in other words.
The decision of a single journalist on the 5th or the 6th might well have been what finally tipped it.
Autistic Toddlers Like Screensavers
Young children with autism prefer looking at geometric patterns over looking at other people. At least, some of them do. That's according to a new study - Preference for Geometric Patterns Early in Life As a Risk Factor for Autism.
Pierce et al. took 110 toddlers (aged 14 to 42 months). Some of them had autism, some had "developmental delay" but not autism, and some were normally developing.
The kids were shown a one-minute video clip. One half of the screen showed some kids doing yoga, while the other was a set of ever-changing complex patterns. A bit like a screensaver or a kaleidoscope. Eye-tracking apparatus was used to determine which side of the screen each child was looking at.
What happened? Both the healthy control children, and the developmentally delayed children, showed a strong preference for the "social" stimuli - the yoga kids. However, the toddlers with an autism spectrum disorder showed a much wider range of preferences. 40% of them preferred the geometric patterns. Age wasn't a factor.
This makes intuitive sense because one of the classic features of autism is a fascination with moving shapes such as wheels, fans, and so on. The authors conclude that
A preference for geometric patterns early in life may be a novel and easily detectable early signature of infants and toddlers at risk for autism.

But only a minority of the autism group showed this preference, remember. As you can see from the plot above, they spanned the whole range - and over half behaved entirely normally.
There was no difference between the "social" and "geometrical" halves of the autism group on measures of autism symptoms or IQ, so it wasn't just that only "more severe" autism was associated with an abnormal preference.
They re-tested many of the kids a couple of weeks later, and found a strong correlation between their preference on both occasions, suggesting that it is a real fondness for one over the other - rather than just random eye-wandering.
So this is an interesting result, but it's not clear that it would be of much use for diagnosis.
Pierce K, Conant D, Hazin R, Stoner R, & Desmond J (2010). Preference for Geometric Patterns Early in Life As a Risk Factor for Autism. Archives of general psychiatry PMID: 20819977
The Horror, The Horror
You're watching a horror movie.
The characters are going about their lives, blissfully unaware that something horrifying is about to happen. You the viewer know that things are going to end badly, though, because you know it's a horror movie.
Someone opens a closet - a bloody corpse could fall out! Or they're drinking a glass of water - which could be infected with a virus! Or they're talking to some guy - who's probably a serial killer! And so on.
The effect of this - and a good director can get a lot of mileage from it - is that scenes which would otherwise be entirely mundane, are experienced as scary, purely because you know that something scary is going to happen, so you see potential horror in every innocent little thing. An expectation as to what's going to happen, leads to you interpreting events in a certain way, and this creates certain emotions.
In a medical context, that would be called a placebo effect. Or a nocebo effect when expectations make people feel worse rather than better.
The horror movie analogy is useful, because it shows that placebo effects don't just happen to other people. We all like to think that if we were given a placebo treatment, we wouldn't be fooled. Unlike all those silly, suggestible, placebo responders, we'd stay as sick as ever until we got a proper cure.
I wouldn't be so sure. We're always interpreting the world around us, and interpreting our own thoughts and feelings, on the basis of our expectations and beliefs about what's going on. We don't suddenly stop doing this when it comes to health.
Suppose you have the flu. You feel terrible, and you're out of aspirin. You don't think you'll be able to make that meeting this afternoon, so you phone in sick.
Now, clearly, flu is a real disease, and it really does make you feel ill. But how do you know that you wouldn't be able to handle the meeting? Unless you have an extensive history of getting the flu in all its various forms, this is an interpretation, a best guess as to what you'll feel in the future, and it might be too pessimistic.
Maybe, if you tried, you'd get on OK. Maybe if you had some aspirin that would reassure you enough to give it a go. And just maybe it would still have worked even if those "aspirins" were just sugar pills...
Link: See my previous posts I Feel X, Therefore Y and How Blind is Double Blind?
Normal? You're Weird - Psychiatrists
Almost everyone is pretty screwed up. That's not my opinion, that's official - according to a new paper in the latest British Journal of Psychiatry.
Make sure you're sitting down for this. No less than 48% of the population have "personality difficulties", and on top of that 21% have a full blown "personality disorder", and another 7% have it even worse with "complex" or "severe" personality disorders.
That's quite a lot of people. Indeed it only leaves an elite 22.5% with no personality disturbances whatsoever. You're as likely to have a "simple PD" as you are to have a normal personality, and fully half the population fall into the "difficulties" category.
I have difficulties with this.
Where do these results come from? The Adult Psychiatric Morbidity Survey, which is a government study of the British population. They phoned up a random sample of several thousand people and gave them the SCID interview - in other words, they asked them questions. 116 questions, in fact.
48% of people answered "yes" to enough questions such that, according to their criteria, they had "personality difficulties". They defined "personality difficulties", which is not a term in common use, as being "one criterion less than the threshold for personality disorder (PD)" according to DSM-IV criteria.
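Spelled out as a rule - this is my reading of that definition, not the authors' actual scoring procedure - it amounts to something like this:

```python
# My reading of the categories, for illustration only; "threshold" is the
# DSM-IV criterion count required for a given personality disorder.
def classify(criteria_met, threshold):
    if criteria_met >= threshold:
        return "personality disorder"
    if criteria_met == threshold - 1:
        return "personality difficulties"  # one criterion short of a PD
    return "no personality disturbance"

print(classify(criteria_met=4, threshold=5))  # -> personality difficulties
```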
So what? Well, as far as I'm concerned, that means simply that "personality difficulties" is a crap category, which labels normality as pathological. I can tell that most of the people with "difficulties" are in fact normal because they are literally the norm. It's not rocket science.
So we can conclude that "personality difficulties" should either be scrapped or renamed "normal". In which case the weird minority of people without any such features should be relabelled. Maybe they are best known as "saints", or "Übermenschen", or perhaps "people who lie on questionnaires".
This, however, is not what the authors say. They defend their category of Personality Difficulties on the grounds that this group are slightly more likely to have a history of "issues" than the elite 22.5 percent, e.g. homelessness (3.0% vs. 1.6%), 'financial crisis' (10.1% vs. 6.8%), or having had treatment for mental illness (11% vs 6%).
They say:
The finding that 72% of the population has at least some degree of personality disturbance is counterintuitive, but the evidence that those with ‘personality difficulty’ covering two out of five of the population [it's actually closer to half], differs significantly from those with no personality disturbance in the prevalence of a history of running away from home, police contacts, homelessness... shows that this separation is useful from both clinical and societal viewpoints.
Here's what I think is going on:
The "difficulties" group and the "none" group are essentially the same in terms of the levels of crap stuff happening to them - because they are the same, normal, everyday people - except that a small % of the "difficulties" group do have some moderate degree of problems, because they are close to being "PD".
This does not mean that the "difficulties" category is good. Quite the reverse, it means it's rubbish, because it spans so many diverse people and lumps them all together. What you should do, if you insist on drawing lines in the sand, would be this:
Now I don't know that that's how things work, but it seems plausible. Bearing in mind that the categories they used are entirely arbitrary, it would be very odd if they did correspond to reality.
To be fair to the authors, this is not the only argument in their paper. Their basic point is that personality disturbance is a spectrum: rather than it being a black-and-white question of "normal" vs. "PD", there are degrees, ranging from "simple PD", which is associated with a moderate degree of life crap, up to "complex PD", which has much more, and "severe PD", which is worst of all.
They suggest that in the upcoming DSM-V revision of psychiatric diagnosis, it would be useful to formally incorporate the severity spectrum in some way - unlike the current DSM-IV, where everything is either/or. They also argue that with more severe cases of PD, it is not very useful to assign individual PD diagnoses (DSM-IV has no less than 10 different PDs) - severe PD is just severe PD.
That's all fine, as long as it doesn't lead to pathologizing 78% of the population - but this is exactly what it might do. The authors do admit that "the SCID screen for personality disorder, like almost all screening instruments, overdiagnoses personality pathology", but provide little assurance that a "spectrum" approach won't do the same thing.
Yang M, Coid J, & Tyrer P (2010). Personality pathology recorded by severity: national survey. The British Journal of Psychiatry 197, 193-9 PMID: 20807963
Are "Antipsychotics" Antipsychotics?
This is the question asked by Tilman Steinert & Martin Jandl in a letter to the journal Psychopharmacology.
They point out that in the past 20 years, the word "antipsychotic" has exploded in popularity. Less than 100 academic papers were published with that word in the title in 1990, but now it's over 600 per year.
The older term for the same drugs was "neuroleptics". This terminology, however, has slowly but surely fallen into disuse over the same time period.
To illustrate this they have a nice graph of PubMed hits. Neuroskeptic readers will be familiar with these, as I have often posted my own, and I recently wrote a bash script to harvest this data automatically. Now you too can be a historian of medicine from the comfort of your own home...
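The bash version isn't reproduced here, but a rough Python sketch of the same idea - counting PubMed title hits per year via NCBI's public E-utilities, with the query terms just as examples - looks like this:

```python
# Rough sketch, not the original bash script: ask NCBI's E-utilities how many
# PubMed records have a given word in the title in a given publication year.
import re
import time
import urllib.parse
import urllib.request

def pubmed_count(term, year):
    query = f"{term}[Title] AND {year}[pdat]"
    url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
           "db=pubmed&rettype=count&term=" + urllib.parse.quote(query))
    with urllib.request.urlopen(url) as resp:
        xml = resp.read().decode()
    return int(re.search(r"<Count>(\d+)</Count>", xml).group(1))

for year in range(1990, 2011, 5):
    print(year, pubmed_count("antipsychotic", year), pubmed_count("neuroleptic", year))
    time.sleep(0.5)  # be polite to the NCBI servers
```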
Why does it matter what we call them? A name is just a name, right? No, that's the problem. Actually, neuroleptic is just a name, because it doesn't mean anything. The term derives from the Greek "neuron", meaning... neuron, and "lambanō" meaning "to take hold of". However, no-one knows that unless they look it up on Wikipedia because it's just a name.
Antipsychotic, on the other hand, means something: it means they treat psychosis. But whether or not this is an accurate description of what "antipsychotics" actually do, is controversial. For one thing, these drugs are also used to treat many non-psychotic illnesses, like depression, and PTSD.
More fundamentally, it's not universally accepted that they have a direct anti-psychotic effect. All antipsychotics are powerful sedatives. There's a school of thought that says that this is in fact all they are, and rather than treating psychosis, they just sedate people until they stop being obviously psychotic.
Personally, I don't believe that, but that's not really the point: the point is that it's controversial, and calling them antipsychotics makes it hard to think about that controversy in a sensible way. To say that antipsychotics aren't actually antipsychotic is a contradiction in terms. To say they are antipsychotic is a tautology. Names shouldn't dictate the terms of a debate in that way. A name should just be a name.
The same point applies to more than just antipsychotics - I mean neuroleptics - of course. Perhaps the worst example is "antidepressants". Prozac, for example, is called an antidepressant. Implying that it treats depression.
But according to clinical trials, Prozac and other SSRIs are a lot more effective, relative to placebo, in obsessive-compulsive disorders (OCD) than they are in depression (though this is not necessarily true of all "antidepressants", yet more evidence that the word is unhelpful.)
So, as I asked in a previous post: "Are SSRIs actually antiobsessives that happen to be helpful in some cases of depression?" Personally, I think the only name for them which doesn't make any questionable assumptions, is simply 'SSRIs'.
Steinert T, & Jandl M (2010). Are antipsychotics antipsychotics? Psychopharmacology. DOI: 10.1007/s00213-010-1927-3
Marc Hauser's Scapegoat?
The dust is starting to settle after the Hauser-gate scandal which rocked psychology a couple of weeks back.
Harvard Professor Marc Hauser has been investigated by a faculty committee and the verdict was released on the 20th August: Hauser was "found solely responsible... for eight instances of scientific misconduct." He's taking a year's "leave", his future uncertain.
Unfortunately, there has been no official news on what exactly the misconduct was, and how much of Hauser's work is suspect. According to Harvard, only three publications were affected: a 2002 paper in Cognition, which has been retracted; a 2007 paper which has been "corrected" (see below), and another 2007 Science paper, which is still under discussion.
But what happened? Cognition editor Gerry Altmann writes that he was given access to some of the Harvard internal investigation. He concludes that Hauser simply invented some of the crucial data in the retracted 2002 paper.
Essentially, some monkeys were supposed to have been tested on two conditions, X and Y, and their responses were videotaped. The difference in the monkeys' behaviour between the two conditions was the scientifically interesting outcome.
In fact, the videos of the experiment showed them being tested only on condition X. There was no video evidence that condition Y was even tested. The "data" from condition Y, and by extension the differences, were, apparently, simply made up.
If this is true, it is, in Altmann's words, "the worst form of academic misconduct." As he says, it's not quite a smoking gun: maybe tapes of Y did exist, but they got lost somehow. However, this seems implausible. If so, Hauser would presumably have told Harvard so in his defence. Yet they found him guilty - and Hauser retracted the paper.
So it seems that either Hauser never tested the monkeys on condition Y at all, and just made up the data, or he did test them, saw that they weren't behaving the "right" way, deleted the videos... and just made up the data. Either way it's fraud.
Was this a one-off? The Cognition paper is the only one that's been retracted. But another 2007 paper was "replicated", with Hauser & a colleague recently writing:
In the original [2007] study by Hauser et al., we reported videotaped experiments on action perception with free ranging rhesus macaques living on the island of Cayo Santiago, Puerto Rico. It has been discovered that the video records and field notes collected by the researcher who performed the experiments (D. Glynn) are incomplete for two of the conditions.

Luckily, Hauser said, when he and a colleague went back to Puerto Rico and repeated the experiment, they found "the exact same pattern of results" as originally reported. Phew.
This note, however, was sent to the journal in July, several weeks before the scandal broke - back when Hauser's reputation was intact. Was this an attempt by Hauser to pin the blame on someone else - David Glynn, who worked as a research assistant in Hauser's lab for three years, and has since left academia?
As I wrote in my previous post:
Glynn was not an author on the only paper which has actually been retracted [the Cognition 2002 paper that Altmann refers to]... according to his resume, he didn't arrive in Hauser's lab until 2005.

Glynn cannot possibly have been involved in the retracted 2002 paper. And Harvard's investigation concluded that Hauser was "solely responsible", remember. So we're to believe that Hauser, guilty of misconduct, was himself an innocent victim of some entirely unrelated mischief in 2007 - but that it was all OK in the end, because when Hauser checked the data, it was fine.
Maybe that's what happened. I am not convinced.
Personally, if I were David Glynn, I would want to clear my name. He's left science, but still, a letter to a peer reviewed journal accuses him of having produced "incomplete video records and field notes", which is not a nice thing to say about someone.
Hmm. On August 19th, the Chronicle of Higher Education ran an article about the case, based on a leaked Harvard document. They say that "A copy of the document was provided to The Chronicle by a former research assistant in the lab who has since left psychology."
Hmm. Who could blame them for leaking it? It's worth remembering that it was a research assistant in Hauser's lab who originally blew the whistle on the whole deal, according to the Chronicle.
Apparently, what originally rang alarm bells was that Hauser appeared to be reporting monkey behaviours which had never happened, according to the video evidence. So at least in that case, there were videos, and it was the inconsistency between Hauser's data and the videos that drew attention. This is what makes me suspect that maybe there were videos and field notes in every case, and the "inconvenient" ones were deleted to try to hide the smoking gun. But that's just speculation.
What's clear is that science owes the whistle-blowing research assistant, whoever it is, a huge debt.