Predicting Psychosis

"Prevention is better than cure", so they say. And in most branches of medicine, preventing diseases, or detecting early signs and treating them pre-emptively before the symptoms appear, is an important art.

Not in psychiatry. At least not yet. But the prospect of predicting the onset of psychotic illnesses like schizophrenia, and of "early intervention" to try to prevent them, is a hot topic at the moment.

Schizophrenia and similar illnesses usually begin with a period of months or years, generally during adolescence, during which subtle symptoms gradually appear. This is called the "prodrome" or "at risk mental state". The full-blown disorder then hits later. If we could detect the prodromal phase and successfully treat it, we could save people from developing the illness. That's the plan anyway.

But many kids have "prodromal symptoms" during adolescence and never go on to get ill, so treating everyone with mild symptoms of psychosis would mean unnecessarily treating a lot of people. There's also the question of whether we can successfully prevent progression to illness at all, and there have been only a few very small trials looking at whether treatments work for that - but that's another story.

Stephan Ruhrmann et al. claim to have found a good way of predicting who'll go on to develop psychosis in their paper Prediction of Psychosis in Adolescents and Young Adults at High Risk. This is based on the European Prediction of Psychosis Study (EPOS), which was run at a number of early detection clinics in Britain and continental Europe. People were referred to the clinics through various channels if someone was worried they seemed a bit, well, prodromal:

Referral sources included psychiatrists, psychologists, general practitioners, outreach clinics, counseling services, and teachers; patients also initiated contact. Knowledge about early warning signs (e.g., concentration and attention disturbances, unexplained functional decline) and inclusion criteria was disseminated to mental health professionals as well as institutions and persons who might be contacted by at-risk persons seeking help.
245 people consented to take part in the study and met the inclusion criteria, meaning they were at "high risk of psychosis" according to at least one of two different systems, the Ultra High Risk (UHR) or the COGDIS criteria. Both class you as being at risk if you show short-lived or mild symptoms a bit like those seen in schizophrenia, i.e.
COGDIS: inability to divide attention; thought interference, pressure, and blockage; and disturbances of receptive and expressive speech, disturbance of abstract thinking, unstable ideas of reference, and captivation of attention by details of the visual field...
UHR: unusual thought content/delusional ideas, suspiciousness/persecutory ideas, grandiosity, perceptual abnormalities/hallucinations, disorganized communication, and odd behavior/appearance... Brief limited intermittent psychotic symptoms (BLIPS) i.e. hallucinations, delusions, or formal thought disorders that resolved spontaneously within 1 week...
Then they followed up the 245 kids for 18 months and saw what happened to them.

What happened was that 37 of them developed full-blown psychosis: 23 suffered schizophrenia according to DSM-IV criteria, indicating severe and prolonged symptoms; 6 had mood disorders, i.e. depression or bipolar disorder, with psychotic features; and the rest mostly had psychotic episodes too short to be classed as schizophrenia. Those 37 people make up 19% of the 183 for whom full 18-month data were available; the others dropped out of the study, or went missing for some reason.

Is 19% high or low? Well, it's much higher than the rate you'd see in randomly selected people: the lifetime risk of schizophrenia is less than 1%, and this was only 18 months; the risk of a random person developing psychosis in any given year has been estimated at 0.035% in Britain. So the UHR and COGDIS criteria are a lot better than nothing.
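For a rough sense of scale, here's a back-of-the-envelope calculation in Python using the figures above. The scaling of the 0.035% annual rate to 18 months is my own simplifying assumption of a constant rate, so treat it as illustrative only:

```python
# Back-of-the-envelope relative risk, using the figures quoted in the text.
# Scaling the annual baseline rate to 18 months assumes a constant rate,
# which is a simplification.
baseline_annual_risk = 0.00035                                # ~0.035% per year, random person in Britain
baseline_18mo_risk = 1 - (1 - baseline_annual_risk) ** 1.5    # ~0.05% over 18 months

observed_18mo_risk = 0.19                                     # transition rate in the "high risk" group

print(f"Baseline 18-month risk: {baseline_18mo_risk:.3%}")
print(f"High-risk group: {observed_18mo_risk:.0%}")
print(f"Roughly {observed_18mo_risk / baseline_18mo_risk:.0f} times the baseline risk")
```

On those assumptions, being flagged as "high risk" corresponds to something like a few-hundred-fold increase over the background rate, which is why the criteria are better than nothing even though most people flagged never get ill.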

On the other hand, 19% is far from being "all": 4 out of 5 of the supposedly "high risk" kids in this study didn't in fact get ill, although some of them probably developed illness after the 18-month period was over.

The authors also came up with a fancy algorithm for predicting risk based on your score on various symptom rating scales, and they claim that this can predict psychosis much better, with 80% accuracy. The rate of developing psychosis in those scoring highly on their Prognostic Index is really high. (In case you were wondering, the Prognostic Index is [1.571 x SIPS-Positive score >16] + [0.865 x bizarre thinking score] + [0.793 x sleep disturbances score] + [1.037 x SPD score] + [0.033 x (highest GAF-M score in the past year – 34.64)] + [0.250 x (years of education – 12.52)]. Use it on your friends for hours of psychiatric fun!)
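If you really do want to try it at home, the formula translates into a few lines of Python. This is a sketch only: how each input is actually scored - in particular, whether the SIPS-positive and SPD terms are 0/1 indicators or raw scale scores - is my reading of the formula as quoted, not the authors' official calculator.

```python
# Sketch of the EPOS Prognostic Index as quoted in the text. The treatment of
# the bracketed terms as yes/no indicators is my interpretation - illustrative only.
def prognostic_index(sips_positive_gt_16,      # 1 if SIPS positive symptom score > 16, else 0 (assumed indicator)
                     bizarre_thinking,         # SIPS "bizarre thinking" item score
                     sleep_disturbances,       # SIPS "sleep disturbances" item score
                     spd,                      # schizotypal personality disorder criterion (assumed 0/1)
                     highest_gaf_m_past_year,  # highest GAF-M score in the past year
                     years_of_education):
    return (1.571 * sips_positive_gt_16
            + 0.865 * bizarre_thinking
            + 0.793 * sleep_disturbances
            + 1.037 * spd
            + 0.033 * (highest_gaf_m_past_year - 34.64)
            + 0.250 * (years_of_education - 12.52))
```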

However they came up with the algorithm by putting all of their dozens of variables into a big mathematical model, crunching the numbers and picking the ones that were most highly correlated with later psychosis - so they've specifically selected the variables that best predict illness in their sample, but that doesn't mean they'll do so in any other case. This is basically the "voodoo" non-independence problem that has so troubled fMRI, although the authors, to their credit, recognize this and issue the appropriate cautions.

So overall, we can predict psychosis, sometimes, but far from perfectly. More research is needed. One of the proposed additions to the new DSM-V psychiatric classification system is "Psychosis Risk Syndrome" i.e. the prodrome; it's not currently a disorder in DSM-IV. This idea has been attacked as an invitation to push antipsychotic drugs on kids who aren't actually ill and don't need them. On the other hand though, we shouldn't forget that we're talking about terrible illnesses here: if we could successfully predict and prevent psychosis, we'd be doing a lot of good.

Ruhrmann, S., Schultze-Lutter, F., Salokangas, R., Heinimaa, M., Linszen, D., Dingemans, P., Birchwood, M., Patterson, P., Juckel, G., Heinz, A., Morrison, A., Lewis, S., Graf von Reventlow, H., & Klosterkotter, J. (2010). Prediction of Psychosis in Adolescents and Young Adults at High Risk: Results From the Prospective European Prediction of Psychosis Study. Archives of General Psychiatry, 67(3), 241-251. DOI: 10.1001/archgenpsychiatry.2009.206

How Blind is Double-Blind?

There's a rather timely article in the current American Journal of Psychiatry: Assuring That Double-Blind Is Blind.

Generally, when the list of the authors' conflicts of interest (550 words) is nearly as long as the text of the paper (740 words), it's not a good sign, but this one isn't bad. Perlis et al. remind us that if you do a double-blind placebo-controlled trial:
The blind may be compromised in a variety of ways, however, beginning with differences in medication taste or smell. Of particular concern may be the emergence of adverse effects, particularly when those adverse effects are known to be associated with a specific medication ... Indeed, when the degree of unblinding is assessed in antidepressant trials, multiple reports suggest that it is extensive: at least three-quarters of patients are typically able to correctly guess at their treatment assignment.
The point of a placebo-controlled trial is that neither the patients nor their doctors know whether they're getting the placebo or the real drug. Hence the strength of the placebo effect should be the same in each group, allowing the "real" drug effect to be measured.

But if the drug causes side effects, as pretty much all do, then people could work out which group they're in by noticing whether they're feeling side effects or not. This might enhance the placebo effect in the drug group, and make the drug seem to work better than it really does. Or it might not. But the possibility that it might is worrying.
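To get a feel for how much this could matter, here's a toy simulation. All of the numbers are invented: the "drug" has no true effect at all, but patients who guess from side effects that they're on it get a bigger expectation-driven boost, and the trial shows an apparent drug effect anyway.

```python
import random

# Toy simulation of how unblinding can inflate an apparent drug effect.
# The drug's true effect is zero; believing you're on the drug adds a placebo boost.
random.seed(0)

def simulate_group(n, p_guess_drug, base_improvement=5.0, belief_bonus=2.0, noise=3.0):
    scores = []
    for _ in range(n):
        believes_on_drug = random.random() < p_guess_drug      # guessed assignment (e.g. from side effects)
        improvement = base_improvement + (belief_bonus if believes_on_drug else 0.0)
        scores.append(improvement + random.gauss(0, noise))
    return sum(scores) / n

# Side effects let 80% of the drug group guess correctly, vs. 30% of the placebo group
drug_mean = simulate_group(1000, p_guess_drug=0.8)
placebo_mean = simulate_group(1000, p_guess_drug=0.3)
print(f"Apparent 'drug effect': {drug_mean - placebo_mean:.2f} points")   # ~1 point, from unblinding alone
```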

This is called the active placebo effect. It's why I'm skeptical of claims that scopolamine and ketamine have rapid-acting but short-lived antidepressant effects. I may be wrong, but while both of these drugs have been shown to work better than placebo, both have very pronounced subjective effects, so there's no chance the blind will have remained intact.

Whether the active placebo effect also underlies the efficacy of established antidepressants like Prozac is very controversial. There have been 9 trials comparing antidepressants to active placebos, i.e. drugs that have similar side effects to antidepressants and that should therefore help to preserve the blind. (The active placebos were all atropine, which is closely related to scopolamine.)

The trials were reviewed by antidepressant critic Joanna Moncrieff et al., who found that the overall effect size of antidepressants vs. active placebos was d = 0.39. That's not very high, although it's not too bad, and ironically it's actually higher than the effect that Moncrieff's friend and fellow Prozac-baiter Irving Kirsch found in his famous 2008 meta-analysis of antidepressants vs. sugar-pill placebo, d = 0.32. So if you take that seriously, the active placebo effect plays no part in antidepressant efficacy. However, the active placebo trials are mostly small and old, so to be honest, we don't really know.
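For reference, the d values being compared are Cohen's d: the difference between group means divided by the pooled standard deviation. A minimal sketch, with invented depression-scale numbers purely to show what an effect of this size looks like:

```python
import math

# Cohen's d = difference in group means / pooled standard deviation.
# The means below are mean improvements on a depression scale; all numbers
# are invented purely to illustrate the size of a d ~ 0.4 effect.
def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Drug group improves ~3 points more than placebo, with a typical SD of ~8
print(cohens_d(mean1=11.0, sd1=8.0, n1=100,    # drug
               mean2=8.0,  sd2=8.0, n2=100))   # placebo  -> d = 0.375
```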

*

A point that's often overlooked is that a drug could have an active placebo effect via having a "real" psychoactive treatment effect. Diazepam (Valium), for example, has basically no peripheral side effects at all: unlike scopolamine it doesn't cause dry mouth, nausea, etc. But it is a tranquillizer; it causes calmness and, at higher doses, sleep. They're pretty noticeable. So if you were to give a depressed person Valium and tell them that it's not only a tranquillizer, it's an antidepressant, then the active placebo problem would arise.

In fact, any active drug will also produce active placebo effects - almost by definition, if you think about it. These may be hard to disentangle from the "real" effects. Say you're anxious about giving a speech so you take some diazepam hoping to feel calmer. A short while later you feel the lovely warm tranquillizing feeling setting in. Phew, you're calm now, anxiety's gone, the speech will be no worry. That thought might well be tranquillizing in itself. In other words the anti-anxiety effects of diazepam are partially driven by active placebo responses due to... the anti-anxiety effects of diazepam.

*

This leads onto another point. Suppose a drug has a genuine effect which improves some of the symptoms of a disease. Does that drug "treat" that disease? In a weak sense, yes, and it might be a helpful drug, but it's not a specific treatment. Morphine's very helpful in cancer, because it treats pain, but it doesn't cure cancer. Likewise, insomnia is a symptom of depression, but we feel that in order to qualify as an antidepressant a drug has to treat the core symptoms - mood, anxiety, etc. - rather than just being a sleeping pill.

But suppose someone suffered from low mood and you gave them a treatment which stopped them feeling any moods or emotions. That solves their low mood problem: no mood, no problem. But is that a specific treatment for depression? It's a bit of a grey area, but many would say no.

Many people say that this is exactly what SSRI antidepressants do: they blunt your emotions. That doesn't mean they're not helpful in depression: a lot of people find them very useful. I did. But then are they really "antidepressants", or just anti-mood? SSRIs are the drugs of choice not just for depression but also most anxiety disorders, and obsessive-compulsive disorder, etc. In fact they work better in OCD than they do in depression, relative to placebo. So are SSRIs actually antiobsessives that happen to be helpful in some cases of depression? Good question.

*

Perhaps an ideal clinical trial of a drug for a psychiatric condition should have 4 groups: the drug you're studying, another psychotropic with non-specific effects (e.g. Valium, or caffeine if you want a stimulant), an active placebo with purely peripheral side effects, and sugar pills. But even then, if the active drug performed better than the other 3 groups, a die-hard skeptic could say that maybe it's just more effectively causing non-specific sedation, or blunting, or whatever, than the Valium. Ultimately, a randomized controlled trial can never prove that a psychotropic drug has a specific as opposed to a non-specific effect.

So where do we stand? Does that mean we don't know what drugs do? No - not unless you're some cloistered soul who only reads papers, as opposed to talking to people, reading subjective reports, or taking drugs yourself. I know what alcohol does, not because I've read papers about it, but because I've drunk it. I've also been depressed and taken antidepressants, and for what it's worth, in my experience, some of the drugs currently marketed as antidepressants do have a specific anti-depression effect, although others don't. Overall, though, my view is that we know surprisingly little about what antidepressants actually do.

Perlis, R. H., Ostacher, M., Fava, M., Nierenberg, A. A., Sachs, G. S., & Rosenbaum, J. F. (2010). Assuring that double-blind is blind. The American Journal of Psychiatry, 167(3), 250-252. PMID: 20194487

Moncrieff, J., Wessely, S., & Hardy, R. (2004). Active placebos versus antidepressants for depression. Cochrane Database of Systematic Reviews (Online), (1). PMID: 14974002

DSM-V, a Prenatal Health Check

Last month the proposed draft of the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-V) came out.

In my post at the time I was pretty critical of several aspects of the new DSM. Many, many other blogs have discussed DSM-V, as have the older media. As you'd expect with such a complex and controversial issue as psychiatric diagnosis, opinions have varied widely, but one thing stands out: people are debating this. Everyone's got something to say about it, professionals and laypeople alike.

Debate is usually thought to be healthy, but I think in this case, it's a very bad sign for DSM-V. The previous editions, like DSM-IV, were presented to the world as a big list of mental disorders carrying the authority of the American Psychiatric Association. That's why people called the DSM the Bible of psychiatry - it was supposedly revealed truth as handed down by a consensus group of experts. If not infallible, it was at least something to take note of. There have always been critics of the DSM, but until recently, they were the underdogs, chipping away at an imposing edifice.

But DSM-V won't be imposing. People are criticizing it before it's been finalized, and even bystanders can see that there's really no consensus on many important issues. The very fact that everyone's discussing the proposed changes to the Manual is also telling: if the DSM is a Bible, why does it need to be revised so often?

My prediction is that when DSM-V does arrive (May 2013 is the current expected birth date), it will be a non-event. By then the debates will have happened. I suspect that few researchers are going to end up deciding to invest their time, money and reputation in the new disorders added in DSM-V. Why study "temper dysregulation disorder with dysphoria" (TDD) when it was controversial before it even officially existed? Despite the shiny new edition, we may be using DSM-IV for all intents and purposes for a long time to come.

Absinthe Fact and Fiction

Absinthe is a spirit. It's very strong, and very green. But is it something more?

I used to think so, until I came across this paper taking a skeptical look at the history and science of the drink, Padosch et al's Absinthism: a fictitious 19th century syndrome with present impact.

Absinthe is prepared by crushing and dissolving the herb wormwood in unflavoured neutral alcohol and then distilling the result; other herbs and spices are added later for taste and colour.

It became extremely popular in the late 19th century, especially in France, but it developed a reputation as a dangerous and hallucinogenic drug. Overuse was said to cause insanity, "absinthism", much worse than regular alcoholism. Eventually, absinthe was banned in the USA and most but not all European countries.

Much of the concern over absinthe came from animal experiments. Wormwood oil was found to cause hyperactivity and seizures in cats and rodents, whereas normal alcohol just made them drunk. But, Padosch et al explain, the relevance of these experiments to drinkers is unclear, because they involved high doses of pure wormwood extract, whereas absinthe is much more dilute. The fact that authors at the time used the word absinthe to refer to both the drink and the pure extract added to the confusion.

It's now known that wormwood, or at least some varieties of it, contains thujone, which can indeed cause seizures, and death, due to being a GABA antagonist. Until a few years ago it was thought that old-style absinthe might have contained up to 260 mg of thujone per litre, a substantial dose.

But that was based on the assumption that all of the thujone in the wormwood ended up in the drink prepared from it. Chemical analysis of actual absinthe has repeatedly found that it contains no more than about 6 mg/L thujone. The alcohol in absinthe would kill you long before you drank enough to get any other effects. As the saying goes, "the dose makes the poison", something that is easily forgotten.
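A toy calculation makes the point. The thujone "effect dose" below is a purely hypothetical placeholder, not a documented threshold; the point is only the ratio between it and the roughly 6 mg/L actually found in the drink.

```python
# Illustration of "the dose makes the poison" - the thujone effect dose here
# is a made-up placeholder, not a documented pharmacological threshold.
thujone_concentration_mg_per_l = 6.0     # roughly what chemical analyses find in real absinthe
hypothetical_effect_dose_mg = 100.0      # assumption for illustration only

litres_needed = hypothetical_effect_dose_mg / thujone_concentration_mg_per_l
print(f"Absinthe required for that thujone dose: {litres_needed:.0f} litres")   # ~17 litres

# Absinthe is typically 50-70% alcohol by volume; a small fraction of that
# volume of spirit is already a lethal amount of ethanol - the alcohol gets you first.
```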

As Padosch et al point out, it's possible that there are other undiscovered psychoactive compounds in absinthe, or that long-term exposure to low doses of thujone does cause "absinthism". But there is no evidence for that so far. Rather, they say, absinthism was just chronic alcoholism, and absinthe was no more or less dangerous than any other spirit.

I'm not sure why, but drinks seem to attract more than their fair share of urban myths. Amongst many others, I've heard that the flakes of gold in Goldschläger cause cuts which let alcohol into your blood faster; that Aftershock crystallizes in your stomach, so if you drink water the morning afterwards, you get drunk again; and that the little worm you get at the bottom of some tequilas apparently contains especially concentrated alcohol, or hallucinogens, or maybe even cocaine.

Slightly more serious is the theory that drinking different kinds of drinks instead of sticking to just one gets you drunk faster, or gives you a worse hangover, or something, especially if you do it in a certain order. Almost everyone I know believes this. In my drinking experience it's not true, but I'm not sure that it's completely bogus, as I have heard somewhat plausible explanations, e.g. that drinking spirits alongside beer leads to a concentration of alcohol in your stomach that's optimal for absorption into the bloodstream... maybe.

Link: Not specifically related to this, but The Poison Review is an excellent blog I've recently discovered, all about poisons, toxins, drugs, and such fun stuff.

Padosch, S. A., Lachenmeier, D. W., & Kröner, L. U. (2006). Absinthism: a fictitious 19th century syndrome with present impact. Substance Abuse Treatment, Prevention, and Policy, 1(1). PMID: 16722551

Can We Rely on fMRI?

Craig Bennett (of Prefrontal.org) and Michael Miller, of dead fish brain scan fame, have a new paper out: How reliable are the results from functional magnetic resonance imaging?


Tal over at the [citation needed] blog has an excellent in-depth discussion of the paper, and Mind Hacks has a good summary, but here's my take on what it all means in practical terms.

Suppose you scan someone's brain while they're looking at a picture of a cat. You find that certain parts of their brain are activated to a certain degree by looking at the cat, compared to when they're just lying there with no picture. You happily publish your results as showing The Neural Correlates of Cat Perception.

If you then scanned that person again while they were looking at the same cat, you'd presumably hope that the exact same parts of the brain would light up to the same degree as they did the first time. After all, you claim to have found The Neural Correlates of Cat Perception, not just any old random junk.

If you did find a perfect overlap in the area and the degree of activation, that would be an example of 100% test-retest reliability. In their paper, Bennett and Miller review the evidence on the test-retest reliability of fMRI studies; they found 63 of them. On average, the reliability of fMRI falls quite far short of perfection: the areas activated (clusters) showed a mean Dice overlap of 0.476, while the strength of activation showed a mean intraclass correlation (ICC) of 0.50.

But those numbers, taken out of context, do not mean very much. Indeed, what is a Dice overlap? You'll have to read the whole paper to find out, but even when you do, they still don't mean that much. I suspect this is why Bennett and Miller don't mention them in the Abstract of the paper, and in fact they don't spend more than a few lines discussing them at all.
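Roughly speaking, though, the Dice overlap is the proportion of activated voxels that two maps have in common (0 = no overlap, 1 = identical maps), and the ICC measures how well the strength of activation agrees across sessions. Here's a minimal sketch of the Dice calculation on made-up data (the ICC is a standard agreement statistic that I won't reimplement here):

```python
import numpy as np

# Dice overlap between two binary activation maps (e.g. thresholded statistical
# maps from a scan and a re-scan of the same person):
# 2 x (voxels active in both) / (voxels active in map 1 + voxels active in map 2).
def dice_overlap(map1, map2):
    map1, map2 = map1.astype(bool), map2.astype(bool)
    intersection = np.logical_and(map1, map2).sum()
    return 2.0 * intersection / (map1.sum() + map2.sum())

# Toy example: two 'scans' that activate overlapping but not identical voxels
scan1 = np.array([1, 1, 1, 0, 0, 0, 1, 0])
scan2 = np.array([1, 1, 0, 0, 1, 0, 1, 0])
print(dice_overlap(scan1, scan2))   # 0.75 here; ~0.48 was the average across fMRI studies
```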

A Dice overlap of 0.476 and an ICC of 0.50 are what you get if you average over all of the studies that anyone's done looking at the test-retest reliability of any particular fMRI experiment. But different fMRI experiments have different reliabilities. Saying that the average reliability of fMRI is 0.5 is rather like saying that the mean velocity of a human being is 0.3 km per hour. That's probably about right, averaging over everyone in the world, including those who are asleep in bed and those who are flying on airplanes - but it's not very useful. Some people are moving faster than others, and some scans are more reliable than others.


Most of this paper is not concerned with "how reliable fMRI is", but rather, with how to make any given scanning experiment more reliable. And this is an important thing to write about, because even the most optimistic cognitive neuroscientist would agree that many fMRI results are not especially reliable, and as Bennett and Miller say, reliability matters for lots of reasons:

Scientific truth. While it is a simple statement that can be taken straight out of an undergraduate research methods course, an important point must be made about reliability in research studies: it is the foundation on which scientific knowledge is based. Without reliable, reproducible results no study can effectively contribute to scientific knowledge.... if a researcher obtains a different set of results today than they did yesterday, what has really been discovered?
Clinical and Diagnostic Applications. The longitudinal assessment of changes in regional brain activity is becoming increasingly important for the diagnosis and treatment of clinical disorders...
Evidentiary Applications. The results from functional imaging are increasingly being submitted as evidence into the United States legal system...
Scientific Collaboration. A final pragmatic dimension of fMRI reliability is the ability to share data between researchers...
So what determines the reliability of any given fMRI study? Lots of things. Some of them are inherent to the nature of the brain, and are not really things we can change: activation in response to basic perceptual and motor tasks is probably always going to be more reliable than activation related to "higher" functions like emotions.

But there are lots of things we can change. Although it's rarely obvious from the final results, researchers make dozens of choices when designing and analyzing an fMRI experiment, many of which can at least potentially have a big impact on the reliability of their findings. Bennett and Miller cover lots of them:
voxel size... repetition time (TR), echo time (TE), bandwidth, slice gap, and k-space trajectory... spatial realignment of the EPI data can have a dramatic effect on lowering movement-related variance ... Recent algorithms can also help remove remaining signal variability due to magnetic susceptibility induced by movement... simply increasing the number of fMRI runs improved the reliability of their results from ICC = 0.26 to ICC = 0.58. That is quite a large jump for an additional ten or fifteen minutes of scanning...
The details get extremely technical, but then, when you do an fMRI scan you're using a superconducting magnet to image human neural activity by measuring the quantum spin properties of protons. It doesn't get much more technical.

Perhaps the central problem with modern neuroimaging research is that it's all too easy for researchers to write off the important experimental design issues as "merely" technicalities, and just put some people in a scanner using the default scan sequence and see what happens. This is something few fMRI users are entirely innocent of, and I'm certainly not, but it is a serious problem. As Bennett and Miller point out, the devil is in the technical details.
The generation of highly reliable results requires that sources of error be minimized across a wide array of factors. An issue within any single factor can significantly reduce reliability. Problems with the scanner, a poorly designed task, or an improper analysis method could each be extremely detrimental. Conversely, elimination of all such issues is necessary for high reliability. A well maintained scanner, well designed tasks, and effective analysis techniques are all prerequisites for reliable results.
Bennett, C. M., & Miller, M. B. (2010). How reliable are the results from functional magnetic resonance imaging? Annals of the New York Academy of Sciences.

Life Without Serotonin

Via Dormivigilia, I came across a fascinating paper about a man who suffered from a severe lack of monoamine neurotransmitters (dopamine, serotonin etc.) as a result of a genetic mutation: Sleep and Rhythm Consequences of a Genetically Induced Loss of Serotonin


Neuroskeptic readers will be familiar with monoamines. They're psychiatrists' favourite neurotransmitters, and are hence very popular amongst psych drug manufacturers. In particular, it's widely believed that serotonin is the brain's "happy chemical" and that clinical depression is caused by low serotonin while antidepressants work by boosting it.

Critics charge that there is no evidence for any of this. My own opinion is that it's complicated, but that while there's certainly no simple relation between serotonin, antidepressants and mood, they are linked in some way. It's all rather mysterious, but then, the functions of serotonin in general are; despite 50 years of research, it's probably the least understood neurotransmitter.

The new paper adds to the mystery, but also provides some important new data. Leu-Semenescu et al. report on the case of a 28-year-old man, with consanguineous parents, who suffers from a rare genetic disorder, sepiapterin reductase deficiency (SRD). SRD patients lack an enzyme which is involved, indirectly, in the production of the monoamines serotonin and dopamine, and also of melatonin and noradrenaline, which are produced from these two. SRD causes a severe (but not total) deficiency of these neurotransmitters.

The most obvious symptoms of SRD are related to the lack of dopamine, and include poor coordination and weakness, very similar to Parkinson's Disease. An interesting feature of SRD is that these symptoms are mild in the morning, worsen during the day, and improve with sleep. Such diurnal variation is also a hallmark of severe depression, although in depression it's usually the other way around (better in the evening).

The patient reported on in this paper suffered Parkinsonian symptoms from birth, until he was diagnosed with dystonia at age 5 and started on L-dopa to boost his dopamine levels. This immediately and dramatically reversed the problems.

But his serotonin synthesis was still impaired, although doctors didn't realize this until age 27. As a result, Leu-Semenescu et al say, he suffered from a range of other, non-dopamine-related symptoms. These included increased appetite - he ate constantly, and was moderately obese - mild cognitive impairment, and disrupted sleep:

The patient reported sleep problems since childhood. He would sleep 1 or 2 times every day since childhood and was awake during more than 2 hours most nights since adolescence. At the time of the first interview, the night sleep was irregular with a sleep onset at 22:00 and offset between 02:00 and 03:00. He often needed 1 or 2 spontaneous, long (2- to 5-h) naps during the daytime.
After doctors did a genetic test and diagnosed SRD, they treated him with 5HTP, a precursor to serotonin. The patient's sleep cycle immediately normalized, his appetite was reduced, and his concentration and cognitive function improved (although that may have been because he was less tired). The paper includes his before-and-after hypnograms.

Disruptions in sleep cycle and appetite are likewise common in clinical depression. The direction of the changes in depression varies: loss of appetite is common in the most severe "melancholic" depression, while increased appetite is seen in many other people.

For sleep, both daytime sleepiness and night-time insomnia, especially waking up too early, can occur in depression. The most interesting parallel here is that people with depression often show a faster onset of REM (dreaming) sleep, which was also seen in this patient before 5HTP treatment. However, it's not clear what was due to serotonin and what was due to melatonin because melatonin is known to regulate sleep.

Overall, though, the biggest finding here was a non-finding: this patient wasn't depressed, despite having much reduced serotonin levels. This is further evidence that serotonin isn't the "happy chemical" in any simple sense.

On the other hand, the similarities between his symptoms and some of the symptoms of depression suggest that serotonin is doing something in that disorder. This fits with existing evidence from tryptophan depletion studies showing that low serotonin doesn't cause depression in most people, but does re-activate symptoms in people with a history of the disease. As I said, it's complicated...

Leu-Semenescu, S., et al. (2010). Sleep and Rhythm Consequences of a Genetically Induced Loss of Serotonin. Sleep, 33(3), 307-314.

Is Your Brain A Communist?

Capitalists beware. No less a journal than Nature has just published a paper proving conclusively that the human brain is a Communist, and that it's plotting the overthrow of the bourgeois order and its replacement by the revolutionary Dictatorship of the Proletariat even as we speak.

Kind of. The article, Neural evidence for inequality-averse social preferences, doesn't mention the C word, but it does claim to have found evidence that people's brains display more egalitarianism than people themselves admit to.

Tricomi et al. took 20 pairs of men. At the start of the study, both men got a $30 payment, but one member of each pair was then randomly chosen to get a $50 bonus. Thus, one guy was "rich", while the other was "poor". Both men then had fMRI scans, during which they were offered various sums of money and saw their partner being offered money too. They rated how "appealing" these money transfers were on a 10 point scale.

What happened? Unsurprisingly, both "rich" and "poor" said that they were pleased at the prospect of getting more cash for themselves (the poor somewhat more so), but people also had opinions about payments to the other guy:

the low-pay group disliked falling farther behind the high-pay group (‘disadvantageous inequality aversion’), because they rated positive transfers to the high-pay participants negatively, even though these transfers had no effect on their own earnings. Conversely, the high-pay group seemed to value transfers [to the poor person] that closed the gap between their earnings and those of the low-pay group (‘advantageous inequality aversion’)
What about the brain? When people received money for themselves, activity in the ventromedial prefrontal cortex (vmPFC) and the ventral striatum correlated with the size of their gain.

However, when presented with a payment to the other person, these areas seemed to be rather egalitarian. Activity rose in rich people when their poor colleagues got money. In fact, it was greater in that case than when they got money themselves, which means the "rich" people's neural activity was more egalitarian than their subjective ratings were. Whereas in "poor" people, the vmPFC and the ventral striatum only responded to getting money, not to seeing the rich getting even richer.


The authors conclude that this
indicates that basic reward structures in the brain may reflect even stronger equity considerations than is necessarily expressed or acted on at the behavioural level... Our results provide direct neurobiological evidence in support of the existence of inequality-averse social preferences in the human brain.
Notice that this is essentially a claim about psychology, not neuroscience, even though the authors used neuroimaging in this study. They started out by assuming some neuroscience - in this case, that activity in the vmPFC and the ventral striatum indicates reward i.e. pleasure or liking - and then used this to investigate psychology, in this case, the idea that people value equality per se, as opposed to the alternative idea, that "dislike for unequal outcomes could also be explained by concerns for social image or reciprocity, which do not require a direct aversion towards inequality."

This is known as reverse inference, i.e. inference from data about the brain to theories about the mind. It's very common in neuroimaging papers - we've all done it - but it is problematic. In this case, the problem is that the argument relies on the idea that activity in the vmPFC and ventral striatum is evidence for liking.

But while there's certainly plenty of evidence that these areas are activated by reward, and the authors confirmed that activity here correlated with monetary gain, that doesn't mean that they only respond to reward. They could also respond to other things. For example, there's evidence that the vmPFC is also activated by looking at angry and sad faces.

Or to put it another way: seeing someone you find attractive makes your pupils dilate. If you were to be confronted by a lion, your pupils would dilate. Fortunately, that doesn't mean you find lions attractive - because fear also causes pupil dilation.
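To put the same worry in numbers: in Bayes' rule terms, how much an activation tells you about "liking" depends on how selectively the region responds to liking compared to everything else. A toy calculation, with all of the probabilities invented purely for illustration:

```python
# Toy Bayes' rule illustration of why reverse inference is shaky.
# Question: given vmPFC activation, how likely is it that the person likes the stimulus?
p_activation_given_liking = 0.8        # vmPFC often activates when people like something
p_activation_given_not_liking = 0.4    # ...but it also activates for plenty of other states
p_liking = 0.5                         # prior probability that the stimulus is liked

p_activation = (p_activation_given_liking * p_liking
                + p_activation_given_not_liking * (1 - p_liking))
p_liking_given_activation = p_activation_given_liking * p_liking / p_activation
print(f"P(liking | vmPFC active) = {p_liking_given_activation:.2f}")   # ~0.67: suggestive, not proof
```

The less selective the region, the closer that posterior probability gets to the prior, and the weaker the reverse inference.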

So while Tricomi et al. argue, on the basis of these results, that people, or brains, like equality, I remain to be fully convinced. As Russell Poldrack noted in 2006:
caution should be exercised in the use of reverse inference... In my opinion, reverse inference should be viewed as another tool (albeit an imperfect one) with which to advance our understanding of the mind and brain. In particular, reverse inferences can suggest novel hypotheses that can then be tested in subsequent experiments.
Tricomi, E., Rangel, A., Camerer, C. F., & O'Doherty, J. P. (2010). Neural evidence for inequality-averse social preferences. Nature, 463(7284), 1089-1091. PMID: 20182511

The Crazies

I just watched The Crazies, a remake of Romero's 1973 original of the same name, about a small town struck by an outbreak of insanity following a biological weapon accident. It's not for the faint of heart: I was unsettled by a number of the scenes and I watch a lot of horror movies.

Which is to say, it's excellent. It maintains a high pitch of tension through the whole 100 minutes, something that a lot of horror doesn't manage. All too often, I find, a movie will start out scary enough, but then by some point about half way through it's effectively turned into an action movie.

This happens when the nature of the monster/killer/zombies have been revealed and all the protagonists have to do is fight it out - with the uncertainty gone, the horror goes, too. Without giving too much away, The Crazies avoids this trap. (The last great horror movie I saw, Paranormal Activity, does too, although in a very different way).

Of course the real reason I liked this movie is that it's got some neuroscience. The Crazies is (spoilers) about an engineered virus that infects the brain. Early symptoms include fever, blank stares, flattened emotions and stereotypies. This then progresses, over the course of about 48 hours, to psychopathic aggression, at least in some cases, although other victims just become confused. The "crazies" are somewhat like zombies - they have a Zombie Spectrum Disorder, one might say - but they retain enough of their personality and intelligence to be capable of much more elaborate and calculating violence than the average braaaaaaains-muncher, which is what makes them so disturbing.

Could a virus do that? Rabies, notoriously, causes aggression in animals and humans, although the incubation period is weeks rather than days, and aggression is only one of many neurological symptoms of the disease. But maybe an engineered virus could achieve a more specific effect if it was able to selectively infect the area of the brain reported on in this rather scary paper:

The authors report a patient with advanced PD, successfully treated by bilateral stimulation of the subthalamic nucleus, who developed acute transient aggressive behavior during intraoperative electrical test stimulation. The electrode responsible for this abnormal behavior was located within the lateral part of the posteromedial hypothalamic region (triangle of Sano). The authors suggest that affect can be dramatically modulated by the selective manipulation of deep brain structures.

 