Premature Brain Diagnosis in Japan?

Nature has a disturbing article from their Asia correspondent David Cyranoski: Thought experiment. It's open access.

In brief: a number of top Japanese psychiatrists have started offering a neuroimaging method called NIRS to their patients as a diagnostic tool. They claim that NIRS shows the neural signatures of different mental illnesses.

The technology was approved by the Japanese authorities in April 2009, and since then it's been used on at least 300 patients, who pay $160 for the privilege. However, it's not clear that it works.

To put it mildly.

*

NIRS is Near Infra-Red Spectroscopy. It measures blood flow and oxygenation in the brain. In this respect, it's much like fMRI, but whereas fMRI uses superconducting magnets and quantum wizardry to achieve this, NIRS simply shines a near-infra-red light into the head, and records the light reflected back.
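For the technically curious, the conversion from reflected light to haemoglobin changes uses the modified Beer-Lambert law. Here's a minimal sketch in Python - the extinction coefficients, pathlength factor and optical-density changes are rough illustrative numbers, not calibrated values:

```python
import numpy as np

# Modified Beer-Lambert law: the change in optical density at wavelength w is
#   dOD(w) = (eps_HbO2(w) * d[HbO2] + eps_HbR(w) * d[HbR]) * L * DPF
# Measuring at two wavelengths gives two equations in the two unknowns:
# the oxy- and deoxy-haemoglobin concentration changes.

# Rough illustrative extinction coefficients [1/(mM*cm)]; deoxy-Hb absorbs
# more strongly at ~760 nm, oxy-Hb more strongly at ~850 nm.
eps = np.array([[0.6, 1.5],    # 760 nm: [HbO2, HbR]
                [1.1, 0.8]])   # 850 nm: [HbO2, HbR]
L = 3.0     # source-detector separation (cm)
DPF = 6.0   # differential pathlength factor (assumed equal at both wavelengths)

d_od = np.array([-0.001, 0.002])  # toy optical-density changes at 760/850 nm

# Solve the 2x2 linear system for the concentration changes (mM)
d_hbo2, d_hbr = np.linalg.solve(eps * L * DPF, d_od)
print(f"d[HbO2] = {d_hbo2:+.2e} mM, d[HbR] = {d_hbr:+.2e} mM")
```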

It's a lot cheaper and easier than MRI. However, the images it provides are a lot less detailed, and it can only image the surface of the brain. NIRS has a small but growing number of users in neuroscience research; it's especially popular in Japan, for some reason, but it's also found plenty of users elsewhere.

The clinical use of NIRS in psychiatry was pioneered by one Dr Masato Fukuda, and he's been responsible for most of the trials. So what are these trials?

As far as I can see (correct me if I'm wrong), these are all the trials comparing patients and controls that he's been an author on.
There are also a handful of Fukuda's papers in Japanese, which I can't read, but as far as I can tell they're general discussions rather than data papers.

So we have 342 people in all. Actually, a bit less, because some of them were included in more than one study. That's still quite a lot - but there were only 5 panic patients, 30 depressed (including 9 elderly, who may be different), 38 eating disordered and just 17 bipolar in the mix.

And the bipolar people were currently feeling fine, or just a little bit down, at the time of the NIRS. There are quite a lot of other trials from other Japanese groups, but sticking with bipolar disorder as an example, no trials that I could find examined people who were currently ill. The only other two trials, both very small, were in recovered people (1,2).

Given that the whole point of diagnosis is to work out what's wrong with a given patient while they're ill, this matters to every patient. Anyone could be psychotic, or depressed, or eating disordered, or any combination thereof.

Worse yet, in many of these studies the patients were taking medications. In the 2006 depression/bipolar paper, for example, all of the bipolar patients were on heavy-duty mood stabilizers, mostly lithium, plus a few antipsychotics and plenty of antidepressants. The depressed people were on antidepressants.

There's a deeper problem. Fukuda says that NIRS corresponds with the clinical diagnosis in 80% of cases. Let's assume that's true. Well, if the NIRS agrees with the clinical diagnosis, it doesn't tell us anything we didn't already know. If the NIRS disagrees, who do you trust?

I think you'd have to trust the clinician, because the clinician is the "gold standard" against which the NIRS is compared. Psychiatric diseases are defined clinically. If you had to choose between 80% gold and pure gold, it's not a hard choice.

Now NIRS could, in theory, be better than clinical diagnosis: it could provide more accurate prognosis, and more useful treatment recommendations. That would be cool. But as far as I can see there's absolutely no published evidence on that.

To find out you'd have to compare patients diagnosed with NIRS to patients diagnosed normally - or better, to patients randomized to get sham "placebo" NIRS, as the authors of this trial from last year should have done. To my knowledge, there have been no such tests at all.

*

So what? NIRS is harmless, quick, and $160 is not a lot. Patients like it: “They want some kind of hard evidence,” [Fukuda says], especially when they have to explain absences from work. If it helps people to come to terms with their illness - no mean feat in many cases - what's the problem?

My worry is that it could mean misdiagnosing patients, and therefore mis-treating them. Here's the most disturbing bit of the article:
...when Fukuda calculates his success rates, NIRS results that match the clinical diagnosis are considered a success. If the results don’t match, Fukuda says he will ask the patient and patient’s family “repeatedly” whether they might have missed something — for example, whether a depressed patient whose NIRS examination suggests schizophrenia might have forgotten to mention that he was experiencing hallucinations.
Quite apart from the implication that the 80% success rate might be inflated, this suggests that some dubious clinical decisions might be going on. The first-line treatments for schizophrenia are quite different from, and rather less pleasant than, those for depression. And a lot of perfectly healthy people report "hallucinations" if you probe hard enough. "Seek, and ye shall find" - so be careful what you seek for.

While NIRS is a Japanese speciality, other brain-based diagnostic or "treatment personalization" tools are being tested elsewhere. In the USA, EEG has been proposed by a number of groups. I've been rather critical of these methods, but at least they've done some trials to establish whether using them actually improves patient outcomes.

In my view, all of these "diagnostic" or "predictive" tools should be subject to exactly the same tests as treatments are: double blind, randomized, sham-controlled trials.

Cyranoski, D. (2011). Neuroscience: Thought experiment. Nature, 469(7329), 148-149. DOI: 10.1038/469148a

fMRI Scanning Salmon - Seriously.

Back in 2009, a crack team of neuroscientists led by Craig Bennett (blog) famously put a dead fish into an MRI scanner and showed it some pictures.



They found some blobs of activation - when they used an inappropriately lenient statistical method. Their point, of course, was to draw attention to the fact that you really shouldn't use that method for fMRI. You can read the whole paper here. The Atlantic Salmon who heroically volunteered for the study was no more than a prop. In fact, I believe he ended up getting eaten.

But now, a Japanese team have just published a serious paper which actually used fMRI to measure brain activity in some salmon: Olfactory Responses to Natal Stream Water in Sockeye Salmon by BOLD fMRI.

How do you scan a fish? Well, like this:

A total of 6 fish were scanned. The salmon were immobilized by adding an anaesthetic (eugenol) and a muscle relaxant (gallamine) to their tank of water. Then, they were carefully clamped into place to make sure they really wouldn't move, while a stream of oxygenated water was pumped through their tank.

Apart from that, it was pretty much a routine fMRI scan.

Why would you want to scan a fish? This is where the serious science comes in. Salmon are born in rivers, but as juveniles they swim out to live in the ocean; when they reach maturity, they return to the river to breed. What's amazing is that salmon will return to the same river that they were born in - even if they have to travel thousands of miles to get there.

How they manage this is unclear, but the smell (or maybe taste) of the water from their birth river has long been known to be crucial, at least once they've reached the right general area (see here for a good overview). Every river contains a unique mixture of chemicals, both natural and artificial (pollutants). Salmon seem to be attracted to whatever chemicals were present in the water when they were young.

In this study, the fMRI revealed that relative to pure water, home-stream water activated a part of the salmon's telencephalon - the most "advanced" part (in humans, it constitutes the vast majority of the brain; in fish, it's tiny). By contrast, a control scent (the amino acid L-serine) did not activate this area, even though the concentration of L-serine was far higher than that of anything in the home-stream water. How this happens is unclear, but further studies of the identified telencephalon area ought to shed more light on it.

So fishMRI is clearly a fast-developing area of neuroscience. In fact, as this graph shows, it's enjoying exponential growth and, if current trends continue, could become almost as popular as scanning people...

Link: Also blogged at NeuroDojo.

Bandoh H, Kida I, & Ueda H (2011). Olfactory Responses to Natal Stream Water in Sockeye Salmon by BOLD fMRI. PLoS ONE, 6(1). PMID: 21264223

When "Healthy Brains" Aren't

There's a lot of talk, much of it rather speculative, about "neuroethics" nowadays.

But there's one all too real ethical dilemma, a direct consequence of modern neuroscience, that gets very little attention. This is the problem of incidental findings on MRI scans.

An "incidental finding" is when you scan someone's brain for research purposes, and, unexpectedly, notice that something looks wrong with it. This is surprisingly common: estimates range from 2–8% of the general population. It will happen to you if you regularly use MRI or fMRI for research purposes, and when it does, it's a shock. Especially when the brain in question belongs to someone you know. Friends, family and colleagues are often the first to be recruited for MRI studies.

This is why it's vital to have a system in place for dealing with incidental findings. Any responsible MRI scanning centre will have one, and as a researcher you ought to be familiar with it. But what system is best?

Broadly speaking there are two extreme positions:

  1. Research scans are not designed for diagnosis, and 99% of MRI researchers are not qualified to make a diagnosis. What looks "abnormal" to Joe Neuroscientist BSc or even Dr Bob Psychiatrist is rarely a sign of illness, and likewise they can easily miss real diseases. So, we should ignore incidental findings, pretend the scan never happened, because for all clinical purposes, it didn't.
  2. You have to do whatever you can with an incidental finding. You have the scans, like it or not, and if you ignore them, you're putting lives at risk. No, they're not clinical scans, but they can still detect many diseases. So all scans should be examined by a qualified neuroradiologist, and any abnormalities which are possibly pathological should be followed up.
Neither of these extremes is very satisfactory. Ignoring incidental findings sounds nice and easy, until you actually have to do it, especially if it's your girlfriend's brain. On the other hand, to get every single scan properly checked by a neuroradiologist would be expensive and time-consuming. Also, it would effectively turn your study into a disease screening program - yet we know that screening programs can cause more harm than good, so this is not necessarily a good idea.

Most places adopt a middle-of-the-road approach. Scans aren't routinely checked by an expert, but if a researcher spots something weird, they can refer the scan to a qualified clinician to follow up. Almost always, there's no underlying disease. Even large, OMG-he-has-a-golf-ball-in-his-brain findings can be benign. But not always.

This is fine but it doesn't always work smoothly. The details are everything. Who's the go-to expert for your study, and what are their professional obligations? Are they checking your scan "in a personal capacity", or is this a formal clinical referral? What's their e-mail address? What format should you send the file in? If they're on holiday, who's the backup? At what point should you inform the volunteer about what's happening?

Like fire escapes, these things are incredibly boring, until the day when they're suddenly not.

A new paper from the University of California Irvine describes a computerized system that made it easy for researchers to refer scans to a neuroradiologist. A secure website was set up and publicized in the University's neuroscience community.

Suspect scans could be uploaded, in one of two common formats. They were then anonymized and automatically forwarded to the Department of Radiology for an expert opinion. Email notifications kept everyone up to date with the progress of each scan.
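The paper describes the workflow rather than the code, but here's a minimal sketch of what the anonymization step might look like, using the pydicom library. The field list, function and file names are my assumptions, not the actual UC Irvine implementation:

```python
import pydicom

def anonymize_scan(in_path: str, out_path: str) -> None:
    """Strip obvious identifying fields from a DICOM file before referral.
    A sketch only - a real system would follow a full de-identification
    profile, not this short list."""
    ds = pydicom.dcmread(in_path)
    for field in ("PatientName", "PatientID", "PatientBirthDate",
                  "PatientAddress", "ReferringPhysicianName"):
        if field in ds:
            setattr(ds, field, "")       # blank out the identifying value
    ds.remove_private_tags()             # vendor tags often hide identifiers
    ds.save_as(out_path)

anonymize_scan("suspect_scan.dcm", "suspect_scan_anon.dcm")
# The anonymized file would then be forwarded to the radiologist, with
# e-mail notifications tracking its progress.
```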

This seems like a very good idea, partly because of the technical advantages, but also because of the "placebo effect" - the fact that there's an electronic system in place sends the message: we're serious about this, please use it.

Out of about 5,000 research scans over 5 years, there were 27 referrals. Most were deemed benign... except one, which turned out to be potentially very serious: suspected hydrocephalus, increased fluid pressure in the brain, prompting an urgent referral to hospital for further tests.

There's no ideal solution to the problem of incidental findings, because by their very nature, research scans are kind of clinical and kind of not. But this system seems as good as any.

Cramer SC, Wu J, Hanson JA, Nouri S, Karnani D, Chuang TM, & Le V (2011). A system for addressing incidental findings in neuroimaging research. NeuroImage. PMID: 21224007

Retract That Seroxat?

Should a dodgy paper on antidepressants be retracted? And what's scientific retraction for, anyway?


Read all about it in a new article in the BMJ: Rules of Retraction. It's about the efforts of two academics, Jon Jureidini and Leemon McHenry. Their mission - so far unsuccessful - is to get this 2001 paper retracted: Efficacy of paroxetine in the treatment of adolescent major depression.

Jureidini is a member of Healthy Skepticism, a fantastic Australian organization that Neuroskeptic readers have encountered before. They've got lots of detail on the ill-fated "Study 329", including internal drug company documents, here.

So what's the story? Study 329 was a placebo-controlled trial of the SSRI paroxetine (Paxil, Seroxat) in 275 depressed adolescents. The paper concluded that "Paroxetine is generally well tolerated and effective for major depression in adolescents." It was published in the Journal of the American Academy of Child and Adolescent Psychiatry (JAACAP).

There are two issues here: whether paroxetine worked, and whether it was safe. On safety, the paper concluded that "Paroxetine was generally well tolerated...and most adverse effects were not serious." Technically true, but only because the serious adverse events were outnumbered by a mass of mild side effects.

In fact, 11 patients on paroxetine reported serious adverse events, including suicidal ideation or behaviour, and 7 were hospitalized. Just 2 patients in the placebo group had such events. Yet we are reassured that "Of the 11, only headache (1 patient) was considered by the treating investigator to be related to paroxetine treatment."

The drug company argue that it didn't become clear that paroxetine caused suicidal ideation in adolescents until after the paper was published. In 2002, British authorities reviewed the evidence and said that paroxetine should not be given to this age group.

That's as may be; the fact remains that in this paper there was a strongly raised risk. In fairness, though, all the data were there in the paper, for readers to draw their own conclusions from. The paper downplays the risk, but the numbers are there.


*

The efficacy question is where the allegations of dodgy practices are most convincing. The paper concludes that paroxetine worked, while imipramine, an older antidepressant, didn't.

Jureidini and McHenry say that paroxetine only worked on a few of the outcomes - ways of measuring depression and how much the patients improved. On most of the outcomes it didn't work, but the paper focusses on the ones where it did. According to the BMJ:

Study 329’s results showed that paroxetine was no more effective than the placebo according to measurements of eight outcomes specified by Martin Keller, professor of psychiatry at Brown University, when he first drew up the trial.

Two of these were primary outcomes...the drug also showed no significant effect for the initial six secondary outcome measures. [it] only produced a positive result when four new secondary outcome measures, which were introduced following the initial data analysis, were used... Fifteen other new secondary outcome measures failed to throw up positive results.

Here's the worst example. In the original protocol, two "primary" endpoints were specified: the change in the total Hamilton Scale (HAMD) score, and % of patients who 'responded', defined as either an improvement of more than 50% of their starting HAMD score or a final HAMD of 8 or below.

On neither of these measures did paroxetine work better than placebo at the p=0.05 significance level. It did work if you defined 'responded' to mean only a final HAMD of 8 or below, but this was not how it was defined in the protocol. In fact, the Methods section of the paper follows the protocol faithfully. Yet in the Results section, the authors still say that:
Of the depression-related variables, paroxetine separated statistically from placebo at endpoint among four of the parameters: response (i.e., primary outcome measure)...
It may seem like a subtle point, but it's absolutely crucial: paroxetine just did not work on either pre-defined primary outcome measure, yet the paper says that it did.
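To see how much work that redefinition is doing, here's a toy example in Python - the patient numbers are invented for illustration, not Study 329's actual data:

```python
from scipy.stats import fisher_exact

# Hypothetical trial: 93 patients on drug, 87 on placebo.
n_drug, n_plac = 93, 87

# "Response" as pre-defined in the protocol (>50% HAMD drop, or HAMD <= 8):
resp_drug, resp_plac = 62, 50
_, p = fisher_exact([[resp_drug, n_drug - resp_drug],
                     [resp_plac, n_plac - resp_plac]])
print(f"protocol definition: p = {p:.3f}")  # well above 0.05

# "Response" redefined after the fact (final HAMD <= 8 only):
resp_drug, resp_plac = 59, 40
_, p = fisher_exact([[resp_drug, n_drug - resp_drug],
                     [resp_plac, n_plac - resp_plac]])
print(f"post-hoc definition: p = {p:.3f}")  # now below 0.05
```

The underlying data barely change; only the definition of success does, and with it the headline result.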

Finally, there were also issues of ghostwriting. I've never been that concerned by this in itself. If the science is bad, it's bad whoever wrote it. Still, it's hardly a good thing.

*

Does any of this matter? In one sense, no. Authorities have told doctors not to use paroxetine in adolescents with depression since 2002 (in the UK) and 2003 (in the USA). So retracting this paper wouldn't change much in the real world of treatment.

But in another sense, the stakes are enormous. If this paper were retracted, it would set a precedent and send a message: this kind of p-value fishing to get positive results is grounds for retraction.

This would be huge, because this kind of fishing is sadly very common. Retracting this paper would be saying: selective outcome reporting is a form of misconduct. So this debate is really not about Seroxat, but about science.
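A back-of-the-envelope calculation shows why fishing expeditions so often "succeed". If a useless drug is tested on many outcomes, each at p = 0.05, the chance of at least one spurious hit climbs fast; with the 19 new secondary measures mentioned above, it's over 60%, even before any cherry-picking of definitions. (This assumes independent outcomes, which real ones aren't, so treat it as a rough upper bound.)

```python
# Chance of at least one spuriously "significant" result when a drug
# that does nothing is tested on k independent outcomes at alpha = 0.05.
alpha = 0.05
for k in (1, 2, 8, 19):
    print(f"{k:2d} outcomes tested: P(at least one p < 0.05) = "
          f"{1 - (1 - alpha) ** k:.0%}")
# 1 outcome:    5%
# 2 outcomes:  10%
# 8 outcomes:  34%
# 19 outcomes: 62%
```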


There are no Senates or Supreme Courts in science. However, journal editors are in a unique position to help change this. They're just about the only people (grant awarders being the others) with the power to actually impose sanctions on scientists. It's not an official power, but it's clout.

Were the JAACAP to retract this paper, which they've so far said they have no plans to do, it would go some way to making these practices unacceptable. And I think no-one can seriously disagree that they should be unacceptable, and that science and medicine would be much better off if they were. Do we want more papers like this, or do we want fewer?

So I think the question of whether to retract or not boils down to whether it's OK to punish some people "to make an example of them", even though we know of plenty of others who have done the same, or worse, and won't be punished.

My feeling is: no, it's not very fair, but we're talking about multi-billion pound companies and a list of authors whose high-flying careers are not going to crash and burn just because one paper from 10 years ago gets pulled. If this were some poor 24-year-old's PhD thesis, it would be different, but these are grown-ups who can handle themselves.

So I say: retract.

Newman, M. (2010). The rules of retraction. BMJ, 341: c6985. DOI: 10.1136/bmj.c6985

Keller MB, et al. (2001). Efficacy of paroxetine in the treatment of adolescent major depression: a randomized, controlled trial. Journal of the American Academy of Child and Adolescent Psychiatry, 40 (7), 762-72 PMID: 11437014

Psychoanalysis: So Bad It's Good?

Many of the best things in life are terrible.


We all know about the fun to be found in failure, as exemplified by Judge A Book By Its Cover and of course FailBlog. The whole genre of B-movie appreciation is based on the maxim of: so bad, it's good.

But could the same thing apply to psychotherapies?

Here's the argument. Freudian psychoanalysis is a bit silly. Freud had pretensions to scientific respectability, but never really achieved it, and with good reason. You can believe Freud, and if you do, it kind of makes sense. But to anyone else, it's a bit weird. If psychoanalysis were a person, it would be the Pope.

By contrast, cognitive-behavioural therapy is eminently reasonable. It relies on straightforward empirical observations of the patient's symptoms, and on trying to change people's beliefs by rational arguments and real-life examples ("behavioural experiments"). CBT practitioners are always keen to do randomized controlled trials to provide hard evidence for their success. CBT is Richard Dawkins.

But what if the very irrationality of psychoanalysis is its strength? Mental illness is irrational. So's life, right? So maybe you need an irrational kind of therapy to deal with it.

This is almost the argument advanced by Robert Rowland Smith in a short piece In Defence of Psychoanalysis:

...The irony is that in becoming more “scientific”, CBT becomes less therapeutic. Now, Freud himself liked to be thought of as a scientist (he began his career in neurology, working on the spinal ganglia), but it’s the non-scientific features that make psychoanalysis the more, not the less, powerful.

I’m referring to the therapeutic relationship itself. Although like psychoanalysis largely a talking cure, CBT prefers to set aside the emotions in play between doctor and patient. Psychoanalysis does the reverse. To the annoyance no doubt of many a psychoanalytic patient, the very interaction between the two becomes the subject-matter of the therapy.

The respected therapist and writer Irvin Yalom, among others, argues that depression and associated forms of sadness stem from an inability to make good contact with others. Relationships are fundamental to happiness. And so a science that has the courage to include the doctor’s relationship with the patient within the treatment itself, and to work with it, is a science already modelling the solution it prescribes. What psychoanalysis loses in scientific stature, it gains in humanity.
Rowland Smith's argument is that psychoanalysis offers a genuine therapeutic relationship complete with transference and countertransference, while CBT doesn't. He also suggests that analysis is able to offer this relationship precisely because it's unscientific.

Human relationships aren't built on rational, scientific foundations. They can be based on lots of stuff, but reason and evidence ain't high on the list. Someone who agrees with you on everything, or helps you to discover things, is a colleague, but not yet a friend unless you also get along with them personally. Working too closely together on some technical problem can indeed prevent friendships forming, because you never have time to get to know each other personally.

Maybe CBT is just too sensible: too good at making therapists and patients into colleagues in the therapeutic process. It provides the therapist with a powerful tool for understanding and treating the patient's symptoms, at least on a surface level, and involving the patient in that process. But could this very rationality make a truly human relationship impossible?

I'm not convinced. For one thing, there can be no guarantee that psychoanalysis does generate a genuine relationship in any particular case. But you might say that you can never guarantee that, so that's a general problem with all such therapy.

More seriously, psychoanalysis still tries to be scientific, or at least technical, in that it makes use of a specialist vocabulary and ideas ultimately derived from Sigmund Freud. Few psychoanalysts today agree with Freud on everything, but, by definition, they agree with him on some things. That's why they're called "psychoanalysts".

But if psychoanalysis works because of the therapeutic relationship, despite, or even because, Freud was wrong about most things... why not just chat about the patient's problems with the minimum of theoretical baggage? Broadly speaking, counselling is just that. Rowland Smith makes an interesting point, but it's far from clear that it's an argument for psychoanalysis per se.

Note:
A truncated version of this post briefly appeared earlier because I was a wrong-button-clicking klutz this morning. Please ignore that if you saw it.

Fat Genes Make You Happy?

Does being heavier make you happier?

An interesting new paper from a British/Danish collaboration uses a clever trick based on genetics to untangle the messy correlation between obesity and mental health.

They had a huge sample: 53,221 people from Copenhagen, Denmark. The study measured everyone's height and weight to calculate their BMI, and asked them some simple questions about their mood, such as "Do you often feel nervous or stressed?"

Many previous studies have found that being overweight is correlated with poor mental health, or at least with unhappiness ("psychological distress"). And this was exactly what the authors found in this study, as well.

Being very underweight was also correlated with distress; perhaps these were people with eating disorders or serious medical illnesses. But if you set that small group aside, there was a nice linear correlation between BMI and unhappiness. When the authors controlled for various other variables like income, age, and smoking, the effect of BMI became smaller, but it was still significant.

But that's just a correlation, and as we all know, "correlation doesn't imply causation". Actually, it does: something must be causing the correlation, because it didn't just magically appear out of nowhere. The point is that we shouldn't make simplistic assumptions about what the causal direction is.

It would be easy to make these assumptions. Maybe being miserable makes you fat, due to comfort eating. Or maybe being fat makes you miserable, because overweight is considered bad in our society. Or both. Or neither. We don't know.

Finding this kind of correlation and then speculating about it is where a lot of papers finish, but for these authors, it was just the start. They genotyped everyone for two different genetic variants known, from lots of earlier work, to consistently affect body weight (FTO rs9939609 and MC4R rs17782313).

They confirmed that they were indeed associated with BMI; no surprise there. But here's the surprising bit: the "fat" variants of each gene were associated with less psychological distress. The effects were very modest, but then again, their effects on weight are small too (see the graph above; the effects are in terms of z scores and anything below 0.3 is considered "small".)

The picture was very similar for the other gene.

This allows us to narrow down the possibilities about causation. Being depressed clearly can't change your genotype. Nothing short of falling into a nuclear reactor can change your genotype. It also seems unlikely that genotype was correlated with something else which protects against depression. That's not impossible; it's the problem of population stratification, and it's a serious issue with multi-ethnic samples, but this paper only included white Danish people.
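For the curious, here's a minimal sketch of the Mendelian-randomization logic with simulated data (all the numbers are invented): because genotype shifts BMI but can't itself be caused by mood or by confounders, dividing the gene-distress association by the gene-BMI association gives an estimate of the causal effect of BMI on distress.

```python
import numpy as np

# Simulated data (all numbers invented). 'u' is an unmeasured confounder
# (think income, life events) that pushes BMI up and mood down.
rng = np.random.default_rng(0)
n = 100_000
g = rng.binomial(2, 0.4, n)                      # risk-allele count (0, 1 or 2)
u = rng.normal(size=n)                           # unmeasured confounder
bmi = 25 + 1.0 * g + u + rng.normal(size=n)      # genotype raises BMI
distress = -0.05 * bmi + u + rng.normal(size=n)  # true causal effect: -0.05

# Naive regression of distress on BMI is badly confounded by u:
naive = np.polyfit(bmi, distress, 1)[0]          # comes out positive

# Wald ratio: (gene -> distress effect) / (gene -> BMI effect).
# Valid only if the gene affects distress solely via BMI (no pleiotropy).
mr = np.polyfit(g, distress, 1)[0] / np.polyfit(g, bmi, 1)[0]
print(f"naive: {naive:+.3f}   MR: {mr:+.3f}   (true effect: -0.050)")
```

The naive slope is dragged positive by the confounder, while the Wald ratio lands near the true value - which is exactly why the genotype analysis can overturn the raw correlation.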

So the authors' conclusion is that being slightly heavier causes you to be slightly happier - even though, overall, weight is strongly correlated with being less happy. This seems paradoxical, but that's what the data show.

That conclusion would fall apart, though, if these genes directly affect mood and also, separately, make you fatter. The authors argue that this is unlikely, but I wonder. Both FTO and MC4R are active in the brain: they influence weight by making you eat more. If they can affect appetite, they might also affect mood. A quick PubMed search only turns up a couple of rather speculative papers about MC4R and its possible links to mood, so there's no direct evidence for this, but we can't rule it out.

But this paper is still an innovative and interesting attempt to use genetics to help get beneath the surface of complex correlations. It doesn't explain the observed correlation between BMI and unhappiness - it actually makes it more mysterious. But that's a whole lot better than just speculating about it.

Lawlor DA, Harbord RM, Tybjaerg-Hansen A, Palmer TM, Zacho J, Benn M, Timpson NJ, Smith GD, & Nordestgaard BG (2011). Using genetic loci to understand the relationship between adiposity and psychological distress: a Mendelian Randomization study in the Copenhagen General Population Study of 53,221 adults. Journal of Internal Medicine. PMID: 21210875

Antidepressants Still Don't Work In Mild Depression

A new paper has added to the growing ranks of studies finding that antidepressant drugs don't work in people with milder forms of depression: Efficacy of antidepressants and benzodiazepines in minor depression.


It's in the British Journal of Psychiatry and it's a meta-analysis of 6 randomized controlled trials on three different drugs. Antidepressants were no better than placebo in patients with "minor depressive disorder", which is like the better-known Major Depressive Disorder but... well, not as major, because you only need to have 2 symptoms instead of 5 from this list.

They also wanted to find out whether benzodiazepines (like Valium) worked in these people, but there just weren't any good studies out there.

The results look solid, and they fit with the fact that antidepressants don't work in people diagnosed with "major" depression, but who fall at the "milder" end of that range, something which several recent studies have shown. Neuroskeptic readers will, if they've been paying attention, find this entirely unsurprising.

But in fact, it's not just not news, it's positively ancient. 50 years ago, at the dawn of the antidepressant era, it was commonly said that antidepressants don't work in everyone with "depression": they work best in people with endogenous depression, and less well, or not at all, in those with "neurotic" or "reactive" depressions (see, e.g. 1, 2, 3, but the literature goes back even further).

"Endogenous" is not strictly the same as "severe", however, in practice, these two concepts have never really been clearly seperated, and they're largely equivalent today, because the leading measure of "severity", the Hamilton Scale, measures symptoms, and arguably these symptoms are mostly (though not entirely) the symptoms of the old concept of endogenous depression. The Hamilton Scale was formulated in 1960 when modern concepts of "minor depressive disorder" and "major depressive disorder" were unknown.

Why then are we only now working out that antidepressants only work in some people? There's one obvious answer: Prozac, which arrived in 1987. Before Prozac, antidepressants were serious stuff. They could easily kill you in overdose, and they had a lot of side effects. Many of them even meant that you couldn't eat cheese. As a result, they weren't used lightly.

Prozac and the other SSRIs changed the game completely. They're much less toxic, the side effects are milder, and you can eat as much cheese as you want. So it's very easy to prescribe an SSRI - maybe it won't work, but it can't hurt, so why not try it...?

As a result, I think, the concept of "depression" broadened. Before Prozac, depression was inherently serious, because the treatments were serious. After Prozac, it didn't have to be. Drug company marketing no doubt helped this process along, but marketing has to have something to work with. Over the past 25 years, terms like "endogenous", "neurotic" etc. largely disappeared from the literature, replaced by the single construct of "Major Depression".

For nearly 1,000 years, the great scientific and philosophical works of the ancient Greeks and Romans were lost to Europeans. Only when Christian scholars rediscovered them in the libraries of the Islamic world did Europe begin to remember what it had forgotten. We call those the Dark Ages. Will the past 25 years be remembered as psychiatry's Dark Age?

Barbui, C., Cipriani, A., Patel, V., Ayuso-Mateos, J., & van Ommeren, M. (2011). Efficacy of antidepressants and benzodiazepines in minor depression: systematic review and meta-analysis. The British Journal of Psychiatry, 198(1), 11-16. DOI: 10.1192/bjp.bp.109.076448

Left Wing vs. Right Wing Brains

So apparently: Left wing or right wing? It's written in the brain

People with liberal views tended to have increased grey matter in the anterior cingulate cortex, a region of the brain linked to decision-making, in particular when conflicting information is being presented...

Conservatives, meanwhile, had increased grey matter in the amygdala, an area of the brain associated with processing emotion.

This was based on a study of 90 young adults using MRI to measure brain structure. Sadly that press release is all we know about the study at the moment, because it hasn't been published yet. The BBC also have no fewer than three radio shows about it here, here and here.

Politics blog Heresy Corner discusses it...
Subjects who professed liberal or left-wing opinions tended to have a larger anterior cingulate cortex, an area of the brain which, we were told, helps process complex and conflicting information. (Perhaps they need this extra grey matter to be able to cope with the internal contradictions of left-wing philosophy.)
This kind of story tends to attract chucklesome comments.

In truth, without seeing the full scientific paper, we can't know whether the differences they found were really statistically solid, or whether they were voodoo or fishy. The authors, Geraint Rees and Ryota Kanai, have both published a lot of excellent neuroscience in the past, but that's no guarantee.

In fact, however, I suspect that the brain is just the wrong place to look if you're interested in politics, because most political views don't originate in the individual brain: they originate in the wider culture, and are absorbed and regurgitated without much thought. This is a real shame, because all of us, left or right, have a brain, and it's really quite nifty.

But when it comes to politics, we generally don't use it. The brain is a powerful organ designed to help you deal with reality in all its complexity. For a lot of people, politics doesn't take place there; it happens in fairytale kingdoms populated by evil monsters, foolish jesters, and brave knights.

Given that the characters in this story are mindless stereotypes, there's no need for empathy. Because the plot comes fully-formed from TV or a newspaper, there's no need for original ideas. Because everything is either obviously right or obviously wrong, there's not much reasoning required. And so on. Which is why, amongst other things, this happens.

I don't think individual personality is very important in determining which political narratives and values you adopt: your family background, job, and position in society are much more important.

Where individual differences matter, I think, is in deciding how "conservative" or "radical" you are within whatever party you find yourself. Not in the sense of left or right, but in terms of how keen you are on grand ideas and big changes, as opposed to cautious, boring pragmatism.

In this sense, there are conservative liberals (e.g. Obama) and radical conservatives (e.g. Palin), and that's the kind of thing I'd be looking for if I were trying to find political differences in the brain.

Links: If right wingers have bigger amygdalae, does that mean patient SM, the woman with no amygdalae at all, must be a communist? Then again, Neuroskeptic readers may remember that the brain itself is a communist...

 