MAOis For Dummies (And British Pundits)

Allegedly, British Prime Minister Gordon Brown takes a monoamine oxidase inhibitor (MAOi) antidepressant.

That's the rumor, based on the rumored fact that he is unable to eat certain things, notably cheese and Chianti wine. These are foods rich in tyramine, a chemical that's normally harmless, but can be toxic in people taking MAOis. So, if Brown is indeed on a Chianti-and-cheeseless regime, he almost certainly is taking one of the several MAOis on the market today.

The original source for this idea is this blogger, who claims to have heard it from an unnamed Brown aide. Is he to be believed? A glance over his website shows he is hardly an impartial commentator, and he goes on to demonstrate his psychological insight with statements like

"Obsessive Compulsive Disorder (OCD) is relatively common. Most of us display some obsessive features in everyday life, but under stress a minority of people become borderline or actual OCD in their behaviour, and need medication to control both this and the depression which almost always presents soon afterwards. ... Gordon Brown's symptoms are obvious when viewed in this light: the constant repetition of phrases, and an almost embarrassing (for his Party) need to spray every Parliamentary answer with statistics... they - and the constant speech repetition - represent Brown's unconscious means of controlling the severe anxiety that accompanies depression with OCD."
So one might think that his credibility is somewhat questionable. This hasn't stopped certain corners of the British blogosphere from getting very excited, however, and even respected political journalist Andrew Marr yesterday quizzed Brown about the issue.

Unfortunately, while many are eager to write about Brown and his possible pills, few of them seem to know anything about psychiatry or antidepressants, which has led to some embarrassing errors. So, for the benefit of British pundits, here are some helpful facts.

MAOis -
  • are not "powerful", "heavy duty" antidepressants. In terms of effectiveness, they are no better, on average, than Prozac. In fact, no antidepressant is much better than any other one. They differ in terms of side effects, but not "strength". For what it's worth, current opinion is that if there is a best antidepressant, it is escitalopram, a modern Prozac-like SSRI with very mild side effects, which is just about as unlike a MAOi as you can imagine.
  • do not "impair" or "affect judgment". Antidepressants don't. Except that they treat depression, and someone who's happy might make different judgments to someone who's depressed. But these drugs do not affect judgment in the way that intoxicants like alcohol or cocaine do. You don't get high on them. This is why they have no street value. Most drugs which impair judgment get used recreationally, because having your judgment impaired can be fun. Antidepressants aren't.
  • are not exclusively used in "severe depression". They are usually reserved for when a patient has not responded to other drugs. This is because of their troublesome side effects, including high blood pressure, and the fact that you can't eat cheese. But "treatment-resistant" depression is not the same as "severe" depression. In fact, the more severe the depression, the more likely it is to respond to treatment with conventional drugs. If Brown is on MAOis, he has probably tried at least two or three other drugs, but this is by no means uncommon because antidepressants just don't work especially well. According to the largest trial in a real-world setting, the STAR*D project, only 30% of people fully recover on their first antidepressant and only 30% of the rest respond to the second one.
  • are not especially effective in OCD, as the source of the rumor claimed - "this older class of drugs has one huge advantage: for severe depression and obsessive compulsive disorder it remains very effective", emphasis in the original. This is just flat-out wrong. Other antidepressants are more useful in OCD. Here's a recent review of drug therapy for OCD. MAOis get a mention... right at the end, after (deep breath) SSRIs, clomipramine, atypical antipsychotics, SNRIs, pregabalin, tricyclic antidepressants, and benzodiazepines. Here's the only published trial comparing a monoamine oxidase inhibitor to another drug, Prozac, for OCD. The MAOi didn't work, Prozac did.
  • were the first class of antidepressants to be discovered; the very first, iproniazid, was discovered in 1952. Others followed, such as tranylcypromine, phenelzine, and selegiline. Today, there are a handful of MAOis on the market. These include some newer drugs such as moclobemide (which has milder side effects) and the selegiline transdermal patch (which carries fewer dietary restrictions). MAOis are primarily used to treat depression, but are also used in Parkinson's disease.
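The STAR*D arithmetic in the treatment-resistance point above can be made concrete. A toy calculation, assuming the quoted response rates apply independently at each step (the real STAR*D remission rates varied somewhat by treatment step):

```python
# Toy STAR*D-style arithmetic: ~30% recover on the first drug,
# and ~30% of the remainder respond to the second (figures as
# quoted in the text; the actual trial's rates differed by step).
first = 0.30
second_of_rest = 0.30

remitted_after_two = first + (1 - first) * second_of_rest
still_unwell = 1 - remitted_after_two

print(f"Recovered after two drugs: {remitted_after_two:.0%}")
print(f"Still unwell, moving on to a third drug: {still_unwell:.0%}")
```

In other words, even after two sequential medications, nearly half of patients are still looking for one that works - which is how someone ends up on an MAOi without being a remarkable case.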
So, even if Brown is taking MAOis, this has no implications regarding his mental state or competence to govern. What about the possibility that he is depressed? This could be relevant, but considering that the most popular British leader of all time famously suffered from severe depressive episodes throughout his life, including his time in office, the historical precedents are not unfavourable.

Realistically, none of this is going to change people's minds. No-one is really concerned about the possibility that Gordon Brown is using MAOis, or even the possibility that he's depressed. Rather, a lot of people just really don't like him, and this rumor is the latest stick with which to beat him. Blogger Guido Fawkes has been asking "Is Brown Bonkers?" for months. As one journalist put it, "Whether literally the case or not, however, this rumor carries the kind of psychological truth that tends to be more damaging than fact." Which didn't stop him from repeating the rumor uncritically.


Panic! In the fMRI Scanner

Continuing the theme of interesting single case reports, I was pleased to see a paper about brain activity in someone who suffered a panic attack in the middle of an fMRI brain scan experiment.

The unfortunate volunteer, a 46-year-old woman, was taking part in an experiment looking at restless legs syndrome. The scan lasted 40 minutes, and everything was going smoothly until quite near the end, when, out of the blue, she had a panic attack.

Obviously, the scan had to be abandoned - as soon as the volunteer pressed the emergency "panic button", they stopped the scan and got her out of the MRI. (This kind of thing is why we have such buttons!) However, they decided to see what happened in the woman's brain as the panic started using the data they acquired up to that point.

Here's what they found: the top graph of their figure shows her heart rate. It starts increasing a bit and then spikes, which shows exactly when the attack occurred. What about the brain? Well, amygdala and left insula activity sort of increase around this time. A bit. If you stare at the lines hard enough.

If you believe those increases are real, they make sense: the amygdala is known to be involved in anxiety (amongst other things), while the insula is responsible for the perception of the body's internal state, which is rather out of whack during a panic attack.

What doesn't make sense is the middle temporal gyrus bit, which was statistically the only part of the brain where activity was significantly correlated with heart rate (in whole-brain analysis). That region is not believed to have anything to do with panic, and to be honest, it's probably just a fluke.
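The analysis behind these claims is essentially a correlation between the heart-rate trace and each region's fMRI signal over time. A minimal sketch with made-up data (the traces, the coupling strength, and the "panic spike" are all hypothetical, not the paper's actual time series):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical traces: a heart-rate time course that spikes near the
# end of the scan, and a regional fMRI signal that partly tracks it.
n = 200
heart_rate = 70 + 2 * rng.standard_normal(n)
heart_rate[160:] += np.linspace(0, 40, 40)           # the panic spike

region = 0.1 * heart_rate + rng.standard_normal(n)   # weakly coupled region
unrelated = rng.standard_normal(n)                   # a control "voxel"

r_region = np.corrcoef(heart_rate, region)[0, 1]
r_control = np.corrcoef(heart_rate, unrelated)[0, 1]
print(f"r(heart rate, coupled region)  = {r_region:.2f}")
print(f"r(heart rate, unrelated voxel) = {r_control:.2f}")
```

The catch, of course, is that with tens of thousands of voxels, some "unrelated" voxels will correlate with heart rate purely by chance - which is one reason to suspect the middle temporal gyrus result is a fluke.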

This is only the second published report about panic during fMRI. There was one previous paper from 2006 about an attack in someone with a history of panic, which also found amygdala activation. But there are sure to be others out there which haven't made it into print - anxiety and panic during scans are not unheard of (the scanner is rather claustrophobic). It would be interesting to get more data on this, because it's obviously rather hard to research real-life panic attacks, on account of them being unpredictable.

Spiegelhalder, K., Hornyak, M., Kyle, S., Paul, D., Blechert, J., Seifritz, E., Hennig, J., Tebartz van Elst, L., Riemann, D., & Feige, B. (2009). Cerebral correlates of heart rate variations during a spontaneous panic attack in the fMRI scanner. Neurocase, 1-8. DOI: 10.1080/13554790903066909

fMRI Gets Slap in the Face with a Dead Fish

A reader drew my attention to this gem from blogger Craig Bennett:

Neural correlates of interspecies perspective taking in the post-mortem Atlantic Salmon: An argument for multiple comparisons correction

This is a poster presented by Bennett and colleagues at this year's Human Brain Mapping conference. It's about fMRI scanning on a dead fish, specifically a salmon. They put the salmon in an MRI scanner and "the salmon was shown a series of photographs depicting human individuals in social situations. The salmon was asked to determine what emotion the individual in the photo must have been experiencing."

I'd say that this research was justified on comedic grounds alone, but they were also making an important scientific point. The (fish-)bone of contention here is multiple comparisons correction. The "multiple comparisons problem" is simply the fact that if you do a lot of different statistical tests, some of them will, just by chance, give interesting results.

In fMRI, the problem is particularly severe. An MRI scan divides the brain up into cubic units called voxels. There are over 40,000 in a typical scan. Most fMRI analysis treats every voxel independently, and tests to see if each voxel is "activated" by a certain stimulus or task. So that's at least 40,000 separate comparisons going on - potentially many more, depending upon the details of the experiment.
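You can see the scale of the problem with a quick simulation. Under the null hypothesis (no real activation anywhere), every voxel's p-value is uniform on [0, 1], so an uncorrected p < 0.05 threshold "activates" thousands of voxels by chance alone. The sketch below uses Bonferroni correction for simplicity - the Gaussian Random Field methods actually used in fMRI are more sophisticated, but the principle is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 40,000-voxel scan with NO true activation anywhere:
# under the null, each voxel's p-value is uniform on [0, 1].
n_voxels = 40_000
p_values = rng.uniform(size=n_voxels)

uncorrected = (p_values < 0.05).sum()            # expect ~2,000 false "hits"
bonferroni = (p_values < 0.05 / n_voxels).sum()  # expect ~0

print(f"'Activated' voxels, uncorrected:          {uncorrected}")
print(f"'Activated' voxels, Bonferroni-corrected: {bonferroni}")
```

Around 2,000 spurious "activations" from pure noise - more than enough to light up a dead salmon's brain.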

Luckily, during the 1990s, fMRI pioneers developed techniques for dealing with the problem: multiple comparisons correction. The most popular method uses Gaussian Random Field Theory to calculate the probability of falsely "finding" activated areas just by chance, and to keep this acceptably low, although there are alternatives.

But not everyone uses multiple comparisons correction. This is where the fish comes in - Bennett et al show that if you don't use it, you can find "neural activation" even in the tiny brain of a dead fish. Of course, with the appropriate correction, you don't. There's nothing original about this, except the colourful nature of the example - but many fMRI publications still report "uncorrected" results (here's just the last one I read).

Bennett concludes that "the vast majority of fMRI studies should be utilizing multiple comparisons correction as standard practice". But he says on his blog that he's encountered some difficulty getting the results published as a paper, because not everyone agrees. Some say that multiple comparisons correction is too conservative, and could lead to genuine activations being overlooked - throwing the baby salmon out with the bathwater, as it were. This is a legitimate point, but as Bennett says, in this case we should report both corrected and uncorrected results, to make it clear to the readers what is going on.

Most People Experience "Mental Illness" By Age 32

Mental illness: how common is it? A popular answer is one in four - 25% of people will experience it at least once in their lives. In fact, most published research suggests that the lifetime rate is higher, around 30-50%, in Western nations.

That's a lot. But even this may be a serious underestimate, according to a new paper, How common are common mental disorders? The study compared the proportion of people reporting mental illness under two different research methods: retrospective and prospective.

Retrospective means asking people to think back and remember whether they ever have felt a certain way. A prospective study, however, recruits people and then follows them up for a certain length of time, asking them how they feel at regular intervals.

The obvious advantage of prospective studies is that there is less chance of forgetting. In a retrospective study, people are required to remember how they were feeling years, or even decades, ago. Human memory just isn't that good. A prospective study requires some remembering, as people are generally asked to report how they've felt over the last year, but this is clearly less problematic.

The prospective study in question here included 1,000 people from Dunedin, New Zealand. The volunteers were followed from birth to age 32, and were interviewed at ages 18, 21, 26 and 32. The results were compared to three large retrospective lifetime studies, two American and one from NZ. (1,2,3).

50% of the Dunedin prospective cohort reported at least one "anxiety disorder", 41% reported "depression", 32% confessed to "alcohol dependence" and 18% to "cannabis dependence". (Those were the only conditions studied.) For some reason, we're not told how much overlap there was, but even assuming there was a lot, well over half of all the cohort will have experienced at least one disorder. If the overlap was low, it could be almost all of them. And remember, this is just up to age 32. And there still may have been some forgetting...
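The "well over half" claim follows from simple bounds on the union of the four groups, even without knowing the overlap. A quick sanity check:

```python
# Bounds on "at least one disorder" from the four Dunedin rates,
# without knowing the overlap between conditions.
rates = {"anxiety": 0.50, "depression": 0.41,
         "alcohol dependence": 0.32, "cannabis dependence": 0.18}

# Maximum overlap: every smaller group is nested inside the largest.
lower_bound = max(rates.values())
# Minimum overlap: the groups are as disjoint as possible, capped at 100%.
upper_bound = min(1.0, sum(rates.values()))

print(f"At least one disorder: between {lower_bound:.0%} and {upper_bound:.0%}")
```

So even with maximal overlap the figure cannot fall below 50%, and since the four rates sum to 141%, low overlap would push it towards everyone.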

Compared to the retrospective studies, these rates are all about twice as high. What does this mean for psychiatry?

First, it suggests that retrospective studies, which are by far the most common, are flawed. People just tend to forget a lot of "mental illness" when asked to remember across the lifetime. More evidence for this comes from the fact that the ratio of past-year to lifetime reported disorders was 38% in the prospective study compared to about 60% in the retrospective ones.

But there's a more profound implication. A growing number of critics have argued that the very high reported lifetime rates of mental disorders mean that the way most psychiatrists diagnose mental illness is flawed. The "Bible" of modern psychiatric diagnosis is the Diagnostic and Statistical Manual (DSM) of Mental Disorders of the American Psychiatric Association. DSM diagnostic criteria were used in the studies in question here.

These results suggest that DSM diagnoses are even more common than previously believed, which only strengthens the critics' case. According to DSM criteria, at least 40% of people experience "Major Depressive Disorder" by age 32.

In which case, what is "Major Depressive Disorder"? A fairly usual part of human life. So, calling it a disease and treating it with drugs or therapy seems rather presumptuous. Especially since so many people who "suffer" from it manage to not only get over it, but actually forget it ever happened. (Of course, this shouldn't be taken to mean that real, serious clinical depression doesn't exist.)

The authors conclude - listen carefully -
This article is uninformative (and agnostic) about the validity of diagnoses as defined by DSM-IV ... [rather], objections voiced to surveys’ higher than expected lifetime prevalence of disorder are objections to prevalence that is only half what it could be in reality...

Researchers might begin to ask why so many people experience a DSM-defined disorder at least once during their lifetimes, and what this prevalence means for etiological theory, the construct validity of the DSM approach to defining disorder, service-delivery policy, the economic burden of disease, and public perceptions of the stigma of mental disorder.
That hammering sound you hear is another nail sealing the coffin of DSM's credibility. If many* DSM "disorders" are simply descriptions of normal parts of human life, we need to take a long, hard look at those "disorders", and rethink whether they need to be labelled and treated as medical problems.

The newest edition of DSM, DSM-5, is currently in development. This would seem like a great opportunity to do just that. Unfortunately, the development process is rapidly degenerating into farce. If DSM-5 does not address the issues raised here, many people will be tempted to give up on DSM entirely.

* Not all: the great majority of people will never meet criteria for schizophrenia or bipolar disorder, for example.

Moffitt, T., Caspi, A., Taylor, A., Kokaua, J., Milne, B., Polanczyk, G., & Poulton, R. (2009). How common are common mental disorders? Evidence that lifetime prevalence rates are doubled by prospective versus retrospective ascertainment. Psychological Medicine. DOI: 10.1017/S0033291709991036

Trauma Alters Brain Function... So What?

According to a new paper in the prestigious journal PNAS, High-field MRI reveals an acute impact on brain function in survivors of the magnitude 8.0 earthquake in China.

The earthquake, you'll remember, happened on 12th May last year in central China. Over 60,000 people died. The authors of this paper took 44 earthquake survivors, and 32 control volunteers who had not experienced the disaster.

The volunteers underwent a "resting state" fMRI scan; survivors were scanned between 13 and 25 days after the earthquake. Resting state fMRI is simply a scan conducted while lying in the scanner, not doing anything in particular. Previous work has shown that fMRI can be used to measure resting state neural activity in the form of low-frequency oscillations.
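The measure used here, ALFF, is essentially the spectral amplitude of the resting-state signal in the low-frequency band (conventionally 0.01-0.08 Hz). A minimal sketch on a made-up time series (the TR, scan length, and embedded oscillation are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical resting-state time series: TR = 2 s, 240 volumes (8 min),
# containing a slow 0.05 Hz oscillation buried in noise.
tr = 2.0
n = 240
t = np.arange(n) * tr
signal = 3 * np.sin(2 * np.pi * 0.05 * t) + rng.standard_normal(n)

# ALFF: mean spectral amplitude in the low-frequency band (0.01-0.08 Hz).
freqs = np.fft.rfftfreq(n, d=tr)
amplitude = np.abs(np.fft.rfft(signal)) / n
band = (freqs >= 0.01) & (freqs <= 0.08)
alff = amplitude[band].mean()

print(f"ALFF (0.01-0.08 Hz): {alff:.3f}")
```

Comparing this number between groups, voxel by voxel, is what produces "increased ALFF in the left prefrontal cortex" and similar findings.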

The authors found differences in the resting state amplitude of low-frequency fluctuations (ALFF) between the trauma survivors and the controls. In survivors, resting state activity was increased in several areas:

"The whole-brain analysis indicated that, vs. controls, survivors showed significantly increased ALFF in the left prefrontal cortex and the left precentral gyrus, extending medially to the left presupplementary motor area... [and] region of interest (ROI) analyses revealed significantly increased ALFF in bilateral insula and caudate and the left putamen in the survivor group..."
They also reported correlations between resting activity in some of these areas and self-reported anxiety and depression symptoms in the survivors.

Finally, survivors showed reduced functional connectivity between a wide range of areas ("a distributed network that included the bilateral amygdala, hippocampus, caudate, putamen, insula, anterior cingulate cortex, and cerebellum.") Functional connectivity analysis measures the correlation in activity across different areas of the brain - whether the areas tend to activate at the same time or not.
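Functional connectivity, in practice, is just the pairwise correlation between regional time series. A toy sketch (the regions, coupling, and time series are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical ROI time series: amygdala and hippocampus share a common
# driving signal (they "activate together"); the cerebellum does not.
n = 300
common = rng.standard_normal(n)
rois = {
    "amygdala":    common + 0.5 * rng.standard_normal(n),
    "hippocampus": common + 0.5 * rng.standard_normal(n),
    "cerebellum":  rng.standard_normal(n),
}

# Functional connectivity = pairwise correlation of the time series.
data = np.vstack(list(rois.values()))
fc = np.corrcoef(data)

names = list(rois)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} - {names[j]}: r = {fc[i, j]:.2f}")
```

"Reduced functional connectivity" in the survivors simply means these correlations were lower - which, like the ALFF differences, tells us that something differs without telling us what.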

Now - what does all this mean? And does it help us understand the brain?

The fact that there are differences between the two groups is neither informative nor surprising. "Resting state" neural activity presumably reflects whatever is going through a person's mind. Recent earthquake survivors are going to be thinking about rather different things compared to luckier people who didn't experience such trauma. It doesn't take a brain scan to tell you that, but that's all these scans really tell us.

But these weren't just any differences - they were particular differences in particular brain regions. Does that make knowing about them more interesting and useful?

Not as such, because we don't know what they represent, or what causes them. So living through an earthquake gives you "Increased ALFF in the left prefrontal cortex" - but what does that mean? It could mean almost anything. The left prefrontal cortex is a big chunk of the brain, and its functions probably include most complex cognitive processes. Ditto for the other areas mentioned.

The authors link their findings to previous work with frankly vague statements such as "The increased regional activity and reduced functional connectivity in frontolimbic and striatal regions occurred in areas known to be important for emotion processing". But anatomically speaking, most of the brain is either "fronto-limbic" or "striatal", and almost everywhere is involved in "emotion processing" in one way or another.

So I don't think we understand the brain much better for reading this paper. Further work, building on these results, might give insights. We might, say, learn that decreased connectivity between Regions X and Y is because trauma decreases serotonin levels, which prevents signals being communicated between these areas, which is why trauma victims can't use X to deliberately stop recalling traumatic memories, which is what Y does.

I just made that up. But that's a theory which could be tested. Much of today's neuroimaging research doesn't involve testable theories - it is merely the exploratory search for neural differences between two groups. Neuroimaging technology is powerful, and more advanced techniques are always being developed. What with resting state, functional connectivity, pattern-classification analysis, and other fancy methods, the scope for finding differences between groups is enormous and growing. I'm being rather unfair in criticizing this paper; there are hundreds like it. I picked this one because it was published last week in a good journal.

Exploratory work can be useful as a starting point, but at least in my opinion, there is too much of it. If you want to understand the brain, as opposed to simply getting published papers to your name, you need a theory sooner or later. That's what science is about.

Lui, S., Huang, X., Chen, L., Tang, H., Zhang, T., Li, X., Li, D., Kuang, W., Chan, R., Mechelli, A., Sweeney, J., & Gong, Q. (2009). High-field MRI reveals an acute impact on brain function in survivors of the magnitude 8.0 earthquake in China. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.0812751106

Predicting Antidepressant Response with EEG

One of the limitations of antidepressants is that they don't always work. Worse, they fail in an unpredictable way. Some people benefit from some drugs, and others don't, but there's no way of knowing in advance what will happen in any particular case - or of telling which pill is right for which person.

As a result, drug treatment for depression generally involves starting with a cheap medication with relatively mild side-effects, and if that fails, moving onto a series of other drugs until one helps. But since it can take several weeks for any new drug to work, this can be a frustrating process for patients and doctors alike.

Some means of predicting the antidepressant response would thus be very useful. Many have been proposed, but none have entered widespread clinical use. Now, a pair of papers(1,2) from UCLA's Andrew Leuchter et al make the case for prediction using quantitative EEG (QEEG).

EEG, electroencephalography, is a crude but effective way of recording electrical activity in the brain via electrodes attached to the head. "Quantitative" EEG just means using EEG to precisely measure the level of certain kinds of activity in the brain.

Leuchter et al's system is straightforward: it uses six electrodes on the front of the head. The patient simply relaxes with their eyes closed for a few minutes while neural activity is recorded.

This procedure is performed twice, once just before antidepressant treatment begins and then again a week later. The claim is that by examining the changes in the EEG signal after one week of drug treatment, the eventual benefit of the drug can be predicted. It's not an implausible idea, and if it did work, it would be rather helpful. But does it?

Leuchter et al say: yes! The first paper reports that in 73 depressed patients who were given the antidepressant escitalopram 10mg/day, QEEG changes after one week predicted clinical improvement six weeks later. Specifically, people who got substantially better at seven weeks had a higher "Antidepressant Treatment Response Index" (ATR) at one week than people who didn't: 59.0 ± 10.2 vs 49.8 ± 7.8, which is highly significant (p < 0.001).
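A back-of-the-envelope check on those summary statistics, using Welch's t-test. Note the group sizes here are an assumption (an even split of the 73 patients; the paper doesn't report them in the text quoted), so this is a sanity check, not a reproduction of the paper's own test:

```python
import math

# Reported ATR at one week: improvers 59.0 +/- 10.2, non-improvers
# 49.8 +/- 7.8. Group sizes are ASSUMED (even split of 73 patients).
m1, s1, n1 = 59.0, 10.2, 37
m2, s2, n2 = 49.8, 7.8, 36

# Welch's t statistic for unequal variances.
se = math.sqrt(s1**2 / n1 + s2**2 / n2)
t = (m1 - m2) / se

# Two-sided p-value, normal approximation to the t distribution.
p = math.erfc(abs(t) / math.sqrt(2))

print(f"Welch's t = {t:.2f}, approximate p = {p:.1e}")
```

Under any plausible split of the 73 patients, the difference does come out highly significant, consistent with the reported p < 0.001.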

In the companion paper, the authors examined patients who started on escitalopram and then either kept taking it or switched to a different antidepressant, bupropion. They found that patients who had a high ATR after a week of escitalopram tended to do well if they stayed on it, while patients who had a low ATR to escitalopram did better when they switched to the other drug.

These are interesting results, and they follow from ten years of previous work (mostly, but not exclusively, from the same group) on the topic. Because the current study didn't include a placebo group, we can't say that the QEEG predicts antidepressant response as such, only that it predicts improvement in depression symptoms. But even this is pretty exciting, if it really works.

In order to verify that it does, other researchers need to replicate this experiment. But they may find this a little difficult. What is the Antidepressant Treatment Response Index used in this study? It's derived from an analysis of the EEG signal, and we're told that you get it from a formula combining several EEG parameters.

Some of the terms in the formula are common parameters that any EEG expert will understand. But "A", "B", and "C" are not. They're constants, which are not given in the paper. They're secret numbers. Without knowing what those numbers are, no-one can calculate the "ATR" even if they have an EEG machine.

Why keep them secret? Well...

"Financial support of this project was provided by Aspect Medical Systems. Aspect participated in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation and review of the manuscript."
Aspect is a large medical electronics company who developed the system used here. Presumably, they want to patent it (or already have). We're told that
"To facilitate independent replication of the work reported here, Aspect intends to make available a limited number of investigational systems for academic researchers. Please contact Scott Greenwald, Ph.D... for further information."
All very nice of them, but if they'd told us the three magic numbers, academics could start trying to independently replicate these results tomorrow. As it is, anyone who wants to do so will have to get Aspect's blessing, which, with the best will in the world, means they will not be entirely "independent".


Leuchter AF, Cook IA, Gilmer WS, Marangell LB, Burgoyne KS, Howland RH, Trivedi MH, Zisook S, Jain R, Fava M, Iosifescu D, & Greenwald S (2009). Effectiveness of a quantitative electroencephalographic biomarker for predicting differential response or remission with escitalopram and bupropion in major depressive disorder. Psychiatry Research. PMID: 19709754

Leuchter AF, Cook IA, Marangell LB, Gilmer WS, Burgoyne KS, Howland RH, Trivedi MH, Zisook S, Jain R, McCracken JT, Fava M, Iosifescu D, & Greenwald S (2009). Comparative effectiveness of biomarkers and clinical indicators for predicting outcomes of SSRI treatment in Major Depressive Disorder: Results of the BRITE-MD study. Psychiatry Research. PMID: 19712979
