Serotonin, Psychedelics and Depression

Note: This post is part of a Nature Blog Focus on hallucinogenic drugs in medicine and mental health, inspired by a recent Nature Reviews Neuroscience paper, The neurobiology of psychedelic drugs: implications for the treatment of mood disorders, by Franz Vollenweider & Michael Kometer. That article will be available, free (once you register), until September 23. For more information on this Blog Focus, see the "Table of Contents" here.

Neurophilosophy is covering the history of psychedelic psychiatry, while Mind Hacks provides a personal look at one particular drug, DMT. The Neurocritic discusses ketamine, an anesthetic with hallucinogenic properties, which is attracting a lot of interest at the moment as a treatment for depression.

Ketamine, however, is not a "classical" psychedelic like the drugs that gave the 60s its unique flavor and left us with psychedelic rock, acid house and colorful artwork. Classical psychedelics are the focus of this post.

The best known are LSD ("acid"), mescaline, found in peyote and a few other species of cactus, and psilocybin, from "magic" mushrooms of the Psilocybe genus. Yet there are literally hundreds of related compounds. Most of them are described in loving detail in the two heroic epics of psychopharmacology, PiHKAL and TiHKAL, written by the chemist and trip veteran Alexander Shulgin and his wife and co-author Ann.

The chemistry of psychedelics is closely linked with that of depression and antidepressants. All classical psychedelics are 5HT2A receptor agonists. Most of them have other effects on the brain as well, which contribute to the unique effects of each drug, but 5HT2A agonism is what they all have in common.

5HT2A receptors are excitatory receptors expressed throughout the brain, and are especially dense in the key pyramidal cells of the cerebral cortex. They're normally activated by serotonin (5HT), which is the neurotransmitter that's most often thought of as being implicated in depression. The relationship between 5HT and mood is very complicated, and depression isn't simply a disorder of "low serotonin", but there's strong evidence that it is involved.

There's one messy detail, which is that not quite all 5HT2A agonists are hallucinogenic. Lisuride, a drug used in Parkinson's disease, is closely related to LSD, and is a strong 5HT2A agonist, but it has no psychedelic effects. It's recently been shown that LSD and lisuride have different molecular effects on cortical cells, even though they act on the same receptor - in other words, there's more to 5HT2A than simply turning it "on" and "off".

*

How could psychedelics help to treat mental illness? On the face of it, the acute effects of these drugs - hallucinations, altered thought processes and emotions - sound rather like the symptoms of mental illness themselves, and indeed psychedelics have been referred to as "psychotomimetic" - mimicking psychosis.

There are two schools of thought here: psychological and neurobiological.

The psychological approach ruled the first wave of psychedelic psychiatry, in the 50s and 60s. Psychiatry, especially in America, was dominated by Freudian theories of the unconscious. On this view, mental illness was a product of conflicts between unconscious desires and the conscious mind. The symptoms experienced by a particular patient were distressing, of course, but they also provided clues to the nature of their unconscious troubles.

It was tempting to see the action of psychedelics as a weakening of the filters which kept the unconscious, unconscious - allowing repressed material to come into awareness. The only other time this happened, according to Freud, was during dreams. That's why Freud famously called the interpretation of dreams the "royal road to the unconscious".

Psychedelics offered analysts the tantalizing prospect of confronting the unconscious face-to-face, while awake, instead of having to rely on the patient's memory of their previous dreams. To enthusiastic Freudians, this promised to revolutionize therapy, in the same way that the x-ray had done so much for surgery. The "dreamlike" nature of many aspects of the psychedelic experience seemed to confirm this.

Not all psychedelic therapists were orthodox Freudians, however. There were plenty of other theories in circulation, many of them inspired by the theorists' own drug experiences. Stanislav Grof, Timothy Leary and others saw the psychedelic state of consciousness as the key to attaining spiritual, philosophical and even mystical insights, whether one was "ill" or "healthy" - and indeed, they often said that mental "illness" was itself a potential source of spiritual growth.

Like many things, psychiatry has changed since the 60s. Psychotherapy is currently dominated by cognitive-behavioural (CBT) theory, and Freudian ideas have gone distinctly out of fashion. It remains to be seen what CBT would make of LSD, but the basic idea - that carefully controlled use of drugs could help patients to "break through" psychological barriers to treatment - seems likely to remain at the heart of their continued use.

*

The other view is that these drugs could have direct biological effects which lead to improvements in mood. Repeated use of LSD, for example, has been shown to rapidly induce down-regulation of 5HT2A receptors. Presumably, this is the brain's way of "compensating" for prolonged 5HT2A activation. This is probably why tolerance to the effects of psychedelics rapidly develops, something that's long been known (and regretted) by heavy users.

Vollenweider and Kometer note that this is interesting, because 5HT2A blockers are used as antidepressants - the drugs nefazodone and mirtazapine are the best known today, but most of the older tricyclic antidepressants are also 5HT2A antagonists. Atypical antipsychotics, which are also used in depression, are potent 5HT2A antagonists as well.

So indirectly suppressing 5HT2A might be one biological mechanism by which psychedelics improve mood. However, questions remain about how far this could explain any therapeutic effects of these drugs. Psychedelic-induced 5HT2A down-regulation is presumably temporary - and if all we need to do is to knock out 5HT2A, it would surely be easiest to just use an antagonist...

Vollenweider FX, & Kometer M (2010). The neurobiology of psychedelic drugs: implications for the treatment of mood disorders. Nature Reviews Neuroscience, 11 (9), 642-51. PMID: 20717121

fMRI Analysis in 1000 Words

Following on from fMRI in 1000 words, which seemed to go down well, here's the next step: how to analyze the data.

There are many software packages available for fMRI analysis, such as FSL, SPM, AFNI, and BrainVoyager. The following principles, however, apply to most. The first step is pre-processing, which involves:

  • Motion Correction aka Realignment – during the course of the experiment subjects often move their heads slightly; during realignment, all of the volumes are automatically adjusted to eliminate motion.
  • Smoothing – all MRI signals contain some degree of random noise. During smoothing, the image of the whole brain is blurred, which tends to smooth out random fluctuations. The degree of smoothing is given by the “Full Width at Half Maximum” (FWHM) of the smoothing kernel; between 5 and 8 mm is most common. (There's a code sketch of this step just after this list.)
  • Spatial Normalization aka Warping – Everyone’s brain has a unique shape and size. In order to compare activations between two or more people, you need to eliminate these differences. Each subject’s brain is warped so that it fits with a standard template (the Montreal Neurological Institute or MNI template is most popular.)
Other techniques are also sometimes used, depending on the user’s preference and the software package.
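To make one of these steps concrete, here's a rough sketch of the smoothing stage in Python. The data array, voxel size and kernel width are all made up for illustration; in practice you'd let FSL, SPM or a similar package handle this.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Made-up 4D fMRI data: x, y, z, time (64 x 64 x 30 voxels, 100 volumes).
data = np.random.randn(64, 64, 30, 100)

# Smoothing kernels are quoted as FWHM in millimetres, but scipy wants a
# standard deviation (sigma) in voxels: FWHM = 2 * sqrt(2 * ln 2) * sigma.
fwhm_mm = 6.0
voxel_size_mm = 3.0  # assuming isotropic 3 mm voxels
sigma_voxels = (fwhm_mm / voxel_size_mm) / (2 * np.sqrt(2 * np.log(2)))

# Blur each volume in space only, not across time (sigma of 0 on the time axis).
smoothed = gaussian_filter(data, sigma=(sigma_voxels, sigma_voxels, sigma_voxels, 0))
```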

Then the real fun begins: the stats. By far the most common statistical approach for detecting task-related neural activation is that based upon the General Linear Model (GLM), though there are alternatives.

We first need to define a model of what responses we're looking for, which makes predictions as to what the neural signal should look like. The simplest model would be that the brain is more active at certain times, say, when a picture is on the screen. So our model would be simply a record of when the stimulus was on the screen. This is called a "boxcar" function, so named because a plot of it looks like a train of rectangular boxes:
In fact, we know that the neural response has a certain time lag. So we can improve our model by adding the canonical (meaning “standard”) haemodynamic response function (HRF).
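Here's a rough sketch of what building that model might look like in Python. The stimulus timings are invented, and the double-gamma formula below is just one common approximation of the canonical HRF (the exact parameters differ between software packages):

```python
import numpy as np
from scipy.stats import gamma

tr = 2.0                         # assumed repetition time, in seconds
n_scans = 120

# Boxcar: 1 while a picture is on screen, 0 otherwise (invented block design).
boxcar = np.zeros(n_scans)
for onset in (20, 50, 80):       # block onsets, in scans
    boxcar[onset:onset + 10] = 1.0

# A common double-gamma approximation of the canonical HRF: a peak at
# around 6 s after the stimulus and a small undershoot at around 16 s.
t = np.arange(0, 32, tr)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.sum()

# Convolving the boxcar with the HRF gives the predicted BOLD time course,
# i.e. one column of the design matrix.
predicted = np.convolve(boxcar, hrf)[:n_scans]
```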
Now consider a single voxel. The MRI signal in this voxel (the brightness) varies over time. If there were no particular neural activation in this area, we'd expect the variation to be purely noise. Now suppose that this voxel was responding to a stimulus present from time-point 40 to 80.
While the signal is on average higher during this period of activation, there’s still a lot of noise, so the data doesn’t fit with the model exactly.
The GLM is a way of asking, for each voxel, how closely it fits a particular model. It estimates a parameter, β, representing the “goodness-of-fit” of the model at that voxel, relative to noise. Higher β, better fit. Note that a model could be more complex than the one above. For example, we could have two kinds of pictures, Faces and Houses, presented on the screen at different times:
In this case, we are estimating two β scores for each voxel, β-faces and β-houses. Each stimulus type is called an explanatory variable (EV). But how do we decide which β scores are high enough to qualify as “activations”? Just by chance, some voxels which contain pure noise will have quite high β scores (even a stopped clock’s right twice per day!)

The answer is to calculate the t score: for each voxel, this is β divided by the uncertainty (the standard error) of that β estimate, which is derived from how noisy the signal at that voxel is. The higher the t score, the more unlikely it is that the model would fit that well by chance alone. It's conventional to finally convert the t score into the closely-related z score.

We therefore end up with a map of the brain in terms of z. z is a statistical parameter, so fMRI analysis is a form of statistical parametric mapping (even if you don’t use the "SPM" software!) Higher z scores mean more likely activation.

Note also that we are often interested in the difference or contrast between two EVs. For example, we might be interested in areas that respond to Faces more than Houses. In this case, rather than comparing β scores to zero, we compare them to each other – but we still end up with a z score. In fact, even an analysis with just one EV is still a contrast: it’s a contrast between the EV, and an “implicit baseline”, which is that nothing happens.
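Pulling the last few steps together, here's a minimal sketch of fitting the GLM at a single simulated voxel and turning a "Faces greater than Houses" contrast into a z score. Real packages estimate the noise and degrees of freedom more carefully, but the logic is the same:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_scans = 120

# Design matrix: two EVs (Faces, Houses) plus a constant column. The
# regressors here are random stand-ins for HRF-convolved boxcars.
faces = rng.random(n_scans)
houses = rng.random(n_scans)
X = np.column_stack([faces, houses, np.ones(n_scans)])

# Simulated signal at one voxel: it responds to Faces, plus noise.
y = 2.0 * faces + rng.normal(size=n_scans)

# Ordinary least squares: beta = (X'X)^-1 X'y, one beta per EV.
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta
dof = n_scans - X.shape[1]
sigma2 = residuals @ residuals / dof           # residual noise variance

# Contrast "Faces greater than Houses": +1 on Faces, -1 on Houses.
c = np.array([1.0, -1.0, 0.0])
t_val = (c @ beta) / np.sqrt(sigma2 * (c @ np.linalg.pinv(X.T @ X) @ c))

# Convert t to the closely related z score via the p value.
p = stats.t.sf(t_val, dof)
z_val = stats.norm.isf(p)
print(f"beta = {beta[:2]}, t = {t_val:.2f}, z = {z_val:.2f}")
```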

Now we still need to decide how high a z score has to be to count as "high enough"; in other words, we need to set a threshold. We could use the conventional criterion for significance: p less than 0.05. But there are 10,000 voxels in a typical fMRI scan, so that would leave us with around 500 false positives.

We could go for a p value 10,000 times smaller, but that would be far too conservative. Luckily, real brain activations tend to happen in clusters of connected voxels, especially when you've smoothed the data, and large clusters are unlikely to occur by chance. So the solution is to threshold clusters, not voxels.

A typical threshold would be "z greater than 2.3, p less than 0.05", meaning that you're searching for clusters of voxels, all of which have a z score of at least 2.3, where there's only a 5% chance of finding a cluster that size by chance (based on this theory.) This is called a cluster corrected analysis. Not everyone uses cluster correction, but they should. This is what happens if you don't.
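A toy version of the cluster idea: threshold the z map, find connected clusters of surviving voxels, and keep only the large ones. In real cluster correction the size cutoff comes from Gaussian random field theory at p less than 0.05; the fixed cutoff below is just a stand-in:

```python
import numpy as np
from scipy.ndimage import label

# A pretend whole-brain z map (random numbers, so it's pure noise).
z_map = np.random.randn(64, 64, 30)

# Step 1: keep only voxels with z greater than 2.3.
above = z_map > 2.3

# Step 2: group the surviving voxels into connected clusters.
clusters, n_clusters = label(above)
sizes = np.bincount(clusters.ravel())

# Step 3: keep only clusters bigger than some extent cutoff. In a real
# cluster-corrected analysis this cutoff comes from random field theory;
# 50 voxels is just an arbitrary stand-in here.
min_cluster_size = 50
big = [lab for lab in range(1, n_clusters + 1) if sizes[lab] >= min_cluster_size]
surviving = np.isin(clusters, big)
```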

Thus, after all that, we hopefully get some nice colorful blobs for each subject, each blob representing a cluster and colour representing voxel z scores:

This is called a first-level, or single-subject, analysis. Comparing the activations across multiple subjects is called the second-level or group-level analysis, and it relies on similar principles to find clusters which significantly activate across most people.
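For a flavour of the group level: the simplest version is a one-sample t-test, at each voxel, of the subjects' first-level contrast estimates against zero. A toy sketch, assuming every subject's contrast map has already been warped to the same template:

```python
import numpy as np
from scipy import stats

n_subjects = 16

# A stack of first-level contrast maps, one per subject, all warped to the
# same template space (random numbers standing in for real data).
contrast_maps = np.random.randn(n_subjects, 64, 64, 30)

# One-sample t-test at every voxel: is the mean contrast across subjects
# reliably different from zero?
t_map, p_map = stats.ttest_1samp(contrast_maps, popmean=0.0, axis=0)

# The resulting group-level map would then be cluster-corrected as before.
```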

This discussion has focused on the most common method of model-based detection of activations. There are other "data driven" or "model free" approaches, such as this. There are also ways of analyzing fMRI data to find connections and patterns rather than just activations. But that's another story...

Drugs for Starcraft Addiction

Are you addicted to Starcraft? Do you want to get off Battle.net and on a psychoactive drug?

Well, South Korean psychiatrists Han et al report that Bupropion sustained release treatment decreases craving for video games and cue-induced brain activity in patients with Internet video game addiction.

They took 11 people with "Internet Game Addiction" - the game being Starcraft, this being South Korea - and gave them the drug bupropion (Wellbutrin), an antidepressant that's also used in drug addiction and smoking cessation. These guys (because, predictably, they were all guys) were seriously hooked, playing on average at least 4 hours per day.

Six were absent from school because of playing Internet video game in Internet cafes for more than 2 months. Two IAGs had been divorced because of excessive Internet use at night.
They helpfully summarize Starcraft for the layperson:
As a military leader for one of three species, players must gather resources for training and expanding their species’ forces. Utilizing various strategies and alliances with other species, players attempt to lead their own species to victory.
Which is all true, but it doesn't quite communicate the sheer obsessiveness that's required to win this game. As Penny Arcade said, "it is OCD masquerading as recreation", and that's coming from someone who literally plays video games for a living.

Anyway, apparently the drug worked:
After 6 weeks of bupropion SR treatment in the IAG group, there were significant decreases in terms of craving for playing StarCraft (23.6%), total playing game time (35.4%), and Internet Addiction Scale scores (15.4%)
They also did some fMRI and found that the addicts' brains responded more strongly to pictures of Zerglings than the controls' did, and that the drug reduced this activity a bit. But there was no placebo group, so we have no idea whether this was the drug or not.

Sadly, the point is moot, because Starcraft II has just come out, and it's more addictive than ever. I'm off to try and optimize my Terran build order, and by God I will get those 10 marines out in the first 5 minutes if it takes me all night...

Han DH, Hwang JW, & Renshaw PF (2010). Bupropion sustained release treatment decreases craving for video games and cue-induced brain activity in patients with Internet video game addiction. Experimental and Clinical Psychopharmacology, 18 (4), 297-304. PMID: 20695685

Very Severely Stupid About Depression

An unassuming little paper in the latest Journal of Affective Disorders may change everything in the debate over antidepressants: Not as golden as standards should be: Interpretation of the Hamilton Rating Scale for Depression.

Bear with me and I'll explain. It's less boring than it looks, trust me.

The Hamilton Scale (HAMD) is the most common system for rating the severity of depression. If you're only a bit down you get a low score, if you're extremely ill you get a high one. The maximum score's 52 but in practice it's extremely rare for someone to score more than 30.

First published in 1960, the HAMD is used in most depression research, including almost all clinical trials of antidepressants. It's come under much criticism recently, but that's not the point here. The authors of the new paper, Kriston & von Wolff, simply asked: what does a given HAMD score mean in terms of severity?

It turns out that people have proposed no less than 5 different systems for interpreting HAMD scores. Do they all agree? Ha. Guess.

The pretty colors are mine. Just a glance shows a lot of variability, but the obvious outlier is the second one. That's the American Psychiatric Association (APA)'s official 2000 recommendations. Their interpretations of a given point on the scale tend to be worse than everyone else's.

This is most apparent at the top end. The APA use the terminology "Very Severe", which doesn't even appear on the other scales. Much of what they class as "Very Severe" (23-26), two other scales class as merely "Moderate" depression! Amusingly, the British authority NICE seem to have been so unimpressed with this that they simply copied the APA's scale and toned everything down a notch for their 2009 criteria.

*

Why does this purely terminological debate matter? Well. A number of recent studies, most notoriously Kirsch et al (2008), have shown that antidepressants work better in more severe cases. The cut-off for antidepressants being substantially better than placebo generally comes out as about 26 on the HAMD in these studies.

Under the APA's 2000 terminology, this is well into the "Very Severe" band. Hence why Kirsch et al wrote - in a phrase that launched a thousand "Prozac Doesn't Work" headlines -
antidepressants reach... conventional criteria for clinical significance only for patients at the upper end of the very severely depressed category.
But for Bech, 26 is simply middle-of-the-road "major depression". For Furukawa, it's borderline "moderate" or "severe". Hmm. So if they'd gone with those criteria, Kirsch et al would have written instead
antidepressants reach... conventional criteria for clinical significance only for patients with major depression, of moderate-to-severe severity.
All of these terminological criteria are arbitrary, so this isn't necessarily more accurate, but it's no less so. The irony of the fact that Kirsch et al used the American Psychiatric Association's own criteria to skewer modern psychiatry isn't lost on me, and probably wasn't lost on them either.

*

But where did the APA get their system from? This is the most extraordinary thing. Here's the paper they based their approach on. It's a 1982 British study by Kearns et al. The authors wanted to see how the HAMD compared to other depression scales. So they used lots of scales on the same bunch of depressed patients and compared them to each other, and to their own judgments of severity. Here's what they found:

You'll recognize the APA's categories, kind of, but they're all shifted. Why? We can only guess. Here's my guess. The scores in that Kearns et al graph were the average HAMD scores of people who fell into each severity band. The APA must have decided that they could use these to create cutoffs for severity.

How? It's not at all clear. The mean score for "Moderate" was 18, but that's the top end of Moderate in the APA's book; ditto for "Mild". The average "Very Severe" was 30 and the average "Severe" was 21 so the cut-off should have been 25 or 26 if you just went for the midpoint, in fact the APA went with 23. And so on.

That's before we get into the question of whether you should be using these results to make cutoffs at all (you shouldn't.) And the APA seem to have ignored the fact that the HAMD did not statistically significantly distinguish between "Severe" and "Moderate" depression anyway (p=0.1). Kearns et al's graph shows that other scales, like the Melancholia Subscale ("MS"), would be better. But everyone's been using the HAMD for the past 50 years regardless.

In Summary: Interpreting the Hamilton Scale is a minefield of controversy and the HAMD is far from a perfect scale of depression. Yet almost everything we know about depression and its treatment relies on the HAMD. Don't believe everything you read.

Kriston, L., & von Wolff, A. (2010). Not as golden as standards should be: Interpretation of the Hamilton Rating Scale for Depression. Journal of Affective Disorders. DOI: 10.1016/j.jad.2010.07.011

Kearns, N., Cruickshank, C., McGuigan, K., Riley, S., Shaw, S., & Snaith, R. (1982). A comparison of depression rating scales. The British Journal of Psychiatry, 141 (1), 45-49. DOI: 10.1192/bjp.141.1.45

Hauser Of Cards

Update: Lots of stuff has happened since I wrote this post: see here for more.

A major scandal looks to be in progress involving Harvard Professor Marc Hauser, a psychologist and popular author whose research on the minds of chimpanzees and other primates is well-known and highly respected. The Boston Globe has the scoop and it's well worth a read (though you should avoid reading the comments if you react badly to stupid.)

Hauser's built his career on detailed studies of the cognitive abilities of non-human primates. He's generally argued that our closest relatives are smarter than people had previously believed, with major implications for evolutionary psychology. Now one of his papers has been retracted, another has been "corrected" and a third is under scrutiny. Hauser has also announced that he's taking a year off from his position at Harvard.

It's not clear what exactly is going on, but the problems seem to centre around videotapes of the monkeys that took part in Hauser's experiments. The story begins with a 2007 paper published in Proceedings of the Royal Society B. That paper has just been amended in a statement that appeared in the same journal last month:

In the original study by Hauser et al., we reported videotaped experiments on action perception with free ranging rhesus macaques living on the island of Cayo Santiago, Puerto Rico. It has been discovered that the video records and field notes collected by the researcher who performed the experiments (D. Glynn) are incomplete for two of the conditions.
The authors of the original paper were Hauser, David Glynn and Justin Wood. In the amendment, which is authored by Hauser and Wood (i.e. not Glynn), they say that upon discovering the issues with Glynn's data, they went back to Puerto Rico, did the studies again, and confirmed that the original results were valid. Glynn left academia in 2007 to work for a Boston company, Innerscope Research, according to this online resume.

If that was the whole of the scandal it wouldn't be such a big deal, but according to the Boston Globe, that was just the start. David Glynn was also an author on a second paper which is now under scrutiny. It was published in Science in 2007, with the authors listed as Wood, Glynn, Brenda Phillips and Hauser.

However, crucially, Glynn was not an author on the only paper which has actually been retracted, "Rule learning by cotton-top tamarins". This appeared in the journal Cognition in 2002. The three authors were Hauser, Daniel Weiss and Gary Marcus. David Glynn wasn't mentioned in the acknowledgements section either, and according to his resume, he didn't arrive in Hauser's lab until 2005.

So the problem, whatever it is, is not limited to Glynn.

Nor was Glynn an author on the final paper mentioned in the Boston Globe, a 1995 article by Hauser, Kralik, Botto-Mahan, Garrett, and Oser. Note that the Globe doesn't say that this paper is formally under investigation, but rather that it was mentioned in an interview by researcher Gordon G. Gallup, who says that when he viewed the videotapes of the monkeys from that study, he didn't observe the behaviours which Hauser et al. said were present. Gallup is famous for his paper "Does Semen Have Antidepressant Properties?" in which he examined the question of whether semen... oh, guess.

The crucial issue for scientists is whether the problems are limited to the three papers that have so far been officially investigated or whether it goes further: that's an entirely open question right now.

In Summary: We don't know what is going on here and it would be premature to jump to conclusions. However, the only author who appears on all of the papers known to be under scrutiny, is Marc Hauser himself.

Hauser MD, Weiss D, & Marcus G (2002). Rule learning by cotton-top tamarins. Cognition, 86 (1). PMID: 12208654

Hauser MD, Glynn D, & Wood J (2007). Rhesus monkeys correctly read the goal-relevant gestures of a human agent. Proceedings. Biological sciences / The Royal Society, 274 (1620), 1913-8 PMID: 17540661

Wood JN, Glynn DD, Phillips BC, & Hauser MD (2007). The perception of rational, goal-directed action in nonhuman primates. Science (New York, N.Y.), 317 (5843), 1402-5 PMID: 17823353

Hauser MD, Kralik J, Botto-Mahan C, Garrett M, & Oser J (1995). Self-recognition in primates: phylogeny and the salience of species-typical features. Proceedings of the National Academy of Sciences of the United States of America, 92 (23), 10811-14 PMID: 7479889

A Time to Cry, and a Time to Laugh

This was trending on Twitter last night:

I feel really groggy and tired in the middle afternoon, but awake and energetic late at night. #idothistoo
I don't do Twitter but, ugh, fine, #idothistoo. However, in my case, the effect is sometimes more dramatic. If I'm in a depressive episode, my mood follows the same cycle, worse in the afternoon and better later in the evening, often to the point that some symptoms entirely disappear at nighttime.

In medical terms, this is called diurnal mood variation and it's considered a hallmark of clinical depression. The classical diurnal variation is progressive improvement throughout the day; waking up is said to be worst, especially when you wake up in the early hours of the morning (so-called "late insomnia").

In my experience, this is true but only when my depression is severe: I wake up two or three hours early feeling terrible, and then gradually improve. In milder episodes, I wake up at a normal time, or later than normal, and my mood is worse in the afternoon than the morning before recovering again.

Yet another phenomenon is the antidepressant effect of sleep deprivation. Staying awake the whole night often produces dramatic improvements in mood, though unfortunately the effect is transient and is lost when you do eventually fall asleep. This is unsurprising, if you think about classical diurnal mood variation: it's almost as if mood improves in proportion to the length of time spent awake. Again, I can confirm this from my personal experience.

Why does all this happen? No-one knows. Many neurotransmitters and hormones have a circadian cycle - the best known being cortisol, but almost everything is affected to some degree. Clearly a great many people experience diurnal cycles of energy - as Twitter shows - and the variations in depression are, presumably, an extreme form of the same phenomenon. The case of the man with almost no monoamines is also interesting: his symptoms showed a diurnal course, though it was reversed - better in the morning.

Diurnal variation is one of the few good things about depression. It's why the phrase "unrelenting misery" is not quite accurate: there is some relenting. You get to take a break, if only a partial one. It's even been suggested that it might be beneficial to schedule psychotherapy for the late evening, to maximize the mental energy available, and I can see how this would work, though it would rely on your therapist not having anything better to do that night.

When depressed I've made use of this by staying up much later than usual; I generally go to bed around midnight, but during an episode this often becomes more like 2 am, so as to squeeze as many hours of relative normality as possible into the day.

Real Time fMRI

Wouldn't it be cool if you could measure brain activation with fMRI... right as it happens?

You could lie there in the scanner and watch your brain light up. Then you could watch your brain light up some more in response to seeing your brain light up, and watch it light up even more upon seeing your brain light up in response to seeing itself light up... like putting your brain between two mirrors and getting an infinite tunnel of activations.

Ok, that would probably get boring, eventually. But there'd be some useful applications too. Apart from the obvious research interest, it would allow you to attempt fMRI neurofeedback: training yourself to be able to activate or deactivate parts of your brain. Neurofeedback has a long (and controversial) history, but so far it's only been feasible using EEG because that's the only neuroimaging method that gives real-time results. EEG is unfortunately not very good at localizing activity to specific areas.

Now MIT neuroscientists Hinds et al present a new way of doing right-now fMRI: Computing moment to moment BOLD activation for real-time neurofeedback. It's not in fact the first such method, but they argue that it's the only one that provides reliable, truly real-time signals.

Essentially the approach is closely related to standard fMRI analysis, except that instead of waiting for all of the data to come in before starting to analyze it, it incrementally estimates neural activation every time a new scan of the brain arrives, while accounting for various forms of noise. They first show that it works well on some simulated data, and then discuss the results of a real experiment in which 16 people were asked to alternately increase or decrease their own neural response to hearing the noise of the MRI scanner (MRI scanners are very noisy). Neurofeedback was given by showing them a "thermometer" representing activity in their auditory cortex.
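The paper doesn't give enough detail to reproduce their method here, but the general idea of updating an estimate as each new volume arrives can be sketched with a standard recursive least-squares update. This is a toy version under my own assumptions, not the authors' algorithm:

```python
import numpy as np

def make_rls(n_regressors, ridge=1000.0):
    """Recursive least squares: revise beta every time a new scan arrives."""
    beta = np.zeros(n_regressors)
    P = np.eye(n_regressors) * ridge   # running estimate of (X'X)^-1
    def update(x_t, y_t):
        nonlocal beta, P
        # Sherman-Morrison rank-one update, so no matrix inversion per scan.
        Px = P @ x_t
        gain = Px / (1.0 + x_t @ Px)
        beta = beta + gain * (y_t - x_t @ beta)
        P = P - np.outer(gain, Px)
        return beta
    return update

# Toy usage: one voxel, one made-up task regressor plus a constant.
rng = np.random.default_rng(0)
n_scans = 200
design = np.column_stack([np.sin(np.arange(n_scans) / 10.0), np.ones(n_scans)])
signal = 1.5 * design[:, 0] + rng.normal(size=n_scans)

update = make_rls(n_regressors=2)
for i in range(n_scans):
    beta_now = update(design[i], signal[i])  # a fresh estimate after every scan
```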

The real-time estimates of activation turned out to be highly correlated with the estimates given by conventional analysis after the experiment was over - though we're not told how well people were able to use the neurofeedback to regulate their own brains.

Unfortunately, we're not given all of the technical details of the method, so you won't be able to jump into the nearest scanner and look into your brain quite yet, though they do promise that "this method will be made publicly available as part of a real-time functional imaging software package."

Hinds, O., Ghosh, S., Thompson, T., Yoo, J., Whitfield-Gabrieli, S., Triantafyllou, C., & Gabrieli, J. (2010). Computing moment to moment BOLD activation for real-time neurofeedback. NeuroImage. DOI: 10.1016/j.neuroimage.2010.07.060

Rowe No No

Neuroskeptic readers will know that I'm no fan of the American Psychiatric Association's DSM-IV system of psychiatric diagnosis. And judging by the draft version, DSM-5 is going to achieve an even lower place in my affections. The way things are going, I see it slotting in there just below pinworms, and just above celery.

But while there are many good reasons to criticize the DSM - see my numerous scribblings or try these books - there are plenty of bad reasons too. Psychologist and author Dorothy Rowe has just provided some in a recent Guardian article. I don't propose to spend much time on this confused piece, but one sentence is nonetheless instructive, as it exemplifies the danger of facile psychological explanations in psychiatry:

The people who come to the attention of psychiatrists and psychologists are feeling intense, often severe mental distress. Each of us has our own way of expressing anxiety and distress, but when under intense mental distress our typical ways become exaggerated. We become self-absorbed and behave in ways that the people around us find disturbing. Believing that when we're anxious it's best to keep busy can mean that our intense mental distress drives us into manic activity.
No it doesn't. No-one who has experienced mania or hypomania, or known someone who has, or... actually let's just say that no-one except Dorothy Rowe would be able to take that seriously as an account of mania.

Mania is when you write a letter to every one of your relatives proposing a grand family reunion. On a cruise ship in Hawaii. You'll pay for everything. Actually, you're broke. Mania is being literally unable to stop talking, because there are just so many interesting things to say. Actually, you're ranting at strangers on public transport.

The point is that when you're manic, these things don't seem weird, because mania is a mental state in which everything seems incredibly exciting and important, and you think you can do anything. It's like being on crack, without the link to reality of knowing that actually, you're not Jesus, you're on crack. Not all manic episodes are this extreme, and by definition hypomania is less dramatic, but the essential feeling is the same. That's what makes mania, mania.

You can be "manically" busy of course, or have a Manic Monday, but that's a figure of speech. Maybe some people's strategy for dealing with anxiety is by making themselves "manically" busy. If so, fair enough, but that's not mania. Mania is not a strategy; it's a mental state, and psychologically irreducible: you don't become manic about something, you just become manic.

It can certainly be triggered by things - stress, sleep deprivation, and crossing time zones are notorious - but it's not an understandable psychological response to them, it's a state that happens to result. If you drink some beer and get drunk, you're not drunk about beer, you're just drunk.
So Rowe's account of mania is spectacularly wrong. But take a look at the very next sentence:
A tendency to blame yourself and feel guilty can transmute into depression.
Now this sounds much more plausible. The very influential cognitive-behavioural accounts of depression propose that self-critical tendencies are a major risk factor for depression. Even if you're not familiar with CBT, you'll recognize that depressed people tend to blame themselves and feel guilty or inadequate all the time. That's got to be their underlying problem, right? It's common sense.

But is it? Rowe thinks so, but she's just completely missed the point of mania, and depression is the flip side of the coin that is bipolar disorder. The two states are fundamentally linked, polar opposites. So what are the chances that Rowe's right about one, when she's so wrong about the other? Not very good, if you ask me. Yet her explanation of depression seems much more plausible than her account of mania. Why?

I think it's because when you're depressed, you seek psychological explanations all the time: depressed people worry, ruminate and obsess endlessly about their "problems", and think that what they're feeling is a normal response to them. Of course I'm depressed, who wouldn't be in my situation?

This makes it very easy for psychologists to come along and offer a reappraisal which is in fact only slightly different: you're looking at things too negatively. Things aren't really as bad as you think, it's not really your fault, things really can and will improve. This is, certainly, often very helpful, and it's almost always true - because things generally aren't as bad as you think they are when you're depressed. Depression makes you see things negatively, just as mania makes you see them positively. That's kind of the point.

But this cognitive approach implicitly accepts the depressive notion that depression would have been an appropriate response to your situation as you (mistakenly) saw it. It says that your feelings of depression were based on a mistake, but it does not dispute that depression would have been a healthy emotional state had you been right.

So the nature of depression means that it cries out for psychological explanations. But this doesn't mean that these explanations are in fact any more sensible than they would be if applied to mania. Depression may well be as much a psychologically irreducible, abnormal mental state as mania is. This is certainly not to say that cognitive theories of depression aren't useful or that CBT doesn't work. But we must be careful not to over-psychologize depression, however tempting it may be.

 