
Black Bile and Black Dogs

Depression is black. That's been the view of Western culture ever since the ancient Greeks, with their concept of "melan cholia" (μελαγχολία) - black bile. The idea was that psychological states were associated with particular bodily fluids; melancholy was associated with the "black bile" of the spleen, as opposed to the go-getting, passionate "yellow bile" of the gall bladder.

What this "black bile" (melan chole) actually was is rather mysterious. The gall bladder does indeed produce bile, a digestive juice which is greenish-yellow, but the spleen doesn't secrete anything as such. The spleen itself is a dark greyish-purple, which might have given rise to the idea that it contained something black. Here's another theory.

The other color associated with depression is blue, of course, as in The Blues. However, when picturing depression-blue, I think most people generally see it as something rather close to black. It's the sky at twilight, not a bright summer's day, right? It's not a happy blue.

Winston Churchill famously referred to his depression as his Black Dog. There's a rather nice correspondence here with Chinese, though I doubt Churchill knew it. Here's the Chinese character for black and (one of) the characters for dog:

黑 犬

Write these as two separate characters and it says, well, black dog (badly). But there's another character, which consists of "black" and "dog" combined:

默

This means silence; quiet; speechless; mute.

This is as good a one-word description of depression as any. Churchill's metaphor has always struck me as slightly misleading in one sense (although it's excellent in others): depression is not a thing; not even a black one. It is a lack: of motivation, energy, joy, imagination. You don't wake up and feel depressed; you wake up depressed and feel terrible. The depression itself is hidden, only evident in retrospect, just as you don't tend to notice how quiet it is until a noise breaks the silence.

WMDs vs MDD

Weapons of Mass Destruction. Nuclear, chemical and biological weapons. They're really nasty, right?

Well, some of them are. Nuclear weapons are Very Destructive Indeed. Even a tiny one, detonated in the middle of a major city, would probably kill hundreds of thousands. A medium-sized nuke could kill millions. The biggest would wipe a small country off the map in one go.

Chemical and biological weapons, on the other hand, while hardly nice, are just not on the same scale.

Sure, there are nightmare scenarios - a genetically engineered supervirus that kills a billion people - but they're hypothetical. If someone does design such a virus, then we can worry. As it is, biological weapons have never proven very useful. The 2001 US anthrax letters killed 5 people. Jared Loughner killed 6 with a gun he bought from a chain store.

Chemical weapons are little better. They were used heavily in WW1 and the Iran-Iraq War against military targets and killed many but never achieved a decisive victory, and the vast majority of deaths in these wars were caused by plain old bullets and bombs. Iraq's use of chemical weapons against Kurds in Halabja killed perhaps 5,000 - but this was a full-scale assault by an advanced air force, lasting several hours, on a defenceless population.

When a state-of-the-art nerve agent was used in the Tokyo subway attack, after much preparation by the cult responsible, who had professional chemists and advanced labs, 13 people died. In London on the 7th July 2005, terrorists killed 52 people with explosives made from haircare products.

Nuclear weapons aside, the best way to cause mass destruction is just to make an explosion, the bigger the better; yet conventional explosives, no matter how big, are not "WMDs", while chemical and biological weapons are.

So it seems to me that the term and the concept of "WMDs" are fundamentally unhelpful. They lump together the apocalyptically powerful with the much less destructive. If you have to discuss everything except guns and explosives as one category, a term like "unconventional weapons" is better, as it avoids the misleading implication that all of these weapons are equally, and extremely, deadly; but grouping them together at all is risky.

That's WMDs. But there are plenty of other unhelpful concepts out there, some of which I've discussed previously. Take the concept of "major depressive disorder", for example. At least as the term is currently used, it lumps together extremely serious cases requiring hospitalization with mild "symptoms" which 40% of people experience by age 32.

Antidepressants Still Don't Work In Mild Depression

A new paper has added to the growing ranks of studies finding that antidepressant drugs don't work in people with milder forms of depression: Efficacy of antidepressants and benzodiazepines in minor depression.


It's in the British Journal of Psychiatry and it's a meta-analysis of 6 randomized controlled trials on three different drugs. Antidepressants were no better than placebo in patients with "minor depressive disorder", which is like the better-known Major Depressive Disorder but... well, not as major, because you only need to have 2 symptoms instead of 5 from this list.

They also wanted to find out whether benzodiazepines (like Valium) worked in these people, but there just weren't any good studies out there.

The results look solid, and they fit with the fact that antidepressants don't work in people diagnosed with "major" depression, but who fall at the "milder" end of that range, something which several recent studies have shown. Neuroskeptic readers will, if they've been paying attention, find this entirely unsurprising.

But in fact, it's not just not news, it's positively ancient. 50 years ago, at the dawn of the antidepressant era, it was commonly said that antidepressants don't work in everyone with "depression": they work best in people with endogenous depression, and less well, or not at all, in those with "neurotic" or "reactive" depressions (see, e.g. 1, 2, 3, but the literature goes back even further).

"Endogenous" is not strictly the same as "severe"; in practice, however, these two concepts have never really been clearly separated, and they're largely equivalent today, because the leading measure of "severity", the Hamilton Scale, measures symptoms which are arguably mostly (though not entirely) the symptoms of the old concept of endogenous depression. The Hamilton Scale was formulated in 1960, when the modern concepts of "minor depressive disorder" and "major depressive disorder" were unknown.

Why then are we only now working out that antidepressants only work in some people? There's one obvious answer: Prozac, which arrived in 1987. Before Prozac, antidepressants were serious stuff. They could easily kill you in overdose, and they had a lot of side effects. Many of them even meant that you couldn't eat cheese. As a result, they weren't used lightly.

Prozac and the other SSRIs changed the game completely. They're much less toxic, the side effects are milder, and you can eat as much cheese as you want. So it's very easy to prescribe an SSRI - maybe it won't work, but it can't hurt, so why not try it...?

As a result, I think, the concept of "depression" broadened. Before Prozac, depression was inherently serious, because the treatments were serious. After Prozac, it didn't have to be. Drug company marketing no doubt helped this process along, but marketing has to have something to work with. Over the past 25 years, terms like "endogenous", "neurotic" etc. largely disappeared from the literature, replaced by the single construct of "Major Depression".

For nearly 1,000 years, the great scientific and philosophical works of the ancient Greeks and Romans were lost to Europeans. Only when Christian scholars rediscovered them in the libraries of the Islamic world did Europe begin to remember what it had forgotten. We call those the Dark Ages. Will the past 25 years be remembered as psychiatry's Dark Age?

Barbui, C., Cipriani, A., Patel, V., Ayuso-Mateos, J., & van Ommeren, M. (2011). Efficacy of antidepressants and benzodiazepines in minor depression: systematic review and meta-analysis. The British Journal of Psychiatry, 198(1), 11-16. DOI: 10.1192/bjp.bp.109.076448

The Town That Went Mad

Pont St. Esprit is a small town in southern France. In 1951 it became famous as the site of one of the most mysterious medical outbreaks of modern times.

As Drs Gabbai, Lisbonne, and Pourquier wrote to the British Medical Journal, 15 days after the "incident":

The first symptoms appeared after a latent period of 6 to 48 hours. In this first phase, the symptoms were generalized, and consisted in a depressive state with anguish and slight agitation.

After some hours the symptoms became more clearly defined, and most of the patients presented with digestive disturbances... Disturbances of the autonomic nervous system accompanied the digestive disorders-gusts of warmth, followed by the impression of "cold waves", with intense sweating crises. We also noted frequent excessive salivation.

The patients were pale and often showed a regular bradycardia (40 to 50 beats a minute), with weakness of the pulse. The heart sounds were rather muffled; the extremities were cold... Thereafter a constant symptom appeared - insomnia lasting several days... A state of giddiness persisted, accompanied by abundant sweating and a disagreeable odour. The special odour struck the patient and his attendants.
In most patients, these symptoms, including the total insomnia, persisted for several days. In some of the patients, these symptoms progressed to full-blown psychosis:
Logorrhoea [speaking a lot], psychomotor agitation, and absolute insomnia always presaged the appearance of mental disorders. Towards evening visual hallucinations appeared, recalling those of alcoholism. The particular themes were visions of animals and of flames. All these visions were fleeting and variable.

In many of the patients they were followed by dreamy delirium. The delirium seemed to be systematized, with animal hallucinations and self-accusation, and it was sometimes mystical or macabre. In some cases terrifying visions were followed by fugues, and two patients even threw themselves out of windows... Every attempt at restraint increased the agitation.

In severe cases muscular spasms appeared, recalling those of tetanus, but seeming to be less sustained and less painful... The duration of these periods of delirium was very varied. They lasted several hours in some patients, in others they still persist.
In total, about 150 people suffered some symptoms. About 25 severe cases developed the "delirium". 4 people died "in muscular spasm and in a state of cardiovascular collapse"; three of these were old and in poor health, but one was a healthy 25-year-old man.

At first, the cause was assumed to be ergotism - poisoning caused by chemicals produced by a fungus which can infect grain crops. Contaminated bread was, therefore, thought to be responsible. Ergotism produces symptoms similar to those reported at Pont St. Esprit, including hallucinations, because some of the toxins are chemically related to LSD.

However, there have been other theories. Some (including Albert Hofmann, the inventor of LSD) attribute the poisoning to pesticides containing mercury, or to the flour bleaching agent nitrogen trichloride.

More recently, journalist Hank Albarelli claimed that it was in fact a CIA experiment to test out the effects of LSD as a chemical weapon, though this is disputed. What really happened is, in other words, still a mystery.

Link: The Crazies (2010) is a movie about a remarkably similar outbreak of mass insanity in a small town.

Gabbai, Lisbonne, & Pourquier (1951). Ergot poisoning at Pont St. Esprit. British Medical Journal, 2(4732), 650-1. PMID: 14869677

The Tree of Science

How do you know whether a scientific idea is a good one or not?


The only sure way is to study it in detail and know all the technical ins and outs. But good ideas and bad ideas behave differently over time, and this can provide clues as to which ones are solid; useful if you're a non-expert trying to evaluate a field, or a junior researcher looking for a career.

Today's ideas are the basis for tomorrow's experiments. A good idea will lead to experiments which provide interesting results, generating new ideas, which will lead to more experiments, and so on.

Before long, it will be taken for granted that it's true, because so many successful studies assumed it was. The mark of a really good idea is not that it's always being tested and found to be true; it's that it's an unstated assumption of studies which could only work if it were true. Good ideas grow onwards and upwards, in an expanding tree, with each exciting new discovery becoming the boring background of the next generation.

Astronomers don't go around testing whether light travels at a finite speed as opposed to an infinite one; rather, if it were infinite, their whole set-up would fail.

Bad ideas generate experiments too, but they don't work out. The assumptions are wrong. You try to explain why something happens, and you find that it doesn't happen at all. Or you come up with an "explanation", but next time, someone comes along and finds evidence suggesting the "true" explanation is the exact opposite.

Unfortunately, some bad ideas stick around, for political or historical reasons or just because people are lazy. What tends to happen is that these ideas are, ironically, more "productive" than good ideas: they are always giving rise to new hypotheses. It's just that these lines of research peter out eventually, meaning that new ones have to take their place.

As an example of a bad idea, take the theory that "vaccines cause autism". This hypothesis is, in itself, impossible to test: it's too vague. Which vaccines? How do they cause autism? What kind of autism? In which people? How often?

The basic idea that some vaccines, somewhere, somehow, cause some autism, has been very productive. It's given rise to a great many, testable, ideas. But every one which has been tested has proven false.

First there was the idea that the MMR vaccine causes autism, linked to a "leaky gut" or "autistic enterocolitis". It doesn't, and it's not linked to that. Then along came the idea that actually it's mercury preservatives in vaccines that cause autism. It doesn't. No problem - maybe it's aluminium? Or maybe it's just the Hep B vaccine? And so on.

At every turn, it's back to square one after a few years, and a new idea is proposed. "We know this is true; now we just need to work out why and how...". Except that turns out to be tricky. Hmm. Maybe, if you keep ending up back at square one, you ought to find a new square to start from.

Marc Hauser's Scapegoat?

The dust is starting to settle after the Hauser-gate scandal which rocked psychology a couple of weeks back.

Harvard Professor Marc Hauser has been investigated by a faculty committee and the verdict was released on the 20th August: Hauser was "found solely responsible... for eight instances of scientific misconduct." He's taking a year's "leave", his future uncertain.

Unfortunately, there has been no official news on what exactly the misconduct was, and how much of Hauser's work is suspect. According to Harvard, only three publications were affected: a 2002 paper in Cognition, which has been retracted; a 2007 paper which has been "corrected" (see below), and another 2007 Science paper, which is still under discussion.

But what happened? Cognition editor Gerry Altmann writes that he was given access to some of the Harvard internal investigation. He concludes that Hauser simply invented some of the crucial data in the retracted 2002 paper.

Essentially, some monkeys were supposed to have been tested on two conditions, X and Y, and their responses were videotaped. The difference in the monkeys' behaviour between the two conditions was the scientifically interesting outcome.

In fact, the videos of the experiment showed them being tested only on condition X. There was no video evidence that condition Y was even tested. The "data" from condition Y, and by extension the differences, were, apparently, simply made up.

If this is true, it is, in Altmann's words, "the worst form of academic misconduct." As he says, it's not quite a smoking gun: maybe tapes of Y did exist, but they got lost somehow. However, this seems implausible. If so, Hauser would presumably have told Harvard so in his defence. Yet they found him guilty - and Hauser retracted the paper.

So it seems that either Hauser never tested the monkeys on condition Y at all, and just made up the data, or he did test them, saw that they weren't behaving the "right" way, deleted the videos... and just made up the data. Either way it's fraud.

Was this a one-off? The Cognition paper is the only one that's been retracted. But another 2007 paper was "replicated", with Hauser & a colleague recently writing:

In the original [2007] study by Hauser et al., we reported videotaped experiments on action perception with free ranging rhesus macaques living on the island of Cayo Santiago, Puerto Rico. It has been discovered that the video records and field notes collected by the researcher who performed the experiments (D. Glynn) are incomplete for two of the conditions.
Luckily, Hauser said, when he and a colleague went back to Puerto Rico and repeated the experiment, they found "the exact same pattern of results" as originally reported. Phew.

This note, however, was sent to the journal in July, several weeks before the scandal broke - back when Hauser's reputation was intact. Was this an attempt by Hauser to pin the blame on someone else - David Glynn, who worked as a research assistant in Hauser's lab for three years, and has since left academia?

As I wrote in my previous post:
Glynn was not an author on the only paper which has actually been retracted [the Cognition 2002 paper that Altmann refers to]... according to his resume, he didn't arrive in Hauser's lab until 2005.
Glynn cannot possibly have been involved in the retracted 2002 paper. And Harvard's investigation concluded that Hauser was "solely responsible", remember. So we're to believe that Hauser, guilty of misconduct, was himself an innocent victim of some entirely unrelated mischief in 2007 - but that it was all OK in the end, because when Hauser checked the data, it was fine.

Maybe that's what happened. I am not convinced.

Personally, if I were David Glynn, I would want to clear my name. He's left science, but still, a letter to a peer reviewed journal accuses him of having produced "incomplete video records and field notes", which is not a nice thing to say about someone.

Hmm. On August 19th, the Chronicle of Higher Education ran an article about the case, based on a leaked Harvard document. They say that "A copy of the document was provided to The Chronicle by a former research assistant in the lab who has since left psychology."

Hmm. Who could blame them for leaking it? It's worth remembering that it was a research assistant in Hauser's lab who originally blew the whistle on the whole deal, according to the Chronicle.

Apparently, what originally rang alarm bells was that Hauser appeared to be reporting monkey behaviours which had never happened, according to the video evidence. So at least in that case, there were videos, and it was the inconsistency between Hauser's data and the videos that drew attention. This is what makes me suspect that maybe there were videos and field notes in every case, and the "inconvenient" ones were deleted to try to hide the smoking gun. But that's just speculation.

What's clear is that science owes the whistle-blowing research assistant, whoever it is, a huge debt.

Serotonin, Psychedelics and Depression

Note: This post is part of a Nature Blog Focus on hallucinogenic drugs in medicine and mental health, inspired by a recent Nature Reviews Neuroscience paper, The neurobiology of psychedelic drugs: implications for the treatment of mood disorders, by Franz Vollenweider & Michael Kometer. That article will be available, free (once you register), until September 23. For more information on this Blog Focus, see the "Table of Contents" here.

Neurophilosophy is covering the history of psychedelic psychiatry, while Mind Hacks provides a personal look at one particular drug, DMT. The Neurocritic discusses ketamine, an anesthetic with hallucinogenic properties, which is attracting a lot of interest at the moment as a treatment for depression.

Ketamine, however, is not a "classical" psychedelic like the drugs that gave the 60s its unique flavor and left us with psychedelic rock, acid house and colorful artwork. Classical psychedelics are the focus of this post.

The best known are LSD ("acid"), mescaline, found in the peyote and a few other species of cactus, and psilocybin, from "magic" mushrooms of the Psilocybe genus. Yet there are literally hundreds of related compounds. Most of them are described in loving detail in the two heroic epics of psychopharmacology, PiHKAL and TiHKAL, written by chemists and trip veterans Alexander and Ann Shulgin.

The chemistry of psychedelics is closely linked with that of depression and antidepressants. All classical psychedelics are 5HT2A receptor agonists. Most of them have other effects on the brain as well, which contribute to the unique effects of each drug, but 5HT2A agonism is what they all have in common.

5HT2A receptors are excitatory receptors expressed throughout the brain, and are especially dense in the key pyramidal cells of the cerebral cortex. They're normally activated by serotonin (5HT), which is the neurotransmitter that's most often thought of as being implicated in depression. The relationship between 5HT and mood is very complicated, and depression isn't simply a disorder of "low serotonin", but there's strong evidence that it is involved.

There's one messy detail, which is that not quite all 5HT2A agonists are hallucinogenic. Lisuride, a drug used in Parkinson's disease, is closely related to LSD, and is a strong 5HT2A agonist, but it has no psychedelic effects. It's recently been shown that LSD and lisuride have different molecular effects on cortical cells, even though they act on the same receptor - in other words, there's more to 5HT2A than simply turning it "on" and "off".

*

How could psychedelics help to treat mental illness? On the face of it, the acute effects of these drugs - hallucinations, altered thought processes and emotions - sound rather like the symptoms of mental illness themselves, and indeed psychedelics have been referred to as "psychotomimetic" - mimicking psychosis.

There are two schools of thought here: psychological and neurobiological.

The psychological approach ruled the first wave of psychedelic psychiatry, in the 50s and 60s. Psychiatry, especially in America, was dominated by Freudian theories of the unconscious. On this view, mental illness was a product of conflicts between unconscious desires and the conscious mind. The symptoms experienced by a particular patient were distressing, of course, but they also provided clues to the nature of their unconscious troubles.

It was tempting to see the action of psychedelics as a weakening of the filters which kept the unconscious, unconscious - allowing repressed material to come into awareness. The only other time this happened, according to Freud, was during dreams. That's why Freud famously called the interpretation of dreams the "royal road to the unconscious".

Psychedelics offered analysts the tantalizing prospect of confronting the unconscious face-to-face, while awake, instead of having to rely on the patient's memory of their previous dreams. To enthusiastic Freudians, this promised to revolutionize therapy, in the same way that the x-ray had done so much for surgery. The "dreamlike" nature of many aspects of the psychedelic experience seemed to confirm this.

Not all psychedelic therapists were orthodox Freudians, however. There were plenty of other theories in circulation, many of them inspired by the theorists' own drug experiences. Stanislav Grof, Timothy Leary and others saw the psychedelic state of consciousness as the key to attaining spiritual, philosophical and even mystical insights, whether one was "ill" or "healthy" - and indeed, they often said that mental "illness" was itself a potential source of spiritual growth.

Like many things, psychiatry has changed since the 60s. Psychotherapy is currently dominated by cognitive-behavioural (CBT) theory, and Freudian ideas have gone distinctly out of fashion. It remains to be seen what CBT would make of LSD, but the basic idea - that carefully controlled use of drugs could help patients to "break through" psychological barriers to treatment - seems likely to remain at the heart of their continued use.

*

The other view is that these drugs could have direct biological effects which lead to improvements in mood. Repeated use of LSD, for example, has been shown to rapidly induce down-regulation of 5HT2A receptors. Presumably, this is the brain's way of "compensating" for prolonged 5HT2A activation. This is probably why tolerance to the effects of psychedelics rapidly develops, something that's long been known (and regretted) by heavy users.

Vollenweider and Kometer note that this is interesting, because 5HT2A blockers are used as antidepressants - the drugs nefazodone and mirtazapine are the best known today, but most of the older tricyclic antidepressants are also 5HT2A antagonists. Atypical antipsychotics, which are also used in depression, are potent 5HT2A antagonists as well.

So indirectly suppressing 5HT2A might be one biological mechanism by which psychedelics improve mood. However, questions remain about how far this could explain any therapeutic effects of these drugs. Psychedelic-induced 5HT2A down-regulation is presumably temporary - and if all we need to do is to knock out 5HT2A, it would surely be easiest to just use an antagonist...

Vollenweider, F.X., & Kometer, M. (2010). The neurobiology of psychedelic drugs: implications for the treatment of mood disorders. Nature Reviews Neuroscience, 11(9), 642-51. PMID: 20717121

The World Turned Upside Down

This map is not “upside down”. It looks that way to us; the sense that north is up is a deeply ingrained one. It's grim up north, Dixie is away down south. Yet this is pure convention. The earth is a sphere in space. It has a north and a south, but no up and down.

There’s a famous experiment involving four guys and a door. An unsuspecting test subject is lured into a conversation with a stranger, actually a psychologist. After a few moments, two people appear carrying a large door, and they walk right between the subject and the experimenter.

Behind the door, the experimenter swaps places with one of the door carriers, who may be quite different in voice and appearance. Most subjects don't notice the swap. Perception is lazy: whenever it can get away with it, it merely tells us that things are as we expect, rather than actually showing us stuff. We often do not really perceive things at all. Did the subject really see the first guy? The second? Either?

The inverted map makes us actually see the Earth's geography, rather than just showing us the expected "countries" and "continents". I was struck by how parochial Europe is – the whole place is little more than a frayed end of the vast Eurasian landmass, no more impressive than the one at the other end, Russia's Chukotski. Africa dominates the scene: it can no longer be written off as that poor place at the bottom.

One of the most common observations in psychotherapy of people with depression or anxiety is that they hold themselves to impossibly high standards, although they have a perfectly sensible evaluation of everyone else. Their own failures are catastrophic; other people's are minor setbacks. Other people's successes are well-deserved triumphs; their own are never good enough, flukes, they don't count.

The first step in challenging these unhelpful patterns of thought is to simply point out the double-standard: why are you such a perfectionist about yourself, when you're not when it comes to other people? The idea is to help people to think about themselves in something more like the healthy way they already think about others. Turn the map of yourself upside down - what do you actually see?

The Fall of Freud

The works of Sigmund Freud were enormously influential in 20th century psychiatry, but they've now been reduced to little more than a fringe belief system. Armed with the latest version of my PubMed history script, and inspired by this classic gnxp post on the death of Marxism, postmodernism, and other stupid academic fads I decided to see how this happened.

As you can see, the number of published scientific papers related to Freud-y search terms like psychoanalytic has flat-lined for the past 50 years. That represents a serious collapse of influence, given the enormous expansion in the amount of research being published over this time.
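For anyone curious how counts like these can be pulled, here's a minimal sketch of the kind of query a PubMed-counting script might run. The helper names are mine and purely illustrative, and this may not be how the actual script works; NCBI's E-utilities esearch endpoint does, however, genuinely return a `<Count>` element giving the number of matching papers.

```python
# Illustrative sketch of counting PubMed papers per search term per year,
# via NCBI's E-utilities esearch endpoint. Function names are hypothetical.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(term: str, year: int) -> str:
    """Build an esearch query counting papers matching `term` published in `year`."""
    params = {
        "db": "pubmed",
        "term": term,
        "rettype": "count",   # ask for just the hit count
        "datetype": "pdat",   # filter by publication date
        "mindate": str(year),
        "maxdate": str(year),
    }
    return EUTILS + "?" + urlencode(params)

def parse_count(xml_text: str) -> int:
    """Extract the hit count from an esearch XML response."""
    return int(ET.fromstring(xml_text).findtext("Count"))
```

To reproduce a figure like the one described here, you would fetch `esearch_url(term, year)` for each term ("psychoanalytic", "schizophrenia", "antidepressants"...) over each year, parse the counts, and plot them.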

Since 1960 the number of papers on schizophrenia has risen by a factor of 10 and anxiety by a factor of 80 (sic). The peak of Freud's fame was 1968, when almost as many papers referenced psychoanalytic (721) as did schizophrenia (989), and it was more than half as popular as antidepressants (1372). Today it's just 10% of either. Proportionally speaking, psychoanalysis has gone out with a whimper, though not a bang.

The rise of Cognitive Behavioral Therapy (CBT), however, is even more dramatic. From being almost unheard of until the late 1980s, it overtook psychoanalytic in 1993, and it's now more popular than antipsychotics and close on the heels of antidepressants.

What's going to happen in the future? If there is to be a struggle for influence, it looks set to be fought between CBT and biological psychiatry, if only because they're pretty much the only games left in town. Yet one of the reasons behind CBT's widespread appeal is that it hasn't thus far overtly challenged biology: it has adopted the methods of medicine (clinical trials etc.), and has presented itself as useful alongside medication rather than instead of it.

One of the few exceptions was Richard Bentall's book Madness Explained (2003) in which he criticized psychiatry and presented a cognitive-behavioural alternative to orthodox biological theories of schizophrenia and bipolar disorder. Bentall remains on the radical wing of the CBT community but in the coming decades this kind of thing may become more common. Only time will tell...

Absinthe Fact and Fiction

Absinthe is a spirit. It's very strong, and very green. But is it something more?

I used to think so, until I came across this paper taking a skeptical look at the history and science of the drink, Padosch et al's "Absinthism: a fictitious 19th century syndrome with present impact".

Absinthe is prepared by crushing and dissolving the herb wormwood in unflavoured neutral alcohol and then distilling the result; other herbs and spices are added later for taste and colour.

It became extremely popular in the late 19th century, especially in France, but it developed a reputation as a dangerous and hallucinogenic drug. Overuse was said to cause insanity, "absinthism", much worse than regular alcoholism. Eventually, absinthe was banned in the USA and most but not all European countries.

Much of the concern over absinthe came from animal experiments. Wormwood oil was found to cause hyperactivity and seizures in cats and rodents, whereas normal alcohol just made them drunk. But, Padosch et al explain, the relevance of these experiments to drinkers is unclear, because they involved high doses of pure wormwood extract, whereas absinthe is much more dilute. The fact that authors at the time used the word absinthe to refer to both the drink and the pure extract added to the confusion.

It's now known that wormwood, or at least some varieties of it, contains thujone, which can indeed cause seizures, and death, because it is a GABA antagonist. Until a few years ago it was thought that old-style absinthe might have contained up to 260 mg of thujone per litre, a substantial dose.

But that was based on the assumption that all of the thujone in the wormwood ended up in the drink prepared from it. Chemical analysis of actual absinthe has repeatedly found that it contains no more than about 6 mg/L thujone. The alcohol in absinthe would kill you long before you drank enough to get any other effects. As the saying goes, "the dose makes the poison", something that is easily forgotten.
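The arithmetic behind that claim is worth making explicit. Here's a quick back-of-envelope sketch; the thresholds below are assumed round numbers for illustration, not measured values:

```python
# "The dose makes the poison": compare how much absinthe you'd need
# to drink for a symptomatic thujone dose vs. a lethal alcohol dose.
# All thresholds here are assumed round figures for illustration.
THUJONE_MG_PER_L = 6.0       # thujone found in analysed absinthe (per the text)
TOXIC_THUJONE_MG = 100.0     # assumed symptomatic oral thujone dose
ABV = 0.70                   # absinthe is roughly 70% alcohol by volume
ETHANOL_DENSITY_G_PER_ML = 0.789
LETHAL_ETHANOL_G = 400.0     # rough lethal ethanol dose for an adult

litres_for_thujone = TOXIC_THUJONE_MG / THUJONE_MG_PER_L
litres_for_alcohol = LETHAL_ETHANOL_G / (ABV * ETHANOL_DENSITY_G_PER_ML * 1000)

print(f"Absinthe for a symptomatic thujone dose: {litres_for_thujone:.1f} L")
print(f"Absinthe for a lethal alcohol dose:      {litres_for_alcohol:.2f} L")
```

On these (assumed) numbers, the alcohol would kill you more than twenty times over before the thujone did anything.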

As Padosch et al point out, it's possible that there are other undiscovered psychoactive compounds in absinthe, or that long-term exposure to low doses of thujone does cause "absinthism". But there is no evidence for that so far. Rather, they say, absinthism was just chronic alcoholism, and absinthe was no more or less dangerous than any other spirit.

I'm not sure why, but drinks seem to attract more than their fair share of urban myths. Amongst many others I've heard that the flakes of gold in Goldschläger cause cuts which let alcohol into your blood faster; that Aftershock crystallizes in your stomach, so if you drink water the morning afterwards, you get drunk again; and that the little worm you get at the bottom of some tequilas contains especially concentrated alcohol, or hallucinogens, or even cocaine.

Slightly more serious is the theory that mixing different kinds of drinks, rather than sticking to just one, gets you drunk faster, or gives you a worse hangover, especially if you drink them in a certain order. Almost everyone I know believes this. In my drinking experience it's not true, but I'm not sure it's completely bogus, as I have heard somewhat plausible explanations - e.g. that drinking spirits alongside beer leads to a concentration of alcohol in your stomach that's optimal for absorption into the bloodstream... maybe.

Link: Not specifically related to this, but The Poison Review is an excellent blog I've recently discovered, all about poisons, toxins, drugs, and such fun stuff.

Padosch SA, Lachenmeier DW, & Kröner LU (2006). Absinthism: a fictitious 19th century syndrome with present impact. Substance Abuse Treatment, Prevention, and Policy, 1 (1). PMID: 16722551

The Decline and Fall of the Cannabinoid Antagonists

Cannabinoid Receptor, Type 1 (CB1) antagonists were supposed to be the next big thing.

They're weight loss drugs, and with obesity rates rising and the diet craze showing no signs of abating, that's a large and growing market (...sorry). They worked, at least in the short term, and they were at least as effective as existing pills. They may even have had health benefits over and above promoting weight loss, such as improving blood fat and sugar levels through metabolic effects.

It all started off well. Rimonabant, manufactured by Sanofi, was the first CB1 antagonist to become available for human use: it hit the European market in 2006, as Acomplia. Four large clinical trials showed convincingly that it helped people lose weight. Rival drug companies were hard at work developing other CB1 antagonists, and inverse agonists (similar, but even more potent). The "bants" included Merck's taranabant, Pfizer's otenabant, and more.

Even more excitingly, there were indications that CB1 antagonists could do more than help people lose weight: they might also be useful in helping people quit smoking, alcohol or drugs. The animal evidence that CB1 antagonists did this was strong. Human trials were underway. Optimists saw rimonabant and related drugs as offering something unprecedented: self-control in a pill, abstinence on demand.

*

But it ended in tears, literally. Rimonabant was pulled from the European market in late 2008; it was never approved in the USA at all. After rimonabant was withdrawn, drug companies abandoned the development of other CB1 antagonists.

The problem was that they made people depressed. In several large clinical trials of rimonabant it raised the risk of suffering depression and other psychiatric problems, like anxiety and irritability, compared to placebo. The reported rates of these symptoms ranged from a few % up to over 40% depending upon the population, but there have been no trials (except very small ones) in which these effects weren't seen. This means that CB1 antagonists cause depression rather more consistently than antidepressants treat it.

Merck have just released the data from a trial of taranabant: A clinical trial assessing the safety and efficacy of taranabant, a CB1R inverse agonist, in obese and overweight patients. It makes a fitting epitaph to the CB1 antagonists. They gave taranabant, at a range of doses, or placebo, to overweight people alongside diet and exercise, to help them lose weight. The results were extremely similar to those seen with rimonabant; the drug worked:

But there were side effects. Alongside things like nausea, vomiting, and sweating, about 35% of people taking high doses of taranabant reported "psychiatric disorders". 20% of people on placebo also did, so this is not quite as bad as it first appears, but it's still striking, especially since a number of people on high doses of taranabant reported suicidal thoughts or behaviours...

Suicidal ideation was reported in three patients in the taranabant 6-mg group in year 1 and in one patient in the 4-mg group in year 2. There was one suicide attempt reported in a patient with a previous history of suicide attempts in the 6/2-mg group while the patient was receiving 2-mg, and one episode of suicidal behavior reported in a patient in the 6/2-mg group while the patient was receiving 6-mg. There were no completed suicides. The adjudication of possibly suicide-related adverse experiences during years 1 and 2 indicated an increased incidence of suicidality in the taranabant groups...
This is the kind of thing that gives drug companies nightmares, especially today, in the post-SSRI lawsuits era. This is why rimonabant was removed from the EU market in 2008 and why it was never approved in the US.

*

Safety concerns have plagued weight loss medications for decades. The problem is not that they don't work: plenty of drugs cause weight loss, at least for as long as you keep taking them. But unfortunately, there's always a 'but'.

Fenfluramine worked, but it caused heart valve defects, and was banned. Sibutramine works, but it's just been suspended from the European market due to concerns over heart disease (a different kind). Amphetamine-like stimulants such as phentermine work, but they're addictive and liable to abuse. With rimonabant and sibutramine gone, the only weight-loss drug approved for use in Europe is orlistat, which seems to be safe, but has some very unpleasant side effects...

Still, CB1 antagonists have a unique mechanism of action: they block the CB1 receptor, which is what gets activated by the cannabinoid ingredients in marijuana, and also by the brain's own cannabinoid neurotransmitters (endocannabinoids). The past five years have seen a huge amount of research showing that the CB1 receptor is involved in everything from memory and emotion to motivation, pain sensation and hormone secretion. We recently learned that there are even CB1 receptors on the tongue that regulate taste.

CB1 is able to do all this because it's found almost everywhere in the brain. To simplify, but only a little, the endocannabinoid system is a general feedback mechanism, which allows cells on the receiving end of neural transmission to "talk back" to the neuron sending them signals; if they're receiving lots of input, they tell the cell sending the signals to quiet down. In other words, endocannabinoids regulate the release of just about every other neurotransmitter. To be honest, given how important the system is in the brain, it's surprising that depression and anxiety are the biggest problems with CB1 antagonists.
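To make that feedback idea concrete, here's a toy simulation of the retrograde "talk back" loop. The parameters and functional form are invented purely for illustration; this is not a biophysical model:

```python
# Toy negative-feedback loop sketching retrograde endocannabinoid
# signalling: postsynaptic activity generates a retrograde signal
# that suppresses presynaptic release. Parameters are made up.
def simulate(drive, cb1_gain, steps=200):
    """Iterate the loop until release probability settles."""
    release = 1.0  # presynaptic release probability (0..1)
    for _ in range(steps):
        post_activity = drive * release              # postsynaptic response
        endocannabinoid = cb1_gain * post_activity   # retrograde signal
        # More postsynaptic activity -> stronger suppression of release
        release = 1.0 / (1.0 + endocannabinoid)
    return release, post_activity

# With the feedback intact, strong input drive is reined in...
rel_on, post_on = simulate(drive=5.0, cb1_gain=1.0)
# ...whereas "blocking CB1" (gain = 0) leaves release unchecked.
rel_off, post_off = simulate(drive=5.0, cb1_gain=0.0)
print(post_on, post_off)
```

With the feedback on, the loop settles at a reduced level of transmission; with the "CB1" gain set to zero, the same drive passes through at full strength, which is a crude way of picturing why blocking a receptor this widespread could have such diffuse effects.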

For all that, we still don't know why they cause psychiatric symptoms, although a number of mechanisms have been suggested. Hopefully, someone will work this out sooner or later, since that would add an important piece to the puzzle of what goes on in the brain during depression...

Aronne, L., Tonstad, S., Moreno, M., Gantz, I., Erondu, N., Suryawanshi, S., Molony, C., Sieberts, S., Nayee, J., Meehan, A., Shapiro, D., Heymsfield, S., Kaufman, K., & Amatruda, J. (2010). A clinical trial assessing the safety and efficacy of taranabant, a CB1R inverse agonist, in obese and overweight patients: a high-dose study. International Journal of Obesity. DOI: 10.1038/ijo.2010.21

A Brief History of Bipolar Kids

Can children get bipolar disorder?

It depends who you ask. It's "controversial". Some say that, like schizophrenia, bipolar strikes in adolescence or after, and that pre-pubertal onset is extraordinarily rare. Others say that kids can be, and often are, bipolar, but their symptoms may differ from the ones seen in adults. You know a 20-year-old's manic when they stay up for 3 days straight writing a book about how God's chosen them to save the world. A "bipolar" 10-year-old, though, is more likely to show irritability and mood swings. Critics say that this isn't evidence of bipolar, it's evidence of... irritability and mood swings. Or, indeed, of being 10.

But what's not always appreciated is how new the concept of pediatric bipolar as a common disorder is, and how specific it is to American psychiatry. Here are a few graphs I put together to illustrate this, based on numbers of scientific publications.

First up, when did people start talking about it? Here's the number of PubMed hits for pediatric bipolar each year. As you can see, it was rarely talked about before the year 2000, after which its popularity shot up rapidly; it seems to have plateaued now, but it's hard to tell.

In fact, the true trend is even more dramatic, because many of the early hits were not about psychiatry at all. For example, in 1999, 5 of the 10 had nothing to do with manic-depression. One was about the growth pattern of a certain kind of bacteria (they're "bipolar" because they have two poles of growth).

Is the post-2000 spike just a reflection of the fact that people are publishing more papers about bipolar in general? No. Here's a graph showing pediatric bipolar hits as a fraction of all "bipolar disorder" hits for that year. It's been rising for a while and it's now 5%.

Where are these publications coming from? America. Taking the first two pages of PubMed hits for pediatric bipolar, and excluding the non-psychiatric ones, 30 are from the USA, and just 4 are from elsewhere. For "bipolar disorder", it's 13 vs. 25. (This is in terms of the affiliation listed for the primary authors of the study.)

What about paediatric bipolar, the British spelling? It's almost unheard of. There are only 53 PubMed hits in total, as against 564 for pediatric bipolar. Of the first 20 hits, 9 are non-psychiatric, and 3 are from an Australian journal, criticizing the American concept of pediatric bipolar!

It's remarkable that the monthly British Journal of Psychiatry has never published a paper about "pediatric bipolar" or "paediatric bipolar": if you search their archives you get just 5 hits, and they are all in the references sections, not the papers themselves. The monthly American Journal of Psychiatry has published 37 papers mentioning "pediatric bipolar", of which 25 are not just in the references, and 10 are in the titles.
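For anyone wanting to reproduce these counts, yearly PubMed hits can be pulled programmatically from NCBI's public E-utilities esearch endpoint. The helper functions below are a sketch of my own; only the URL format and JSON fields belong to NCBI:

```python
# Count PubMed hits for a search term in a given publication year,
# via NCBI E-utilities (esearch). Function names are my own.
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(term, year):
    """Build an esearch query that counts hits for `term` published in `year`."""
    params = {
        "db": "pubmed",
        "term": term,
        "mindate": str(year),
        "maxdate": str(year),
        "datetype": "pdat",   # restrict by publication date
        "retmode": "json",
        "retmax": "0",        # we only want the total count
    }
    return EUTILS + "?" + urllib.parse.urlencode(params)

def yearly_count(term, year):
    """Fetch the hit count (requires network access)."""
    with urllib.request.urlopen(esearch_url(term, year)) as resp:
        return int(json.load(resp)["esearchresult"]["count"])
```

Dividing `yearly_count('"pediatric bipolar"', y)` by `yearly_count('"bipolar disorder"', y)` for each year would regenerate the fraction graph, though note the counts drift as PubMed is updated.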

So, at least in terms of the literature, pediatric bipolar is overwhelmingly a 21st century American phenomenon. It barely existed before 2000, and it barely exists elsewhere. This corresponds to what some non-American psychiatrists have observed. In The Paediatric Bipolar Hypothesis: The View from Australia and New Zealand, Australian psychiatrists Peter Parry, Gareth Furber and Stephen Allison point out that

Traditionally, bipolar affective disorder has been considered rare in children and uncommon in adolescence ... However paediatric bipolar disorder (PBD) has become a topical issue in child and adolescent psychiatry over the last decade, driven by research in the USA. The proponents of PBD are concerned that the traditional approach to bipolar disorder in children and adolescents is missing a large number of distressed children, whose course of bipolar illness could be ameliorated or attenuated by early treatment.
Pediatric bipolar has certainly become more common as a diagnosis in the USA recently - a roughly 40-fold increase in under a decade up to 2003:
The number of visits to primary care physicians in the under 20 age group where the diagnosis was bipolar disorder increased from 0.01% in 1994/5 to 0.44% in 2002/3
Whereas elsewhere, it's still regarded as incredibly uncommon...
Soutullo et al. reported that none of the 2,500 children 10 years or younger referred to the Royal Manchester Children's Hospital ... had a diagnosis of mania or bipolar disorder ... A more recent German survey revealed German child and adolescent psychiatrists were largely holding to a traditional stance as only 8% claimed to have diagnosed a pre-pubertal child with bipolar disorder.
Parry, Furber and Allison then present the results of a survey of 199 child and adolescent psychiatrists in Australia and New Zealand.
The majority of participants (53.4%) said they had never seen a case of pre-pubertal bipolar disorder, whilst a further 28.5% estimated they'd seen only 1 or 2 cases. Only 35 participants (18.2%) estimated having seen 3 or more cases of pre-pubertal bipolar disorder. ... Most participants (83.1%) were of the opinion that bipolar disorder in pre-pubertal children was either "very rare (less than 0.01%)", "rare (less than 0.1%)", or "cannot be diagnosed in this age group".
Of course this is just a survey, but the results are striking.

Peter Parry reports as a conflict-of-interest that he's a member of Healthy Skepticism, who are, in their own words, in the business of "Improving health by reducing harm from misleading drug promotion". I'm sure neither he nor I need to spell out why drug companies might conceivably have an interest in promoting the concept of pediatric bipolar disorder, given the wide range of drugs available for bipolar adults...

Parry, P., Furber, G., & Allison, S. (2009). The Paediatric Bipolar Hypothesis: The View from Australia and New Zealand. Child and Adolescent Mental Health, 14 (3), 140-147. DOI: 10.1111/j.1475-3588.2008.00505.x

A Decade for Psychiatric Disorders...?

Nature kicks off the 2010s with an editorial pep-talk for psychiatry: A decade for psychiatric disorders.

New techniques — genome-wide association studies, imaging and the optical manipulation of neural circuits — are ushering in an era in which the neural circuitry underlying cognitive dysfunctions, for example, will be delineated... Whether for schizophrenia, depression, autism or any other psychiatric disorders, it is clear... that understanding of these conditions is entering a scientific phase more penetratingly insightful than has hitherto been possible.
But I don't feel too peppy.

The 2010s is not the decade for psychiatric disorders. Clinically, that decade was the 1950s. The 50s were when the first generation of psychiatric drugs was discovered - neuroleptics for psychosis (1952), MAOIs (1952) and tricyclics (1957) for depression, and lithium for mania (1949, although it took a while to catch on).

Since then, there have been plenty of new drugs invented, but not a single one has proven more effective than those available in 1959. New antidepressants like Prozac are safer in overdose, and have milder side effects, than older ones. New "atypical" antipsychotics have different side effects to older ones. But they work no better. Compared to lithium, newer "mood stabilizers" probably aren't even as good. (The only exception is clozapine, a powerful antipsychotic, but dangerous side-effects limit its use.)

Scientifically, the 1960s were the decade of psychiatry. We learned that antipsychotics block dopamine receptors in the brain, and that antidepressants inhibit the reuptake or breakdown of monoamines: noradrenaline and serotonin. So it was natural, if unimaginative, to hypothesise that psychosis is caused by "too much dopamine", and that depression is a case of "not enough monoamines". (As for lithium, we still don't know how it works. Two out of three ain't bad.)

These are still the core dogmas of biological psychiatry. Since the 60s, the amount of money and people involved in the field has exploded, but today's research is still essentially making footnotes to the work done 30 or 40 years ago. It would be somewhat unfair to say that we haven't made any solid advances since then, but only somewhat.

The double helix structure of DNA was discovered in 1953, just after antipsychotics and antidepressants. Imagine if biologists had learned about the double helix, but instead of using it to understand genetics, or catch criminals, or sequence genomes, they spent 50 years arguing about whether all DNA was shaped like that, or only some of it.

The standard response to the charge that psychiatry has lagged behind the rest of medicine is that "It's hard". And it is, because it's about human life, which is complex. But so is the subject matter of every science: the whole point is to seek simplicity in the complexity. Genetics was hard, until we worked out how to do it.

What's remarkable is that so many things in psychiatry are simple. For example: any drug which blocks the dopamine transporter (DAT) in the brain has stimulant effects: increased energy, focus, and motivation, and at high doses, euphoria, grandiosity, and potentially addiction. Cocaine, amphetamine, Ritalin etc. all work this way. There are no cocaine-like drugs that don't block DAT, and no DAT inhibitors that aren't cocaine-like. Simple. The stimulant high looks strikingly like the mania seen in bipolar disorder, and is pretty much the exact opposite of what happens in clinical depression. Couldn't be easier.

There are plenty of cases just like this. What's also striking is that neuroscience has advanced in leaps and bounds since the 1960s. A 60s, or even a 90s, textbook about neuroscience looks incredibly dated - a 60s psychiatry textbook is essentially still up-to-date except for the drug names. Contemporary neuroscience is far from being a mature science like genetics, and it has its problems (see: all my previous posts), but compared to psychiatry, "basic" neuroscience is rock-solid. Although I trained as a basic neuroscientist, so I would say that.

Why? That's an excellent question. But if you ask me, and judging by the academic literature I'm not alone, the answer is: diagnosis. The weak link in psychiatry research is the diagnoses we are forced to use: "major depressive disorder", "schizophrenia", etc.

Basic neuroscientists don't use these. If a neuroscientist wants to study the effect of, say, pepperoni pizza on the human caudate nucleus, they can order a Dominos, recruit their friends as research subjects, pop them in an MRI scanner and get to work doing rigorous (and delicious) science. They've got the pepperoni pizza, they've got the human caudate nucleus - away they go.

Whereas in order to do research in psychiatry, you need patients, and to decide who's a patient and who isn't you basically have to use DSM-IV criteria, which are all but meaningless in most cases. It doesn't matter what amazing new scientific tools you have - genome-wide association studies, proteomics, brain imaging, whatever. If you're using them to study differences between "depressed people" and "normal people", and your "depressed people" are a mix of people who aren't ill and just need a holiday or a divorce, undiagnosed thyroid cases, local bums lying about being depressed to get paid for being in the study, and (if you're lucky) a few "really" clinically depressed people, you'll not get very far.

Edit 10.1.2010 - Changed the date of the discovery of the structure of DNA from 1952 to the correct 1953, oops.

Nature (2010). A decade for psychiatric disorders. Nature, 463 (7277), 9. DOI: 10.1038/463009a

ECT in Nixonland

I've just finished Nixonland, Rick Perlstein's history of the 1960s. Some things I learned: Richard Nixon was a genius, albeit an evil one; the 1960s never ended; Rick Perlstein is my new favourite political author.

The book also reminded me of a sad episode in the history of psychiatry.

George McGovern ran against Nixon as the Democratic candidate for President in 1972. He was essentially the Obama of the 60s generation: unashamedly liberal and intellectual, he defeated the "establishment" candidate, Hubert Humphrey, to clinch the Democratic nomination after a bitter primary campaign, thanks to his idealistic young grass-roots supporters.

McGovern had difficulty choosing his vice-presidential running mate, and eventually chose a little-known Senator from Missouri, Thomas Eagleton. It seemed a safe enough choice. Until Eagleton's first press conference.

Eagleton revealed that he'd been treated in a psychiatric hospital for "exhaustion" - everyone knew he meant clinical depression - three times, and that he had received electroconvulsive therapy twice. McGovern hadn't known this when he picked him.

From there it was all downhill. McGovern initially said he backed Eagleton "1000%". But to some, the idea of putting someone who'd had shock therapy a heartbeat away from the Presidency was unacceptable, and after two weeks of gossip, McGovern dropped him from the ticket.

Perlstein notes that this move wrecked McGovern's image as the idealistic and authentic alternative to politics-as-usual. Polls showed that Americans overwhelmingly trusted Nixon over McGovern, even as the facts about Watergate were emerging. Nixon won a landslide.

The Lonely Grave of Galileo Galilei

Galileo would be turning in his grave. His achievement was to set science on the course which has made it into an astonishingly successful means of generating knowledge. Yet some people not only reject the truths of the science that Galileo did so much to advance; they do it in his name.

Intro: In Denial?

Scientific truth is increasingly disbelieved, and this is a new phenomenon, so much so that new words have been invented to describe it. Leah Ceccarelli defines manufacturoversy as a public controversy over some question (usually scientific) which is not considered by experts on the topic to be in dispute; the controversy is not a legitimate scientific debate but a PR tool created by commercial or ideological interests.

Probably the best example is the attempts by tobacco companies to cast doubt on the association between tobacco smoking and cancer. The techniques involved are now well known. The number of smokers who didn't quit smoking because there was "doubt" over the link with cancer is less clear. More recently, there have been energy industry-sponsored attempts to do the same to the science on anthropogenic global warming. Other cases often cited are the MMR-autism link, Intelligent Design, and HIV/AIDS denial, although the agendas behind these "controversies" are less about money and more about politics and cultural warfare.

Many manufacturoversies are also examples of denialism, which Wikipedia defines as

the position of governments, political parties, business groups, interest groups, or individuals who reject propositions on which a scientific or scholarly consensus exists
although the two terms are not synonymous; one could be a denialist without having any ulterior motives, while conversely, one could manufacture a controversy which did not involve denying anything (e.g. the media-manufactured MMR-causes-autism theory, while completely wrong, didn't contradict any established science; it was just an assertion with no evidence and plenty of reasons to think it was wrong). Denialism is very often accompanied by invocations of Galileo (or occasionally other "rebel scientists"), in an attempt to rhetorically paint the theory under attack as no more than an established dogma.

Just a caveat: in the wrong hands, the concepts of manufacturoversy and denialism could become a means of rubbishing legitimate dissent. The slogan of the denialism blog is "Don't mistake denialism for debate", but the line is sometimes very fine(*). For example, I'm critical of the idea that psychiatric medications and electroconvulsive therapy are of little or no benefit to patients. If one wanted to, it would be possible to make a coherent-sounding case as to why this debate was a manufacturoversy on the part of the psychotherapy industry to undermine confidence in a competing form of treatment which is overwhelmingly supported by the scientific evidence. This would be wrong (mostly).

A History of Error

Anyway. What's interesting is that the idea of inappropriate or manufactured doubt about scientific or historical claims is a very new phenomenon. Indeed, it's very hard to think of any examples before 1950, with the possible exception of the first wave of Creationism in the 1920s. Leah Ceccarelli points out that many of the rhetorical tricks used go back to the Greek Sophists, but until recently the concept of denialism would have been almost meaningless, for the simple reason that it requires a truth to be inappropriately called into question, and before about the 19th century, to a first approximation, we didn't have access to any such truths.

It's easy to forget just how ignorant we were until recently. The average schoolkid today has a more accurate picture of the universe than the greatest genius of 500 years ago, or of 300 years ago, and even of 100 years ago (assuming that the schoolkid knows about the Big Bang, plate tectonics, and DNA - all 20th century discoveries).

To exaggerate, but not very much: until the last couple of centuries of human history, no-one correctly believed in anything, and people had many beliefs that were actively wrong - they believed in ghosts, and witches, and Hiranyagarbha, and Penglai. People erred by believing. Those who disbelieved were likely to be right.

Things have changed. There is more knowledge now; today, when people err, it is increasingly because they reject the truth. No-one in the West now believes in witches, but hundreds of millions of us don't believe that the visible universe originated in a singularity about 13.5 billion years ago, although this is arguably a much bigger mistake to make. In other words, whereas in the past the main problem was belief in false ideas ("dogma"); increasingly the problem is doubting true ones ("denialism").

Myths & Legends of Science

The problem is that the way most people think about science hasn't caught up with the pace of scientific change. In just a couple of hundred years, science has gone from being an assortment of separate, largely bad notions, to being a vast construct of interlinking and mutually supporting theories, the foundations of which are supported by mountains of evidence. Yet all of our most popular myths about science are Robin Hood stories - the hero is the underdog, the rebel, the Maverick who stands up to authority, battles the entrenched beliefs of the Establishment, and challenges dogma. In other words, the hero is a denialist - albeit one who turns out to be right.

Once, this was realistic. Galileo was an Aristotelean cosmology denier; Pasteur was a miasma theory denier; Einstein was a Newtonian physics denier. (In fact, the historical facts are a bit more complicated, as they often are, but this is true enough.) But these stories are out of date. Thanks to the great deniers of the past, there are few, if any, inappropriate dogmas in mainstream science. There, I said it. Thanks to the efforts of scientists past and present, science has become a professional activity with, generally, a very good success rate.

The HIV/AIDS hypothesis and anti-retroviral drugs were developed by orthodox career scientists with proper qualifications working within the mainstream of biology and medicine. They probably wore boring, conventional white coats. There were no exciting paradigm shifts in HIV science. There was no Galileo of HIV; there was Robert Gallo. Yet orthodox science has been successful in delivering treatments for HIV and understanding of the disease (anti-retrovirals are not perfect, but they're a hell of a lot better than untreated AIDS, and just 20 years ago that was what all patients faced.) The skeptics, the rebels, the Robin Hoods of HIV/AIDS - they have been a disaster. If global warming deniers succeed, the consequences will be much worse.

Of course, we do still need intelligent rebels. It would be a foolhardy person(**) who predicted that there will never be another paradigm shift in science; neuroscience, at least, is due at least one more, and there are parts of the remoter provinces of science, such as behavioural genetics, which are in serious need of a critical eye. But the vast majority of modern science, unlike the science of the past, is actually quite good. Hence, rebels are most likely wrong. To make a foolhardy prediction: there will never be another Galileo, in the sense of a single figure who denies the scientific consensus and turns out to be right. There can only be a finite number of Galileos in history - once one succeeds in reforming some field, there is no need for another - and we may well have run out. My previous post on this topic included the bold claim that
if most scientists believe something you probably should believe it, just because scientists say so.
Yet this wasn't always true. To pluck a nice round number out of the air, I'd say that science has only been this trustworthy for 50 years. Most of our myths and ideas about science date from before that era. Science has moved on since the time of Galileo, thanks to his efforts and those of the scientists who came after him, but he is still invoked as a hero by those who deny scientific truth. He would be turning in his grave, in the earth which, as we now know, turns around the sun.

(*) and of course as we know, "it's such a fine line between stupid and clever".
(**) As foolhardy as Francis Fukuyama who in 1989 proclaimed that history had ended and that the world was past the era of ideological struggles.

