The Lonely Grave of Galileo Galilei

Galileo would be turning in his grave. His achievement was to set science on the course which has made it into an astonishingly successful means of generating knowledge. Yet some people not only reject the truths of the science that Galileo did so much to advance; they do it in his name.

Intro: In Denial?

Scientific truth is increasingly disbelieved, and this is a new phenomenon, so much so that new words have been invented to describe it. Leah Ceccarelli defines manufacturoversy as a public controversy over some question (usually scientific) which is not considered by experts on the topic to be in dispute; the controversy is not a legitimate scientific debate but a PR tool created by commercial or ideological interests.

Probably the best-known example is the tobacco industry's attempts to cast doubt on the association between smoking and cancer. The techniques involved are now well known. The number of smokers who didn't quit because there was "doubt" over the link with cancer is less clear. More recently, there have been energy industry-sponsored attempts to do the same to the science on anthropogenic global warming. Other cases often cited are the MMR-autism link, Intelligent Design, and HIV/AIDS denial, although the agendas behind these "controversies" are less about money and more about politics and cultural warfare.

Many manufacturoversies are also examples of denialism, which Wikipedia defines as

the position of governments, political parties, business groups, interest groups, or individuals who reject propositions on which a scientific or scholarly consensus exists
although the two terms are not synonymous; one could be a denialist without having any ulterior motives, while conversely, one could manufacture a controversy which did not involve denying anything (e.g. the media-manufactured MMR-causes-autism theory, while completely wrong, didn't contradict any established science; it was just an assertion with no evidence and plenty of reasons to think it was wrong). Denialism is very often accompanied by invocations of Galileo (or occasionally other "rebel scientists"), in an attempt to rhetorically paint the theory under attack as no more than an established dogma.

Just a caveat: in the wrong hands, the concepts of manufacturoversy and denialism could become a means of rubbishing legitimate dissent. The slogan of the denialism blog is "Don't mistake denialism for debate", but the line is sometimes very fine(*). For example, I'm critical of the idea that psychiatric medications and electroconvulsive therapy are of little or no benefit to patients. If one wanted to, it would be possible to make a coherent-sounding case as to why this debate was a manufacturoversy on the part of the psychotherapy industry to undermine confidence in a competing form of treatment which is overwhelmingly supported by the scientific evidence. This would be wrong (mostly).

A History of Error

Anyway. What's interesting is that the idea of inappropriate or manufactured doubt about scientific or historical claims is a very new phenomenon. Indeed, it's very hard to think of any examples before 1950, with the possible exception of the first wave of Creationism in the 1920s. Leah Ceccarelli points out that many of the rhetorical tricks used go back to the Greek Sophists, but until recently the concept of denialism would have been almost meaningless, for the simple reason that denialism requires a truth to be inappropriately called into question, and before about the 19th century, to a first approximation, we didn't have access to any such truths.

It's easy to forget just how ignorant we were until recently. The average schoolkid today has a more accurate picture of the universe than the greatest genius of 500 years ago, or of 300 years ago, and even of 100 years ago (assuming that the schoolkid knows about the Big Bang, plate tectonics, and DNA - all 20th century discoveries).

To exaggerate, but not very much: until the last couple of centuries of human history, no-one correctly believed in anything, and people had many beliefs that were actively wrong - they believed in ghosts, and witches, and Hiranyagarbha, and Penglai. People erred by believing. Those who disbelieved were likely to be right.

Things have changed. There is more knowledge now; today, when people err, it is increasingly because they reject the truth. No-one in the West now believes in witches, but hundreds of millions of us don't believe that the visible universe originated in a singularity about 13.5 billion years ago, although this is arguably a much bigger mistake to make. In other words, whereas in the past the main problem was belief in false ideas ("dogma"), increasingly the problem is doubting true ones ("denialism").

Myths & Legends of Science

The problem is that the way most people think about science hasn't caught up with the pace of scientific change. In just a couple of hundred years, science has gone from being an assortment of separate, largely bad notions, to being a vast construct of interlinking and mutually supporting theories, the foundations of which are supported by mountains of evidence. Yet all of our most popular myths about science are Robin Hood stories - the hero is the underdog, the rebel, the maverick who stands up to authority, battles the entrenched beliefs of the Establishment, and challenges dogma. In other words, the hero is a denialist - albeit one who turns out to be right.

Once, this was realistic. Galileo was an Aristotelean cosmology denier; Pasteur was a miasma theory denier; Einstein was a Newtonian physics denier. (In fact, the historical facts are a bit more complicated, as they often are, but this is true enough.) But these stories are out of date. Thanks to the great deniers of the past, there are few, if any, inappropriate dogmas in mainstream science. There, I said it. Thanks to the efforts of scientists past and present, science has become a professional activity with, generally, a very good success rate.

The HIV/AIDS hypothesis and anti-retroviral drugs were developed by orthodox career scientists with proper qualifications working within the mainstream of biology and medicine. They probably wore boring, conventional white coats. There were no exciting paradigm shifts in HIV science. There was no Galileo of HIV; there was Robert Gallo. Yet orthodox science has been successful in delivering treatments for HIV and understanding of the disease (anti-retrovirals are not perfect, but they're a hell of a lot better than untreated AIDS, and just 20 years ago that was what all patients faced.) The skeptics, the rebels, the Robin Hoods of HIV/AIDS - they have been a disaster. If global warming deniers succeed, the consequences will be much worse.

Of course, we do still need intelligent rebels. It would be a foolhardy person(**) who predicted that there will never be another paradigm shift in science; neuroscience, for one, is due at least one more, and there are parts of the remoter provinces of science, such as behavioural genetics, which are in serious need of a critical eye. But the vast majority of modern science, unlike the science of the past, is actually quite good. Hence, rebels are most likely wrong. To make a foolhardy prediction: there will never be another Galileo in the sense of a single figure who denies the scientific consensus and turns out to be right. There can only be a finite number of Galileos in history - once one succeeds in reforming some field, there is no need for another - and we may well have run out. My previous post on this topic included the bold claim that
if most scientists believe something you probably should believe it, just because scientists say so.
Yet this wasn't always true. To pluck a nice round number out of the air, I'd say that science has only been this trustworthy for 50 years. Most of our myths and ideas about science date from before that era. Science has moved on since the time of Galileo, thanks to his efforts and those of the scientists who came after him, but he is still invoked as a hero by those who deny scientific truth. He would be turning in his grave, in the earth which, as we now know, turns around the sun.

(*) and of course as we know, "it's such a fine line between stupid and clever".
(**) As foolhardy as Francis Fukuyama who in 1989 proclaimed that history had ended and that the world was past the era of ideological struggles.


We Really Are Sorry, But Your Soul is Still Dead

Over the past few weeks, Christian neurosurgeon Michael Egnor, who writes on Evolution News & Views, and atheist neurologist Steve Novella (Neurologica) have been having an, er, vigorous debate about what neuroscience can tell us about materialism and the soul. As reported in New Scientist, this is part of an apparent attempt to undermine the materialist position (that all mental processes are the product of neural processes), on the part of the same people who brought you Intelligent Design. Many are calling it the latest front in the Culture War.

A couple of days ago Denyse O'Leary, a Canadian journalist who writes the blog Mindful Hack(*), posted some comments from Egnor about the great Wilder Penfield and his idea of "double consciousness" (my emphasis)

[By stimulating points on the cerebral cortex with electrodes during surgery] Penfield found that he could invoke all sorts of things- movements, sensations, memories. But in every instance ... the patients were aware that the stimulation was being done to them, but not by them. There was a part of the mind that was independent of brain stimulation and that constituted a part of subjective experience that Penfield was not able to manipulate with his surgery.... Penfield called this "double consciousness", meaning that there was a part of subjective experience that he could invoke or modify materially, and a different part that was immune to such manipulation.
I generally find arguing about religion boring, and I've no wish to enlist in any Culture Armies (I'm British - we're a nation of Culture Pacifists), but I'm going to say something about this, because it's just bad neuroscience. Maybe there are good arguments against materialism, but this isn't one.

Unfortunately, neither O'Leary nor Egnor allow comments on their blogs, but immediately after posting this I emailed them both with a link to this post. We'll see what happens.

Anyway, Penfield, whom you can read about in great detail at Neurophilosophy, was a pioneer in the functional mapping of the cerebral cortex. He was a neurosurgeon, and as part of his surgical procedures he would systematically stimulate different points of the cerebral cortex with an electrode, so as to locate which areas were responsible for important functions and avoid damaging them. Michael Egnor, following Penfield, is correct that this kind of point stimulation of the cortex tends to evoke sensations or motor responses which are experienced by the patient as external. Point stimulation is not reported to be able to affect our "higher" mental faculties such as our beliefs, desires, decisions, and "will"; it might evoke a movement of the arm, say, but the subject will report that this felt like an involuntary reflex, not a willed action.

However, to take this as evidence for some kind of a dualism between a form of consciousness which can be manipulated via the brain and another, non-material level of consciousness which can't (the "soul" in other words), is like saying that because hammering away at one key of a piano produces nothing but an annoying noise, there must be something magical going on when a pianist plays a Mozart concerto. Stimulating a single small part of the brain is about the crudest manipulation imaginable; all we can conclude from the results of point-stimulation experiments is that some kinds of mental processes are not controlled by single points on the cortex. This should not be surprising, since the brain is a network of 100 billion cells; what's interesting, in fact, is that stimulating a few million of these cells with the tip of an electrode can do anything.

Neuroskeptic is frequently critical of fMRI, but one of my favorite papers is an fMRI study, Reading Hidden Intentions in the Human Brain. In this experiment the authors got volunteers to freely decide on one of two courses of action several seconds before they were required to actually do the chosen act. (It was deciding between adding and subtracting two numbers on a screen.) They discovered that it was possible to determine (albeit with less than 100% accuracy) what subjects were planning to do on any given trial, before they actually did it, through an analysis of the pattern of neural activity across a large area of the medial prefrontal cortex.

The green area on this image shows the area over which activity predicts the future action. Importantly, no one point on the cortex is associated with one choice over another, but the combination of activity across the whole area is (once you put it through some brilliant maths).
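To get a feel for what that "brilliant maths" involves, here is a toy sketch of the general idea - not the authors' actual pipeline, just an illustration. It simulates a patch of voxels in which no single voxel distinguishes "add" from "subtract" trials very well, but a classifier trained on the whole pattern does. The trial numbers, voxel counts and the scikit-learn classifier are my own arbitrary choices.

```python
# Toy illustration of multi-voxel pattern decoding (not the authors' pipeline).
# Assumption: a linear classifier over many weakly informative voxels,
# trained and tested on separate trials via cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500
intentions = rng.integers(0, 2, n_trials)            # 0 = add, 1 = subtract

# Each voxel carries only a tiny signal, buried in noise...
voxel_weights = rng.normal(0, 0.15, n_voxels)
activity = rng.normal(0, 1, (n_trials, n_voxels)) + np.outer(intentions, voxel_weights)

# ...so no single voxel predicts the intention well,
single_voxel_acc = max(
    cross_val_score(LogisticRegression(), activity[:, [v]], intentions, cv=5).mean()
    for v in range(20))    # check a handful of individual voxels

# ...but the pattern across the whole patch does.
pattern_acc = cross_val_score(LogisticRegression(max_iter=1000),
                              activity, intentions, cv=5).mean()

print(f"best single voxel: {single_voxel_acc:.2f}, full pattern: {pattern_acc:.2f}")
```

The punchline is the same as in the real study: the information is carried by the pattern across the region, not by any one spot.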

Based on this evidence, it's reasonable to suppose that we could manipulate human intentions if, instead of just one electrode, we had several thousand (or million), and if we knew exactly which pattern of stimulation to apply. Or to run with the piano analogy: we could play a wonderful tune if we were skilled enough to play the right notes in the right combinations in the right order.

In fact, there are plenty of things which already are known to alter "higher" processes. At the correct doses, acetylcholine receptor antagonists such as scopolamine and atropine can produce a state of delirium with hallucinations which are experienced as being indistinguishable from reality. Someone might talk to a non-existent friend or try to smoke a non-existent cigarette, without any knowledge of having taken a drug at all. Erowid has many first-hand accounts from people who have taken such drugs "recreationally" (a very bad idea, as you'll gather if you read a few.)

Then there's psychiatric illness. Someone who's psychotic may hear voices and believe them to be real communications from God, or the dead, or a radio transmitter in his head. A bipolar patient in a manic state may believe herself to have incredible talents or supernatural powers and dismiss as nonsense any suggestion that this is a result of her illness. In general those suffering from acute abnormal mental states may behave in a manner which is completely out of character, or think and talk in bizarre ways, without being aware of doing so. This is called "lacking in insight".

We don't yet know the neurobiological basis of these states, but that they (often) have one is beyond doubt; give the appropriate drugs - or use electricity to induce seizures - and they (usually) vanish. Many people in the advanced phases of dementia, especially Alzheimer's disease, as a result of neurodegeneration, are similarly unaware of being ill - hence the sad sight of formerly intelligent men and women wandering the streets, not knowing how they got there. Brain damage, or stimulation of deep brain structures (not the cortex which Penfield studied), can lead to profound alterations in personality and emotion. To summarize - if you seek the soul in the data of neuroscience, you will need to look harder than Penfield did.

Link: Sorry, But Your Soul Just Died - Tom Wolfe. A classic.

(*) - Mindful Hack - not to be confused with Mind Hacks.


New Deep Brain Stimulation Blog

Via Dr Shock, a new blog has just been started by an anonymous American man who will soon be undergoing deep brain stimulation (DBS) for clinical depression, as part of a blinded trial.

It sounds like it's going to be fascinating reading - to my knowledge this is the first blog of its kind. I've always been a big believer in the importance of first-hand reports in psychiatry and neurology, but sadly these are often in short supply compared to the huge proliferation of MRI scans, graphs and clinical rating scales. Sometimes, you just need to listen to people.

The study, called 278-005, also known as BROADEN, will involve electrical stimulation of the subgenual cingulate cortex ("Area 25"), the most commonly chosen target for DBS in depression. The preliminary reports from subgenual cingulate DBS have been extremely positive, but there have been no large scale clinical trials to date.

Lessons from the Placebo Gene

Update: See also Lessons from the Video Game Brain



The Journal of Neuroscience has published a Swedish study which, according to New Scientist (and the rest), is something of a breakthrough:

First 'Placebo Gene' Discovered
I rather like the idea of a dummy gene made of sugar, or perhaps a gene for being Brian Molko, but what they're referring to is a gene, TPH2, which allegedly determines susceptibility to the placebo effect. Interesting, if true. Genetic Future was skeptical of the study because of its small sample size. It is small, but I'm not too concerned about that because there are, unfortunately, other serious problems with this study and the reporting on it. I should say at the outset, however, that most of what I'm about to criticize is depressingly common in the neuroimaging literature. The authors of this study have done some good work in the past and are, I'm sure, no worse than most researchers. With that in mind...



The study included 25 people diagnosed with Social Anxiety Disorder (SAD). Some people see the SAD diagnosis as a drug company ploy to sell pills (mainly antidepressants) to people who are just shy. I disagree. Either way, these were people who complained of severe anxiety in social situations. The 25 patients were all given placebo pill treatment for 8 weeks.



Before and after the treatment they each got an [H2 15O] PET scan, which measures regional cerebral blood flow (rCBF) in the brain, something that is generally assumed to correlate with neural activity. It's a bit like fMRI, although the physics are different. During the scans the subjects had to make a brief speech in front of 6 to 8 people. This was intended to make them anxious, as it would do. The patients' self-reported social anxiety in everyday situations was also assessed every 2 weeks by questionnaires and clinical interviews.



The patients were then split into two groups based upon their final status: "placebo responders" were those who ended up with a "CGI" rating of 1 or 2 - meaning that they reported that their anxiety had got a lot better - and "placebo nonresponders" who didn't. (You may take issue with this terminology - if so, well done, and keep reading). Brain activation during the public speaking task was compared between these two groups. The authors also looked at two genes, 5HTTLPR and TPH2. Both are involved in serotonin signalling and both have been associated (in some studies) with vulnerability to anxiety and depression.



The results: The placebo responders reported less anxiety following treatment - unsurprisingly, because this is why they were classed as responders. On the PET scans, the placebo responders showed reduced amygdala activity during the second public speaking task compared to the first one; the non-responders showed no change. This is consistent with the popular and fairly sensible idea that the amygdala is active during the experience of emotion, especially fear and anxiety. However, this effect was in fact marginal, and it was only significant under a region-of-interest analysis, i.e. when they specifically looked at the data from the amygdala; in a more conservative whole-brain analysis they found nothing (or rather they did, but they wrote it off as uninteresting, as cognitive neuroscientists generally do when they see blobs in the cerebellum and the motor cortex):

PET data: whole-brain analyses

Exploratory analyses did not reveal significantly different treatment-induced patterns of change in responders versus nonresponders. Significant within-group alterations outside the amygdala region were noted only in nonresponders, who had increased (pre < post) rCBF in the right cerebellum ... and in a cluster encompassing the right primary motor and somatosensory cortices...
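Why would an effect be "significant" in a region-of-interest analysis but vanish in a whole-brain analysis? Largely because of how many statistical tests each involves. Here's a back-of-the-envelope sketch; the voxel count and thresholds are illustrative assumptions rather than the study's actual parameters, and real packages use cluster-level or random-field corrections rather than raw Bonferroni, but the logic is similar.

```python
# Illustrative only: the same voxel-level effect judged against an ROI
# threshold vs. a whole-brain corrected threshold. Numbers are assumptions,
# not taken from the Furmark et al. analysis.
from scipy import stats

p_uncorrected = 0.02         # a "marginal" voxel-level p-value
n_roi_tests = 1              # one a-priori amygdala ROI
n_wholebrain_tests = 50000   # order-of-magnitude voxel count after masking
alpha = 0.05

print("survives ROI analysis:       ", p_uncorrected < alpha / n_roi_tests)
print("survives whole-brain (Bonf.):", p_uncorrected < alpha / n_wholebrain_tests)

# Equivalently, the z-score a voxel would need to reach in each case:
print("z needed, ROI:         %.2f" % stats.norm.isf(alpha / 2))
print("z needed, whole brain: %.2f" % stats.norm.isf(alpha / (2 * n_wholebrain_tests)))
```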
As for the famous "placebo gene", they found that two genetic variants, 5HTTLPR ll and TPH2 GG, were associated with a bigger drop in amygdala activity from before treatment to after treatment. TPH2 GG was also associated with the improvement in anxiety over the 8 weeks.
In a logistic regression analysis, the TPH2 polymorphism emerged as the only significant variable that could reliably predict clinical placebo response (CGI-I) on day 56, homozygosity for the G allele being associated with better outcome. Eight of the nine placebo responders (89%), for whom TPH2 gene data were available, were GG homozygotes.
You could call this a gene correlating with the "placebo effect", although you'd probably be wrong (see below). There are a number of important lessons to take home here.
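For the curious, here is roughly what that kind of analysis looks like in practice - a minimal sketch built from the counts quoted above and later in this post (8 of the 9 genotyped responders, and 8 of the 15 non-responders, were TPH2 GG homozygotes). The published model included additional predictors, so don't expect this to reproduce their p = 0.04.

```python
# Minimal sketch of "predicting placebo response from TPH2 genotype".
# Counts as quoted in this post; the published logistic regression
# included other variables, so the p-value here will not match theirs.
import numpy as np
import statsmodels.api as sm

# 9 responders (8 GG homozygotes), 15 non-responders (8 GG homozygotes)
gg        = np.array([1]*8 + [0]*1 + [1]*8 + [0]*7)
responder = np.array([1]*9 + [0]*15)

model = sm.Logit(responder, sm.add_constant(gg)).fit(disp=0)
print(model.summary())                       # coefficient on GG and its p-value
print("odds ratio for GG:", np.exp(model.params[1]))
```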



1. Dr Placebo, I presume? - Be careful what you call the placebo effect



This study couldn't have discovered a "placebo gene", even if there is one. It didn't measure the placebo effect at all.



You'll recall that the patients in this study were assessed before and after 8 weeks of placebo treatment (sugar pills). Any changes occurring during these 8 weeks might be due to a true "placebo effect" - improvement caused by the patient's belief in the power of the treatment. This is the possibility that gets some people rather excited: it's mind over matter, man! This is why the word "placebo" is often preceded by words like "Amazing", "Mysterious", or even "Magical" - as if Placebo were the stage-name of a 19th century conjuror. (As opposed to the stage name of androgynous pop-goth Brian Molko ... I've already done that one.)



But, as they often do, more prosaic explanations suggest themselves. Most boringly, the patients might have just got better. Time is the great healer, etc., and two months is quite a long time. Maybe one of the patients hooked up with a cute guy and it did wonders for their self-confidence. Maybe the reason why the patients volunteered for the study when they did was because their anxiety was especially bad, and by the time of the second scan it had returned to normal (regression towards the mean). Maybe the study itself made a difference, by getting the patients talking about their anxiety with sympathetic professionals. Maybe the patients didn't actually feel any better at all, but just said they did because that's what they thought they were expected to say. I could go on all day.



In my opinion most likely, the patients were just less anxious having their second PET scan, once they had survived the first one. PET scans are no fun: you get a catheter inserted into your arm, through which you're injected with a radioactive tracer compound. Meanwhile, your head is fixed in place within a big white box covered in hazard signs. It's not hard to see that you'd probably be much more anxious on your first scan than the second time around.



So, calling the change from baseline to 8 weeks a "placebo response", and calling the people who got better "placebo responders", is misleading (at least it misled every commentator on this study so far). The only way to measure the true placebo effect is to compare placebo-treated people with people who get no treatment at all. This wasn't done in this study. It rarely is. This is something which confuses an awful lot of people. When people talk about the placebo effect, they're very often referring to the change in the placebo group, which as we've seen is not the same thing at all, and has nothing even vaguely magical or mysterious about it. (For example, some armchair psychiatrists like to say that since patients in the placebo group in antidepressant drug trials often show large improvements, sugar pills must be helpful in depression.) That said, there was another study in the same issue of the same journal which did measure an actual placebo effect.
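The point is easy to demonstrate with a toy simulation. Suppose there is no placebo effect whatsoever, only regression towards the mean: patients enrol when their anxiety is unusually bad, and eight weeks later they are measured on a more ordinary day. Every number below is invented.

```python
# Toy simulation: an apparent "placebo response" with zero placebo effect.
# All numbers are invented; the only ingredient is regression to the mean.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
usual_anxiety = rng.normal(60, 10, n)        # each patient's typical score

# Patients enrol on a particularly bad day (a transient bump in their score)...
baseline = usual_anxiety + rng.normal(8, 5, n)
# ...and eight weeks later they are measured on an ordinary day.
week8 = usual_anxiety + rng.normal(0, 5, n)

print("mean 'improvement' with no treatment at all: %.1f points" % (baseline - week8).mean())
```

An untreated comparison group would show exactly the same "improvement", which is why you need one before you can call anything a placebo effect.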



2. Beware Post Hoc-us Pocus



From the way it's been reported, you would probably assume that this was a study designed to investigate the placebo effect. However, in the paper we read:

Patients were taken from two previously unpublished RCTs that evaluated changes in regional cerebral blood flow after 56 d of pharmacological treatment by means of positron emission tomography. ... The clinical PET trials ... included a total of 108 patients with SAD. There were three treatment arms in the first study and six arms in the second. ... Only the pooled placebo data are included herein, whereas additional data on psychoactive drug treatment will be reported separately.
Personally, I find this odd. Why have so many groups if you're interested in just one of them? Even if the data from the drug groups are published, it's unusual to report on some aspect of the placebo data in a separate paper before writing up the main results of an RCT. To me it seems likely that when this study was designed, no-one intended to search for genes associated with the placebo effect. I suspect that the analysis the authors report on here was post-hoc; having looked at the data, they looked around for any interesting effects in it.



To be clear, there's no proof that this is what happened here, but anyone who has worked in science will know that it does happen, and to my jaded eyes it seems probable that this is a case of it. For one thing, if this was a study intended to investigate the placebo effect, it was poorly designed (see above).



There's nothing wrong with post-hoc findings. If scientists only ever found what they set out to look for, science wouldn't have got very far. However, unless they are clearly reported as post-hoc, the problem of the Texas Sharpshooter arises - findings may appear more significant than they really are. In this case, the TPH2 gene was only a significant predictor of "placebo response" with p=0.04, which is marginal at the best of times.
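How big a problem is this? A quick simulation makes the point: if you search a dataset for the best-looking of several candidate associations and report only that one, nominally "significant" p-values turn up far more often than one time in twenty, even when there is nothing to find. The sample size, number of candidates and test used below are arbitrary choices of mine, not the study's.

```python
# The Texas Sharpshooter in miniature: test many candidate predictors against
# pure noise, but only report the best-looking one. All numbers are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subjects, n_candidates, n_simulations = 24, 11, 5000

lucky_hits = 0
for _ in range(n_simulations):
    outcome = rng.integers(0, 2, n_subjects)        # "responder" yes/no, at random
    p_values = []
    for _ in range(n_candidates):                   # several unrelated "variants"
        predictor = rng.normal(size=n_subjects)
        p_values.append(stats.ttest_ind(predictor[outcome == 1],
                                        predictor[outcome == 0]).pvalue)
    lucky_hits += min(p_values) < 0.05              # report only the best one

print("null datasets yielding a 'finding': %.0f%%" % (100 * lucky_hits / n_simulations))
# Far more than the nominal 5%, even though nothing real is going on.
```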



The reason researchers feel the need to do this kind of thing is because of the premium the scientific community (and hence scientific publishing) places on getting "positive results". Plus, no-one wants to PET scan over 100 people (they're incredibly expensive) and report that nothing interesting happened. However, this doesn't make it right (rant continues...)



3. Science Journalism Is Dysfunctional



Sorry to go on about this, but really it is. New Scientist's write-up of this study was, relatively speaking, quite good - they did at least include some caveats ("The gene might not play a role in our response to treatment for all conditions, and the experiment involved only a small number of people."). They did have a couple of factual errors, such as saying that "8 of the 10 responders had two copies [of the TPH2 G allele], while none of the non-responders did" - actually 8 of the 15 non-responders did - but anyway.



The main point is that they didn't pick up on the fact that this experiment didn't measure the placebo effect at all, which makes their whole article misleading. (The newspapers generally did an even worse job.) I was able to write this post because I had nothing else on this weekend and reading papers like this is a major part of my day job. Ego aside, I'm pretty good at this kind of thing. That's why I write about it, and not about other stuff. And that's why I no longer read science journalism (well, except to blog about how rubbish it is.)



It would be wrong to blame the journalist who wrote the article for this. I'm sure they did the best they could in the time available. I'm sure that I couldn't have done any better. The problem is that they didn't have enough time, and probably didn't have enough specialist knowledge, to read the study critically. It's not their fault, it's not even New Scientist's fault, it's the fault of the whole idea of science journalism, which involves getting non-experts to write, very fast, about complicated issues and make them comprehensible and interesting to lay readers even if they're manifestly not. I used to want to be a science journalist, until I realised that that was the job description.



T. Furmark, L. Appel, S. Henningsson, F. Ahs, V. Faria, C. Linnman, A. Pissiota, O. Frans, M. Bani, P. Bettica, E. M. Pich, E. Jacobsson, K. Wahlstedt, L. Oreland, B. Langstrom, E. Eriksson, M. Fredrikson (2008). A Link between Serotonin-Related Gene Polymorphisms, Amygdala Activity, and Placebo-Induced Relief from Social Anxiety. Journal of Neuroscience, 28(49), 13066-13074. DOI: 10.1523/JNEUROSCI.2534-08.2008

Alas, Poor Noradrenaline

Previously I posted about the much-maligned serotonin theory of depression and tentatively defended it, while making it clear that "low serotonin" was certainly not the whole story. Critics have noted that the serotonin-is-happiness hypothesis has become folk wisdom, despite being clearly incomplete, and this is generally ascribed to the marketing power of the pharmaceutical industry. What's also interesting is that a predecessor and rival to the serotonin hypothesis, the noradrenaline theory, failed to achieve such prominence.

Everyone's heard of serotonin. Only doctors and neuroscientists have heard of noradrenaline (called norepinephrine if you're American), which is another monoamine neurotransmitter. Chemically the two molecules are rather different, but they both play roughly parallel roles in the brain, in the sense that both are released from a small number of cells originating in the brain stem onto areas throughout the brain in what's often described as a "sprinkler system" arrangement.

Forty years ago, noradrenaline was seen by most psychopharmacologists as being the key chemical determinant of mood, and the leading theory on the cause of depression was some kind of noradrenaline deficiency. At this time, serotonin was generally seen as being at best of uncertain importance. In 1967 two superstars of psychopharmacology, Joseph Schildkraut and Seymour Kety, wrote a review article in Science in which they summarized the evidence for a noradrenaline theory of depression. It still makes quite convincing reading, and since 1967, more evidence has come to light; reboxetine, which selectively inhibits the reuptake of noradrenaline, is at least as effective as Prozac, which is selective for serotonin. Although it's slightly controversial, it also seems as though antidepressants which target both monoamines are slightly more effective than those which only target either.

So what happened to the noradrenaline theory? If pressed, most experts will admit that there must be something in it, and it is still discussed - but noradrenaline just doesn't get talked about as much as serotonin in the context of depression and mood. So far as I can see there is little good reason for this - given that both serotonin and noradrenaline seem to be involved in mood, the best thing would be to study both, and in particular to study their interactions. Yet this is not what most scientists are doing. Noradrenaline has just dropped off the scientific radar.

Because everyone likes graphs, and because I had nothing better to do today, I knocked together a couple to show the rise and fall of noradrenaline. The first shows the total number of PubMed entries for each year from 1969 to 2007, containing hits in the Title or Abstract for [noradrenaline OR norepinephrine] AND [depression OR depressive OR antidepressant OR antidepressants OR antidepressive] vs. [Serotonin OR 5HT OR 5-hydroxytryptamine] AND [depression OR depressive OR antidepressant OR antidepressants OR antidepressive]. As you can see, the two lines track each other very closely until about 1990, when interest in serotonin in the context of depression / antidepressants suddenly takes off, leaving noradrenaline languishing far behind.
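For anyone who wants to reproduce or tweak these counts, they can be pulled straight from PubMed via NCBI's E-utilities. Below is a minimal sketch; the query strings follow the ones described above (the follow-up note at the end of this post suggests also adding "5-HT"), and for serious use you should throttle requests and register an email or API key with NCBI.

```python
# Sketch: yearly PubMed hit counts for the queries described above,
# via NCBI E-utilities (esearch). Be polite to NCBI's servers.
import time
import requests
import xml.etree.ElementTree as ET

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

DEPRESSION = ("(depression[tiab] OR depressive[tiab] OR antidepressant[tiab]"
              " OR antidepressants[tiab] OR antidepressive[tiab])")
QUERIES = {
    "noradrenaline": "(noradrenaline[tiab] OR norepinephrine[tiab]) AND " + DEPRESSION,
    "serotonin": "(serotonin[tiab] OR 5HT[tiab] OR 5-hydroxytryptamine[tiab]) AND " + DEPRESSION,
}

def yearly_count(term, year):
    """Number of PubMed records matching `term` published in `year`."""
    params = {"db": "pubmed", "term": f"({term}) AND {year}[pdat]", "retmax": 0}
    xml = requests.get(ESEARCH, params=params, timeout=30).text
    return int(ET.fromstring(xml).findtext("Count"))

for year in range(1969, 2008):
    print(year, {name: yearly_count(q, year) for name, q in QUERIES.items()})
    time.sleep(0.4)   # stay under NCBI's rate limit
```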

What's fascinating is that the total amount of published research about noradrenaline also peaked around 1990 and has since declined markedly, while publications about serotonin and dopamine (another monoamine neurotransmitter) have been steadily growing.

What happened around 1990? Prozac, the first commercially successful selective serotonin reuptake inhibitor (SSRI), was released onto the U.S. market in late 1987. Bearing in mind that science generally takes a year or so to make it from the lab to the journal page, it's tempting to see 1990 as the year of the onset of the "Prozac Effect". Prozac notoriously achieved a huge amount of publicity, far more than was granted to older antidepressants such as imipramine, despite its probably being less effective. Could this be one reason why serotonin has eclipsed noradrenaline in the eyes of scientists?

A couple of caveats: All I've shown here are historical trends, which is not in itself proof of causation. Also, the fall in the total number of publications mentioning noradrenaline is much too large to be directly due to the stall in the number of papers about noradrenaline and depression / antidepressants. However, there could be indirect effects (scientists might be less interested in basic research on noradrenaline if they see it as having no relevance to medicine.)

Note 16/12/08: I've realized that it would have been better to include the term "5-HT" in the serotonin searches as this is a popular way of referring to it. I suspect that had I done this the serotonin lines would have been higher, but the trends over time would be the same.

J. J. Schildkraut, S. S. Kety (1967). Biogenic Amines and Emotion. Science, 156(3771), 21-30. DOI: 10.1126/science.156.3771.21

Do Herbs Get a Bad Press?

A neat little study in BMC Medicine investigates how newspapers report on clinical research. The authors tried to systematically compare the tone and accuracy of write-ups of clinical trials of herbal remedies with those of trials of pharmaceuticals. The results might surprise you.

The research comes from a Canadian group, and most of the hard slog was done by two undergrads, who read through and evaluated 105 trials and 553 newspaper articles about those trials. (They didn't get named as authors on the paper, which seems a bit mean, so let's take a moment to appreciate Megan Koper and Thomas Moran.) The aim was to take all English language newspaper articles about clinical trials printed between 1995 and 2005 (as found on LexisNexis). Duplicate articles were weeded out and every article was then rated for overall tone (subjective), the number of risks and benefits reported, whether it reported on conflicts of interest or not, and so forth. The trials themselves were also rated.

As the authors say

This type of study, comparing media coverage with the scientific research it covers is a well recognized method in media studies. Is the tone of reporting different for herbal remedy versus pharmaceutical clinical trials? Are there differences in the sources of trial funding and the reporting of that issue? What about the reporting of conflicts of interest?
There were a range of findings. Firstly, newspapers were generally poor at reporting on important facts about trials such as conflicts of interest and methodological flaws. No great surprise there. They also tended to understate risks, especially in regard to herbal trials.

The most novel finding was that newspaper reports of herbal remedy trials were quite a lot more likely to be negative in tone than reports of pharmaceutical trials. The graphs here show this: out of 201 newspaper articles about pharmaceutical clinical trials, not one was negative in overall tone, and most were actively positive about the drug, while the herbs got a harsh press, with roughly as many negative articles as positive ones. (Rightmost two bars.)


This might partly be explained by the fact that slightly more of the herbal remedy trials found a negative result, but the difference in this case was fairly small (leftmost two bars). The authors concluded that
Those herbal remedy clinical trials that receive newspaper coverage are of similar quality to pharmaceutical clinical trials ... Despite the overall positive results and tone of the clinical trials, newspaper coverage of herbal remedy clinical trials was more negative than for pharmaceutical clinical trials.
Bet you didn't see that coming - the media (at any rate in Britain) are often seen as reporting uncritically on complementary and alternative medicine. These results suggest that this is a simplification, but remember that this study only considered articles about specific clinical trials - not general discussions of treatments or diseases. The authors remark:
[The result] is contrary to most published research on media coverage of CAM. Those studies consider a much broader spectrum of treatments and the media content is generally anecdotal rather than evidence based. Indeed, journalists are displaying a degree of skepticism rare for medical reporting.
So, it's not clear why journalists are so critical of trials of herbs when they're generally fans of CAM the rest of the time. The authors speculate:
It is possible that once confronted with actual evidence, journalists are more critical or skeptical. It may be considered more newsworthy to debunk commonly held beliefs and practices related to CAM, to go against the trend of positive reporting in light of evidence. It is also possible that journalists who turn to press releases of peer-reviewed, high-impact journals have subtle biases towards scientific method and conventional medicine. Also, journalists turn to trusted sources in the biomedical community for comments on clinical trials, both herbal and pharmaceutical, potentially leading to a biomedical bias in reporting trial outcomes.
If you forgive the slightly CAM-ish language (biomedical indeed), you can see that they make some good suggestions - but we don't really know. This is the problem with this kind of study (as the authors note) - the fact that a story is "negative" about herbs could mean a lot of different things. We also don't know how many other articles there were about herbs which didn't mention clinical trials, and because this article only considered articles referring to primary literature, not meta-analyses (I think), it leaves out a lot of material. Meta-analyses are popular with journalists and are often more relevant to the public than single trials are.

Still, it's a paper which challenged my prejudices (like a lot of bloggers I have a bit of a persecution complex about the media being pro-CAM) and a nice example of empirical research on the media.

Tania Bubela, Heather Boon, Timothy Caulfield (2008). Herbal remedy clinical trials in the media: a comparison with the coverage of conventional pharmaceuticals. BMC Medicine, 6(1). DOI: 10.1186/1741-7015-6-35

The Spooky Case of the Disappearing Crap Science Article

Just a few hours ago, I drafted a post about a crap science story in the Daily Telegraph called "Stress of modern life cuts attention spans to five minutes".

The pressures of modern life are affecting our ability to focus on the task in hand, with work stress cited as the major distraction, it said.
Declining attention spans are causing household accidents such as pans being left to boil over on the hob, baths allowed to overflow, and freezer doors left open, the survey suggests.
A quarter of people polled said they regularly forget the names of close friends or relatives, and seven per cent even admitted to momentarily forgetting their own birthdays.
The study by Lloyds TSB insurance showed that the average attention span had fallen to just 5 minutes, down from 12 minutes 10 years ago.
But the over-50s are able to concentrate for longer periods than young people, suggesting that busy lifestyles and intrusive modern technology rather than old age are to blame for our mental decline.
"More than ever, research is highlighting a trend in reduced attention and concentration spans, and as our experiment suggests, the younger generation appear to be the worst afflicted," said sociologist David Moxon, who led the survey of 1,000 people.
Almost identical stories appeared in the Daily Mail (no surprise) and, for some reason, an awful lot of Indian news sites. So I hacked out a few curmudgeonly lines - but before I posted them, the story had vanished! (Update: It's back! See end of post). Spooky. But first, the curmudgeonry:
  • Crap science story in "crap" shocker
The term "attention span" is meaningless - attention to what? Are we so stressed out that after five minutes down the pub, we tend to forget our pints and wander home in a daze? You could talk about attention span for a particular activity, so long as you defined your criteria for losing attention - for example, you could measure the average time a student sits in a lecture before he starts doodling on his notes. Then if you wanted you could find out if stress affects that time. I wouldn't recommend it, because it would be very boring, but it would be a scientific study.

This news, however is not based on a study of this kind. It's based on a survey of 1,000 people i.e. they asked people how long their attention span was and whether they felt they were prone to accidents. No doubt the questions were chosen in such a way that they got the answers they wanted. Who are "they"? - Lloyds TSB insurance, or rather, their PR department, who decided that they would pay Mr David Moxon MSc. to get them the results they wanted. He obliged, because that's what he does. Then the PR people wrote up Moxon's "results" as a press release and sent it out to all the newspapers, where stressed-out, over-worked journalists (there's a grain of truth to every story!) leapt at the chance to fill some precious column inches with no thinking required. Lloyds get their name in the newspapers, their PR company gets cash, and Moxon gets cash and his name in the papers so he gets more clients in the future. Sorted!

How do I know this? Well, mainly because I've read Ben Goldacre's Bad Science and Nick Davies's Flat Earth News, two excellent books which explain in great detail how modern journalism works and how this kind of PR junk routinely ends up on the pages of your newspapers in the guise of science or "surveys". However, even if I hadn't, I could have worked it out by just consulting Google regarding Mr Moxon. Here is his website. Here's what Moxon says about his services:
David can provide a wide range of traditional behavioural research methods on a diverse range of social, psychological and health topics. David works in partnership with clients delivering precisely the brief they require whilst maintaining academic integrity.
The more commonly provided services include:
  • The development and compilation of questionnaire or survey questions

  • Statistical analysis of data (including SPSS® if required)

  • The development of personality typologies

  • The production of media friendly tests and quizzes (always with scoring systems)

  • The production of primary research reports identifying ‘top line findings’ as well as providing detailed results and conclusions.

In other words, he gets the results you want. And he urges potential customers to
Contact the consultancy which gives you fast, highly-creative and psychologically-endorsed stories that grab the headlines.
  • The Disappearance
The mystery is that the story, so carefully crafted by the PR department, has gone. Both the Telegraph and the Mail have pulled it, although it was there last time I checked, a couple of hours ago. Googling the story confirms that it used to be there, but now it's gone. Variants are still available elsewhere, sadly.

So, what happened? Did both the Mail and the Telegraph suddenly experience a severe attack of journalistic integrity and decide that this story was so bad, they weren't even going to host it on their websites? It seems doubtful, especially in the case of the Mail, but it's possible.

I prefer a different explanation: my intention to rubbish the story travelled forwards in time, and caused the story to be taken down, even though I hadn't posted about it yet. Lynne McTaggart has proven that this can happen, you know.

Update 27th November 13:30: And it's back! The story has reappeared on the Telegraph website. The Lay Scientist tells me that the story was originally put up prematurely and then pulled because it was embargoed until today. I don't quite see why it matters when a non-story like this is published - it could just as well have been 10 years ago - but there you go. And in a ridiculous coda to this sorry tale, the Telegraph have today run a second crap science article centered around the concept of "5 minutes" - according to the makers of cold and flu remedy Lemsip, 52% of women feel sorry for their boyfriends when they're ill for just five minutes or less. Presumably because this is their attention span. How I wish I were making this up.

Totally Addicted to Genes

Why do some people get addicted to things? As with most things in life, there are lots of causes, most of which have little, if anything, to do with genes or the brain. Getting high or drunk all day may be an appealing and even reasonable life choice if you're poor, bored and unemployed. It's less so if you've got a steady job, a mortgage and a family to look after.

On the other hand, substance addiction is a biological process, and it would be surprising if genetics did not play a part. There could be many routes from DNA to dependence. Last year a study reported that two genes, TAS2R38 and TAS2R16, were associated with problem drinking. These genes code for some of the tongue's bitterness taste receptor proteins - presumably, carriers of some variants of these genes find alcoholic drinks less bitter, more drinkable and more appealing. Yet most people are more excited by the idea of genes which somehow "directly" affect the brain and predispose to addiction. Are there any? The answer is yes, probably, but they do lots of other things besides causing addiction.

A report just published in the American Journal of Medical Genetics by Agrawal et al. (2008) found an association between a certain variant in the CNR1 gene, rs806380, and the risk of cannabis dependence. They looked at a sample of 1923 white European American adults from six cities across the U.S., and found that the rs806380 "A" allele (variant) was more common in people with self-reported cannabis dependence than in those who denied having such a problem. A couple of other variants in the same gene were also associated, but less strongly.

As with all behavioural genetics, there are caveats. (I've warned about this before.) The people in this study were originally recruited as part of an alcoholism project, COGA. In fact, all of the participants were either alcohol dependent or had relatives who were. Most of the cannabis-dependent people were also dependent on alcohol. However, this is true of the real world as well, where dependence on more than one substance is common.

The sample size of nearly 2000 people is pretty good, but the authors investigated a total of eleven different variants of the CNR1 gene. This raises the problem of multiple comparisons, and they don't mention how they corrected for this, so we have to assume that they didn't. The main finding does corroborate earlier studies, however. So, assuming that this result is robust, and it's at least as robust as most work in this field, does this mean that a true "addiction gene" has been discovered?
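As a rough guide to what "correcting for multiple comparisons" would mean here: with eleven variants tested, a Bonferroni-adjusted threshold is 0.05/11, or about 0.0045 per variant. Below is a minimal sketch of a single-variant case-control test judged against that threshold; the genotype counts are made up for illustration, not taken from the paper.

```python
# Sketch of a single-variant case-control association test, plus a
# Bonferroni threshold for 11 variants. Genotype counts are made up.
from scipy import stats

n_variants = 11
bonferroni_threshold = 0.05 / n_variants   # ~0.0045 per variant

# Hypothetical counts: carriers of the risk allele vs. non-carriers,
# among cannabis-dependent cases and non-dependent controls.
table = [[320, 180],    # cases:    carriers, non-carriers
         [560, 430]]    # controls: carriers, non-carriers

chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"uncorrected p = {p:.4f}")
print("significant after Bonferroni correction:", p < bonferroni_threshold)
```

Anyway, back to the question: has a true "addiction gene" been discovered?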

Well, the gene CNR1 codes for the cannabinoid type 1 (CB1) receptor protein, the most common cannabinoid receptor in the brain. Endocannabinoids, and the chemicals in smoked cannabis, activate it. Your brain is full of endocannabinoids, molecules similar to the active compounds found in cannabis. Although they were discovered just 20 short years ago, they've already been found to be involved in just about everything that goes on in the brain, acting as a feedback system which keeps other neurotransmitters under control.

So, what Agrawal et al. found is that the cannabinoid receptor gene is associated with cannabis dependence. Is this a common-sense result - doesn't it just mean that people whose receptors are less affected by cannabis are less likely to want to use it? Probably not, because what's interesting is that the same variant in the CNR1 gene, rs806380, has been found to be associated with obesity and dependence on cocaine and opioids. Other variants in the same gene have shown similar associations, although there have been several studies finding no effect, as always.

What makes me believe that CNR1 probably is associated with addiction is that a drug which blocks the CB1 receptor, rimonabant, causes people to lose weight, and is also probably effective in helping people stop smoking and quit drinking (weaker evidence). Give it to mice and they become little rodent Puritans - they lose interest in sweet foods, and recreational drugs including alcohol, nicotine, cocaine and heroin. Only the simple things in life for mice on rimonabant. (No-one's yet checked whether rimonabant makes mice lose interest in sex, but I'd bet money that it does.)

So it looks as though the CB1 receptor is necessary for pleasurable or motivational responses to a whole range of things - maybe everything. If so, it's not surprising that variants in the gene coding for CB1 are associated with substance dependence, and with body weight - maybe these variants determine how susceptible people are to the lures of life's pleasures, whether it be a chocolate muffin or a straight vodka. (This is speculation, although it's informed speculation, and I know that many experts are thinking along these lines.)

What if we all took rimonabant to make us less prone to such vices? Wouldn't that be a good thing? It depends on whether you think people enjoying themselves is evidence of a public health problem, but it's worth noting that rimonabant was recently taken off the European market, despite being really pretty good at causing weight loss, because it causes depression in a significant minority of users. Does rimonabant just rob the world of joy, making everything else less fun? That would make anyone miserable. Except for neuroscientists, who would look forward to being able to learn more about the biology of mood and motivation by studying such side effects.

Arpana Agrawal, Leah Wetherill, Danielle M. Dick, Xiaoling Xuei, Anthony Hinrichs, Victor Hesselbrock, John Kramer, John I. Nurnberger, Marc Schuckit, Laura J. Bierut, Howard J. Edenberg, Tatiana Foroud (2008). Evidence for association between polymorphisms in the cannabinoid receptor 1 (CNR1) gene and cannabis dependence. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 9999B. DOI: 10.1002/ajmg.b.30881

Educational neuro-nonsense, or: The Return of the Crockus

Vicky Tuck, President of the British Girls' Schools Association, has some odd ideas about the brain.

Tuck has appeared on British radio and in print over the past few days arguing that there should be more single-sex schools (which are still quite common in Britain) because girls and boys learn in different ways and benefit from different teaching styles. Given her job, I suppose she ought to be doing that, and there are, I'm sure, some good arguments for single-sex schools.

So why has she resorted to talking nonsense about neuroscience? Listen if you will to an interview she gave on the BBC's morning Today programme (her part runs from 51:50 to 55:10). Or, here's a transcript of the neuroscience bit, with my emphasis:

Interviewer: Do we know that girls and boys brains are wired differently?
Tuck: We do, and I think we're learning more and more every day about the brain, and particularly in adolescents this wiring is very interesting, and it's quite clear that you need to teach girls and boys in a very different way for them to be successful.
Interviewer: Well give us some examples, how should the way in which you teach them differ?
Tuck: Well, take maths. If you look at the girls they sort of approach maths through the cerebral cortex, which means that to get them going you really need to sort of paint a picture, put it in context, relate it to the real world, while boys sort of approach maths through the hippocampus, therefore they're very happy and interested in the core properties of numbers and can sort of dive straight in. So if a girl's being taught in a male-focused way she will struggle, whereas in an all-girl's school their confidence in maths is very, very high.
Interviewer: So you have no doubt that all girls should be taught separately from boys?
Tuck: I think that ideally, girls fare better if they're in a single sex environment, and I think that boys also fare better in an all boy environment, I think for example in the study of literature, in English, again a different kind of approach is needed. Girls are very good at empathizing, attuning to things via the emotions, the cerebral cortex again, whereas the boys come at things... it's the amygdala is very strong in the boy, and he will you know find it hard to tune in in that way and needs a different approach.
Interviewer: And yet we've had this trend towards co-education and we've also had more boys schools opening their doors to girls... [etc.]
This is, to put it kindly, confused. Speaking as a neuroscientist, I know of no evidence that girls and boys approach maths or literature using different areas of the brain, I'm not sure what evidence you could look for which would suggest that, and I'm not even sure what that statement means.

Girls and boys all have brains, and they all have the same parts in roughly the same places. When they're reading about maths, or reading a novel, or indeed when they're doing anything, all of these areas are working together at once. The cerebral cortex, in particular, comprises most of the bulk of the brain, and almost literally does everything; it has dozens of sub-regions responsible for everything from seeing moving objects to feeling disgusted to moving your eyes. I don't know which area is responsible for the boyish "core properties of numbers" but for what it's worth, the area most often linked to counting and calculation is the angular gyrus, part of... the supposedly girly cerebral cortex!

The gruff and manly hippocampus, on the other hand, is best known for its role in memory. Damage here leaves people unable to form new memories, although they can still remember things that happened before the injury. It's not known whether these people also have problems with number theory.

When it comes to literature, things get even worse. She says - "Girls are very good at empathizing, attuning to things via the emotions" - which I guess is a pop-psych version of psychologist Simon Baron-Cohen's famous theory of gender differences: that girls are, on average, better at girly social and emotional stuff while boys are better at systematic, logical stuff. This is, er, controversial, but it's a theory that has at least some merit to it.

However, given that the amygdala is generally seen as a fluffy "emotion area" while the cerebral cortex, or at least parts of it, are associated with more "cold" analytic cognition, "The amygdala is very strong in boys" suggests that they should be more emotionally empathic. If Tuck's going to deal in simplistic pop-neuroanatomy, she should at least get it the right way round.

The likely source of Tuck's confusion, given what's said here about Harvard research, is this study led by Dr. Jill Goldstein, who found differences in the size of brain areas between men and women. For example she found that men have, on average, larger amygdalas than women - although they also have smaller hippocampi. Whatever, this study is fine science, although bear in mind that there could be a million reasons why men's and women's brains are different - it might have nothing to do with inborn differences. Stress, for example, makes your hippocampus shrink.

More importantly, there's no reason to think that "bigger is better" when it comes to parts of the brain. (I make no comment about other parts of the body.) That's phrenology, not science. Is a bigger mobile phone better than a smaller one? Bigger could be worse, if it means that the brain cells are less well organized. Likewise, if an area "lights up" more on an fMRI scan in boys than in girls, that sounds good, but in fact it might mean that the boys are having to think harder than the girls, because their brains are less efficient.

I'm a believer in the reality of biological sex differences myself - I just don't think we should try to find them with MRI scans. And Vicky Tuck seems like a clever person who's ended up talking nonsense unnecessarily. She could be making a good argument for single-sex schools based on some actual evidence about how kids learn and mature. Instead, she's shooting herself in the foot (or maybe in the brain's "foot center") with dodgy brain theories. Save yourself, Vicky - put the brain down and walk away.

Link: Cognition and Culture, who originally picked up on this.
Link: The hilarious story of "The Crockus", a made-up brain area which has also been invoked to justify teaching girls and boys differently. It's weird how bad neuroscience repeats itself.


Deep Brain Stimulation Cures Urge To Break Glass

Deep Brain Stimulation (DBS) is in. There's been much buzz about its use in severe depression, and it has a long if less glamorous record of success in Parkinson's disease. Now that it's achieved momentum as a treatment in psychiatry, DBS is being tried in a range of conditions including chronic pain, obsessive-compulsive disorder and Tourette's Syndrome. Is the hype justified? Yes - but the scientific and ethical issues are more complex, and more interesting, than you might think.

Biological Psychiatry have just published this report of DBS in a man who suffered from severe, untreatable Tourette's syndrome, as well as OCD. The work was performed by a German group, Neuner et al. (who also have a review paper just out), and they followed the patient up for three years after implanting high-frequency stimulation electrodes in an area of the brain called the nucleus accumbens. It's fascinating reading, if only for the insight into the lives of the patients who receive this treatment.

The patient suffered from the effects of auto-aggressive behavior such as self-mutilation of the lips, forehead, and fingers, coupled with the urge to break glass. He was no longer able to travel by car because he had broken the windshield of his vehicle from the inside on several occasions.
It makes even more fascinating viewing, because the researchers helpfully provide video clips of the patient before and after the procedure. Neuropsychiatric research meets YouTube - truly, we've entered the 21st century. Anyway, the DBS seemed to work wonders:
... An impressive development was the cessation of the self-mutilation episodes and the urge to destroy glass. No medication was being used ... Also worthy of note is the fact that the patient stopped smoking during the 6 months after surgery. In the follow-up period, he has successfully refrained from smoking. He reports that he has no desire to smoke and that it takes him no effort to refrain from doing so.
Impressive indeed. DBS is, beyond a doubt, an exciting technology from both a theoretical and a clinical perspective. Yet it's worth considering some things that tend to get overlooked.

Firstly, although DBS has a reputation as a high-tech, science-driven, precisely-targeted treatment, it's surprisingly hit-and-miss. This report involved stimulation of the nucleus accumbens, an area best known to neuroscientists as being involved in responses to recreational drugs. (It's tempting to infer that this must have something to do with why the patient quit smoking.) I'm sure there are good reasons to think that DBS in the nucleus accumbens would help with Tourette's - but there are equally good reasons to target several other locations. As the authors write:
For DBS in Tourette's patients, the globus pallidus internus (posteroventrolateral part, anteromedial part), the thalamus (centromedian nucleus, substantia periventricularis, and nucleus ventro-oralis internus) and the nucleus accumbens/anterior limb of the internal capsule have all been used as target points.
For those whose neuroanatomy is a little rusty, that's a fairly eclectic assortment of different brain regions. Likewise, in depression, the best-known DBS target is the subgenual cingulate cortex, but successful cases have been reported with stimulation in two entirely different areas, and at least two more have been proposed as potential targets (Paper.) Indeed, even once a location for DBS has been chosen, it's often necessary to try stimulating at several points in order to find the best target. The point is that there is no "Depression center" or "Tourette's center" in the brain which science has mapped out and which surgery can now fix.

Second, by conventional standards, this was an awful study: it only had one patient, no controls, and no blinding. Of course, applying usual scientific standards to this kind of research is all but impossible, for ethical reasons. These are people, not lab rats. And it does seem unlikely that the dramatic and sustained response in this case could be purely the placebo effect, especially given that the patient had already tried - and failed to respond to - several medications.

So what the authors did was certainly reasonable under the circumstances - but still, this article, published in a leading journal, is basically an anecdote. If it had been about a Reiki master waving his hands at the patient, instead of a neurosurgeon sticking electrodes into him, it wouldn't even make it into the Journal of Alternative and Complementary Medicine. This is par for the course in this field; there have been controlled trials of DBS, but they are few and very small. Is this a problem? It would be silly to pretend that it wasn't - there is no substitute for good science. There's not much we can do about it, though.

Finally, Deep Brain Stimulation is a misleading term - the brain doesn't really get stimulated at all. The electrical pulses used in most DBS are at such a high frequency (145 Hz in this case) that they "overload" nearby neurons and essentially switch them off. (At least that's the leading theory.) In effect, turning on a DBS electrode is like cutting a hole in the brain. Of course, the difference is that you can switch off the electrode and put things back to normal. But this aside, DBS is little more sophisticated than the notorious "psychosurgery" pioneered by Walter Freeman back in the 1930s, which has since become so unpopular. I see nothing wrong with that - if it works, it works, and psychosurgery worked for many people, which is why it's still used in Britain today. It's interesting, though, that whereas psychosurgery is seen as the height of psychiatric barbarity, DBS is lauded as medical science at its most sophisticated.

For all that, DBS is the most interesting thing in neuroscience at the moment. Almost all research on the human brain is correlational - we look for areas of the brain which activate on fMRI scans when people are doing something. DBS offers one of the very few ways of investigating what happens when you manipulate different parts of the human brain. For a scientist, it's a dream come true. But of course, the only real reason to do DBS is for the patients. DBS promises to help people who are suffering terribly. If it does, that's reason enough to be interested in it.

See also: Someone with Parkinson's disease writes of his experiences with DBS on his blog.

Neuner I, Podoll K, Lenartz D, Sturm V, Schneider F (2008). Deep Brain Stimulation in the Nucleus Accumbens for Intractable Tourette's Syndrome: Follow-Up Report of 36 Months. Biological Psychiatry. DOI: 10.1016/j.biopsych.2008.09.030

Kruger & Dunning Revisited

The irreplaceable Overcoming Bias have an excellent post on every blogger's favorite psychology paper, Kruger and Dunning (1999) "Unskilled and Unaware Of It".

Most people (myself included) have taken this paper as evidence that the better you are at something, the better you are at knowing how good you are at it. Thus, people who are bad don't know that they are, which is why they don't try to improve. It's an appealing conclusion, and also a very intuitive one.

In general, these kinds of conclusions should be taken with a pinch of salt.

Indeed, it turns out that there's a more recent paper, Burson et al. (2006) "Skilled or Unskilled, but Still Unaware of It", which finds that everyone is pretty bad at judging their own skill, and in some circumstances, more skilled people make less accurate judgments than novices. Heh.
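To make this concrete, here's a toy simulation - mine, not from either paper, and all the numbers are made up - of one way the familiar pattern can arise. If everyone's self-estimate of their own percentile is just a noisy guess pulled towards "about average", then the bottom quartile ends up overestimating itself and the top quartile underestimating itself, with no need for a special blind spot among the unskilled:

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(0, 1, n)                # true ability (arbitrary units)
score = skill + rng.normal(0, 0.5, n)      # actual test performance = skill + luck

# Self-estimated percentile: tracks true skill only weakly, compressed towards 50
true_pct = 100 * skill.argsort().argsort() / n
self_est = np.clip(50 + 0.3 * (true_pct - 50) + rng.normal(0, 15, n), 0, 100)

actual_pct = 100 * score.argsort().argsort() / n
quartile = np.digitize(actual_pct, [25, 50, 75])   # 0 = bottom quartile, 3 = top

for q in range(4):
    m = quartile == q
    print(f"Quartile {q + 1}: actual percentile {actual_pct[m].mean():5.1f}, "
          f"self-estimated percentile {self_est[m].mean():5.1f}")

Run it and the bottom quartile reckons it's nearly average while the top quartile barely gives itself any credit - which looks a lot more like "everyone is pretty bad at judging their own skill" than like a failing peculiar to the unskilled.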

 