The Lonely Grave of Galileo Galilei

Galileo would be turning in his grave. His achievement was to set science on the course which has made it into an astonishingly successful means of generating knowledge. Yet some people not only reject the truths of the science that Galileo did so much to advance; they do it in his name.

Intro: In Denial?

Scientific truth is increasingly disbelieved, and this is a new phenomenon, so much so that new words have been invented to describe it. Leah Ceccarelli defines manufacturoversy as a public controversy over some question (usually scientific) which is not considered by experts on the topic to be in dispute; the controversy is not a legitimate scientific debate but a PR tool created by commercial or ideological interests.

Probably the best example is the attempts by tobacco companies to cast doubt on the association between tobacco smoking and cancer. The techniques involved are now well known. The number of smokers who didn't quit smoking because there was "doubt" over the link with cancer is less clear. More recently, there have been energy industry-sponsored attempts to do the same to the science on anthropogenic global warming. Other cases often cited are the MMR-autism link, Intelligent Design, and HIV/AIDS denial, although the agendas behind these "controversies" are less about money and more about politics and cultural warfare.

Many manufacturoversies are also examples of denialism, which Wikipedia defines as

the position of governments, political parties, business groups, interest groups, or individuals who reject propositions on which a scientific or scholarly consensus exists
although the two terms are not synonymous; one could be a denialist without having any ulterior motives, while conversely, one could manufacture a controversy which did not involve denying anything (e.g. the media-manufactured MMR-causes-autism theory, while completely wrong, didn't contradict any established science; it was just an assertion with no evidence and plenty of reasons to think it was wrong). Denialism is very often accompanied by invocations of Galileo (or occasionally other "rebel scientists"), in an attempt to rhetorically paint the theory under attack as no more than an established dogma.

Just a caveat: in the wrong hands, the concepts of manufacturoversy and denialism could become a means of rubbishing legitimate dissent. The slogan of the denialism blog is "Don't mistake denialism for debate", but the line is sometimes very fine(*). For example, I'm critical of the idea that psychiatric medications and electroconvulsive therapy are of little or no benefit to patients. If one wanted to, it would be possible to make a coherent-sounding case as to why this debate was a manufacturoversy on the part of the psychotherapy industry to undermine confidence in a competing form of treatment which is overwhelmingly supported by the scientific evidence. This would be wrong (mostly).

A History of Error

Anyway. What's interesting is that the idea of inappropriate or manufactured doubt about scientific or historical claims is a very new phenomenon. Indeed, it's very hard to think of any examples before 1950, with the possible exception of the first wave of Creationism in the 1920s. Leah Ceccarelli points out that many of the rhetorical tricks used go back to the Greek Sophists, but until recently the concept of denialism would have been almost meaningless, for the simple reason that denialism requires a truth to be inappropriately called into question, and before about the 19th century, to a first approximation, we didn't have access to any such truths.

It's easy to forget just how ignorant we were until recently. The average schoolkid today has a more accurate picture of the universe than the greatest genius of 500 years ago, or of 300 years ago, or even of 100 years ago (assuming that the schoolkid knows about the Big Bang, plate tectonics, and DNA - all 20th century discoveries).

To exaggerate, but not very much: until the last couple of centuries of human history, no-one correctly believed in anything, and people had many beliefs that were actively wrong - they believed in ghosts, and witches, and Hiranyagarbha, and Penglai. People erred by believing. Those who disbelieved were likely to be right.

Things have changed. There is more knowledge now; today, when people err, it is increasingly because they reject the truth. No-one in the West now believes in witches, but hundreds of millions of us don't believe that the visible universe originated in a singularity about 13.7 billion years ago, although this is arguably a much bigger mistake to make. In other words, whereas in the past the main problem was belief in false ideas ("dogma"), increasingly the problem is doubting true ones ("denialism").

Myths & Legends of Science

The problem is that the way most people think about science hasn't caught up with the pace of scientific change. In just a couple of hundred years, science has gone from being an assortment of separate, largely bad notions, to being a vast construct of interlinking and mutually supporting theories, the foundations of which are supported by mountains of evidence. Yet all of our most popular myths about science are Robin Hood stories - the hero is the underdog, the rebel, the maverick who stands up to authority, battles the entrenched beliefs of the Establishment, and challenges dogma. In other words, the hero is a denialist - albeit one who turns out to be right.

Once, this was realistic. Galileo was an Aristotelian cosmology denier; Pasteur was a miasma theory denier; Einstein was a Newtonian physics denier. (In fact, the historical facts are a bit more complicated, as they often are, but this is true enough.) But these stories are out of date. Thanks to the great deniers of the past, there are few, if any, inappropriate dogmas in mainstream science. There, I said it. Thanks to the efforts of scientists past and present, science has become a professional activity with, generally, a very good success rate.

The HIV/AIDS hypothesis and anti-retroviral drugs were developed by orthodox career scientists with proper qualifications working within the mainstream of biology and medicine. They probably wore boring, conventional white coats. There were no exciting paradigm shifts in HIV science. There was no Galileo of HIV; there was Robert Gallo. Yet orthodox science has been successful in delivering treatments for HIV and understanding of the disease (anti-retrovirals are not perfect, but they're a hell of a lot better than untreated AIDS, and just 20 years ago that was what all patients faced.) The skeptics, the rebels, the Robin Hoods of HIV/AIDS - they have been a disaster. If global warming deniers succeed, the consequences will be much worse.

Of course, we do still need intelligent rebels. It would be a foolhardy person(**) who predicted that there will never be another paradigm shift in science; neuroscience, for one, is due at least one more, and there are parts of the remoter provinces of science, such as behavioural genetics, which are in serious need of a critical eye. But the vast majority of modern science, unlike the science of the past, is actually quite good. Hence, rebels are most likely wrong. To make a foolhardy prediction: there will never be another Galileo in the sense of a single figure who denies the scientific consensus and turns out to be right. There can only be a finite number of Galileos in history - once one succeeds in reforming some field, there is no need for another - and we may well have run out. My previous post on this topic included the bold claim that
if most scientists believe something you probably should believe it, just because scientists say so.
Yet this wasn't always true. To pluck a nice round number out of the air, I'd say that science has only been this trustworthy for 50 years. Most of our myths and ideas about science date from before that era. Science has moved on since the time of Galileo, thanks to his efforts and those of the scientists who came after him, but he is still invoked as a hero by those who deny scientific truth. He would be turning in his grave, in the earth which, as we now know, turns around the sun.

(*) and of course as we know, "it's such a fine line between stupid and clever".
(**) As foolhardy as Francis Fukuyama who in 1989 proclaimed that history had ended and that the world was past the era of ideological struggles.


We Really Are Sorry, But Your Soul is Still Dead

Over the past few weeks, Christian neurosurgeon Michael Egnor, who writes on Evolution News & Views, and atheist neurologist Steve Novella (Neurologica) have been having an, er, vigorous debate about what neuroscience can tell us about materialism and the soul. As reported in New Scientist, this is part of an apparent attempt to undermine the materialist position (that all mental processes are the product of neural processes), on the part of the same people who brought you Intelligent Design. Many are calling it the latest front in the Culture War.

A couple of days ago Denyse O'Leary, a Canadian journalist who writes the blog Mindful Hack(*), posted some comments from Egnor about the great Wilder Penfield and his idea of "double consciousness" (my emphasis):

[By stimulating points on the cerebral cortex with electrodes during surgery] Penfield found that he could invoke all sorts of things- movements, sensations, memories. But in every instance ... the patients were aware that the stimulation was being done to them, but not by them. There was a part of the mind that was independent of brain stimulation and that constituted a part of subjective experience that Penfield was not able to manipulate with his surgery.... Penfield called this "double consciousness", meaning that there was a part of subjective experience that he could invoke or modify materially, and a different part that was immune to such manipulation.
I generally find arguing about religion boring, and I've no wish to enlist in any Culture Armies (I'm British - we're a nation of Culture Pacifists), but I'm going to say something about this, because it's just bad neuroscience. Maybe there are good arguments against materialism, but this isn't one.

Unfortunately, neither O'Leary nor Egnor allows comments on their blogs, but immediately after posting this I emailed them both with a link to this post. We'll see what happens.

Anyway, Penfield, whom you can read about in great detail at Neurophilosophy, was a pioneer in the functional mapping of the cerebral cortex. He was a neurosurgeon, and as part of his surgical procedures he would systematically stimulate different points of the cerebral cortex with an electrode, so as to locate which areas were responsible for important functions and avoid damaging them. Michael Egnor, following Penfield, is correct that this kind of point stimulation of the cortex tends to evoke sensations or motor responses which are experienced by the patient as external. Point stimulation is not reported to be able to affect our "higher" mental faculties such as our beliefs, desires, decisions, and "will"; it might evoke a movement of the arm, say, but the subject will report that this felt like an involuntary reflex, not a willed action.

However, to take this as evidence for some kind of a dualism between a form of consciousness which can be manipulated via the brain and another, non-material level of consciousness which can't (the "soul" in other words), is like saying that because hammering away at one key of a piano produces nothing but an annoying noise, there must be something magical going on when a pianist plays a Mozart concerto. Stimulating a single small part of the brain is about the crudest manipulation imaginable; all we can conclude from the results of point-stimulation experiments is that some kinds of mental processes are not controlled by single points on the cortex. This should not be surprising, since the brain is a network of 100 billion cells; what's interesting, in fact, is that stimulating a few million of these cells with the tip of an electrode can do anything.

Neuroskeptic is frequently critical of fMRI, but one of my favorite papers is an fMRI study, Reading Hidden Intentions in the Human Brain. In this experiment the authors got volunteers to freely decide on one of two courses of action several seconds before they were required to actually do the chosen act. (The choice was between adding and subtracting two numbers on a screen.) They discovered that it was possible to determine (albeit with less than 100% accuracy) what subjects were planning to do on any given trial, before they actually did it, through an analysis of the pattern of neural activity across a large area of the medial prefrontal cortex.

The green area on this image shows the area over which activity predicts the future action. Importantly, no one point on the cortex is associated with one choice over another, but the combination of activity across the whole area is (once you put it through some brilliant maths).
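To give a concrete flavour of how this kind of pattern decoding works, here's a minimal sketch in Python, using made-up data; the authors' real pipeline (a searchlight classifier on actual fMRI patterns, if I recall correctly) was more sophisticated, so treat this purely as an illustration of the principle:

```python
# Illustrative sketch only: decoding a binary "intention" from simulated
# multivoxel patterns. All numbers are invented; the real study used
# searchlight classification on genuine fMRI data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200               # hypothetical trial and voxel counts

intention = rng.integers(0, 2, n_trials)   # 0 = "add", 1 = "subtract"
preference = rng.normal(0, 0.2, n_voxels)  # each voxel weakly prefers one choice
X = rng.normal(0, 1, (n_trials, n_voxels)) + np.outer(intention, preference)

# No single voxel is very informative, but the joint pattern is:
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X, intention, cv=5).mean()
print(f"Cross-validated decoding accuracy: {accuracy:.0%} (chance = 50%)")
```

The point is that the classifier succeeds by pooling weak evidence across hundreds of voxels; no single voxel carries the intention on its own, which is exactly why Penfield's single-point stimulation tells us so little.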

Based on this evidence, it's reasonable to suppose that we could manipulate human intentions if, instead of just one electrode, we had several thousand (or million), and if we knew exactly which pattern of stimulation to apply. Or to run with the piano analogy: we could play a wonderful tune if we were skilled enough to play the right notes in the right combinations in the right order.

In fact, there are plenty of things which already are known to alter "higher" processes. At the correct doses, acetylcholine receptor antagonists such as scopolamine and atropine can produce a state of delirium with hallucinations which are experienced as being indistinguishable from reality. Someone might talk to a non-existent friend or try to smoke a non-existent cigarette, without any knowledge of having taken a drug at all. Erowid has many first-hand accounts from people who have taken such drugs "recreationally" (a very bad idea, as you'll gather if you read a few.)

Then there's psychiatric illness. Someone who's psychotic may hear voices and believe them to be real communications from God, or the dead, or a radio transmitter in his head. A bipolar patient in a manic state may believe herself to have incredible talents or supernatural powers and dismiss as nonsense any suggestion that this is a result of her illness. In general those suffering from acute abnormal mental states may behave in a manner which is completely out of character, or think and talk in bizarre ways, without being aware of doing so. This is called "lacking in insight".

We don't yet know the neurobiological basis of these states, but that they (often) have one is beyond doubt; give the appropriate drugs - or use electricity to induce seizures - and they (usually) vanish. Many people in the advanced phases of dementia, especially Alzheimer's disease, are similarly unaware of being ill as a result of neurodegeneration - hence the sad sight of formerly intelligent men and women wandering the streets, not knowing how they got there. Brain damage, or stimulation of deep brain structures (not the cortex which Penfield studied), can lead to profound alterations in personality and emotion. To summarize - if you seek the soul in the data of neuroscience, you will need to look harder than Penfield did.

Links: Sorry, But Your Soul Just Died - Tom Wolfe. A classic.

(*) - Mindful Hack - not to be confused with Mind Hacks.


New Deep Brain Stimulation Blog

Via Dr Shock: a new blog has just been started by an anonymous American man who will soon be undergoing deep brain stimulation (DBS) for clinical depression, as part of a blinded trial.

It sounds like it's going to be fascinating reading - to my knowledge this is the first blog of its kind. I've always been a big believer in the importance of first-hand reports in psychiatry and neurology, but sadly these are often in short supply compared to the huge proliferation of MRI scans, graphs and clinical rating scales. Sometimes, you just need to listen to people.

The study, called 278-005, also known as BROADEN, will involve electrical stimulation of the subgenual cingulate cortex ("Area 25"), the most commonly chosen target for DBS in depression. The preliminary reports from subgenual cingulate DBS have been extremely positive, but there have been no large scale clinical trials to date.

Lessons from the Placebo Gene

Update: See also Lessons from the Video Game Brain



The Journal of Neuroscience has published a Swedish study which, according to New Scientist (and the rest), is something of a breakthrough:

First 'Placebo Gene' Discovered
I rather like the idea of a dummy gene made of sugar, or perhaps a gene for being Brian Molko, but what they're referring to is a gene, TPH2, which allegedly determines susceptibility to the placebo effect. Interesting, if true. Genetic Future was skeptical of the study because of its small sample size. It is small, but I'm not too concerned about that because there are, unfortunately, other serious problems with this study and the reporting on it. I should say at the outset, however, that most of what I'm about to criticize is depressingly common in the neuroimaging literature. The authors of this study have done some good work in the past and are, I'm sure, no worse than most researchers. With that in mind...



The study included 25 people diagnosed with Social Anxiety Disorder (SAD). Some people see the SAD diagnosis as a drug company ploy to sell pills (mainly antidepressants) to people who are just shy. I disagree. Either way, these were people who complained of severe anxiety in social situations. The 25 patients were all given placebo pill treatment for 8 weeks.



Before and after the treatment they each got an H₂¹⁵O PET scan, which measures regional cerebral blood flow (rCBF) in the brain, something that is generally assumed to correlate with neural activity. It's a bit like fMRI, although the physics are different. During the scans the subjects had to make a brief speech in front of 6 to 8 people. This was intended to make them anxious, as it would do. The patients' self-reported social anxiety in everyday situations was also assessed every 2 weeks by questionnaires and clinical interviews.



The patients were then split into two groups based upon their final status: "placebo responders" were those who ended up with a "CGI" rating of 1 or 2 - meaning that they reported that their anxiety had got a lot better - and "placebo nonresponders" who didn't. (You may take issue with this terminology - if so, well done, and keep reading). Brain activation during the public speaking task was compared between these two groups. The authors also looked at two genes, 5HTTLPR and TPH2. Both are involved in serotonin signalling and both have been associated (in some studies) with vulnerability to anxiety and depression.



The results: The placebo responders reported less anxiety following treatment - unsurprisingly, because this is why they were classed as responders. On the PET scans, the placebo responders showed reduced amygdala activity during the second public speaking task compared to the first one; the non-responders showed no change. This is consistent with the popular and fairly sensible idea that the amygdala is active during the experience of emotion, especially fear and anxiety. In fact, however, this effect was marginal: it was only significant under a region-of-interest analysis, i.e. when they specifically looked at the data from the amygdala; in a more conservative whole-brain analysis they found nothing (or rather they did, but they wrote it off as uninteresting, as cognitive neuroscientists generally do when they see blobs in the cerebellum and the motor cortex):

PET data: whole-brain analyses

Exploratory analyses did not reveal significantly different treatment-induced patterns of change in responders versus nonresponders. Significant within-group alterations outside the amygdala region were noted only in nonresponders, who had increased (pre < post) rCBF in the right cerebellum ... and in a cluster encompassing the right primary motor and somatosensory cortices...
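Why can a result pass a region-of-interest analysis yet fail at the whole-brain level? Because a whole-brain analysis has to correct for tens of thousands of voxel-wise comparisons. Real imaging packages use cluster-based or random-field corrections rather than raw Bonferroni, but a toy Bonferroni calculation (all numbers invented) shows the logic:

```python
# Toy illustration of ROI vs. whole-brain thresholds (all numbers invented;
# real packages use subtler corrections than Bonferroni, but the logic is similar).
p_amygdala = 0.03      # hypothetical uncorrected p-value for the amygdala ROI
n_voxels = 50_000      # rough order of magnitude for a whole-brain analysis

# ROI analysis: one pre-specified test, so p < 0.05 counts as significant.
print("Significant in ROI analysis:", p_amygdala < 0.05)

# Whole-brain analysis: correcting over every voxel makes the per-voxel
# threshold far stricter, and the same effect no longer survives.
print("Survives whole-brain correction:", p_amygdala < 0.05 / n_voxels)
```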
As for the famous "placebo gene", they found that two genetic variants, 5HTTLPR ll and TPH2 GG, were associated with a bigger drop in amygdala activity from before treatment to after treatment. TPH2 GG was also associated with the improvement in anxiety over the 8 weeks.
In a logistic regression analysis, the TPH2 polymorphism emerged as the only significant variable that could reliably predict clinical placebo response (CGI-I) on day 56, homozygosity for the G allele being associated with better outcome. Eight of the nine placebo responders (89%), for whom TPH2 gene data were available, were GG homozygotes.
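As a sanity check, you can rebuild the 2x2 table from the reported counts: 8 of the 9 responders with genetic data were GG homozygotes, versus 8 of the 15 non-responders (see section 3 below). A quick Fisher's exact test on those raw counts - my back-of-envelope check, not the authors' logistic regression - gives a sense of how thin the evidence is with only 24 genotyped patients:

```python
# Back-of-envelope check on the reported TPH2 counts (not the authors'
# actual logistic regression). Rows: responders / non-responders;
# columns: GG homozygotes / everyone else.
from scipy.stats import fisher_exact

table = [[8, 1],   # 8 of 9 responders with genetic data were GG
         [8, 7]]   # 8 of 15 non-responders were GG
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2f}")
```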
You could call this a gene correlating with the "placebo effect", although you'd probably be wrong (see below). There are a number of important lessons to take home here.



1. Dr Placebo, I presume? - Be careful what you call the placebo effect



This study couldn't have discovered a "placebo gene", even if there is one. It didn't measure the placebo effect at all.



You'll recall that the patients in this study were assessed before and after 8 weeks of placebo treatment (sugar pills). Any changes occurring during these 8 weeks might be due to a true "placebo effect" - improvement caused by the patient's belief in the power of the treatment. This is the possibility that gets some people rather excited: it's mind over matter, man! This is why the word "placebo" is often preceded by words like "Amazing", "Mysterious", or even "Magical" - as if Placebo were the stage-name of a 19th century conjuror. (As opposed to the stage name of androgynous pop-goth Brian Molko ... I've already done that one.)



But, as they often do, more prosaic explanations suggest themselves. Most boringly, the patients might have just got better. Time is the great healer, etc., and two months is quite a long time. Maybe one of the patients hooked up with a cute guy and it did wonders for their self-confidence. Maybe the reason why the patients volunteered for the study when they did was because their anxiety was especially bad, and by the time of the second scan it had returned to normal (regression towards the mean). Maybe the study itself made a difference, by getting the patients talking about their anxiety with sympathetic professionals. Maybe the patients didn't actually feel any better at all, but just said they did because that's what they thought they were expected to say. I could go on all day.



Most likely, in my opinion, the patients were simply less anxious having their second PET scan, once they had survived the first one. PET scans are no fun: you get a catheter inserted into your arm, through which you're injected with a radioactive tracer compound. Meanwhile, your head is fixed in place within a big white box covered in hazard signs. It's not hard to see that you'd probably be much more anxious on your first scan than the second time around.



So, calling the change from baseline to 8 weeks a "placebo response", and calling the people who got better "placebo responders", is misleading (at least it misled every commentator on this study so far). The only way to measure the true placebo effect is to compare placebo-treated people with people who get no treatment at all. This wasn't done in this study. It rarely is. This is something which confuses an awful lot of people. When people talk about the placebo effect, they're very often referring to the change in the placebo group, which as we've seen is not the same thing at all, and has nothing even vaguely magical or mysterious about it. (For example, some armchair psychiatrists like to say that since patients in the placebo group in antidepressant drug trials often show large improvements, sugar pills must be helpful in depression.) That said, another study in the same issue of the same journal did measure an actual placebo effect.
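Regression towards the mean alone can manufacture an impressive-looking "placebo response". Here's a toy simulation (all numbers invented) in which anxiety merely fluctuates around a stable level, patients enrol when they happen to be at their worst, and nobody receives any effective treatment at all:

```python
# Toy simulation: a "placebo response" appears even though the sugar pills
# do nothing, because patients enrol when their symptoms happen to be high.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
trait = rng.normal(50, 10, n)          # each person's long-run anxiety level
week0 = trait + rng.normal(0, 10, n)   # score at enrolment
week8 = trait + rng.normal(0, 10, n)   # score 8 weeks later; no real treatment

enrolled = week0 > 65                  # only people currently doing badly enrol
improvement = week0[enrolled] - week8[enrolled]
print(f"Mean 'improvement' on sugar pills: {improvement.mean():.1f} points")
```

The enrolled patients "improve" by several points on average, despite the simulation containing no treatment effect whatsoever.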



2. Beware Post Hoc-us Pocus



From the way it's been reported, you would probably assume that this was a study designed to investigate the placebo effect. However, in the paper we read:

Patients were taken from two previously unpublished RCTs that evaluated changes in regional cerebral blood flow after 56 d of pharmacological treatment by means of positron emission tomography. ... The clinical PET trials ... included a total of 108 patients with SAD. There were three treatment arms in the first study and six arms in the second. ... Only the pooled placebo data are included herein, whereas additional data on psychoactive drug treatment will be reported separately.
Personally, I find this odd. Why have so many groups if you're interested in just one of them? Even if the data from the drug groups are published, it's unusual to report on some aspect of the placebo data in a separate paper before writing up the main results of an RCT. To me it seems likely that when this study was designed, no-one intended to search for genes associated with the placebo effect. I suspect that the analysis the authors report on here was post-hoc; having looked at the data, they looked around for any interesting effects in it.



To be clear, there's no proof that this is what happened here, but anyone who has worked in science will know that it does happen, and to my jaded eyes it seems probable that this is a case of it. For one thing, if this was a study intended to investigate the placebo effect, it was poorly designed (see above).



There's nothing wrong with post-hoc findings. If scientists only ever found what they set out to look for, science wouldn't have got very far. However, unless they are clearly reported as post-hoc, the problem of the Texas Sharpshooter arises: findings may appear to be more significant than they really are. In this case, the TPH2 gene was only a significant predictor of "placebo response" with p=0.04, which is marginal at the best of times.
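The arithmetic behind the Texas Sharpshooter is worth spelling out: if a researcher quietly tries k independent tests, the chance that at least one comes up "significant" at p < 0.05, even when nothing is going on, is 1 - 0.95^k. A two-line sketch (the numbers of tests are mine, purely for illustration):

```python
# Under the null hypothesis, the chance of at least one "hit" at p < 0.05
# grows quickly with the number of tests quietly tried (numbers hypothetical).
for k in (1, 2, 5, 10, 20):
    print(f"{k:>2} tests: {1 - 0.95 ** k:.0%} chance of a spurious p < 0.05")
```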



The reason researchers feel the need to do this kind of thing is because of the premium the scientific community (and hence scientific publishing) places on getting "positive results". Plus, no-one wants to PET scan over 100 people (they're incredibly expensive) and report that nothing interesting happened. However, this doesn't make it right (rant continues...)



3. Science Journalism Is Dysfunctional



Sorry to go on about this, but really it is. New Scientist's write-up of this study was, relatively speaking, quite good - they did at least include some caveats ("The gene might not play a role in our response to treatment for all conditions, and the experiment involved only a small number of people.") They did, though, have a couple of factual errors, such as saying that "8 of the 10 responders had two copies [of the TPH2 G allele], while none of the non-responders did" - in fact, 8 of the 15 non-responders did - but anyway.



The main point is that they didn't pick up on the fact that this experiment didn't measure the placebo effect at all, which makes their whole article misleading. (The newspapers generally did an even worse job.) I was able to write this post because I had nothing else on this weekend and reading papers like this is a major part of my day job. Ego aside, I'm pretty good at this kind of thing. That's why I write about it, and not about other stuff. And that's why I no longer read science journalism (well, except to blog about how rubbish it is.)



It would be wrong to blame the journalist who wrote the article for this. I'm sure they did the best they could in the time available. I'm sure that I couldn't have done any better. The problem is that they didn't have enough time, and probably didn't have enough specialist knowledge, to read the study critically. It's not their fault, it's not even New Scientist's fault; it's the fault of the whole idea of science journalism, which involves getting non-experts to write, very fast, about complicated issues and make them comprehensible and interesting to laymen even when they're manifestly not. I used to want to be a science journalist, until I realised that that was the job description.



Furmark, T., Appel, L., Henningsson, S., Ahs, F., Faria, V., Linnman, C., Pissiota, A., Frans, O., Bani, M., Bettica, P., Pich, E. M., Jacobsson, E., Wahlstedt, K., Oreland, L., Langstrom, B., Eriksson, E., & Fredrikson, M. (2008). A Link between Serotonin-Related Gene Polymorphisms, Amygdala Activity, and Placebo-Induced Relief from Social Anxiety. Journal of Neuroscience, 28(49), 13066-13074. DOI: 10.1523/JNEUROSCI.2534-08.2008

Alas, Poor Noradrenaline

Previously I posted about the much-maligned serotonin theory of depression and tentatively defended it, while making it clear that "low serotonin" was certainly not the whole story. Critics have noted that the serotonin-is-happiness hypothesis has become folk wisdom, despite being clearly incomplete, and this is generally ascribed to the marketing power of the pharmaceutical industry. What's also interesting is that a predecessor and rival to the serotonin hypothesis, the noradrenaline theory, failed to achieve such prominence.

Everyone's heard of serotonin. Only doctors and neuroscientists have heard of noradrenaline (called norepinephrine if you're American), which is another monoamine neurotransmitter. Chemically the two molecules are rather different, but they both play roughly parallel roles in the brain, in the sense that both are released from a small number of cells originating in the brain stem onto areas throughout the brain in what's often described as a "sprinkler system" arrangement.

Forty years ago, noradrenaline was seen by most psychopharmacologists as being the key chemical determinant of mood, and the leading theory on the cause of depression was some kind of noradrenaline deficiency. At this time, serotonin was generally seen as being at best of uncertain importance. In 1967 two superstars of psychopharmacology, Joseph Schildkraut and Seymour Kety, wrote a review article in Science in which they summarized the evidence for a noradrenaline theory of depression. It still makes quite convincing reading, and since 1967 more evidence has come to light; reboxetine, which selectively inhibits the reuptake of noradrenaline, is at least as effective as Prozac, which is selective for serotonin. Although it's somewhat controversial, it also seems as though antidepressants which target both monoamines are slightly more effective than those which target only one.

So what happened to the noradrenaline theory? If pressed, most experts will admit that there must be something in it, and it is still discussed - but noradrenaline just doesn't get talked about as much as serotonin in the context of depression and mood. So far as I can see there is little good reason for this - given that both serotonin and noradrenaline seem to be involved in mood, the best thing would be to study both, and in particular to study their interactions. Yet this is not what most scientists are doing. Noradrenaline has just dropped off the scientific radar.

Because everyone likes graphs, and because I had nothing better to do today, I knocked together a couple to show the rise and fall of noradrenaline. The first shows the total number of PubMed entries for each year from 1969 to 2007, containing hits in the Title or Abstract for [noradrenaline OR norepinephrine] AND [depression OR depressive OR antidepressant OR antidepressants OR antidepressive] vs. [Serotonin OR 5HT OR 5-hydroxytryptamine] AND [depression OR depressive OR antidepressant OR antidepressants OR antidepressive]. As you can see, the two lines track each other very closely until about 1990, when interest in serotonin in the context of depression / antidepressants suddenly takes off, leaving noradrenaline languishing far behind.
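For anyone who wants to reproduce or update these counts, something along these lines would do it via NCBI's E-utilities. This is a sketch using Biopython's Entrez module; the email address is a placeholder you'd replace with your own, and the hit counts will have drifted since 2008:

```python
# Sketch: yearly PubMed hit counts for the two search strings used above
# (Biopython's Entrez wrapper; counts will differ from those in 2008).
from Bio import Entrez

Entrez.email = "you@example.com"   # placeholder; NCBI asks for a real contact address

DEPRESSION = ("(depression[tiab] OR depressive[tiab] OR antidepressant[tiab] "
              "OR antidepressants[tiab] OR antidepressive[tiab])")
QUERIES = {
    "noradrenaline": "(noradrenaline[tiab] OR norepinephrine[tiab]) AND " + DEPRESSION,
    "serotonin": "(serotonin[tiab] OR 5HT[tiab] OR 5-hydroxytryptamine[tiab]) AND " + DEPRESSION,
}

for name, term in QUERIES.items():
    for year in range(1969, 2008):
        handle = Entrez.esearch(db="pubmed", term=term, retmax=0,
                                mindate=str(year), maxdate=str(year),
                                datetype="pdat")
        count = int(Entrez.read(handle)["Count"])
        handle.close()
        print(name, year, count)
```

Each year is a separate query, which is slow but keeps the sketch simple.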

What's fascinating is that the total amount of published research about noradrenaline also peaked around 1990 and has since declined markedly, while publications about serotonin and dopamine (another monoamine neurotransmitter) have been steadily growing.

What happened around 1990? Prozac, the first commercially successful selective serotonin reuptake inhibitor (SSRI), was released onto the U.S. market in late 1987. Bearing in mind that science generally takes a year or so to make it from the lab to the journal page, it's tempting to see 1990 as the year of the onset of the "Prozac Effect". Prozac notoriously achieved a huge amount of publicity, far more than was granted to older antidepressants such as imipramine, despite its probably being less effective. Could this be one reason why serotonin has eclipsed noradrenaline in the eyes of scientists?

A couple of caveats: All I've shown here are historical trends, which is not in itself proof of causation. Also, the fall in the total number of publications mentioning noradrenaline is much too large to be directly due to the stall in the number of papers about noradrenaline and depression / antidepressants. However, there could be indirect effects (scientists might be less interested in basic research on noradrenaline if they see it as having no relevance to medicine.)

Note 16/12/08: I've realized that it would have been better to include the term "5-HT" in the serotonin searches as this is a popular way of referring to it. I suspect that had I done this the serotonin lines would have been higher, but the trends over time would be the same.

Schildkraut, J. J., & Kety, S. S. (1967). Biogenic Amines and Emotion. Science, 156(3771), 21-30. DOI: 10.1126/science.156.3771.21
