Left Wing vs. Right Wing Brains

So apparently: Left wing or right wing? It's written in the brain

People with liberal views tended to have increased grey matter in the anterior cingulate cortex, a region of the brain linked to decision-making, in particular when conflicting information is being presented...

Conservatives, meanwhile, had increased grey matter in the amygdala, an area of the brain associated with processing emotion.

This was based on a study of 90 young adults using MRI to measure brain structure. Sadly that press release is all we know about the study at the moment, because it hasn't been published yet. The BBC also have no fewer than three radio shows about it here, here and here.

Politics blog Heresy Corner discusses it...
Subjects who professed liberal or left-wing opinions tended to have a larger anterior cingulate cortex, an area of the brain which, we were told, helps process complex and conflicting information. (Perhaps they need this extra grey matter to be able to cope with the internal contradictions of left-wing philosophy.)
This kind of story tends to attract chuckle-some comments.

In truth, without seeing the full scientific paper, we can't know whether the differences they found were really statistically solid, or whether they were voodoo or fishy. The authors, Geraint Rees and Ryota Kanai, have both published a lot of excellent neuroscience in the past, but that's no guarantee.

In fact, however, I suspect that the brain is just the wrong place to look if you're interested in politics, because most political views don't originate in the individual brain, they originate in the wider culture and are absorbed and regurgitated without much thought. This is a real shame, because all of us, left or right, have a brain, and it's really quite nifty:

But when it comes to politics we generally don't use it. The brain is a powerful organ designed to help you deal with reality in all its complexity. For a lot of people, politics doesn't take place there, it happens in fairytale kingdoms populated by evil monsters, foolish jesters, and brave knights.

Given that the characters in this story are mindless stereotypes, there's no need for empathy. Because the plot comes fully-formed from TV or a newspaper, there's no need for original ideas. Because everything is either obviously right or obviously wrong, there's not much reasoning required. And so on. Which is why this happens amongst other things.

I don't think individual personality is very important in determining which political narratives and values you adopt: your family background, job, and position in society are much more important.

Where individual differences matter, I think, is in deciding how "conservative" or "radical" you are within whatever party you find yourself. Not in the sense of left or right, but in terms of how keen you are on grand ideas and big changes, as opposed to cautious, boring pragmatism.

In this sense, there are conservative liberals (e.g. Obama) and radical conservatives (e.g. Palin), and that's the kind of thing I'd be looking for if I were trying to find political differences in the brain.

Links: If right wingers have bigger amygdalae, does that mean patient SM, the woman with no amygdalae at all, must be a communist? Then again, Neuroskeptic readers may remember that the brain itself is a communist...

XMRV - Innocent on All Counts?

A bombshell has just gone off in the continuing debate over XMRV, the virus that may or may not cause chronic fatigue syndrome. Actually, 4 bombshells.

A set of papers out today in Retrovirology (1,2,3,4) claim that many previous studies claiming to have found the virus haven't actually been detecting XMRV at all.

Here's the rub. XMRV is a retrovirus, a class of bugs that includes HIV. Retroviruses are composed of RNA, but they can insert themselves into the genetic material of host cells as DNA. This is how they reproduce: once their DNA is part of the host cell's chromosomes, that cell ends up making more copies of the virus.

But there are lots of retroviruses out there, and others existed in the past that are now extinct. So bits of retroviral DNA are scattered throughout the genomes of animals. These are called endogenous retroviruses (ERVs).

XMRV is extremely similar to certain ERVs found in the DNA of mice. And mice are the most popular laboratory mammals in the world. So you can see the potential problem: laboratories all over the world are full of mice, but mouse DNA might show up as "XMRV" DNA on PCR tests.

Wary virologists take precautions against this by checking specifically for mouse DNA. But most mouse-contamination tests are targeted at mouse mitochondrial DNA (mtDNA). In theory, a test for mouse mtDNA is all you need, because mtDNA is found in all mouse cells. In theory.

Now the four papers (or are they the Four Horsemen?) argue, in a nutshell, that mouse DNA shows up as "XMRV" on most of the popular tests that have been used in the past, that mouse contamination is very common - even some of the test kits are affected! - and that tests for mouse mtDNA are not good enough to detect the problem.

  • Hue et al say that "Taqman PCR primers previously described as XMRV-specific can amplify common murine ERV sequences from mouse suggesting that mouse DNA can contaminate patient samples and confound specific XMRV detection." They go on to show that some human samples previously reported as infected with XMRV are actually infected with a hybrid of XMRV and a mouse ERV, which we know can't infect humans.
  • Sato et al report that PCR testing kits from Invitrogen, a leading biotech company, are contaminated with mouse genes including an ERV almost identical to XMRV, and that this shows up as a false positive using commonly used PCR primers "specific to XMRV".
  • Oakes et al say that in 112 CFS patients and 36 healthy controls, they detected "XMRV" in some samples, but all of these samples were likely contaminated with mouse DNA because "all samples that tested positive for XMRV and/or MLV DNA were also positive for the highly abundant IAP long terminal repeat [found only in mice] and most were positive for murine mitochondrial cytochrome oxidase sequences [found only in mice]"
  • Robinson et al agree with Oakes et al: they found "XMRV" in some human samples, in this case prostate cancer cells, but they then found that all of the "infected" samples were contaminated with mouse DNA. They recommend that in future, samples should be tested for mouse genes such as the IAP long terminal repeat or cytochrome oxidase, and that researchers should not rely on tests for mouse mtDNA.
They're all open-access so everyone can take a peek. For another overview see this summary published alongside them in Retrovirology.

I lack the technical knowledge to evaluate these claims; no doubt plenty of people will be rushing to do that before long. (Update: The excellent virologyblog has a more technical discussion of these studies.) But there are a couple of things to bear in mind.

Firstly, these papers cast doubt on tests using PCR to detect XMRV DNA. However, they don't have anything to say about studies which have looked for antibodies against XMRV in human blood, at least not directly. There haven't been many of these, but the paper which started the whole story, Lombardi et al (2009), did look for, and found, anti-XMRV immunity, and also used various other methods to support the idea that XMRV is present in humans. So this isn't an "instant knock-out" of the XMRV theory, although it's certainly a serious blow.

Secondly, if the 'mouse theory' is true, it has serious implications for the idea that XMRV causes chronic fatigue syndrome and also for the older idea that it's linked to prostate cancer. But it still leaves a mystery: why were the samples from CFS or prostate cancer patients more likely to be contaminated with mouse DNA than the samples from healthy controls?

Robert A Smith (2010). Contamination of clinical specimens with MLV-encoding nucleic acids: implications for XMRV and other candidate human retroviruses. Retrovirology. DOI: 10.1186/1742-4690-7-112

Wikileaks: A Conversation

"Wikileaks is great. It lets people leak stuff."

"Hang on, so you're saying that no-one could leak stuff before? They invented it?"

"Well, no, but they brought leaking to the masses. Sure, people could post documents to the press before, but now anyone in the world can access the leaks!"

"Great, but isn't that just the internet that did that? If it weren't for Wikileaks, people could just upload their leaks to a blog. Or email them to 50 newspapers. Or put them on the torrents. Or start their own site. If it's good, it would go viral, and be impossible to take down. Just like Wikileaks, with all their mirrors, except even more secure, because there'd be literally no-one to arrest or cut off funding to."

"OK, but Wikileaks is a brand. It's not about the technical stuff - it's the message. Like one of their wallpapers says, they're synonymous with free speech."

"So you think it's a good thing that one organization has become synonymous with the whole process of leaking? With the whole concept of openness? What will happen to the idea of free speech, then, if that brand image suddenly gets tarnished - like, say, if their founder and figurehead gets convicted of a serious crime, or..."

"He's innocent! Justice for Julian!"

"Quite possibly, but why do you care? Is he a personal friend?"

"It's an attack on free speech!"

"So you agree that one man has become synonymous with free speech? Doesn't that bother you?"

"Erm... well. Look, fundamentally, we need Wikileaks. Before, there was no centralized system for leaking. Anyone could do it. It was a mess! Wikileaks put everything in one place, and put a committee of experts in a position to decide what was worth leaking and what wasn't. It brought much-needed efficiency and respectability to the idea of leaking. Before Wikileaks, it was anarchy. They're like... the government."

"..."

Edit: See also The Last Psychiatrist's take.

Online Comments: It's Not You, It's Them

Last week I was at a discussion about New Media, and someone mentioned that they'd been put off from writing content online because of a comment on one of their articles accusing them of being "stupid".

I found this surprising - not the comment, but that anyone would take it so personally. It's the internet. You will get called names. Everyone does. It doesn't mean there's anything wrong with you.

I suspect this is a generational issue. People who 'grew up online' know this already - Penny Arcade explained it years ago.

The sad fact is that there are millions of people whose idea of fun is to find people they disagree with, and mock them. And they're right, it can be fun - why else do you think people like Jon Stewart are so popular? - but that's all it is, entertainment. If you're on the receiving end, don't take it seriously.

If you write something online, and a lot of people read it, you will get slammed. Someone, somewhere, will disagree with you and they'll tell you so, in no uncertain terms. This is true whatever you write about, but some topics are like a big red rag to the herds of bulls out there.

Just to name a few, if you say anything vaguely related to climate change, religion, health, the economy, feminism or race, you might as well be holding a placard with a big arrow pointing down at you and "Sling Mud Here" on it.

The point is - it's them, not you. They are not interested in you, they don't know you, it's not you. True, they might tailor their insults a bit; if you're a young woman you might be, say, a "stupid girl" where a man would merely get called an "idiot". But this doesn't mean that the attacks are a reflection on you in any way. You just happen to be the one in the line of fire.

What do you do about this? Nothing.

Trying to enter into a serious debate is pointless. Insulting them back can be fun, just remember that if you find it fun, you've become one of them: "he who stares too long into the abyss...", etc. Complaining to the moderators might help, but unless the site has a rock solid zero-tolerance-for-fuckwads policy, probably not. Where the blight has taken root, like Comment is Free, I'd not waste your time complaining. Just ignore it and carry on.

The most important thing is not to take it personally. Do not get offended. Do not care. Because no-one else cares. Especially the people who wrote the comments. They presumably care about whatever "issue" prompted their attack, but they don't care about you. If anything, you should be pleased, because on the internet, the only stuff that doesn't attract stupid comments is the stuff that no-one reads.

I've heard these attacks referred to as "policing" existing hierarchies or "silencing" certain types of people. This seems to me to be granting them far more respect than they deserve. With the actual police, if you break the rules, they will physically arrest you. They have power. Internet trolls don't: if they succeed in policing or silencing anybody, it's because their targets let them boss them around. They're nobody; they're not your problem.

If you can't help being offended by such comments, don't read them, but ideally you shouldn't need to resort to that. For one thing, it means you miss the sensible comments (and there's always a few). But fundamentally, you shouldn't need to do this, because you really shouldn't care what some anonymous joker from the depths of the internet thinks about you.

The Tree of Science

How do you know whether a scientific idea is a good one or not?


The only sure way is to study it in detail and know all the technical ins and outs. But good ideas and bad ideas behave differently over time, and this can provide clues as to which ones are solid; useful if you're a non-expert trying to evaluate a field, or a junior researcher looking for a career.

Today's ideas are the basis for tomorrow's experiments. A good idea will lead to experiments which provide interesting results, generating new ideas, which will lead to more experiments, and so on.

Before long, it will be taken for granted that it's true, because so many successful studies assumed it was. The mark of a really good idea is not that it's always being tested and found to be true; it's that it's an unstated assumption of studies which could only work if it were true. Good ideas grow onwards and upwards, in an expanding tree, with each exciting new discovery becoming the boring background of the next generation.

Astronomers don't go around testing whether light travels at a finite speed as opposed to an infinite one; rather, if it were infinite, their whole set-up would fail.

Bad ideas generate experiments too, but they don't work out. The assumptions are wrong. You try to explain why something happens, and you find that it doesn't happen at all. Or you come up with an "explanation", but next time, someone comes along and finds evidence suggesting the "true" explanation is the exact opposite.

Unfortunately, some bad ideas stick around, for political or historical reasons or just because people are lazy. What tends to happen is that these ideas are, ironically, more "productive" than good ideas: they are always giving rise to new hypotheses. It's just that these lines of research peter out eventually, meaning that new ones have to take their place.

As an example of a bad idea, take the theory that "vaccines cause autism". This hypothesis is, in itself, impossible to test: it's too vague. Which vaccines? How do they cause autism? What kind of autism? In which people? How often?

The basic idea that some vaccines, somewhere, somehow, cause some autism, has been very productive. It's given rise to a great many, testable, ideas. But every one which has been tested has proven false.

First there was the idea that the MMR vaccine causes autism, linked to a "leaky gut" or "autistic enterocolitis". It doesn't, and it's not linked to that. Then along came the idea that actually it's mercury preservatives in vaccines that cause autism. They don't. No problem - maybe it's aluminium? Or maybe it's just the Hep B vaccine? And so on.

At every turn, it's back to square one after a few years, and a new idea is proposed. "We know this is true; now we just need to work out why and how...". Except that turns out to be tricky. Hmm. Maybe, if you keep ending up back at square one, you ought to find a new square to start from.

Israel and Palestine are Both Fighting Back...?

There are three basic schools of thought on the Israel / Palestine thing.

  • Those evil Israelis are out to destroy Palestine, and the Palestinians are just fighting back.
  • Those evil Palestinians are out to destroy Israel, and the Israelis are just fighting back.
  • It's a cycle of violence, where both sides are fighting back against the other.
Which one you subscribe to depends mostly on where you were born. I'm not aware of many people who've changed their minds on this issue, perhaps because doing so would require a study of the last 2,500 years of history, religion and politics.

Wouldn't it be handy if science could provide an answer? According to the authors of a new paper in Proceedings of the National Academy of Sciences, the "cycle" school is right: both sides are fighting back against the other: Both sides retaliate in the Israeli-Palestinian conflict.

The authors (from Switzerland, Israel and the USA) took data on daily fatalities on both sides, and also of daily launches of Palestinian "Qassam" rockets at Israel. The data run from 2001, the start of the current round of unpleasantness, to late 2008, the Gaza War.

They looked to see whether the number of events that happened on a certain day predicted the number of events caused by the other side on the following days, i.e. whether a Palestinian death caused the Palestinians to retaliate by firing more rockets and killing more Israelis, and vice versa.

What happened? They found that both sides were more likely to launch attacks on the days following a death on their own side. The exception to this rule was that Israel did not noticeably retaliate against Qassam launches. This is perhaps because Qassams are so ineffective: out of 3,645 recorded launches, they killed 15 people.

These graphs show the number of "extra" actions on the days following an event, averaged over the whole 8 years, according to a statistical method called the Impulse Response Function. Note that the absolute size of the effects is larger for the Israeli retaliations (the Y axis is bigger); there were a total of 4,874 Palestinian fatalities and 1,062 Israeli fatalities.

The authors then used another method called Vector Autoregression to discover more about the relationship. In theory, this method controls for the past history of actions by a given side, so that it reveals the number of actions independently caused by the opposing side.
the number of Qassams fired increases by 6% on the first day after a single killing of a Palestinian by Israel; the probability of any Qassams being fired increases by 11%; and the probability of any Israelis being killed by Palestinians increases by 10%. Conversely, 1 day after the killing of a single Israeli by Palestinians, the number of Palestinians killed by Israel increases by 9%, and the probability of any Palestinians being killed increases by 20%

....retaliation accounts for a larger fraction of Palestinian compared with Israeli aggression: in the levels specification, 10% of all Qassam rockets can be attributed to prior Israeli attacks on Palestinians, but only 4% of killings of Palestinians by Israel can be attributed to prior Palestinian attacks on Israel.... 6% of all days on which Palestinians attack Israel with rockets, and 5% of all days on which they attack by killing Israelis, can be attributed to retaliation; in contrast, this is true for only 2% of all days on which Israel kills Palestinians.
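For readers curious what this kind of analysis looks like in practice, here is a minimal sketch in Python. This is not the authors' code: the data are synthetic, the column names are made up, and statsmodels' standard VAR and impulse-response tools simply stand in for whatever implementation the paper used.

```python
# Minimal sketch of a VAR + impulse response analysis on two daily series.
# Synthetic data only - illustrative, not the paper's dataset or code.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n_days = 2000

# Fake daily counts: each side's actions depend weakly on the other side's
# actions on the previous day, plus noise.
side_a = np.zeros(n_days)
side_b = np.zeros(n_days)
for t in range(1, n_days):
    side_a[t] = max(0.0, 0.2 * side_b[t - 1] + rng.poisson(1))
    side_b[t] = max(0.0, 0.3 * side_a[t - 1] + rng.poisson(2))

df = pd.DataFrame({"side_a_actions": side_a, "side_b_actions": side_b})

model = VAR(df)
results = model.fit(maxlags=14, ic="aic")   # let AIC pick the lag length

# Impulse response functions: the estimated effect of a one-unit shock in
# one series on both series over the following 10 days.
irf = results.irf(10)
print(irf.irfs.shape)   # (11, 2, 2): period x responding var x shocked var
irf.plot()              # requires matplotlib
```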
What are we to make of this? This is a good paper as far as it goes, and it casts doubt on earlier analyses finding that Israel is retaliating against Palestinians but not vice versa. However, the inherent problem with all of this research (beyond the fact that it's all based on correlations and can only indirectly imply causation) is that it focuses on individual acts of violence. The authors say, citing surveys, that
Over one half of Israelis and three quarters of Palestinians think the other side seeks to take over their land. When accounting for their own acts of aggression, Israelis often claim to be merely responding to Palestinian violence, and Palestinians often see themselves as simply reacting to Israeli violence.
But I don't think many Israelis would argue that the IDF only kills individual Palestinians as a reflex reaction to a particular attack. They're claiming that the whole conflict is a defensive one, that the Palestinians are the aggressors, but that doesn't rule out their taking the initiative on a tactical level e.g. in destroying Palestinian military capabilities before they have a chance to attack. And vice versa on the other side.

WW2 was a war of aggression by the Axis powers, but that doesn't mean that the Allies only killed Axis soldiers after they'd attacked a certain place. The Allies were on the offensive for the second half of the war, and eventually invaded the Axis's own homelands, but it was still a defensive war, because the Axis were responsible for it.

For Israel and for Palestine, the other guys are to blame for the whole thing. Who's right, if anyone, is fundamentally a historical, political and ethical question, that can't be answered by looking at day-to-day variations in who's shooting when.

Comment Policy: Please only comment if you've got something to say about this paper, or related research. Comments that are just making the case for or against Israel will get deleted.

Haushofer J, Biletzki A, & Kanwisher N (2010). Both sides retaliate in the Israeli-Palestinian conflict. Proceedings of the National Academy of Sciences of the United States of America. PMID: 20921415

How To Sell An Idea

You've got an idea: a new way of doing things; a change; a paradigm shift. It might work, it might be no better than what we've got already, or it might end up being a disaster.

The honest way to present your proposal would be to admit its novelty, and hence the uncertainty: this is a new idea I had, I can't promise anything, but here are my reasons for thinking it's worth a try, here are the likely costs and benefits, here are the alternatives.

However, let's suppose you don't want to do that. That's hard work, and if your idea is crap, people could tell. How else could you convince them? By making it seem as though it's not a new idea at all.

You could dress your idea up as:

  • the glorious past. Your idea is nothing more than how we did things back in the golden age, when everything was great. For some reason, people strayed from the true path, and things went bad. We should go back to the good old days. It worked then, so it'll work now. You'll use words like: restoring, reviving, regaining, renewing... "re" is your friend.
  • the next step. Your idea is just the logical progression of what we're already doing. Things used to be bad, and then they started to change, and get better. Let's make them even better, by doing more of the same. It's inevitable, anyway: you can't stop history. You'll use words like: progress, forward, advance, build, grow...
  • catching up. You're just saying we should bring stuff into line with the way things are done elsewhere, which as we know, is working well. It's not even a matter of moving forward, so much as keeping up. It would be weird not to change. We don't want to be dinosaurs. You'll use words like: modernization, rationalization, reform...
  • keeping things the same. Things are fine right now, and don't need improving. But in order for things to stay great, we must adapt to changing circumstances, so we'll have to make a few adjustments, but don't worry, fundamentally things are going to stay just as they are. You'll use words like: preserving, maintaining, protecting, upholding, strengthening...
The point in every case being to make an innovation seem like it's not one. New means untested, and uncertain, and risky. No-one likes that. Passing off ideas as already proven is a way of gaining acceptance for ideas that wouldn't stand up on their own merits. I'm sure I don't need to point out that this trick is a mainstay of politicians, ideologues and managers everywhere.

Of course, there are plenty of changes that really are these things, to various degrees. Sometimes the past was glorious, relatively speaking (France 1942 springs to mind); sometimes we do need to catch up.

But every new idea still has an element of risk. Nothing has ever been tried and tested in the exact circumstances that we face now, because those circumstances have never existed before. Just because it worked before, or elsewhere, in a situation that we think is similar, is no guarantee. There are only degrees of certainty.

This doesn't mean we can't decide what to do, or that we shouldn't change anything. Not changing things is a plan of action in itself, anyway. The point is that we ought to be willing to try stuff that might not work, our guide to what's likely to happen being the evidence on what's worked before, critically appraised. "I don't know" is not a dirty phrase.

"Koran Burning"

According to the BBC:

Koran protests sweep Afghanistan... Thousands of protesters have taken to the streets across Afghanistan... Three people were shot when a protest near a Nato base in the north-east of the country turned violent.
Wow. That's a lot of fuss about, literally, nothing - the Koran burning hasn't happened. So what are they angry about? The "Koran Burning" - the mere idea of it. That has happened, of course - it's been all over the news.

Why? Well, obviously, it's a big deal. People are getting shot protesting about it in Afghanistan. It's news, so of course the media want to talk about it. But all they're talking about is themselves: the news is that everyone is talking about the news which is that everyone is talking about...

A week ago no-one had heard of Pastor Jones. The only way he could become newsworthy is if he did something important. But what he was proposing to do was not, in itself, important: he was going to burn a Koran in front of a handful of like-minded people.

No-one would have cared about that, because the only people who'd have known about it would have been the participants. Muslims wouldn't have cared, because they would never have heard about it. "Someone You've Never Heard Of Does Something" - not much of a headline.

But as soon as it became news, it was news. Once he'd appeared on CNN, say, every other news outlet was naturally going to cover the story because by then people did care. If something's on CNN, it's news, by definition. Clever, eh?


What's odd is that Jones actually announced his plans way back in July; no-one took much notice at the time. Google Trends shows that interest began to build only in late August, peaking on August 22nd, but then falling off almost to zero.

What triggered the first peak? It seems to have been the decision of the local fire department to deny a permit for the holy book bonfire, on August 18th. (There were just 6 English-language news hits between the 1st and the 17th.)

It all kicked off when the Associated Press reported on the fire department's decision on August 18th and was quickly followed up by everyone else; the AP credited the story to the local paper The Gainesville Sun, which covered the story on the same day.

But in their original article, the Sun wrote that Pastor Jones had already made "international headlines" over the event. Indeed there were a number of articles about it in late July following Jones's original Facebook announcement. But interest then disappeared - there was virtually nothing about it in the first half of August, remember.

So there was, it seems, nothing inevitable about this story going global. It had a chance to become a big deal in late July - and it didn't. It had another shot in mid-August, and it got a bit of press that time, but then it all petered out.

Only this week has the story become massive. US commander in Afghanistan General Petraeus spoke out on September 6th; ironically, just before the story finally exploded, since as you can see on the Google Trends above, searches were basically zero up until September 7th when they went through the roof.

So the "Koran Burning" story had three chances to become front-page global news and it only succeeded on the third try. Why? The easy answer is that it's an immediate issue now, because the burning is planned for 11th September - tomorrow. But I wonder if that's one of those post hoc explanations that makes whatever random stuff that happened seem inevitable in retrospect.

The whole story is newsworthy only because it's news, remember. The more attention it gets, the more it attracts. Presumably, therefore, there's a certain critical mass, the famous Tipping Point, after which it's unstoppable. This happened around September 6th, and not in late July or mid August.

But there's a random factor: any given news outlet that might run the story might decide not to; maybe it doesn't have space because something more important happened, or because the religion correspondent was off sick that day, etc. Whether a story reaches the critical mass is down to luck, in other words.

The decision of a single journalist on the 5th or the 6th might well have been what finally tipped it.
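To make the "critical mass plus luck" point concrete, here is a toy branching-process simulation (all numbers invented; this is not a model of the actual coverage): each outlet that runs a story prompts a handful of others to consider it, and each of those runs it with some probability. Below the tipping point the story almost always dies; around and just above it, near-identical starting conditions sometimes fizzle and sometimes explode.

```python
# Toy "tipping point" simulation: one outlet runs a story; every outlet that
# runs it prompts `contacts` others to consider it, each of which runs it
# with probability p. All numbers are invented for illustration.
import random

def run_story(p, contacts=3, max_outlets=10_000):
    covered = 1    # outlets that have run the story so far
    frontier = 1   # outlets that ran it in the latest "generation"
    while frontier and covered < max_outlets:
        new = sum(1 for _ in range(frontier * contacts) if random.random() < p)
        covered += new
        frontier = new
    return covered

random.seed(1)
for p in (0.25, 0.33, 0.40):
    outcomes = [run_story(p) for _ in range(1000)]
    viral = sum(1 for c in outcomes if c >= 10_000)
    print(f"p = {p:.2f}: went 'viral' in {viral / 10:.1f}% of runs")
```

With three contacts per outlet, the tipping point sits around p = 1/3; just below it essentially nothing goes viral, while just above it some runs explode and others still peter out purely by chance.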

Marc Hauser's Scapegoat?

The dust is starting to settle after the Hauser-gate scandal which rocked psychology a couple of weeks back.

Harvard Professor Marc Hauser has been investigated by a faculty committee and the verdict was released on the 20th August: Hauser was "found solely responsible... for eight instances of scientific misconduct." He's taking a year's "leave", his future uncertain.

Unfortunately, there has been no official news on what exactly the misconduct was, and how much of Hauser's work is suspect. According to Harvard, only three publications were affected: a 2002 paper in Cognition, which has been retracted; a 2007 paper which has been "corrected" (see below), and another 2007 Science paper, which is still under discussion.

But what happened? Cognition editor Gerry Altmann writes that he was given access to some of the Harvard internal investigation. He concludes that Hauser simply invented some of the crucial data in the retracted 2002 paper.

Essentially, some monkeys were supposed to have been tested on two conditions, X and Y, and their responses were videotaped. The difference in the monkeys' behaviour between the two conditions was the scientifically interesting outcome.

In fact, the videos of the experiment showed them being tested only on condition X. There was no video evidence that condition Y was even tested. The "data" from condition Y, and by extension the differences, were, apparently, simply made up.

If this is true, it is, in Altmann's words, "the worst form of academic misconduct." As he says, it's not quite a smoking gun: maybe tapes of Y did exist, but they got lost somehow. However, this seems implausible. Had that been the case, Hauser would presumably have told Harvard so in his defence. Yet they found him guilty - and Hauser retracted the paper.

So it seems that either Hauser never tested the monkeys on condition Y at all, and just made up the data, or he did test them, saw that they weren't behaving the "right" way, deleted the videos... and just made up the data. Either way it's fraud.

Was this a one-off? The Cognition paper is the only one that's been retracted. But another 2007 paper was "replicated", with Hauser & a colleague recently writing:

In the original [2007] study by Hauser et al., we reported videotaped experiments on action perception with free ranging rhesus macaques living on the island of Cayo Santiago, Puerto Rico. It has been discovered that the video records and field notes collected by the researcher who performed the experiments (D. Glynn) are incomplete for two of the conditions.
Luckily, Hauser said, when he and a colleague went back to Puerto Rico and repeated the experiment, they found "the exact same pattern of results" as originally reported. Phew.

This note, however, was sent to the journal in July, several weeks before the scandal broke - back when Hauser's reputation was intact. Was this an attempt by Hauser to pin the blame on someone else - David Glynn, who worked as a research assistant in Hauser's lab for three years, and has since left academia?

As I wrote in my previous post:
Glynn was not an author on the only paper which has actually been retracted [the Cognition 2002 paper that Altmann refers to]... according to his resume, he didn't arrive in Hauser's lab until 2005.
Glynn cannot possibly have been involved in the retracted 2002 paper. And Harvard's investigation concluded that Hauser was "solely responsible", remember. So we're to believe that Hauser, guilty of misconduct, was himself an innocent victim of some entirely unrelated mischief in 2007 - but that it was all OK in the end, because when Hauser checked the data, it was fine.

Maybe that's what happened. I am not convinced.

Personally, if I were David Glynn, I would want to clear my name. He's left science, but still, a letter to a peer reviewed journal accuses him of having produced "incomplete video records and field notes", which is not a nice thing to say about someone.

Hmm. On August 19th, the Chronicle of Higher Education ran an article about the case, based on a leaked Harvard document. They say that "A copy of the document was provided to The Chronicle by a former research assistant in the lab who has since left psychology."

Hmm. Who could blame them for leaking it? It's worth remembering that it was a research assistant in Hauser's lab who originally blew the whistle on the whole deal, according to the Chronicle.

Apparently, what originally rang alarm bells was that Hauser appeared to be reporting monkey behaviours which had never happened, according to the video evidence. So at least in that case, there were videos, and it was the inconsistency between Hauser's data and the videos that drew attention. This is what makes me suspect that maybe there were videos and field notes in every case, and the "inconvenient" ones were deleted to try to hide the smoking gun. But that's just speculation.

What's clear is that science owes the whistle-blowing research assistant, whoever it is, a huge debt.

Hauser Of Cards

Update: Lots of stuff has happened since I wrote this post: see here for more.

A major scandal looks to be in progress involving Harvard Professor Marc Hauser, a psychologist and popular author whose research on the minds of chimpanzees and other primates is well-known and highly respected. The Boston Globe has the scoop and it's well worth a read (though you should avoid reading the comments if you react badly to stupid.)

Hauser's built his career on detailed studies of the cognitive abilities of non-human primates. He's generally argued that our closest relatives are smarter than people had previously believed, with major implications for evolutionary psychology. Now one of his papers has been retracted, another has been "corrected" and a third is under scrutiny. Hauser has also announced that he's taking a year off from his position at Harvard.

It's not clear what exactly is going on, but the problems seem to centre around videotapes of the monkeys that took part in Hauser's experiments. The story begins with a 2007 paper published in Proceedings of the Royal Society B. That paper has just been amended in a statement that appeared in the same journal last month:

In the original study by Hauser et al., we reported videotaped experiments on action perception with free ranging rhesus macaques living on the island of Cayo Santiago, Puerto Rico. It has been discovered that the video records and field notes collected by the researcher who performed the experiments (D. Glynn) are incomplete for two of the conditions.
The authors of the original paper were Hauser, David Glynn and Justin Wood. In the amendment, which is authored by Hauser and Wood i.e. not Glynn, they say that upon discovering the issues with Glynn's data, they went back to Puerto Rico, did the studies again, and confirmed that the original results were valid. Glynn left academia in 2007, to work for a Boston company, Innerscope Research, according to this online resume.

If that was the whole of the scandal it wouldn't be such a big deal, but according to the Boston Globe, that was just the start. David Glynn was also an author on a second paper which is now under scrutiny. It was published in Science in 2007, with the authors listed as Wood, Glynn, Brenda Phillips and Hauser.

However, crucially, Glynn was not an author on the only paper which has actually been retracted, "Rule learning by cotton-top tamarins". This appeared in the journal Cognition in 2002. The three authors were Hauser, Daniel Weiss and Gary Marcus. David Glynn wasn't mentioned in the acknowledgements section either, and according to his resume, he didn't arrive in Hauser's lab until 2005.

So the problem, whatever it is, is not limited to Glynn.

Nor was Glynn an author on the final paper mentioned in the Boston Globe, a 1995 article by Hauser, Kralik, Botto-Mahan, Garrett, and Oser. Note that the Globe doesn't say that this paper is formally under investigation, but rather, that it was mentioned in an interview by researcher Gordon G. Gallup who says that when he viewed the videotapes of the monkeys from that study, he didn't observe the behaviours which Hauser et al. said were present. Gallup is famous for his paper "Does Semen Have Antidepressant Properties?" in which he examined the question of whether semen... oh, guess.

The crucial issue for scientists is whether the problems are limited to the three papers that have so far been officially investigated or whether it goes further: that's an entirely open question right now.

In Summary: We don't know what is going on here and it would be premature to jump to conclusions. However, the only author who appears on all of the papers known to be under scrutiny, is Marc Hauser himself.

Hauser MD, Weiss D, & Marcus G (2002). Rule learning by cotton-top tamarins. Cognition, 86 (1). PMID: 12208654

Hauser MD, Glynn D, & Wood J (2007). Rhesus monkeys correctly read the goal-relevant gestures of a human agent. Proceedings. Biological sciences / The Royal Society, 274 (1620), 1913-8 PMID: 17540661

Wood JN, Glynn DD, Phillips BC, & Hauser MD (2007). The perception of rational, goal-directed action in nonhuman primates. Science (New York, N.Y.), 317 (5843), 1402-5 PMID: 17823353

Hauser MD, Kralik J, Botto-Mahan C, Garrett M, & Oser J (1995). Self-recognition in primates: phylogeny and the salience of species-typical features. Proceedings of the National Academy of Sciences of the United States of America, 92 (23), 10811-14 PMID: 7479889

Chronic Fatigue Syndrome in "not caused by single virus" shock!

Late last year, Science published a bombshell - Lombardi et al's Detection of an infectious retrovirus, XMRV, in blood cells of patients with chronic fatigue syndrome. This paper reported the presence of a recently-discovered virus in 67% of the blood samples from 101 people with chronic fatigue syndrome (CFS).

The question of whether people with CFS are suffering from an organic illness, or whether their condition is partially or entirely psychological in nature, is the Israel vs. Palestine of modern medicine - as a brief look at the Wikipedia talk pages will show. So when Lombardi et al linked CFS to xenotropic murine leukaemia virus-related virus (XMRV), they were hailed as heroes by some, less so by others. For some balanced coverage of this paper, see virology blog. Everyone agreed though that Lombardi et al was, as the saying goes, "important if true"...

But it wasn't, at least not everywhere, according to a paper out today in PLoS ONE: Erlwein et al's Failure to Detect the Novel Retrovirus XMRV in Chronic Fatigue Syndrome. The findings are all there in the title - unlike Lombardi et al, these researchers didn't find XMRV in any of the blood samples from their 186 CFS patients.

Still, before people start proclaiming that the original finding has been "debunked", or decrying these results as flawed, some things to bear in mind...

This was a different country. Erlwein et al used patients attending the CFS clinic at King’s College Hospital, London, England. The patients in the original study were drawn from various parts of the USA. So the new results don't mean that the original findings were wrong, merely that they don't apply everywhere. Notably, XMRV has previously been detected in prostate cancer cells from American patients, but not European ones, so geographic differences seem to be at work. So maybe XMRV does cause CFS, it's just that the virus doesn't exist in Europe, for whatever reason - but bear in mind that even the original study never showed causation, only a correlation. There are many viruses that infect people in certain parts of the world and don't cause illness.

On the other hand, it was a similar group of patients in terms of symptoms: Diagnosing CFS can be difficult, as there are no biological tests to confirm the condition, but Erlwein et al say that

Both studies use the widely accepted 1994 clinical case definition of CFS. Lombardi et al. reported that their cases ‘‘presented with severe disability’’ and we provide quantifiable evidence confirming high levels of disability in our subjects. Our subjects were also typical of those seen in secondary and tertiary care in other centres.
But the first study selected patients with "immunological abnormalities", although we're given few details...
These are patients that have been seen in private medical practices, and their diagnosis of CFS is based upon prolonged disabling fatigue and the presence of cognitive deficits and reproducible immunological abnormalities. These included but were not limited to perturbations of the 2-5A synthetase/RNase L antiviral pathway, low natural killer cell cytotoxicity (as measured by standard diagnostic assays), and elevated cytokines particularly interleukin-6 and interleukin-8.
The biological methods were similar: Both studies used a standard technique called nested PCR. (Lombardi et al also used various other methods, but their headline finding of XMRV in 67% of CFS patients vs just 4% of healthy people came from nested PCR.) PCR is a way of greatly increasing the amount of a certain sequence of DNA in a sample. If there's even a little bit to start with, you end up with lots. If there's none, you end up with none. It's easy to tell the difference between lots and none.
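As a rough illustration of why PCR behaves like an all-or-nothing test: under idealized conditions the target sequence doubles every cycle, so even a single starting copy becomes tens of billions of copies after a typical 30-40 cycle run, while zero copies stays zero. (Real reactions are less than perfectly efficient, but the point stands.)

```python
# Idealized PCR arithmetic: copies after n cycles = starting copies * 2**n.
# Real efficiency is below 100%, so treat this as an upper-bound illustration.
for start in (0, 1, 10):
    after_35 = start * 2 ** 35
    print(f"{start} starting copies -> {after_35:.3e} copies after 35 cycles")
```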

But there were some differences. The first study only looked at a certain kind of white blood cells, whereas the new study used DNA from whole blood. Also, the first study targeted a larger span of viral DNA - from 419 to 1154:
For identification of gag, 419F and 1154R were used as forward and reverse primers.
than the second study, which examined the section between positions 411 and 606. As a result, the primer sequences used - which determine exactly what DNA gets detected - were different. However, the authors of the new study claim that they would definitely have detected XMRV DNA if it had been there, because they used the same methods on control samples with the virus added, and got positive results...
The positive control was a dilution of a plasmid with a full-length XMRV (isolate VP62) insert, generously gifted by Dr R. Silverman.
Silverman was one of the authors of the original paper - so hopefully, both research teams were studying the same virus. But (although I'm no virologist) it seems possible that the new study might have been unable to detect XMRV if the DNA sequence of the virus from British patients differed in certain key ways - the whole point about nested PCR is that it's extremely specific.

Finally, there are stories behind these papers. The first study, that suggested that XMRV causes CFS, was conducted by the Whittemore Peterson Institute, who firmly believe that CFS is an organic disorder and who are now offering XMRV diagnostic tests to CFS patients. By contrast, the authors of the new study include Simon Wessely, a psychiatrist. Wessely is the most famous (or notorious) advocate of the idea that psychological factors are the key to CFS; he believes that it should be treated with psychotherapy.

I'm sure we'll be hearing a lot more about XMRV in the coming months, so stay tuned.

Erlwein, O., Kaye, S., McClure, M., Weber, J., Wills, G., Collier, D., Wessely, S., & Cleare, A. (2010). Failure to Detect the Novel Retrovirus XMRV in Chronic Fatigue Syndrome. PLoS ONE, 5 (1). DOI: 10.1371/journal.pone.0008519

Lombardi VC, Ruscetti FW, Das Gupta J, Pfost MA, Hagen KS, Peterson DL, Ruscetti SK, Bagni RK, Petrow-Sadowski C, Gold B, Dean M, Silverman RH, & Mikovits JA (2009). Detection of an infectious retrovirus, XMRV, in blood cells of patients with chronic fatigue syndrome. Science (New York, N.Y.), 326 (5952), 585-9 PMID: 19815723

Statistically

"Statistically, airplane travel is safer than driving..." "Statistically, you're more likely to be struck by lightning than to..." "Statistically, the benefits outweigh the risks..."

What does statistically mean in sentences like this? Strictly speaking, nothing at all. If airplane travel is safer than driving, then that's just a fact. (It is true on an hour-by-hour basis). There's no statistically about it. A fact can't be somehow statistically true, but not really true. Indeed, if anything, it's the opposite: if there are statistics proving something, it's more likely to be true than if there aren't any.

But we often treat the word statistically as a qualifier, something that makes a statement less than really true. This is because psychologically, statistical truth is often different to, and less real than, other kinds of truth. As everyone knows, Joseph Stalin said that one death is a tragedy, but a million deaths is a statistic. Actually, Stalin didn't say that, but it's true. And if someone has a fear of flying, then all the statistics in the world probably won't change that. Emotions are innumerate.

*

Another reason why statistics feel less than real is that, by their very nature, they sometimes seem to conflict with everyday life. Statistics show that regular smoking, for example, greatly raises your risk of suffering from lung cancer, emphysema, heart disease and other serious illnesses. But it doesn't guarantee that you will get any of them, the risk is not 100%, so there will always be people who smoke a pack a day for fifty years and suffer no ill effects.

In fact, this is exactly what the statistics predict, but you still hear people referring to their grandfather who smoked like a chimney and lived to 95, as if this somehow cast doubt on the statistics. Statistically, global temperatures are rising, which predicts that some places will be unusually cold (although more will be unusually warm), but people still think that the fact that it's a bit chilly this year casts doubt on the fact of global warming.

*

Some people admit that they "don't believe in statistics". And even if we don't go that far, we're often a little skeptical. There are lies, damn lies, and statistics, we say. Someone wrote a book called How To Lie With Statistics. Few of us have read it, but we've all heard of it.

Sometimes, this is no more than an excuse to ignore evidence we don't like. It's not about all statistics, just the inconvenient ones. But there's also, I think, a genuine distrust of statistics per se. Partially, this reflects distrust towards the government and "officialdom", because most statistics nowadays come from official sources. But it's also because psychologically, statistical truth is just less real than other kinds of truth, as mentioned above.

*

I hope it's clear that I do believe in statistics, and so should you, all of them, all the time, unless there is a good reason to doubt a particular one. I've previously written about my doubts concerning mental health statistics, because there are specific reasons to think that these are flawed.

But in general, statistics are the best way we have of knowing important stuff. It is indeed possible to lie with statistics, but it's much easier to lie without them: there are more people in France than in China. Most people live to be at least 110 years old. Africa is richer than Europe. Those are not true. But statistics are how we know that.


Of Carts and Horses

Last week, I wrote about a paper finding that the mosquito repellent chemical, DEET, inhibits an important enzyme, cholinesterase. If DEET were toxic to humans, this finding might explain why.
But it isn't - tens of millions of people use DEET safely every year, and there's no reason to think that it is dangerous unless it's used completely inappropriately. That didn't stop this laboratory finding being widely reported as a cause for concern about the safety of DEET.

This is putting the cart before the horse. If you know that something happens, then it's appropriate to search for an explanation for it. If you have a phenomenon, then there must be a mechanism by which it occurs.

But this doesn't work in reverse: just because you have a plausible mechanism by which something could happen, doesn't mean that it does in fact happen. This is because there are always other mechanisms at work which you may not know about. And the effect of your mechanism may be trivial by comparison.

Caffeine can damage DNA under some conditions. Other things which damage DNA, like radiation, can cause cancer. But the clinical evidence is that, if anything, drinking coffee may protect against some kinds of cancer (previous post). There's a plausible mechanism by which coffee could cause cancer, but it doesn't.

Medicine has learned the hard way that while understanding mechanisms is important, it's no substitute for clinical trials. The whole philosophy of evidence-based medicine is that treatments should only be used when there is clinical evidence that they do in fact work.

Unfortunately, in other fields, the horse routinely finds itself behind the cart. An awful lot - perhaps most - of political debate consists of saying that if you do X, Y will happen, through some mechanism. If you legalize heroin, people will take more of it, because it'll be more available and cheaper. If you privatize public services, they'll improve, because competition will ensure that only the most efficient services survive. If you topple this dictator, the country will become a peaceful democracy, because people like peace and democracy. And so on.

These kinds of arguments sound good. And they invite opponents to respond in kind: actually, legalizing heroin is a good idea, because it will make taking it much safer by eliminating impurities and infections... And so the debate becomes a case of fantasizing about things that might happen, with the winner being the person whose fantasy sounds best.

If you want to know what will happen when you implement some policy, the only way of knowing is to look at other countries or other places which have already done it. If no-one else has ever done it, you are making a leap into the unknown. This is not necessarily a bad thing - there's a first time for everything. But it means that "We don't know" should be heard much more often in politics.

93% of Surveys are Meaningless

Over at Bad Science, Ben Goldacre decries an article about a spurious "study", lifted straight from a corporate press-release, in his own newspaper The Guardian:

On Monday we printed a news article about a “report” “published” by Nuffield Health, headlined “No sex please, we’re British and we’re lazier than ever”. “This is the damning conclusion of a major new report published today,” says the press release from Nuffield ... I asked Nuffield’s press office for a copy of the new report, but they refused, and explained that the material is all secret. The Guardian journalist can’t have read it either. I don’t really see how this “report” has been “published”, and in all honesty, I wonder whether it even exists, in any meaningful sense, outside of a press release.

Nuffield Health are the people who run private hospitals and clinics which you can’t afford....the Guardian gave free advertising to Nuffield, for their unpublished published “report”, which nobody even read, in exchange for 370 words of content. This is endemic, and it creeps me out.

The Telegraph also reprinted the press release; sorry, wrote an article drawing on the press-release amongst many other carefully-researched sources. The other papers probably did too; I'm too lazy to check.

For you see, the alleged study found that British people are monumentally slothful: 73% of couples said that they are "regularly" too tired to have sex while 64% of parents say that they are always "too tired" to play with their children, and so on.

Yes, according to Nuffield Health, only one in three British parents ever play with their own children. The rest are always too exhausted. It's a wonder they found 2,000 people who were awake enough to answer their survey - although, as Goldacre says, maybe they didn't.

This "study" is, obviously, bollocks. It serves only as advertising for Nuffield Health's network of fitness centers, the benefits of which are helpfully listed at the end of the press release. That newspapers regularly reproduce press releases because they can't afford to pay journalists to fill the space any other way is well known nowadays. This is thanks mostly to Nick Davies and his outstanding book Flat Earth News which revealed, in great detail, just how bad things have got.

But the fact that this advert was published in the Health section of The Guardian is more than just a symptom of the decline of British journalism. It also reflects the peculiarly British obsession with "surveys".

Even if the Nuffield data was fully published in a proper journal, and even if it had been a survey of 200,000 people, it would still be meaningless. Asking people whether they are lazy is not a good way of finding out whether they are, in fact, lazy. All it can tell you is whether people think of themselves as lazy, which is very different. If you wanted to prove that British people really were lazy and getting lazier, you would have to look at actual indicators of activity like, say, gym membership rates, or amateur sports team participation, or swimming pool use, or condom sales if you really think people are too tired to have sex, etc.

Yet surveys like this seem to be almost mandatory if you want to draw attention to your cause in Britain at present. You have to do one, and you have to massively over-interpret the results. The gay rights group Stonewall this week accused British football of being institutionally homophobic. Their basis for this claim was a survey of - guess - 2,000 football fans, finding, amongst other things, that

Only one in six fans said their club was working to tackle anti-gay abuse and 54% believed the Football Association, Premier League and Football League were not doing enough to tackle the issue.

This survey demonstrates, at best, that many football fans think British football is institutionally homophobic. It does not "Sadly demonstrate that football is institutionally homophobic", as a Stonewall spokesman said, unless you think that British football fans are infallible godlike beings.

I have nothing but sympathy for Stonewall, and they may well be right about homophobia in football. But their survey is meaningless. It's advertising, just like Nuffield Health's survey. Attentive Neuroskeptic readers will remember the case of "In The Face of Fear", yet another survey of about 2,000 people, claiming that Britain is in the grip of an epidemic of anxiety disorders (it's not) and serving as advertising for another well-meaning group, the Mental Health Foundation.

A large and growing proportion of British newspaper articles are essentially promotional material for some kind of company, charity, or activist organization. Honestly, newspapers should just go the whole hog and replace half their pages with paid adverts, and use the money earned to pay their journalists to actually do some journalism. There would only be half as much news, but it would at least be news.


Science, Journalism, and Bug Spray

Watch out! The BBC report that -

Deet bug repellent 'toxic worry'
While The Telegraph are even more concerned -
Insect repellent Deet is bad for your nerves, claim scientists
This is in reference to a new paper about the widely-used insect repellent DEET. The BBC, as usual, performed slightly better than the Telegraph here. They included quotes from two experts making it clear that the research in question was preliminary and in no way proves that DEET is dangerous to humans. But they still ran the headline implying that DEET could be "toxic", which is the only thing most people will remember about the article. As you'll see below, this is quite misleading.

DEET is an insect repellant, generally used to prevent mosquito bites. You spray it on your skin, clothes, mosquito nets, etc. If you've ever been to a tropical country, you'll probably remember it. It has a distinctive smell, it stings the eyes and throat, and, most distressingly, it dissolves plastics. My watch fell off in Thailand because DEET ate through the strap.

That aside, DEET is believed to be safe, so long as you spray it instead of drinking it. Hundreds of millions of people have used it for decades. And it works, which means it saves lives. Mosquitoes spread diseases like malaria, yellow fever, Dengue, and plenty more. They can all kill you. This is why any health professional will advise you to use mosquito-repellants, preferably DEET-based ones, when visiting risk areas.

So it would be massive news if DEET was found to be dangerous. But it hasn't been. What's been found is that, in animals and in test-tubes, DEET is a cholinesterase inhibitor. Cholinesterase is an enzyme which breaks down acetylcholine (ACh), a neurotransmitter. If you inhibit cholinesterase, ACh levels rapidly increase. This can cause problems because ACh is the transmitter that your nerves use to communicate with your muscles. As ACh builds up, your muscles don't stop contracting, and you suffer paralysis, until you can't breathe. This is how "nerve gas" works.

But we know DEET isn't a strong cholinesterase inhibitor, when used normally, because people don't get cholinergic effects after using it. The toxicity of cholinesterase inhibitors is acute. You get paralyzed, and suffer other symptoms like uncontrollable salivation, crying, vomiting, and incontinence. You'd know if this happened to you.

Cholinesterase inhibitors are not, as various media reports have said about DEET, "neurotoxic"; they do not cause "neural damage". They act on the nerves, but they do not damage the nerves. In fact, people with Alzheimer's take them (in low doses!), as do people with the nerve disease myasthenia gravis.

So the fact that DEET can act as a cholinesterase inhibitor in the lab changes nothing. It's still safe, at least until evidence comes along that it actually causes harm in people who use it. You can't show that something is harmful by doing an experiment showing how it could be harmful in theory.

To be fair, there is one cause for concern in the paper - in the experiments, DEET interacted with other cholinesterase inhibitors, leading to an amplified effect. That suggests that DEET could become toxic in combination with cholinesterase inhibitor insecticides, but again, the risk is theoretical.

The media should never have reported on this paper. The science itself is perfectly good, but the results are completely irrelevant to the average person who might want to use DEET. They are of interest only to biologists. If people decide not to use DEET on the basis of these reports, they are putting themselves in danger. Others have noted that journalists almost always report on laboratory experiments like these as if they were directly relevant to human health. They're not.

Appendix: In one of the articles, an expert says that "I also would guess that the actual concentration [of DEET] in the body is much lower than they had to use in the study to see an effect in the mouse tissues." But we don't have to guess, we can work it out. DEET had detectable effects in mammalian tissues at a concentration of 0.5 millimolar. Millimolar (mM) is a unit of concentration; a 1 mM solution contains about 0.19 grams of DEET per liter of water (the molar weight of DEET is 191 g/mol). The human body is about 60% water by weight. A person weighs, say, 75 kg, which means roughly 50 liters of water. That means that to achieve the level of DEET used in this study, you would need to absorb into your body about 0.5 x 0.19 x 50 ≈ 4.8 grams of DEET (assuming it was evenly distributed in your body water).

That's a huge amount. But maybe it's not completely impossible, bearing in mind that DEET might be absorbed through the skin? Is there any data on DEET levels in humans? Yes. This paper reports on the development of a way of measuring DEET in human blood. This method could detect DEET at levels from 1 ng/mL to 100 ng/mL. I assume that the upper limit was chosen because no-one ever gets more DEET than that. 100 ng / mL = 100 micrograms / L = 0.52 micromolar = 0.0005 millimolar. That's 1000-fold too low, and that's the upper limit.
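For anyone who wants to check that arithmetic, here it is as a short script (same figures as quoted above; the 75 kg / 50-litre body-water estimate is the rough one used in the text):

```python
# Back-of-the-envelope check of the DEET numbers discussed above.
MOLAR_MASS_DEET = 191.0   # g/mol
EFFECT_CONC_MM = 0.5      # millimolar concentration with detectable effects
BODY_WATER_L = 50.0       # rough body water for a ~75 kg person

# Grams of DEET needed to reach the effect concentration throughout body water
grams_per_litre = EFFECT_CONC_MM / 1000 * MOLAR_MASS_DEET   # ~0.096 g/L
grams_needed = grams_per_litre * BODY_WATER_L
print(f"DEET needed to reach {EFFECT_CONC_MM} mM: ~{grams_needed:.1f} g")

# Upper limit of the blood assay: 100 ng/mL = 1e-4 g/L
blood_g_per_litre = 100e-9 * 1000
blood_mM = blood_g_per_litre / MOLAR_MASS_DEET * 1000
print(f"100 ng/mL is ~{blood_mM:.5f} mM, "
      f"about {EFFECT_CONC_MM / blood_mM:.0f}-fold below the effect level")
```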

This was just a back-of-the-envelope calculation so please feel free to critique it, but, I find it reassuring.



Corbel, V., Stankiewicz, M., Pennetier, C., Fournier, D., Stojan, J., Girard, E., Dimitrov, M., Molgo, J., Hougard, J., & Lapied, B. (2009). Evidence for inhibition of cholinesterases in insect and mammalian nervous systems by the insect repellent deet. BMC Biology, 7 (1). DOI: 10.1186/1741-7007-7-47

 