
Amy Bishop, Neuroscientist Turned Killer

Over at Wired, Amy Wallace has a long but riveting article about Amy Bishop, the neuroscience professor who shot her colleagues at the University of Alabama last year, killing three.

It's a fascinating article because of the picture it paints of a killer and it's well worth the time to read. Yet it doesn't really answer the question posed in the title: "What Made This University Scientist Snap?"

Wallace notes the theory that Bishop snapped because she was denied tenure at the University, a serious blow to anyone's career and especially to someone who, apparently, believed she was destined for great things. However, she points out that the timing doesn't fit: Bishop was denied tenure several months before the shooting. And she shot at some of the faculty who voted in her favor, ruling out a simple "revenge" motive.

But even if Bishop had snapped the day after she found out about the tenure decision, what would that explain? Thousands of people are denied tenure every year. This has been going on for decades. No-one except Bishop has ever decided to pick up a gun in response.

Bishop had always displayed a streak of senseless violence; in 1986, she killed her 18-year-old brother, Seth, with a shotgun in the family kitchen. She was 21. The death was ruled an accident, but probably wasn't. It's not clear what it was, though: Bishop had no clear motive.

Amy had said something that upset her father. That morning they’d squabbled, and at about 11:30 am, Sam, a film professor at Northeastern University, left the family’s Victorian home to go shopping... Amy, 21, was in her bedroom upstairs. She was worried about “robbers,” she would later tell the police. So she loaded her father’s 12-gauge pump-action shotgun and accidentally discharged a round in her room. The blast struck a lamp and a mirror and blew a hole in the wall...

The gun, a Mossberg model 500A, holds multiple rounds and must be pumped after each discharge to chamber another shell. Bishop had loaded the gun with number-four lead shot. After firing the round into the wall, she could have put the weapon aside. Instead, she took it downstairs and walked into the kitchen. At some point, she pumped the gun, chambering another round.

...[her mother] told police she was at the sink and Seth was by the stove when Amy appeared. “I have a shell in the gun, and I don’t know how to unload it,” Judy told police her daughter said. Judy continued, “I told Amy not to point the gun at anybody. Amy turned toward her brother and the gun fired, hitting him.”

Years later Bishop, possibly with the help of her husband, sent a letter-bomb to a researcher who'd sacked her, Paul Rosenberg. Rosenberg avoided setting off the suspicious package and police disarmed it; Bishop was questioned, but never charged.

Wallace argues that Bishop's "eccentricity", or instability, was fairly evident to those who knew her but that in the environment of science, it went unquestioned because science is full of eccentrics.

I'm not sure this holds up. It's certainly true that science has more than its fair share of oddballs. The "mad scientist" trope is a stereotype, but it has had a basis in fact at least since Newton; many say that you can't be a great scientist and be entirely 'normal'.

But the problem with this, as a theory for why Bishop wasn't spotted sooner, is that she was spotted sooner - as unhinged, albeit not as a potential killer - by a number of people. Rosenberg sacked her, in 1993, on the grounds that her work was inadequate, and said that "Bishop just didn’t seem stable". And in 2009, the reason Bishop was denied tenure in Alabama was partially that one of her assessors referred to her as "crazy", more than once; she filed a complaint on that basis.

Bishop also published a bizarre paper in 2009 written by herself, her husband, and her three children, of "Cherokee Lab Systems", a company which was apparently nothing more than a fancy name for their house. There may be a lot of eccentrics in science, but that's really weird.

So I think that all of these attempts at an explanation fall short. Amy Bishop is a black swan; she is the first American professor to do what she did. Hundreds of thousands of scientists have been through the same academic system and only one ended up shooting their colleagues. If there is an explanation, it lies within Bishop herself.

Whether she was suffering from a diagnosable mental illness is unclear. Her lawyer has said so, but he would; it's her only defence. Maybe we'll learn more at the trial.

H/T: David Dobbs for linking to this.

When "Healthy Brains" Aren't

There's a lot of talk, much of it rather speculative, about "neuroethics" nowadays.

But there's one all too real ethical dilemma, a direct consequence of modern neuroscience, that gets very little attention. This is the problem of incidental findings on MRI scans.

An "incidental finding" is when you scan someone's brain for research purposes, and, unexpectedly, notice that something looks wrong with it. This is surprisingly common: estimates range from 2–8% of the general population. It will happen to you if you regularly use MRI or fMRI for research purposes, and when it does, it's a shock. Especially when the brain in question belongs to someone you know. Friends, family and colleagues are often the first to be recruited for MRI studies.

This is why it's vital to have a system in place for dealing with incidental findings. Any responsible MRI scanning centre will have one, and as a researcher you ought to be familiar with it. But what system is best?

Broadly speaking there are two extreme positions:

  1. Research scans are not designed for diagnosis, and 99% of MRI researchers are not qualified to make a diagnosis. What looks "abnormal" to Joe Neuroscientist BSc or even Dr Bob Psychiatrist is rarely a sign of illness, and likewise they can easily miss real diseases. So, we should ignore incidental findings, pretend the scan never happened, because for all clinical purposes, it didn't.
  2. You have to do whatever you can with an incidental finding. You have the scans, like it or not, and if you ignore them, you're putting lives at risk. No, they're not clinical scans, but they can still detect many diseases. So all scans should be examined by a qualified neuroradiologist, and any abnormalities which are possibly pathological should be followed up.
Neither of these extremes is very satisfactory. Ignoring incidental findings sounds nice and easy, until you actually have to do it, especially if it's your girlfriend's brain. On the other hand, to get every single scan properly checked by a neuroradiologist would be expensive and time-consuming. Also, it would effectively turn your study into a disease screening program - yet we know that screening programs can cause more harm than good, so this is not necessarily a good idea.

Most places adopt a middle-of-the-road approach. Scans aren't routinely checked by an expert, but if a researcher spots something weird, they can refer the scan to a qualified clinician to follow up. Almost always, there's no underlying disease. Even large, OMG-he-has-a-golf-ball-in-his-brain findings can be benign. But not always.

This is fine but it doesn't always work smoothly. The details are everything. Who's the go-to expert for your study, and what are their professional obligations? Are they checking your scan "in a personal capacity", or is this a formal clinical referral? What's their e-mail address? What format should you send the file in? If they're on holiday, who's the backup? At what point should you inform the volunteer about what's happening?

Like fire escapes, these things are incredibly boring, until the day when they're suddenly not.

A new paper from the University of California Irvine describes a computerized system that made it easy for researchers to refer scans to a neuroradiologist. A secure website was set up and publicized in the University's neuroscience community.

Suspect scans could be uploaded, in one of two common formats. They were then anonymized and automatically forwarded to the Department of Radiology for an expert opinion. Email notifications kept everyone up to date with the progress of each scan.
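
The paper describes the system at the workflow level rather than the code level, but the basic pipeline is easy to picture. Here is a minimal sketch of that kind of upload-anonymize-forward-notify flow in Python; all the function names, fields, and paths are hypothetical illustrations, not details of the UC Irvine system:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Referral:
    """One suspect scan working its way through the review pipeline."""
    scan_path: str
    researcher_email: str
    case_id: str = field(default_factory=lambda: uuid.uuid4().hex[:8])
    status: str = "submitted"

def anonymize(referral: Referral) -> str:
    # Strip identifying metadata and file the scan under an opaque case ID.
    # (A real system would also scrub DICOM headers, burned-in text, etc.)
    return f"/secure/anonymized/{referral.case_id}.nii.gz"

def notify(email: str, message: str) -> None:
    # Stand-in for the email notifications described in the paper.
    print(f"[email to {email}] {message}")

def submit_scan(scan_path: str, researcher_email: str, radiology_inbox: list) -> Referral:
    referral = Referral(scan_path, researcher_email)
    anon_path = anonymize(referral)
    radiology_inbox.append((referral.case_id, anon_path))
    referral.status = "awaiting radiology review"
    notify(researcher_email, f"Case {referral.case_id} forwarded to radiology.")
    return referral

# Example: a researcher uploads a suspect scan.
inbox = []
case = submit_scan("/data/sub-042_T1w.nii.gz", "researcher@example.edu", inbox)
print(case.status, len(inbox))
```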

This seems like a very good idea, partially because of the technical advantages, but also because of the "placebo effect" - the fact that there's an electronic system in place sends the message: we're serious about this, please use this system.

Out of about 5,000 research scans over 5 years, there were 27 referrals. Most were deemed benign... except one which turned out to be potentially very serious - suspected hydrocephalus, increased fluid pressure in the brain, which prompted an urgent referral to hospital for further tests.

There's no ideal solution to the problem of incidental findings, because by their very nature, research scans are kind of clinical and kind of not. But this system seems as good as any.

Cramer SC, Wu J, Hanson JA, Nouri S, Karnani D, Chuang TM, & Le V (2011). A system for addressing incidental findings in neuroimaging research. NeuroImage. PMID: 21224007

The Time Travelling Brain

What's the difference between walking down the street yesterday, and walking down the street tomorrow?

It's nothing to do with the walking, or the street: that's the same. When seems to be something external to the what, how, and where of the situation. But this creates a problem for neuroscientists.

We think we know how the brain could store the concept of "walking down the street" (or "walking" and "street"). Very roughly, simple sensory impressions are thought to get built up into more and more complex combinations, and this happens as you move away from the brain's primary visual cortex (V1) and down the so-called ventral visual stream.

In area V1, cells respond mostly to nothing more complex than position and the orientations of straight lines: / or \ or _ , etc. Whereas once you get to the temporal lobe, far down the stream, you have cells that respond to Jennifer Aniston. In between are progressively more complex collections of features.

Even if the details are wrong, the fact that complex objects are composed of simpler parts, and ultimately of raw sensations, means that our ability to process complex scenes doesn't seem too mysterious, given that we have senses.

But the fact that we can take any given scene, and effortlessly think of it as either "past", "present", or "future", is puzzling under this view because, as I said, the scene itself is the same in all cases. And it's not as if we have a sense devoted to time: the only time we're ever directly aware of, is "right now".

Swedish neuroscientists Nyberg et al used fMRI to measure brain activity associated with "mental time travel": Consciousness of subjective time in the brain. They scanned volunteers and asked them to imagine walking between two points, in 4 different situations: past, present, future, or remembered (as opposed to imagined in the past). This short walk was one which they'd really done, many times.

What happened?
Compared to a control task of doing mental arithmetic, both remembering and imagining the walk activated numerous brain areas and there was very strong overlap between the two conditions. No big surprise there.

The crucial contrast was between remembering, past imagining and future imagining, vs. imagining in the present. This revealed a rather cute little blob:

This small nugget of the left parietal cortex represents an area where the brain is more active when thinking about times other than the present, relative to thinking about the same thing, but right now. They note that this area "partly overlaps a left angular region shown to be recruited during both past and future thinking and with parietal regions implicated in self-projection in past, present, or future time."
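
For readers who haven't worked with fMRI, a "contrast" like this is just a weighted comparison of the per-condition activation estimates at each voxel. Here's a minimal numerical sketch of the idea; the beta values and condition labels are made up for illustration and are not the authors' actual analysis code:

```python
import numpy as np

# Illustrative GLM betas for a single voxel, one per condition
# (made-up numbers; the real analysis is done voxel-by-voxel across the brain).
conditions = ["remember_past", "imagine_past", "imagine_future", "imagine_present"]
betas = np.array([1.2, 1.1, 1.3, 0.4])

# "Non-present vs. present" contrast: average of the three time-travel
# conditions minus the present-tense condition.
contrast = np.array([1/3, 1/3, 1/3, -1.0])

effect = contrast @ betas
print(f"Contrast effect at this voxel: {effect:.2f}")
# A reliably positive effect across subjects is what produces the
# parietal "blob" described above.
```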

So what? This is a nice study, but like most fMRI it doesn't tell us what this area is actually doing. To know that, we'd need to know what would happen to someone if that area were damaged. Would they be unable to imagine any time except the present? Would they think their memories were happening right now? Maybe you could use rTMS to temporarily inactivate it - if you could find volunteers willing to lose their sense of time for a while...

Nyberg L, Kim AS, Habib R, Levine B, & Tulving E (2010). Consciousness of subjective time in the brain. Proceedings of the National Academy of Sciences of the United States of America. PMID: 21135219

Genes To Brains To Minds To... Murder?

A group of Italian psychiatrists claim to explain How Neuroscience and Behavioral Genetics Improve Psychiatric Assessment: Report on a Violent Murder Case.

The paper presents the horrific case of a 24-year-old woman from Switzerland who smothered her newborn son to death immediately after giving birth in her boyfriend's apartment. After her arrest, she claimed to have no memory of the event. She had a history of multiple drug abuse, including heroin, from the age of 13.


Forensic psychiatrists were asked to assess her case and try to answer the question of whether "there was substantial evidence that the defendant had an irresistible impulse to commit the crime." The paper doesn't discuss the outcome of the trial, but the authors say that in their opinion she exhibits a pattern of "pathological impulsivity, antisocial tendencies, lack of planning...causally linked to the crime, thus providing the basis for an insanity defense."

But that's not all. In the paper, the authors bring neuroscience and genetics into the case in an attempt to provide
a more “objective description” of the defendant’s mental disease by providing evidence that the disease has “hard” biological bases. This is particularly important given that psychiatric symptoms may be easily faked as they are mostly based on the defendant’s verbal report.
So they scanned her brain, and did DNA tests for 5 genes which have been previously linked to mental illness, impulsivity, or violent behaviour. What happened? Apparently her brain has "reduced gray matter volume in the left prefrontal cortex" - but that was compared to just 6 healthy control women. You really can't do this kind of analysis on a single subject, anyway.

As for her genes, well, she had genes. On the famous and much-debated 5HTTLPR polymorphism, for example, her genotype was long/short; while it's true that short is generally considered the "bad" genotype, something like 40% of white people, and an even higher proportion of East Asians, carry it. The situation was similar for the other four genes (STin2 (SLC6A4), rs4680 (COMT), MAOA-uVNTR, DRD4-2/11, for gene geeks).

I've previously posted about cases in which a well-defined disorder of the brain led to criminal behaviour. There was the man who became obsessed with child pornography following surgical removal of a tumour in his right temporal lobe. There are the people who show "sociopathic" behaviour following fronto-temporal degeneration.

However this woman's brain was basically "normal", at least as far as a basic MRI scan could determine. All the pieces were there. Her genotype was also normal, in that lots of normal people carry the same variants; it's not (as far as we know) that she has a rare genetic mutation like Brunner syndrome, in which an important gene is entirely missing. So I don't think neurobiology has much to add to this sad story.

*

We're willing to excuse perpetrators when there's a straightforward "biological cause" for their criminal behaviour: it's not their fault, they're ill. In all other cases, we assign blame: biology is a valid excuse, but nothing else is.

There seems to be a basic difference between the way in which we think about "biological" as opposed to "environmental" causes of behaviour. This is related, I think, to the Seductive Allure of Neuroscience Explanations and our fascination with brain scans that "prove that something is in the brain". But when you start to think about it, it becomes less and less clear that this distinction works.

A person's family, social and economic background is the strongest known predictor of criminality. Guys from stable, affluent families rarely mug people; some men from poor, single-parent backgrounds do. But muggers don't choose to be born into that life any more than the child-porn addict chose to have brain cancer.

Indeed, the mugger's situation is a more direct cause of his behaviour than a brain tumour. It's not hard to see how a mugger becomes, specifically, a mugger: because they've grown up with role-models who do that; because their friends do it or at least condone it; because it's the easiest way for them to make money.

But it's less obvious how brain damage by itself could cause someone to seek child porn. There's no child porn nucleus in the brain. Presumably, what it does is to remove the person's capacity for self-control, so they can't stop themselves from doing it.

This fits with the fact that people who show criminal behaviour after brain lesions often start to eat and have (non-criminal) sex uncontrollably as well. But that raises the question of why they want to do it in the first place. Were they, in some sense, a pedophile all along? If so, can we blame them for that?

Rigoni D, Pellegrini S, Mariotti V, Cozza A, Mechelli A, Ferrara SD, Pietrini P, & Sartori G (2010). How neuroscience and behavioral genetics improve psychiatric assessment: report on a violent murder case. Frontiers in Behavioral Neuroscience, 4. PMID: 21031162

I Feel X, Therefore Y

I'm reading Le Rouge et le Noir ("The Red and the Black"), an 1830 French novel by Stendhal...

One passage in particular struck me. Stendhal is describing two characters who are falling in love (mostly); both are young, have lived all their lives in a backwater provincial town, and neither has been well educated.

In Paris, the nature of [her] attitude towards [him] would have very quickly become plain - but in Paris, love is an offspring of the novels. In three or four such novels, or even in a couplet or two of the kind of song they sing at the Gymnase, the young tutor and his shy mistress would have found a clear explanation of their relations with each other. Novels would have traced out a part for them to play, given them a model to imitate.
The idea that reading novels could change the way people fall in love might seem strange today, but remember that in 1830 the novel as we know it was still a fairly new invention, and was seen in conservative quarters as potentially dangerous. Stendhal was of course pro-novels (he was a novelist), but he accepts that they have a profound effect on the minds of readers.

Notice that his claim is not that novels create entirely new emotions. The two characters had feelings for each other despite never having read any. Novels suggest roles to play and models to follow: in other words, they provide interpretations as to what emotions mean and expectations as to what behaviours they lead to. You feel that, therefore you'll do this.

This bears on many things that I've written about recently. Take the active placebo phenomenon. This refers to cases in which a drug creates certain feelings, and the user interprets these feelings as meaning that "the drug is working", so they expect to improve, which leads them to feel better and behave as if they are getting better.

As I said at the time, active placebos are most often discussed in terms of drug side effects creating the expectation of improvement, but the same thing also happens with real drug effects. Valium (diazepam) produces a sensation of relaxation and reduces anxiety as a direct pharmacological effect but if someone takes it expecting to feel better, this will also drive improvement via expectation: the Valium is working, I can cope with this.

The same process can be harmful, though, and this may be even more common. The cognitive-behavioural theory of recurrent panic attacks is that they're caused by vicious cycles of feelings and expectations. Suppose someone feels a bit anxious, or notices their heart is racing a little. They could interpret that in various ways. They might write it off and ignore it, but they might conclude that they're about to have a panic attack.

If so, that's understandably going to make them more anxious, because panic is horrible. Anxiety causes adrenaline to be released, the heart beats ever faster, and so on, and this causes yet more anxiety until a full-blown panic attack occurs. The more often this happens, the more they come to fear even minor symptoms of physical arousal, because they expect to suffer panic. Cognitive behavioural therapy for panic generally consists of breaking the cycle by changing interpretations, and by gradual exposure to physical symptoms and "panic-inducing" situations until they no longer cause the expectation of panic.

This also harks back to Ethan Watters' book Crazy Like Us which I praised a few months back. Watters argued that much mental illness is shaped by culture in the following way: culture tells us what to expect and how people behave when they feel distressed in certain ways, and thus channels distress into recognizable "syndromes" - a part to play, a model to imitate, though probably quite unconsciously. The most common syndromes in Western culture can be found in the DSM-IV, but this doesn't mean that they exist in the rest of the world.

Like Stendhal's, this theory does not attempt to explain everything - it assumes that there are fundamental feelings of distress - and I do not think that it explains the core symptoms of severe mental illness such as bipolar disorder and schizophrenia. But people with bipolar and schizophrenia have interpretations and expectations just like everyone else, and these may be very important in determining long-term prognosis. If you expect to be ill forever and never have a normal life, you probably won't.

The World Turned Upside Down

This map is not “upside down”. It looks that way to us; the sense that north is up is a deeply ingrained one. It's grim up north, Dixie is away down south. Yet this is pure convention. The earth is a sphere in space. It has a north and a south, but no up and down.

There’s a famous experiment involving four guys and a door. An unsuspecting test subject is lured into a conversation with a stranger, actually a psychologist. After a few moments, two people appear carrying a large door, and they walk right between the subject and the experimenter.

Behind the door, the experimenter swaps places with one of the door carriers, who may be quite different in voice and appearance. Most subjects don't notice the swap. Perception is lazy: whenever it can get away with it, it merely tells us that things are as we expect, rather than actually showing us stuff. We often do not really perceive things at all. Did the subject really see the first guy? The second? Either?

The inverted map makes us actually see the Earth's geography, rather than just showing us the expected "countries" and "continents". I was struck by how parochial Europe is – the whole place is little more than a frayed end of the vast Eurasian landmass, no more impressive than the one at the other end, Russia's Chukotski. Africa dominates the scene: it can no longer be written off as that poor place at the bottom.

One of the most common observations in psychotherapy of people with depression or anxiety is that they hold themselves to impossibly high standards, although they have a perfectly sensible evaluation of everyone else. Their own failures are catastrophic; other people's are minor setbacks. Other people's successes are well-deserved triumphs; their own are never good enough, flukes, they don't count.

The first step in challenging these unhelpful patterns of thought is to simply point out the double standard: why are you such a perfectionist about yourself, when you're not when it comes to other people? The idea is to help people think about themselves in something more like the healthy way they already think about others. Turn the map of yourself upside down - what do you actually see?

Fingers

How many fingers do you have?

10, obviously, unless you've been the victim of an accident or a birth defect. Everyone knows that. You count up to ten on your fingers, for one thing.

But look at your left hand - how many fingers are on it? Little finger, ring finger, middle finger, first finger... thumb. So that's 4. But then we'd only have 8 fingers, and we all know we have 10. Unless the thumb is a finger, but is it?

Hmm. Hard to say. Wikipedia has some interesting facts about this question, and on Google if you start to type in "is the thumb", the top suggested search terms are all about this issue. It's a tricky one. People don't seem to know for sure.

But does that mean there's any real mystery about the thumb? No - we understand it as well as any other part of the body. We know all about the bones and muscles and joints and nerves of the thumb, we know how it works, what it does, even its evolutionary history (see The Panda's Thumb by Stephen Jay Gould, still one of the greatest popular science books ever). Science has got thumbs covered.

The mystery is in the English language, which isn't quite clear on whether the word "finger" encompasses the human thumb; for some purposes it does, i.e. we have 10 fingers, but for other purposes it probably doesn't, although even English speakers seem to be in two minds about the details (see Google, above).

Notice that although the messiness seems to focus on the thumb, the word "thumb" is perfectly clear. The ambiguity is rather in the word "finger", which can mean either any of the digits of the hand, or, the digits of the hand with three joints. Take a look at your hand again and you'll notice that your thumb lacks a joint compared to the fingers; something I must admit I'd forgotten until Wikipedia reminded me.

Yet it would be very easy to blame the thumb for the confusion. After all, the other 4 fingers are definitely fingers. The fingers are playing by the rules. Only the thumb is a troublemaker. So it comes as somewhat of a surprise to realize that it's the fingers, not the thumb, that are the problem.

*

So words or phrases can be ambiguous, and when they are, they can lead to confusion, but not always in the places you'd expect. Specifically, the confusion seems to occur at the borderlines, the edge cases, of the ambiguous terminology, but the ambiguity is really in the terminology itself, not the edge cases. To resolve the confusion you need to clarify the terminology, and not get bogged down in wondering whether this or that thing is or isn't covered by the term.

It's important to bear this in mind when thinking about psychiatry, because psychiatry has an awful lot of confusion, and a lot of it can be traced back to ambiguous terms. Take, for example, the question of whether X "is a mental illness". Is addiction a mental illness, or a choice? Is mild depression a mental illness, or a normal part of life? Is PTSD a mental illness, or a normal reaction to extreme events? Is... I could go on all day.

The point is that you will never be able to answer these questions until you stop focussing on the particular case and first ask, what do I mean by mental illness? If you can come up with a single, satisfactory definition of mental illness, all the edge cases will become obvious. But at present, I don't think anyone really knows what they mean by this term. I know I don't, which is why I try to avoid using it, but often I do still use it because it seems to be the most fitting phrase.

It might seem paradoxical to use a word without really knowing what it means, but it isn't, because being able to use a word is procedural knowledge, like riding a bike. The problem is that many of our words have confusion built-in, because they're ambiguous. We can all use them, but that means we're all risking confusing each other, and ourselves. When this gets serious enough the only solution is to stop using the offending word and create new, unambiguous ones. With "finger", it's hardly a matter of life or death. With "mental illness", however, it is.

Serial Killers

Much of Britain is currently following the trial of Stephen Griffiths, or as he'd like you to refer to him, the Crossbow Cannibal.

Serial killers are always newsworthy, and Griffiths has killed at least three women in cold blood. (He did use a crossbow, but I think the newspapers made up the cannibalism.) But it's Griffiths's interests that have really got people's attention.

It turns out that before he became a serial killer, he was a man obsessed with... serial killers. His Amazon wish list was full of books about murder. He has a degree in psychology, and he was working on his PhD, in Criminology. Guess what his research was about.

Griffiths is therefore a kind of real life Hannibal Lecter or Dexter, an expert in murderers who is himself one. He's also a good example of the fact that, unlike on TV, real life serial killers are never cool and sophisticated, nor even charmingly eccentric, just weird and pathetic. Not to mention lazy, given that he was still working on his PhD after 6 years...

Yet there is an interesting question: was Griffiths a good criminologist? Does he have a unique insight into serial killers? We'll probably never know, at least not until (or if) the police release some of his writings. But it seems to me that he might have done.

When the average person hears about the crimes of someone like Griffiths, we are not just shocked but confused - it seems incomprehensible. I can understand why someone would want to rob me for my wallet, because I like money too. I can understand how one guy might kill another in a drunken fight, because I've been drunk too. Of course this doesn't mean I condone either crime, but they don't leave me scratching my head; I can see how it happens.

I cannot begin to understand why Griffiths did what he did. My understanding of humanity doesn't cover him. But he is human, so all that really means is that my understanding is limited. Someone must be able to understand people like Griffiths; it can't be impossible. But it may be that the only way to understand a serial killer is to be one.

The same may be true of less dramatic mental disorders. Karl Jaspers believed that the hallmark of severe mental illness is symptoms that are impossible to understand: they just exist. I've experienced depression; I've also read an awful lot about it and published academic papers on it. My own illness taught me much more about depression than my reading. Maybe I've been reading the wrong things. I don't think so.

Do Cats Hallucinate?

I have two cats. One is about four, and he is a psychopath. The other is sixteen - elderly, in cat terms - and I've recently noticed some changes in her behaviour.

For one, she's become a lot more affectionate, and she demands constant attention - she meows at people on sight, follows you around, and almost always comes and sits on top of you, or on top of whatever you're doing/reading/typing.

But on top of that, she's started pausing in the middle of whatever she's doing and staring at empty corners, or walls. All cats sit down and gaze into space a lot of the time, but this is different - it happens in the middle of normal actions, like eating or walking around. What does this mean?

Could she be hallucinating? Hallucinations are unfortunately not uncommon in elderly people. Seeing and hearing things that aren't there is a major symptom of Alzheimer's, and other forms of dementia. Do cats get Alzheimer's? The internet says: yes. In terms of scientific research there doesn't seem to have been much, but a few studies have found Alzheimer's-like changes (amyloid-beta protein accumulation) in the brains of old cats. Whether these cause the same symptoms as they do in people is unclear, but, why not?

How would you know if an animal was hallucinating? They can't talk about it, and unlike say hunger or pain, they don't have specific ways of communicating it through body language or cries. A hallucinating animal would, presumably, react fairly normally to whatever it thought it saw or heard: so hallucinations would manifest as normal behaviours, but in inappropriate situations. Whether this is what's happening to my cat, I'm not sure, but again, it's possible.

A more philosophical issue is whether we can conclude that this kind of out-of-context behaviour means the animal is experiencing a hallucination. But this is really just the age-old question of whether animals have consciousness at all. If they do, then they can presumably hallucinate: if you can be conscious of sensations, you can be conscious of false sensations.

For what it's worth, my view is that animals, at any rate mammals, are conscious. Humans are (although technically we only know for sure that we personally are, and have to assume the same is true of others). Mammalian brains are structured in a similar way to our own; they're made of the same cells; they use the same neurotransmitters, and the same drugs interfere with them in the same ways; pretty much all of the brain regions are there, although the sizes differ.

There's of course a big difference between us and other mammals: we have language, and conceptual thinking, and so forth. But does consciousness depend on that? It seems unlikely, just because most of what we're conscious of at any one time isn't anything to do with those specifically human things.

Right now, I'm conscious of what I can see, what I can hear, what I can feel with my fingertips, and the thoughts I'm writing down. Only 1/4 of that (to put it crudely) is unique to humans. And I'm not always aware of thoughts or words; there are plenty of times when I'm only aware of sensations and perceptions.

Probably the closest we get to animal consciousness is in strong, primitive experiences like pain, panic and anger, in which we "take leave of our senses" - not meaning that we become unconscious, but that we temporarily stop being able to "think straight", i.e. like a human. That doesn't mean that animals spend all their time in some extreme emotional state, but it's harder for us to know what it's like to be a relaxed cat, because generally when we're relaxed, we're thinking (or daydreaming, etc. Although who's to say cats don't? They dream, after all...)

Is Your Brain A Communist?

Capitalists beware. No less a journal than Nature has just published a paper proving conclusively that the human brain is a Communist, and that it's plotting the overthrow of the bourgeois order and its replacement by the revolutionary Dictatorship of the Proletariat even as we speak.

Kind of. The article, Neural evidence for inequality-averse social preferences, doesn't mention the C word, but it does claim to have found evidence that people's brains display more egalitarianism than people themselves admit to.

Tricomi et al took 20 pairs of men. At the start of the study, both men got a $30 payment, but one member of each pair was then randomly chosen to get a $50 bonus. Thus, one guy was "rich", while the other was "poor". Both men then had fMRI scans, during which they were offered various sums of money and saw their partner being offered money too. They rated how "appealing" these money transfers were on a 10 point scale.

What happened? Unsurprisingly both "rich" and "poor" said that they were pleased at the prospect of getting more cash for themselves, the poor somewhat more so, but people also had opinions about payments to the other guy:

the low-pay group disliked falling farther behind the high-pay group (‘disadvantageous inequality aversion’), because they rated positive transfers to the high-pay participants negatively, even though these transfers had no effect on their own earnings. Conversely, the high-pay group seemed to value transfers [to the poor person] that closed the gap between their earnings and those of the low-pay group (‘advantageous inequality aversion’)
What about the brain? When people received money for themselves, activity in the ventromedial prefrontal cortex (vmPFC) and the ventral striatum correlated with the size of their gain.

However, when presented with a payment to the other person, these areas seemed to be rather egalitarian. Activity rose in rich people when their poor colleagues got money. In fact, it was greater in that case than when they got money themselves, which means the "rich" people's neural activity was more egalitarian than their subjective ratings were. Whereas in "poor" people, the vmPFC and the ventral striatum only responded to getting money, not to seeing the rich getting even richer.


The authors conclude that this
indicates that basic reward structures in the brain may reflect even stronger equity considerations than is necessarily expressed or acted on at the behavioural level... Our results provide direct neurobiological evidence in support of the existence of inequality-averse social preferences in the human brain.
Notice that this is essentially a claim about psychology, not neuroscience, even though the authors used neuroimaging in this study. They started out by assuming some neuroscience - in this case, that activity in the vmPFC and the ventral striatum indicates reward i.e. pleasure or liking - and then used this to investigate psychology, in this case, the idea that people value equality per se, as opposed to the alternative idea, that "dislike for unequal outcomes could also be explained by concerns for social image or reciprocity, which do not require a direct aversion towards inequality."

This is known as reverse inference, i.e. inference from data about the brain to theories about the mind. It's very common in neuroimaging papers - we've all done it - but it is problematic. In this case, the problem is that the argument relies on the idea that activity in the vmPFC and ventral striatum is evidence for liking.

But while there's certainly plenty of evidence that these areas are activated by reward, and the authors confirmed that activity here correlated with monetary gain, that doesn't mean that they only respond to reward. They could also respond to other things. For example, there's evidence that the vmPFC is also activated by looking at angry and sad faces.

Or to put it another way: seeing someone you find attractive makes your pupils dilate. If you were to be confronted by a lion, your pupils would dilate. Fortunately, that doesn't mean you find lions attractive - because fear also causes pupil dilation.

So while Tricomi et al argue, on the basis of these results, that people, or brains, like equality, I have yet to be fully convinced. As Russell Poldrack noted in 2006
caution should be exercised in the use of reverse inference... In my opinion, reverse inference should be viewed as another tool (albeit an imperfect one) with which to advance our understanding of the mind and brain. In particular, reverse inferences can suggest novel hypotheses that can then be tested in subsequent experiments.
Tricomi E, Rangel A, Camerer CF, & O'Doherty JP (2010). Neural evidence for inequality-averse social preferences. Nature, 463 (7284), 1089-91. PMID: 20182511

The Needle and the Damage (Not) Done

You may already have heard about Desiree Jennings.


If not, here's a summary, although for the full story you should consult Steven Novella or Orac, whose expert analyses of the case are second to none. Desiree Jennings is a 25-year-old woman from Ashburn, Virginia who developed horrible symptoms following a seasonal flu vaccination in August. As she puts it:
In a matter of a few short weeks I lost the ability to walk, talk normally, and focus on more than one stimuli at a time. Whenever I eat I know, without fail, that my body will soon go into uncontrollable convulsions coupled with periods of blacking out.
For some weeks the problems were so bad that she was almost completely disabled, and feared the damage was permanent. Vaccines had destroyed her life. You can see a video here - American TV has covered the story in a lot of detail (the fact that she is quite... photogenic can't have put them off). Desiree and the media described her illness as dystonia, a neurological condition characterised by uncontrollable muscle contractions. Dystonia is caused by damage to certain motor pathways in the brain.

However, Desiree Jennings does not have dystonia. The symptoms look a bit like dystonia to the untrained eye, but they're not it. This is the unanimous opinion of dystonia experts who've seen the footage of Jennings. A blogger discovered that it was also seemingly the view of the neurologist who originally examined her.

So what's wrong with her? The answer, according to experts, is that her symptoms are psychogenic - "neurological" or "medical" symptoms caused by psychological factors rather than organic brain damage. It's important to be clear on what exactly this implies. It doesn't mean that Jennings is "making up" or "faking" the symptoms or that they're a "hoax". The symptoms are as "real" as any others, the only thing psychological about them is the cause. Nor are psychogenic symptoms delusions - Jennings isn't mentally ill or "crazy".

Almost certainly, she is in her right mind, and she sincerely believes that she is a victim of brain damage caused by the flu shot. The belief is false, but it's not crazy - in 1976 one flu vaccine may have caused neurological disorders and today many, many otherwise sane people believe that vaccines cause all kinds of damage. (It could well be that this belief is actually driving Jennings' symptoms, but we can't know that - there could be other psychological factors at work.)

*

One of the hallmarks of psychogenic symptoms is that they improve in response to psychological factors. Neurologist blogger Steven Novella predicted that:
I predict that they will be able to “cure” her, because psychogenic disorders can and do spontaneously resolve. They will then claim victory for their quackery in curing a (non-existent) vaccine injury.
"They" being the anti-vaccination group Generation Rescue, who were swift to offer Jennings their support and, er, expertise. And this is exactly what seems to be happening: Dr Rashid Buttar, a prominent anti-vaccine doctor who treats "vaccine damage" cases, began giving Jennings (amongst other things) chelation therapy to flush out toxic metals from her body, on the theory that her dystonia was caused by mercury in the vaccine. It worked! Dr. Buttar tells us that 15 minutes after the chelation solution started entering her body through an IV drip, all of the symptoms had disappeared (on the podcast it's about 6:00 onwards).

It's completely implausible that mercury in the vaccine could have caused dystonia, and even if it somehow did, it's impossible that chelation could reverse mercury-induced brain damage so quickly. If you are unfortunate enough to get mercury poisoning the neurological damage is permanent; flushing out the mercury wouldn't cure you. There's now no question that Jennings is a textbook case of psychogenic illness.

*

On this blog I've often written about the mysterious "placebo effect". A few weeks ago, I said -
People seem more willing to accept the mind-over-matter powers of "the placebo" than they are to accept the existence of psychosomatic illness.
We certainly seem to talk about placebos more than we talk about psychosomatic or psychogenic illness. There are 20 million Google hits for "placebo", just 1.6 million for "psychosomatic", and 500,000 for "psychogenic". (Even "placebo -music -trial" gives 8.7 million, which excludes all of the many placebo-controlled clinical trials and also hits about the band.)

Why? One important factor is surely that it's very difficult to prove that any given illness is "psychosomatic". Even if a patient has symptoms with no apparent medical cause, leading to suspicions that they're psychogenic, there could always be an organic cause waiting to be discovered. Just as we can never prove that there were no WMDs in Iraq, we can never prove that a given illness is purely psychological in origin.

But occasionally, there are cases where the psychogenic nature of an illness is so patent that there can be little doubt, and this is one of them. Watch the videos, listen to the account of the cure, and marvel at the mysteries of the mind.

[BPSDB]

Deconstructing the Placebo

Last month Wired announced that Placebos Are Getting More Effective. Drugmakers Are Desperate to Know Why.

The article's a good read, and the basic story is true, at least in the case of psychiatric drugs. In clinical trials, people taking placebos do seem to get better more often now than in the past (paper). This is a big problem for Big Pharma, because it means that experimental new drugs often fail to perform better than placebo, i.e. they don't work. Wired have just noticed this, but it's been discussed in the academic literature for several years.

Why is this? No-one knows. There have been many suggestions - maybe people "believe in" the benefits of drugs more nowadays, so the placebo effect is greater; maybe clinical trials are recruiting people with milder illnesses that respond better to placebo, or just get better on their own. But we really don't have any clear idea.

What if the confusion is because of the very concept of the "placebo"? Earlier this year, the BMJ ran a short opinion piece called It’s time to put the placebo out of our misery. Robin Nunn wants us to "stop thinking in terms of placebo...The placebo construct conceals more than it clarifies."

His central argument is an analogy. If we knew nothing about humour and observed a comedian telling jokes to an audience, we might decide there was a mysterious "audience effect" at work, and busy ourselves studying it...
Imagine that you are a visitor from another world. You observe a human audience for the first time. You notice a man making vocal sounds. He is watched by an audience. Suddenly they burst into smiles and laughter. Then they’re quiet. This cycle of quietness then laughter then quietness happens several times.

What is this strange audience effect? Not all of the man’s sounds generate an audience effect, and not every audience member reacts. You deem some members of the audience to be “audience responders,” those who are particularly influenced by the audience effect. What makes them react? A theory of the audience effect could be spun into an entire literature analogous to the literature on the placebo effect.
But what we should be doing is examining the details of jokes and of laughter -
We could learn more about what makes audiences laugh by returning to fundamentals. What is laughter? Why is “fart” funnier than “flatulence”? Why are some people just not funny no matter how many jokes they try?
And this is what we should be doing with the "placebo effect" as well -
Suppose there is no such unicorn as a placebo. Then what? Just replace the thought of placebo with something more fundamental. For those who use placebo as treatment, ask what is going on. Are you using the trappings of expertise, the white coat and diploma? Are you making your patients believe because they believe in you?
Nunn's piece is a polemic, and he seems to conclude by calling for a "post-placebo era" in which there will be no more placebo-controlled trials (although it's not clear what he means by this). This is going too far. But his analogy with humour is an important one, because it forces us to analyse the placebo in detail.

"The placebo effect" has become a vague catch-all term for anything that seems to happen to people when you give them a sugar pill. Of course, lots of things could happen. They could feel better just because of the passage of time. Or they could realize that they're supposed to feel better and say they feel better, even if they don't.

The "true" placebo effect refers to improvement (or worsening) of symptoms driven purely by the psychological expectation of such. But even this is something of a catch-all term. Many things could drive this improvement. Suppose you give someone a placebo pill that you claim will make them more intelligent, and they believe it.

Believing themselves to be smarter, they start doing smart things like crosswords, math puzzles, reading hard books (or even reading Neuroskeptic), etc. But the placebo itself was just a nudge in the right direction. Anything which provided that nudge would also have worked - and the nudge itself can't take all the credit.

The strongest meaning of the "placebo effect" is a direct effect of belief upon symptoms. You give someone a sugar pill or injection, and they immediately feel less pain, or whatever. But even this effect encompasses two kinds of things. It's one thing if the original symptoms have a "real" medical cause, like a broken leg. But it's another thing if the original symptoms are themselves partially or wholly driven by psychological factors, i.e. if they are "psychosomatic".

If a placebo treats a "psychosomatic" disease, then that's not because the placebo has some mysterious, mind-over-matter "placebo effect". All the mystery, rather, lies with the psychosomatic disease. But this is a crucial distinction.

People seem more willing to accept the mind-over-matter powers of "the placebo" than they are to accept the existence of psychosomatic illness. As if only doctors with sugar pills possess the power of suggestion. If a simple pill can convince someone that they are cured, surely the modern world in all its complexity could convince people that they're ill.

[BPSDB]

Nunn, R. (2009). It's time to put the placebo out of our misery. BMJ, 338 (apr20 2). DOI: 10.1136/bmj.b1568

Statistically

"Statistically, airplane travel is safer than driving..." "Statistically, you're more likely to be struck by lightning than to..." "Statistically, the benefits outweigh the risks..."

What does statistically mean in sentences like this? Strictly speaking, nothing at all. If airplane travel is safer than driving, then that's just a fact. (It is true on an hour-by-hour basis). There's no statistically about it. A fact can't be somehow statistically true, but not really true. Indeed, if anything, it's the opposite: if there are statistics proving something, it's more likely to be true than if there aren't any.

But we often treat the word statistically as a qualifier, something that makes a statement less than really true. This is because, psychologically, statistical truth is often different to, and less real than, other kinds of truth. As everyone knows, Joseph Stalin said that one death is a tragedy, but a million deaths is a statistic. Actually, Stalin didn't say that, but it's true. And if someone has a fear of flying, then all the statistics in the world probably won't change that. Emotions are innumerate.

*

Another reason why statistics feel less than real is that, by their very nature, they sometimes seem to conflict with everyday life. Statistics show that regular smoking, for example, greatly raises your risk of suffering from lung cancer, emphysema, heart disease and other serious illnesses. But it doesn't guarantee that you will get any of them, the risk is not 100%, so there will always be people who smoke a pack a day for fifty years and suffer no ill effects.

In fact, this is exactly what the statistics predict, but you still hear people referring to their grandfather who smoked like a chimney and lived to 95, as if this somehow cast doubt on the statistics. Statistically, global temperatures are rising, which predicts that some places will be unusually cold (although more will be unusually warm), but people still think that the fact that it's a bit chilly this year casts doubt on the fact of global warming.
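
To see why the long-lived chain-smoking grandfather is exactly what the statistics predict, here is a toy calculation (the risk figure is an illustrative assumption, not a real epidemiological estimate):

```python
# Toy numbers, purely illustrative: suppose lifelong heavy smoking
# raises the lifetime risk of serious smoking-related disease to 50%.
heavy_smokers = 1_000_000
risk_of_serious_illness = 0.50

unaffected = heavy_smokers * (1 - risk_of_serious_illness)
print(f"Smokers who escape serious illness: {unaffected:,.0f}")
# Even with a very high individual risk, hundreds of thousands of
# "healthy grandfathers" are expected - their existence is consistent
# with the statistics, not evidence against them.
```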

*

Some people admit that they "don't believe in statistics". And even if we don't go that far, we're often a little skeptical. There are lies, damn lies, and statistics, we say. Someone wrote a book called How To Lie With Statistics. Few of us have read it, but we've all heard of it.

Sometimes, this is no more than an excuse to ignore evidence we don't like. It's not about all statistics, just the inconvenient ones. But there's also, I think, a genuine distrust of statistics per se. Partially, this reflects distrust towards the government and "officialdom", because most statistics nowadays come from official sources. But it's also because psychologically, statistical truth is just less real than other kinds of truth, as mentioned above.

*

I hope it's clear that I do believe in statistics, and so should you, all of them, all the time, unless there is a good reason to doubt a particular one. I've previously written about my doubts concerning mental health statistics, because there are specific reasons to think that these are flawed.

But in general, statistics are the best way we have of knowing important stuff. It is indeed possible to lie with statistics, but it's much easier to lie without them: there are more people in France than in China. Most people live to be at least 110 years old. Africa is richer than Europe. Those are not true. But statistics are how we know that.

[BPSDB]

Dorothy Rowe Wronged, also Wrong

(Via Bad Science) Here's the curious story of what happened when clinical psychologist Dorothy Rowe was interviewed for a BBC radio show about religion. She gave a 50 minute interview in which she said that religion was bad. The BBC, in their wisdom, edited this down to 2 minutes of audio which made her sound as if she was saying religion was good. She was annoyed, and complained. The BBC admitted that they'd misrepresented her and apologized. Naughty.

But that's not the point of this post. Because the BBC not only offered Rowe an apology, they also agreed to let her write about what she really believes and put it up on bbc.co.uk. Here is the result. Oh dear. It's, well, it's confused.

"Neuroscience proves the existence of free will" would be an extraordinary media headline, and, perhaps even more extraordinary, it would be true.
No it wouldn't, Rowe - it wouldn't even mean anything. It gets worse from there on in. Read it if you can, but it's pretty bad. Not Bono-bad, but bad, especially in the way that she inserts references to the brain and to neuroscience seemingly at random, which add literally nothing to her argument. Her argument being that we interpret reality, rather than directly perceiving it. Which is true enough, but that idea's been around since the time of ancient Greece, where the cutting edge of neuroscience was the theory that the brain was made of semen. It's philosophy, not neuroscience.

This kind of neuro-fetishism happens a lot nowadays, but what's really weird is that Rowe is one of those psychologists who is convinced that depression (and indeed all mental illness) is not a "brain problem". Even one such as she clearly isn't immune to the lure of neuroscience explanations.

[BPSDB]

Critiquing a Classic: "The Seductive Allure of Neuroscience Explanations"


One of the most blogged-about psychology papers of 2008 was Weisberg et al.'s The Seductive Allure of Neuroscience Explanations.

As most of you probably already know, Weisberg et al. set out to test whether adding an impressive-sounding, but completely irrelevant, sentence about neuroscience to explanations for common aspects of human behaviour made people more likely to accept those explanations as good ones. As they noted in their Introduction:
Although it is hardly mysterious that members of the public should find psychological research fascinating, this fascination seems particularly acute for findings that were obtained using a neuropsychological measure. Indeed, one can hardly open a newspaper’s science section without seeing a report on a neuroscience discovery or on a new application of neuroscience findings to economics, politics, or law. Research on nonneural cognitive psychology does not seem to pique the public’s interest in the same way, even though the two fields are concerned with similar questions.
They found that the pointless neuroscience made people rate bad psychological "explanations" as being better. The bad psychological explanations were simply descriptions of the phenomena in need of explanation (something like "People like dogs because they have a preference for domestic canines"). Without the neuroscience, people could tell that the bad explanations were bad, compared to other, good explanations. The neuroscience blinded them to this. This confusion was equally present in "normal" volunteers and in cognitive neuroscience students, although cognitive neuroscience experts (PhDs and professors) seemed to be immune.

But is this really true?

This kind of research - which claims to provide hard, scientific evidence for the existence of a commonly believed-in psychological phenomenon, usually some annoyingly irrational human quirk - is dangerous; it should always be read with extra care. The danger is that the results can seem so obviously true ("Well of course!") and so important ("How many times have I complained about this?") that the methodological strengths and weaknesses of the study go unnoticed. People see a peer-reviewed paper which seemingly confirms the existence of one of their pet peeves, and they believe it - becoming even more peeved in the process. (*)

In this case, the peeve is obvious: the popular media certainly seem to be inordinately keen on neuroimaging studies, and often seem to throw in pictures of brain scans and references to brain regions just to make their stories seem more exciting. The number of people who confuse neural localization with explanation is depressing. Those not involved in cognitive neuroscience must find this rather frustrating. Even neuroimagers roll their eyes at it (although some may be secretly glad of it!)

So Weisberg et al. struck a chord with most readers, including most of the potentially skeptical ones - which is exactly why it needs to be read, and critiqued, very carefully. Personally, having done so, I think that it's an excellent paper, but the data presented so far only allow fairly modest conclusions to be drawn. The authors have not shown that neuroscience, specifically, is seductive or alluring.

Most fundamentally, the explanations including the dodgy neuroscience differed from the non-neurosciencey explanations in more than just neuroscience. Most obviously, they were longer, which may have made them seem "better" to the untrained, or bored, eye; indeed the authors themselves cite a paper, Kikas (2003), in which the length of explanations altered how people perceived them. Secondly, the explanations with added neuroscience were more "complex" - they included two separate "explanations", a psychological one and a neuroscience one. This complexity, rather than the presence of neuroscience per se, might have contributed to their impressiveness.

Perhaps the authors should have used three conditions - psychology, "double psychology" (with additional psychological explanations or technical terminology), and neuroscience (with additional neuroscience). As it stands, all the authors have strictly shown is that longer, more jargon-filled explanations are rated as better - which is an interesting finding, but not necessarily one specific to neuroscience.
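To make the length/complexity confound concrete, here's a toy simulation (in Python; nothing like this appears in the paper, and the word counts, rating rule and noise level are all invented purely for illustration). If ratings depended only on how long an explanation is, with no neuroscience-specific effect at all, the "with neuroscience" versions would still come out roughly a point higher:

import random

random.seed(0)

def simulated_rating(n_words):
    # Hypothetical rule: a low baseline plus a small bonus per extra word,
    # plus noise, clipped to the paper's -3..+3 rating scale.
    raw = -1.0 + 0.03 * (n_words - 30) + random.gauss(0, 0.8)
    return max(-3.0, min(3.0, raw))

# Invented word counts: the "with neuroscience" explanations are simply longer.
conditions = {"psychology only": 30, "with added neuroscience": 60}
for name, n_words in conditions.items():
    ratings = [simulated_rating(n_words) for _ in range(81)]
    print(f"{name}: mean rating = {sum(ratings) / len(ratings):.2f}")

The point is not that this is what actually happened, only that the published design can't rule it out.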

In their discussion, and to their credit, the authors fully acknowledge these points (emphasis mine):
Other kinds of information besides neuroscience could have similar effects. We focused the current experiments on neuroscience because it provides a particularly fertile testing ground, due to its current stature both in psychological research and in the popular press. However, we believe that our results are not necessarily limited to neuroscience or even to psychology. Rather, people may be responding to some more general property of the neuroscience information that encouraged them to find the explanations in the With Neuroscience condition more satisfying.
But this is rather a large caveat. If all the authors have shown is that people can be "Blinded with Science" (yes...like the song) in a non-specific manner, that has little to do with neuroscience. The authors go on to discuss various interesting, and plausible, theories about what might make seemingly "scientific" explanations seductive, and why neuroscience might be especially prone to this - but they are, as they acknowledge, just speculations. At this stage, we don't know, and we don't know how important this effect is in the real world, when people are reading newspapers and looking at pictures of brain scans.

Secondly, the group differences - between the "normal people", the neuroscience students, and the neuroscience experts - are hard to interpret. There were 81 normal people, mean age 20, but we don't know who they were or how they were recruited - were they students, internet users, the authors' friends? (10 of them didn't give their age, and for 2 the gender was "unreported".) We don't know whether their level of education, interests or values differed from those of the cognitive neuroscience students in the second group (mean age 20), who may likewise have differed in education, intelligence and beliefs from the expert neuroscientists in the third group (mean age 27). Maybe such personal factors, rather than neuroscience knowledge, explain the group similarities and differences?

Finally, the effects seen in this paper were, on the face of it, small - people rated the explanations on a 7-point scale from -3 (bad) to +3 (excellent), but the mean scores were all between -1 and +1. The dodgy neuroscience added about 1 point on a 7-point scale of satisfactoriness. Is that "a lot" or "a little"? It's impossible to say.
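For what it's worth, one way to judge "a lot or a little" would be to standardise the gap against how much individual ratings varied. The relevant standard deviations aren't quoted here, so the values below are placeholders; the only point is that the same 1-point difference can look large or modest depending on the spread:

def cohens_d(mean_difference, pooled_sd):
    # Standardised effect size: raw difference divided by the spread of scores.
    return mean_difference / pooled_sd

for sd in (0.8, 1.5, 2.5):  # hypothetical spreads of individual ratings
    print(f"SD = {sd}: d = {cohens_d(1.0, sd):.2f}")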

All of that said - this is still a great paper, and the point of this post is not to criticize or "debunk" Weisberg et al.'s excellent work. If you haven't read their paper, you should read it, in full, right now, and I'm looking forward to further stuff from the same group. What I'm trying to do is to warn against another kind of seductive allure, probably the oldest and most dangerous of all - the allure of that which confirms what we already thought we knew.

(*) Or do they? Or is this just one of my pet peeves? Maybe I need to do an experiment on the allure of psychology papers confirming the allure of psychologists' pet peeves...


Deena Skolnick Weisberg, Frank C. Keil, Joshua Goodstein, Elizabeth Rawson, & Jeremy R. Gray (2008). The Seductive Allure of Neuroscience Explanations. Journal of Cognitive Neuroscience, 20(3), 470-477. DOI: 10.1162/jocn.2008.20040

Lessons from the Video Game Brain

See also Lessons from the Placebo Gene. Also, if you like this kind of thing, see my other fMRI-curmudgeonry (1, 2).

The life of a neurocurmudgeon is a hard one, but once in a while, fate smiles upon us. This article in the Daily Telegraph neatly embodies several of the mistakes that people make about the brain, all in one bite-size portion.

The article is about a recent fMRI study published in the Journal of Psychiatric Research. 22 healthy Stanford student volunteers (half of them male) played a "video game" while being scanned. The game wasn't an actual game like Left 4 Dead(*), but rather a kind of very primitive cross between Pong and Risk, designed specifically for the purposes of the experiment:

Balls appeared on one-half of the screen from the side at 40 pixel/s, and 10 balls were constantly on the screen at any given time. One’s own space was defined as the space behind the wall and opposite side to where the balls appeared. The ball disappeared whenever clicked by the subject. Anytime a ball hit the wall before it could be clicked, the ball was removed and the wall moved at 20 pixel/s, making the space narrower. Anytime all the balls were at least 100 pixels apart from the wall ... the wall moved such that the space became wider.
Essentially, they had to click on balls to stop them from moving a line. This may not sound like much fun, but the authors' justification for using this task was that it allowed them to have a control condition in which the instructions were the same (click on the balls) but there was no "success" or "failure", because the line defining the "territory" was always fixed. That's actually a pretty good idea. The students did the task 40 times during the scan, for 24s at a time, alternating between the two conditions: "no success" (line fixed) and "game with success/failure" (line moves).
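For the record, here's a minimal sketch of the wall-update logic as I read the quoted methods (this is not the authors' code; the 20 and 100 pixel figures come from the paper, while the expansion speed and everything else are simplified assumptions):

def update_wall(wall_x, ball_positions, condition, dt=1.0):
    # wall_x: position of the wall; the player's "territory" lies behind it,
    # so a larger wall_x means more territory.
    # ball_positions: x-coordinates of the balls still on screen.
    # condition: "game" (wall can move) or "control" (wall fixed).
    if condition == "control":
        return wall_x  # same instructions, but no success or failure possible
    if any(ball <= wall_x for ball in ball_positions):
        # A ball reached the wall before being clicked: territory shrinks at 20 px/s.
        return wall_x - 20 * dt
    if all(ball - wall_x >= 100 for ball in ball_positions):
        # All balls kept at least 100 px from the wall: territory expands
        # (expansion speed assumed equal to the shrink speed).
        return wall_x + 20 * dt
    return wall_x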

The results: While men & women were equally good at clicking balls, men were more successful at gaining "territory" than the women. In both genders, doing the task vs. just resting in the scanner activated various visual and motor-related areas - no surprise. Playing the game vs. doing the control task in which there was no success or failure produced more activation in a handful of areas but only "at a more liberal threshold" i.e. this activation was not statistically reliable. A region-of-interest analysis found activation in the left nucleus accumbens and right orbitofrontal cortex, which are "reward-related" areas. In males, the game-specific activation was greater than in females in the right nucleus accumbens, the orbitofrontal cortex, and the right amygdala.

These areas are indeed "neural circuitries involved in reward and addiction" as the authors put it, but they're also activated whenever you experience anything pleasant or enjoyable, such as drinking water when you're thirsty. Water is not known to be addictive. So whether this study is relevant to video-game "addiction" is anyone's guess. As far as I can tell, all it shows is that men are more interested in simple, repetitive, abstract video games. But that's hardly news: in 2007 there was an International Pac-Man Championship with 30,000 entrants; the top 10 competitors were all male. (If anything in that last sentence surprises you, you haven't spent enough time on the internet.)

Anyway, that's the study. This is what the Telegraph made of it:
Playing on computer consoles activates parts of the male brain which are linked to rewarding feelings and addiction, scans have shown. The more opponents they vanquish and points they score, the more stimulated this region becomes. In contrast, these parts of women's brains are much less likely to be triggered by sessions on the Sony PlayStation, Nintendo Wii or Xbox.
Well, not quite. No opponents were vanquished and no Wiis were played. But so far this is just another fMRI study that attracted the attention of a journalist who knew how to spin a good story. Readers of Neuroskeptic will know this is not uncommon. However, it doesn't end there. Here's the really instructive bit:
Professor Allan Reiss of the Centre for Interdisciplinary Brain Sciences Research at Stanford University, California, who led the research, said that women understood computer games just as well as men but did not have the same neurological drive to win.
"These gender differences may help explain why males are more attracted to, and more likely to become 'hooked' on video games than females," he said.
"I think it's fair to say that males tend to be more intrinsically territorial. It doesn't take a genius to figure out who historically are the conquerors and tyrants of our species – they're the males.
"Most of the computer games that are really popular with males are territory and aggression-type games."
Now this is a theory - men like video games because we're intrinsically drawn to competition, conquest and territory-grabbing. This may or may not be true; personally, in the light of what I know of history and anthropology, I suspect it is, but even if you disagree, you can see that this is an important theory: it makes a big difference whether it's true or not.

However, the fMRI results have nothing to do with this theory. They neither support nor refute it, and nor could they; this experiment is essentially irrelevant to the theory in question. Prof. Allan Reiss is simply stating his personal opinions about human nature - however intelligent & informed these opinions may be. (Just to be clear, it's quite possible that Reiss didn't expect to be quoted in the way he was; he may have, not unreasonably, thought that he was just giving his informal opinion.) The Telegraph's sub-headline?
Men's passion for computer games stems from a deep-rooted urge to conquer, according to research
There are some lessons here.

1. If you want to know about something, study it.

If you want to learn about human behaviour, study human behaviour. Stanley Milgram discovered important things about behaviour; if he had never even heard about the brain, it wouldn't have stopped him from doing that.

Neuroscience can tell us about how behaviour happens. We get thirsty when we haven't drunk water for a while. Neuroscience, and only neuroscience, will tell you how. Some people get depressed or manic. One day, I hope, neuroscience will tell us the complete story of how - maybe mania will turn out to be caused by hyper-stimulation of a certain dopamine receptor - and we'll be able to stop it happening with some pill with a 100% success rate.

However, neuroscience can't tell you what human behaviour is: it cannot describe behaviour, only explain it. People knew about thirst and depression and mania long before they knew anything about the brain. More importantly, and more subtly, neuroscience can only explain behaviour in the "how" sense; only rarely can it tell you why behaviour is the way that it is.

If someone is behaving in a certain way because of brain damage or disease, that's one of these rare cases. In that case "damage to area X caused by disease Y" is "why". But in most cases, it's not. To say that men like video games because their reward systems are more sensitive to video games is not a "why" explanation. It's a "how" explanation, and it leaves completely open the question of why the male brain is more sensitive to video games. The answer might be "innate biological differences due to evolution", or it might be "sexist upbringing", or "paternalistic culture", or anything else.

(This is often overlooked in discussions about psychiatry. Some people object to the idea that clinical depression is a neuro-chemical state, pointing out that depression can be caused by stress, rejection and other events in life. This is confused; there is no reason why stress or rejection could not cause a state of low serotonin. By extension, saying that someone has "low serotonin" always leaves open the question of why.)

2. Brains are people too

This leads on to a more subtle point. Some people understand the difference between how and why explanations, but feel that if the "how" is something to do with the brain, the "why" must be to do with the brain too. They look at brain scans showing that people behave in a certain way because their brain is a certain way (e.g. men like games because their reward system is more activated by games), and they think that there must be a "biological" explanation for why this is.

There might be, but there might not be. Brains are alive; they see and hear; they think; they talk; they feel. Your brain does everything you do, because you are "your" brain. The astonishing thing about brains is that they are both material, biological objects and conscious, living people, at the same time.

Your brain is not your liver, which is only affected by chemical and biological influences, like hormones, toxins, and bacteria. Your liver doesn't care whether you're a Christian or a Muslim, it cares about whether you drink alcohol. Your brain does care about your religion because some pattern of connections in your brain gives you the religion that you have.

Brain scans, by confronting us with the biological, material nature of the brain, make us look for biological, material why explanations. We forget that the brain might be the way it is because of cultural or historical or psychological or sociological or economic factors, because we forget that brains are people. We tend to think of people as being something beyond and above their brains. Ironically, it's this primitive dualism that leads to the most crude materialistic explanations for human behaviour.

3. Beware neuro-fetishists

There's a doctoral thesis in "Science Studies" to be written about how it came to happen, but that we fetishize the brain is obvious. For much of the 20th century, psychology was seen in the same way. Freud joined Nietzsche, Marx and Heidegger in the ranks of Germanic names that literary theorists and lefty intellectuals loved to drop.

Then the bottom fell out of psychoanalysis, Prozac and fMRI arrived and the Decade of the Brain was upon us. Today, neuroscience is the new psychology - or perhaps psychology is becoming a branch of neuroscience. (If I asked you to depict psychology visually, you'd probably draw a brain - if you do a Google image search for "psychology", 10 out of the 21 front page hits depict either a brain or a head; this might not surprise you but it would have seemed odd 50 years ago.) There's a presumption that neuroscience is key to answering both how and why questions about the mind.

Neuroscience is now hot, but what people are mostly interested in are psychological and philosophical questions. People care about The Big Questions like -

"Is there life after death? Do we have free will? Is human nature fixed? Are men smarter/more aggressive/more promiscuous/better drivers than women? Why do people become criminals/geniuses/mad?"

These are good questions - but neuroscience has little to say about them, because they're not questions about the brain. They're questions for philosophers, or geneticists, or psychologists. No brain scan is going to tell you whether men are better drivers than women. It might tell you something about the processes by which we make decisions while driving, but only a neuroscientist is likely to find that interesting.

P.S. It turns out that people were saying similar things about this research back in February. A blogger who writes about research on video games (neat) wrote about it way back then. So why did the Telegraph decide to resurrect the story as if it were new? That's just another one of life's mysteries.

[BPSDB]

(*) Which is so awesome.

F. Hoeft, C. Watson, S. Kesler, K. Bettinger, & A. Reiss (2008). Gender differences in the mesocorticolimbic system during computer game-play. Journal of Psychiatric Research, 42(4), 253-258. DOI: 10.1016/j.jpsychires.2007.11.010

 