Who Gets Autism?

According to a major new report from Australia, the social and family factors associated with autism are linked to a lower risk of intellectual disability - and vice versa. But why?


The paper is from Leonard et al., and it's published in PLoS ONE, so it's open access if you want to take a peek. The authors used a database system in the state of Western Australia which allowed them to find out what happened to all of the babies born between 1984 and 1999 who were still alive as of 2005. There were 400,000 of them.

The records included information on children diagnosed with either an autism spectrum disorder (ASD), intellectual disability aka mental retardation (ID), or both. They decided to only look at singleton births i.e. not twins or triplets.

In total, 1,179 of the kids had a diagnosis of ASD. That's 0.3%, or about 1 in 340, much lower than more recent estimates, but those more recent studies used very different methods. Just over 60% of these children also had ID, which corresponds well to previous estimates.

There were about 4,500 cases of ID without ASD in the sample, a rate of just over 1%; the great majority of these (90%) had mild-to-moderate ID. They excluded an additional 800 kids with ID associated with a "known biomedical condition" like Down's Syndrome.

So what did they find? Well, a whole bunch, and it's all interesting. Bullet point time.

  • Between 1984 and 1999, rates of ID without ASD fell and rates of ASD rose, although there was a curious sudden fall in the rates of ASD without ID just before the end of the study. In 1984, "mild-moderate ID" without autism was by far the most common diagnosis, with 10 times the rate of anything else. By 1999, it was exactly level with ASD+ID, and ASD without ID was close behind. Here's the graph; note the logarithmic scale:

  • Boys had a much higher rate of autism than girls, especially when it came to autism without ID. This has been known for a long time.
  • Second- and third-born children had a higher rate of ID, and a lower rate of ASD, compared to firstborns.
  • Older mothers had children with more autism - both autism with and without ID, but the trend was bigger for autism with ID. But they had less ID. For fathers, the trend was the same and the effect was even bigger. Older parents are more likely to have autistic children but less likely to have kids with ID.

  • Richer parents had a strongly reduced likelihood of ID. Rates of ASD with ID were completely flat across income groups, but rates of ASD without ID were raised in the richer groups, though the relationship was not linear (the middle-income groups were highest) and the effect was small.
To summarize: the risk factors for autism were in most cases the exact opposite of those for ID. The more “advantaged” parental traits like being richer, and being older, were associated with more autism, but less ID. And as time went on, diagnosed rates of ASD rose while rates of ID fell (though only slightly for severe ID).

Why is this? The simplest explanation would be that there are many children out there for whom it's not easy to determine whether they have ASD or ID. Which diagnosis any such child gets would then depend on cultural and sociological factors - broadly speaking, whether clinicians are willing to give (and parents willing to accept) one or the other.

The authors note that autism has become a less stigmatized condition in Australia recently. Nowadays, they say, a diagnosis of ASD may be preferable to a diagnosis of "just plain old" ID, in terms of access to financial support amongst other things. However, it is also harder to get a diagnosis of ASD, as it requires going through a more extensive and complex series of assessments.

Clearly some parents will be better able to achieve this than others. In other countries, like South Korea, autism is still one of the most stigmatized conditions of childhood, and we'd expect that there, the trend would be reversed.

The authors also note the theory that autism rates are rising because of some kind of environmental toxin causing brain damage, like mercury or vaccinations. However, as they point out, this would probably cause more of all neurological/behavioural disorders, including ID; at the least it wouldn't reduce the rates of any.

These data clearly show that rates of ID fell almost exactly in parallel with rates of ASD rising, in Western Australia over this 15 year period. What will the vaccine-vexed folks over at Age of Autism make of this study, one wonders?

Leonard H, Glasson E, Nassar N, Whitehouse A, Bebbington A, Bourke J, Jacoby P, Dixon G, Malacova E, Bower C, & Stanley F (2011). Autism and intellectual disability are differentially related to sociodemographic background at birth. PLoS ONE, 6(3). PMID: 21479223

Peripheral Nervous System (PNS)


The peripheral nervous system connects the central nervous system with the rest of the body. All motor, sensory and autonomic nerve cells and fibers outside the CNS are generally considered part of the PNS. Specifically, the PNS comprises the ventral (motor) nerve roots, dorsal (sensory) nerve roots, spinal ganglia, and spinal and peripheral nerves, and their endings, as well as a major portion of the autonomic nervous system (sympathetic trunk). The first two cranial nerves (the olfactory and optic
nerves) belong to the CNS, but the remainder belong to the PNS.

Peripheral nerves may be purely motor or sensory but are usually mixed, containing variable fractions of motor, sensory, and autonomic nerve fibers (axons). A peripheral nerve is made up of multiple bundles of axons, called fascicles, each of which is covered by a connective tissue sheath (perineurium). The connective tissue lying between axons within a fascicle is called endoneurium, and that between fascicles is called epineurium. Fascicles contain myelinated and unmyelinated axons, endoneurium, and capillaries.

Individual axons are surrounded by supportive cells called Schwann cells. A single Schwann cell surrounds several axons of unmyelinated type. Tight winding of the Schwann cell membrane around the axon produces the myelin sheath that covers myelinated axons. The Schwann cells of a myelinated axon are spaced a small distance from one another; the intervals between them are called nodes of Ranvier. The nerve conduction velocity increases with the thickness of the myelin sheath. The specialized contact zone between a motor nerve fiber and the muscle it supplies is called the neuromuscular junction or motor end plate.

Impulses arising in the sensory receptors of the skin, fascia, muscles, joints, internal organs, and other parts of the body travel centrally through the sensory (afferent) nerve fibers. These fibers have their cell bodies in the dorsal root ganglia (pseudounipolar cells) and reach the spinal cord by way of the dorsal roots.

Which regions of the brain lack a significant blood-brain barrier?

Brain regions that lack a significant blood-brain barrier tend to be midline structures located near ventricular spaces. They include the area postrema, median eminence of the hypothalamus, and neurohypophysis.

Blood-brain Barrier Components?

The blood-brain barrier is not a single barrier, but a composite of many systems that act to control the entry of substances from the blood to the brain:
1. Capillary endothelial cells linked by tight junctions and expressing specialized uptake systems for particular metabolic substrates (e.g., glucose, amino acids)
2. A prominent basement membrane between endothelia and adjacent cells
3. Pericapillary astrocytes with end-feet adjacent to capillaries
A similar system exists for the choroidal epithelium (blood-cerebrospinal fluid [CSF] barrier).

Why is understanding the molecular and cellular mechanisms so important?

1. Enhancement of diagnostic possibilities and treatment options
2. More appropriate selection of diagnostic tests and interpretation of test results
3. Prediction of drug side effects and interactions
4. Selection of optimal drug regimens
5. Aid to critical review of novel concepts and therapies
6. Understanding of the rationale for current clinical trials
7. Provision of a background for communicating information to patients and families

First Fish, Now Cheese, Get Scanned

Here at Neuroskeptic we have closely followed the development of fMRI scanning on fish.


But a new study has taken it to the next level by scanning... some cheese.

OK, this is not quite true. The study used NMR spectroscopy to analyze the chemistry of some cheeses, in order to measure the effects of different kinds of probiotic bacteria on the composition of the cheese. NMR is the same technology as MRI, and indeed you can use an MRI scanner to gather NMR spectra.

In fact, NMR is Nuclear Magnetic Resonance and MRI is Magnetic Resonance Imaging; it was originally called NMRI, but they dropped the "N" because people didn't like the idea of being scanned by a "nuclear" machine. However, this study didn't actually involve putting cheese into an MRI scanner.

But the important point is that they could have. And if they did, what with the salmon and now the cheese, you could get a nice MRI-based meal going. All we need is for someone to scan some vegetables, some herbs, and a slice of lemon, and we'd have a delicious dataset. Mmm.

How to cook it? Well, it's actually possible to heat stuff up with an MRI scanner. When scanning people, you set it up to make sure this doesn't happen, but the average fMRI experiment still causes mild heating. It's unavoidable.

I'm not sure what the maximum possible heating effect of an average MRI scanner would be. I doubt anyone has gone out of their way to try and maximize it, but maybe someone ought to look into it. Think of the possibilities.

You've just finished a hard day's scanning and you're really hungry, but the microwave at the MRI building is broken. Not to worry! Just pop your fillet of salmon in probiotic cheese sauce in the magnet, and scan it 'till it's done. You could inspect the images and the chemical composition of the meal before you eat it, to make sure it's just right.

Just make sure you don't use a steel saucepan...



Rodrigues D, Santos CH, Rocha-Santos TA, Gomes AM, Goodfellow BJ, & Freitas AC (2011). Metabolic Profiling of Potential Probiotic or Synbiotic Cheeses by Nuclear Magnetic Resonance (NMR) Spectroscopy. Journal of Agricultural and Food Chemistry. PMID: 21443163

BBC: Something Happened, For Some Reason

According to the BBC, the British recession and spending cuts are making us all depressed.


They found that between 2006 and 2010, prescriptions for SSRI antidepressants rose by 43%. They attribute this to a rise in the rates of depression caused by the financial crisis. OK there are a few caveats, but this is the clear message of an article titled Money woes 'linked to rise in depression'. To get this data they used the Freedom of Information Act.

What they don't do is to provide any of the raw data. So we just have to take their word for it. Maybe someone ought to use the Freedom of Information Act to make them tell us? This is important, because while I'll take the BBC's word about the SSRI rise of 43%, they also say that rates of other antidepressants rose - but they don't say which ones, by how much, or anything else. They don't say how many fell, or stayed flat.

Given which it's impossible to know what to make of this. Here are some alternative explanations:

  • This just represents the continuation of the well-known trend, seen in the USA and Europe as well as the UK, for increasing antidepressant use. This is my personal best guess and Ben Goldacre points out that rates rose 36% during the boom years of 2000-2005.
  • Depression has not got more common, it's just that it's more likely to be treated. This overlaps with the first theory. Support for this comes from the fact that suicide rates haven't risen - at least not by anywhere near 40%.
  • Mental illness is no more likely to be treated, but it's more likely to be treated with antidepressants, as opposed to other drugs. There was, and is, a move to get people off drugs like benzodiazepines, and onto antidepressants. However I suspect this process is largely complete now.
  • Total antidepressant use isn't rising but SSRI use is, because doctors increasingly prescribe SSRIs as opposed to other drugs. This was another Ben Goldacre suggestion and it is surely a factor, although again, I suspect that this process was largely complete by 2007.
  • People are more likely to be taking multiple different antidepressants, which would manifest as a rise in prescriptions, even if the total number of users stayed constant. Add-on treatment with mirtazapine and others is becoming more popular.
  • People are staying on antidepressants for longer, meaning more prescriptions. This might not even mean that they're staying ill for longer; it might just mean that doctors are getting better at convincing people to keep taking them, e.g. by prescribing drugs with milder side effects, or by referring people for psychotherapy, which could increase use by keeping people "in the system" and taking their medication. This is very likely. I previously blogged about a paper showing that from 1993 to 2005, antidepressant prescriptions rose although rates of depression fell, because of a small rise in the number of people taking them for very long periods.
  • Mental illness rates are rising, but it's not depression: it's anxiety, or something else. Entirely plausible since we know that many people taking antidepressants, in the USA, have no diagnosable depression and even no diagnosable psychiatric disorder at all.
  • People are relying on the NHS to prescribe them drugs, as opposed to private doctors, because they can't afford to go private. Private medicine in the UK is only a small sector so this is unlikely to account for much but it's the kind of thing you need to think about.
  • Rates of depression have risen, but it's nothing to do with the economy, it's something else which happened between 2007 and 2010: the Premiership of Gordon Brown? The assassination of Benazir Bhutto? The discovery of a 2,100 year old Japanese melon?
Personally, my money's on the melon.

Neurology vs Psychiatry

Neurology and psychiatry are related fields - if for no other reason, because neurological disorders can often manifest as, and get misdiagnosed as, psychiatric ones.

But what's the borderline between neurology and psychiatry? What makes one disease "neurological" and another "mental"? Are some psychiatric disorders more "neurological" than others?

It's a rather philosophical question and you could discuss it for as long as you wanted. Rather than doing that I thought I'd have a look to see which disorders are, at the moment, considered to fall into each category.

To do this I did a quick search of the archives of two journals: Neurology, which is the world's leading journal of... well, guess, and the American Journal of Psychiatry. I looked to see how many papers from the past 20 years had either a Title or an Abstract which referred to various different diseases. You can see the results above. Note that the total number of papers varied, obviously, and I've only plotted the proportion.
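The arithmetic behind the plot is just each journal's share of the total hits for a given disorder. A minimal sketch in Python, with made-up counts rather than the real search results:

```python
# Hypothetical hit counts per disorder per journal (illustrative only,
# not the actual search results from the post).
hits = {
    "schizophrenia": {"Neurology": 300, "AmJPsychiatry": 700},
    "epilepsy": {"Neurology": 950, "AmJPsychiatry": 50},
}

def journal_shares(counts):
    """Convert raw hit counts into each journal's proportion of the total."""
    total = sum(counts.values())
    return {journal: n / total for journal, n in counts.items()}

for disorder, counts in hits.items():
    print(disorder, journal_shares(counts))
```

Nothing deeper than that: the plotted value for each disorder is its proportion of mentions in each journal, which normalizes away the fact that the two journals publish different numbers of papers.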

Some interesting results. Schizophrenia, which is probably considered "the most neurological" psychiatric disorder, is in fact the least talked about in Neurology. Depression is top amongst the "core" psychiatric ones.

Autism occupies a middle ground, discussed by psychiatrists at 70% and neurologists at 30%. That didn't surprise me, but what did was that ADHD is almost as neurological as autism. Mental retardation is also intermediate, though it's 30:70 in favour of neurology. Whether autism is really less neurological than mental retardation, is a good question.

Then out of the disorders with a known neuropathology, Alzheimer's disease, Huntington's disease and "dementia" (which overlaps with Alzheimer's) are a bit psychiatric while stuff like headache and epilepsy is almost 100% neurological. Why this is, is not entirely clear, since both dementia and epilepsy are caused by neurological damage, and they can both cause "psychiatric" symptoms.

I suspect the difference is that it's just much harder to treat Alzheimer's, Huntington's and dementia. With epilepsy or meningitis, neurologists have a very good chance of controlling the symptoms and few patients will be left with ongoing psychiatric problems. But with the neurodegenerative disorders, neurologists can't really do much, leaving a large pool of people for psychiatrists to study.

Someone once said that neurologists take all of the curable diseases and leave psychiatrists with the ones they can't help. These figures suggest that there may be some truth in this.

The Tufnel Effect


In This Is Spin̈al Tap, British heavy metal god Nigel Tufnel says, in reference to one of his band's less successful creations:

It's such a fine line between stupid and...uh, clever.
This is all too true when it comes to science. You can design a breathtakingly clever experiment, using state of the art methods to address a really interesting and important question. And then at the end you realize that you forgot to type one word when writing the 1,000 lines of software code that runs this whole thing, and as a result, the whole thing's a bust.

It happens all too often. It has happened to me, let me think, three times in my scientific career; I know of several colleagues who had similar problems, and I'm currently struggling to deal with the consequences of someone else's stupid mistake.

Here's my cautionary tale. I once ran an experiment involving giving people a drug or placebo and when I crunched the numbers I found, or thought I'd found, a really interesting effect which was consistent with a lot of previous work giving this drug to animals. How cool is that?

So I set about writing it up and told my supervisor and all my colleagues. Awesome.

About two or three months later, for some reason I decided to reopen the data file, which was in Microsoft Excel, to look something up. I happened to notice something rather odd - one of the experimental subjects, who I remembered by name, was listed with a date-of-birth which seemed wrong: they weren't nearly that old.

Slightly confused - but not worried yet - I looked at all the other names and dates of birth and, oh dear, they were all wrong. But why?

Then it dawned on me and now I was worried: the dates were all correct but they were lined up with the wrong names. In an instant I saw the horrible possibility: mixed-up names would be harmless in themselves, but what if the group assignments (1 = drug, 0 = placebo) were lined up with the wrong results? That would render the whole analysis invalid... and oh dear. They were.

As the temperature of my blood plummeted I got up and lurched over to my filing cabinet where the raw data was stored on paper. It was deceptively easy to correct the mix-up and put the data back together. I re-ran the analysis.

No drug effect.

I checked it over and over. Everything was completely watertight - now. I went home. I didn't eat and I didn't sleep much. The next morning I broke the news to my supervisor. Writing that email was one of the hardest things I've ever done.

What happened? As mentioned I had been doing all the analysis in Excel. Excel is not a bad stats package and it's very easy to use but the problem is that it's too easy: it just does whatever you tell it to do, even if this is stupid.

In my data as in most people's, each row was one sample (i.e. a person) and each column was a piece of info. What happened was that I'd tried to take all the data, which was in no particular order, and reorder the rows alphabetically by subject name to make it easier to read.

How could I screw that up? Well, by trying to select "all the data" but actually only selecting a few of the columns. Then I reordered them, but not the others, so all the rows became mixed up. And the crucial column, drug=1 placebo=0, was one of the ones I reordered.

The immediate lesson I learned from this was: don't use Excel, use SPSS, which simply does not allow you to reorder only some of the data. Actually, I still use Excel for making graphs and figures but every time I use it, I think back to that terrible day.
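The failure mode, and the safeguard, can be sketched in plain Python (the subject records here are made up). The safe pattern is to sort whole records, so that every field travels with its row; the Excel bug was the equivalent of reordering some columns but not others:

```python
# Hypothetical subject table: one record per subject, as in the post.
rows = [
    {"name": "Zoe",  "dob": "1980-01-01", "group": 1, "score": 12.0},
    {"name": "Adam", "dob": "1975-06-15", "group": 0, "score": 9.5},
    {"name": "Mia",  "dob": "1990-03-30", "group": 1, "score": 14.2},
]

# Safe: sort whole records, so names, dates, and group codes stay aligned.
rows_sorted = sorted(rows, key=lambda r: r["name"])

# The Excel mistake was the equivalent of this: reordering one column
# while leaving another in its original order.
names_only = sorted(r["name"] for r in rows)   # reordered...
groups_only = [r["group"] for r in rows]       # ...but not reordered
# Pairing names_only with groups_only now silently misaligns
# subjects and group assignments, exactly the bug in the post.
```

The point is that in a spreadsheet both operations look the same on screen; in code, the dangerous version at least requires you to write two separate expressions.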

The broader lesson though is that if you're doing something which involves 100 steps, it only takes 1 mistake to render the other 99 irrelevant. This is true in all fields but I think it's especially bad in science, because mistakes can so easily go unnoticed due to the complexity of the data, and the consequences are severe because of the long time-scale of scientific projects.


Here's what I've learned: Look at your data, every step of the way, and look at your methods, every time you use them. If you're doing a neuroimaging study, the first thing you do after you collect the brain scans is to open them up and just look at them. Do they look sensible?

Analyze your data as you go along. Every time some new results come in, put it into your data table and just look at it. Make a graph which just shows absolutely every number all on one massive, meaningless line from Age to Cigarettes Smoked Per Week to EEG Alpha Frequency At Time 58. For every subject. Get to know the data. That way if something weird happens to it, you'll know. Don't wait until the end of the study to do the analysis. And don't rely on just your own judgement - show your data to other experts.

Check and recheck your methods as you go along. If you're running, say, a psychological experiment involving showing people pictures and getting them to push buttons, put yourself in the hot seat and try it on yourself. Not just once, but over and over. Some of the most insidious problems with these kinds of studies will go unnoticed if you only look at the task once - such as the old "randomized"-stimuli-that-aren't-random issue.

Trust no-one. This sounds bad, but it's not. Don't rely on anyone else's work, in experimental design or data analysis, until you've checked it yourself. This doesn't mean you're assuming they're stupid, because everyone makes these mistakes. It just means you're assuming they're human, like you.

Finally, if the worst happens and you discover a stupid mistake in your own work: admit it. It feels like the end of the world when this happens, but it's not. However, if you don't admit it, or even worse, start fiddling other results to cover it up - that's misconduct, and if you get caught doing that, it is the end of the world, or your career, at any rate.

"1 Boring Old Man" Blog Isn't

Just wanted to let everyone know about a blog called 1 boring old man, which is a very poor name as it isn't boring at all.


I don't know if it's written by an old man or not, one can only assume so, but whoever writes it, it has got a lot of extremely good stuff about psychiatry and psychiatric drugs. Fans of Daniel Carlat's blog or even former readers of the now seemingly defunct Furious Seasons will find it extremely interesting.

It's actually been going since 2005, but for some reason I've only just found out about it (many thanks to regular Neuroskeptic commentator Bernard Carroll).

Women Are Better Connected... Neurally

The search for differences between the brains of men and women has a long and rather confusing history. Any structural differences are small, and their significance is controversial. The one rock-solid finding is that men's brains are slightly bigger on average. Then again, men are slightly bigger on average in general.

A new paper just out from Tomasi and Volkow (of cell-phones-affect-brain fame) offers, on the face of it, extremely strong evidence for a gender difference in the brain, not in structure but in function: Gender Differences in Brain Functional Connectivity Density.

Here's the headline pic:
They used resting-state "functional connectivity" (though see here for why this term may be misleading) fMRI in men and women. This essentially means that they put people in the MRI scanner, told them to just lie there and relax, and measured the degree to which activity in different parts of the brain was correlated to activity in every other part. They had a whopping 561 brains in total, though they didn't scan everyone themselves: they downloaded the data from here.

As you can see, the results were highly consistent around the world. In both men and women, the main "connectivity hub" was an area called the ventral precuneus. This is interesting in itself, although not a new finding, as the precuneus has long been known to be involved in resting-state networks. However, the degree of connectivity was higher in women than in men: 14% higher, in fact.

The method they used, which they've dubbed "Local Functional Connectivity Density Mapping", is apparently a fast way of calculating the degree to which each part of the brain is functionally related to each other part.

You could do this by taking every single voxel and correlating it with every other voxel, for every single person, but this would take forever unless you had a supercomputer. LFCDM is, they say, a short-cut. I'm not really qualified to judge whether it's a valid one, but it looks solid.
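The brute-force version of that computation is easy to sketch with NumPy. This uses toy random data and a made-up correlation threshold; LFCDM itself is a faster approximation, not this code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_timepoints = 50, 200

# Toy "fMRI" data: one time series per voxel (real scans have ~100,000 voxels,
# which is why the brute-force approach is slow).
data = rng.standard_normal((n_voxels, n_timepoints))

# Correlate every voxel's time series with every other voxel's.
corr = np.corrcoef(data)  # shape: (n_voxels, n_voxels)

# Connectivity "density": for each voxel, count how many other voxels it
# correlates with above some threshold (0.2 is an arbitrary choice here).
threshold = 0.2
density = (np.abs(corr) > threshold).sum(axis=1) - 1  # subtract self-correlation
```

The cost is the full correlation matrix: for V voxels that's O(V²) pairs, which is why a whole-brain version needs either serious hardware or a short-cut like the one the authors propose.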

Also, men's brains were on average bigger, but interestingly they show that women had, relative to brain size, more grey matter than men. Here's the data (I'm not sure about the color scheme...)

So what does the functional connectivity finding mean? It could mean anything, or nothing. You could interpret the highly interconnected female brain as an explanation for why women are more holistic, better at multi-tasking, and more in touch with their emotions than men with their fragmented faculties. Or whatever.

Or you could say that that's sexist rubbish, and all this means is that men and women, on average, are thinking about different things when they lie in MRI scanners. We already know that resting-state functional connectivity centred on the precuneus is suppressed whenever your attention is directed towards an external "task".

That's not a fault of this research, which is excellent as far as it goes and certainly raises lots of interesting questions about functional connectivity. But we don't know what it means quite yet.

Tomasi D, & Volkow ND (2011). Gender differences in brain functional connectivity density. Human Brain Mapping. PMID: 21425398

A Stroke Of Good Fortune Cures OCD?

A 45-year-old female teacher had a history of severe obsessive-compulsive disorder, along with other problems including ADHD. Her daughter, and many other people in her family, had suffered the same problems, and in a few cases had Tourette's Syndrome. But all that changed - when she suffered a stroke. This is according to a brief case report from Drs. Diamond and Ondo of Texas:

[she] had a long history of constant intrusive and obsessive thoughts that interrupted her daily activities and sleep. She had constant unfounded fears that something bad would happen to her family and had persistent violent thoughts of using knives to harm family members. She would check the door locks up to 15 times a day. In addition to her OCD symptoms, she had ... inattention, poor concentration, and difficulty sitting still.
She had never been treated for the OCD, despite how it interfered with her life, because she feared losing her job as a teacher if she sought psychiatric help. But then...
Nine months before approaching us, she developed the acute onset of paresthesia [weird sensations] and weakness in the left upper extremity and face, associated with slurred speech. Initially, she was unable to lift her arm against gravity.
These are classic signs of a stroke, but it was a very mild one, because the symptoms only lasted a few minutes and were pretty much gone even before she arrived at the emergency room. She made a full recovery. More than a full recovery in fact:
Within weeks of her stroke, she realized that her obsessive and intrusive thoughts, fears, rituals, and impulsive behavior had completely resolved. In addition, there was some improvement in her temperament. There was no improvement in attention or concentration. Owing to her improvement in neuropsychiatric symptoms, she strongly felt that her stroke was beneficial. These benefits have persisted for 24 months.
Most medical case reports concern patients who died, or got really sick, in a particularly interesting fashion, but this one has a happy ending. Strokes can be devastating, of course, although people also make full recoveries - it all depends on the severity of the stroke, and whether they get prompt treatment.

There have been a few other cases of brain damage which brought unexpectedly beneficial effects. In Vietnam veterans, for example, people with damage to the vmPFC due to combat trauma seemed to be protected from depression.

Whether the stroke really cured her, or whether it was some kind of psychological "placebo" effect, we'll never know. It's hard to see why a stroke would have a placebo effect, but on the other hand, an MRI scan revealed that the stroke occurred in an area of the brain - the right frontoparietal cortex - which is fairly low down on the list of "OCD-ish" areas.

The authors make some vague comments about "modulation of the cortical–subcortical circuits" but this is really the neuroscientific equivalent of saying "We guess it did something", because the entire brain is made of cortical-subcortical circuits, given that the cortex is at the top and everything else is, by definition, the sub-cortex. It's quite possible. But we really can't tell.

Diamond A, & Ondo WG (2011). Resolution of Severe Obsessive-Compulsive Disorder After a Small Unilateral Nondominant Frontoparietal Infarct. The International Journal of Neuroscience. PMID: 21426244

Depressed or Bereaved? (Part 2)

In Part 1, I discussed a paper by Jerome Wakefield examining the issue of where to draw the line between normal grief and clinical depression.


The line moved in the American Psychiatric Association's DSM diagnostic system when the previous DSM-III edition was replaced by the current DSM-IV. Specifically, the "bereavement exclusion" was made narrower.

The bereavement exclusion says that you shouldn't diagnose depression in someone whose "depressive" symptoms are a result of grief - unless they're particularly severe or prolonged, in which case you should. DSM-IV lowered the bar for "severe" and "prolonged", thus making grief more likely to be classed as depression. Wakefield argued that the change made things worse.

But DSM-V is on its way soon. The draft was put up online in 2010, and it turns out that depression is to have no bereavement exclusion at all. Grief can be diagnosed as depression in exactly the same way as depressive symptoms which come out of the blue.

The draft itself offered just one sentence by way of justification for this. However, big cheese psychiatrist Kenneth S. Kendler recently posted a brief note defending the decision. Wakefield has just published a rather longer paper in response.

Wakefield starts off with a bit of scholarly kung-fu. Kendler says that the precursors to the modern DSM, the 1972 Feighner and 1975 RDC criteria, didn't have a bereavement clause for depression either. But they did - albeit not in the criteria themselves, but in the accompanying how-to manuals; the criteria themselves weren't meant to be self-contained, unlike the DSM. Ouch! And so on.

Kendler's sole substantive argument against the exclusion is that it is "not logically defensible" to exclude depression induced by bereavement, if we don't have a similar provision for depression following other severe loss or traumatic events, like becoming unemployed or being diagnosed with cancer.

Wakefield responds that, yes, he has long made exactly that point, and that in his view we should take the context into account, rather than just looking at the symptoms, in grief and many other cases. However, as he points out, it is better to do this for one class of events (bereavement), than for none at all. He quotes Emerson's famous warning that "A foolish consistency is the hobgoblin of little minds". It's better to be partly right, than consistently wrong.

Personally, I'm sympathetic to Wakefield's argument that the bereavement exclusion should be extended to cover non-bereavement events, but I'm also concerned that this could lead to underdiagnosis if it relied too much on self-report.

The problem is that depression usually feels like it's been caused by something that's happened, but this doesn't mean it was; one of the most insidious features of depression is that it makes things seem much worse than they actually are, so it seems like the depression is an appropriate reaction to real difficulties, when to anyone else, or to yourself looking back on it after recovery, it was completely out of proportion. So it's a tricky one.

Anyway, back to bereavement; Kendler curiously ends up by agreeing that there ought to be a bereavement clause - in practice. He says that just because someone meets criteria for depression does not mean we have to treat them:

...diagnosis in psychiatry as in the rest of medicine provides the possibility but by no means the requirement that treatment be initiated ... a good psychiatrist, on seeing an individual with major depression after bereavement, would start with a diagnostic evaluation.

If the criteria for major depression are met, then he or she would then have the opportunity to assess whether a conservative watch and wait approach is indicated or whether, because of suicidal ideation, major role impairment or a substantial clinical worsening the benefits of treatment outweigh the limitations.

The final sentence is lifted almost word for word from the current bereavement clause, so this seems to be an admission that the exclusion is, after all, valid, as part of the clinical decision-making process, rather than the diagnostic system.

OK, but as Wakefield points out, why misdiagnose people if you can help it? It seems to be tempting fate. Kendler says that a "good psychiatrist" wouldn't treat normal, uncomplicated bereavement as depression. But what about the bad ones? Why on earth would you deliberately make your system such that good psychiatrists would ignore it?

More importantly, scrapping the bereavement exclusion would render the whole concept of Major Depression meaningless. Almost everyone suffers grief at some point in their lives. Already, 40% of people meet criteria for depression by age 32 - and that's with a bereavement exclusion.

Scrap it and, I don't know, 80% will meet criteria by that age - so the criteria will be useless as a guide to identifying the people who actually have depression as opposed to the ones who have just suffered grief. We're already not far off that point, but this would really take the biscuit.

Wakefield JC (2011). Should Uncomplicated Bereavement-Related Depression Be Reclassified as a Disorder in the DSM-5? The Journal of Nervous and Mental Disease, 199 (3), 203-8. PMID: 21346493

Black Bile and Black Dogs

Depression is black. That's been the view of Western culture ever since the ancient Greeks, with their concept of "melan cholia" (μελαγχολία) - black bile. The idea was that psychological states were associated with particular bodily fluids; melancholy was associated with the "black bile" of the spleen, as opposed to the go-getting, passionate "yellow bile" of the gall-bladder.

What this "black bile" (melan chole) actually was is rather mysterious. The gall bladder does indeed produce bile, a digestive juice which is greenish-yellow, but the spleen doesn't secrete anything as such. It itself is a dark greyish-purple, which might have given rise to the idea that it contained something black. Here's another theory.

The other color associated with depression is blue, of course, as in The Blues. However, when picturing depression-blue, I think most people generally see it as something rather close to black. It's the sky at twilight, not a bright summer's day, right? It's not a happy blue.

Winston Churchill famously referred to his depression as his Black Dog. There's a rather nice correspondence here with Chinese, though I doubt Churchill knew it. The Chinese character for black is 黑, and (one of) the characters for dog is 犬.

Write these as two separate characters and it says, well, black dog (badly). But there's another character which consists of "black" and "dog" combined: 默.

This means silence; quiet; speechless; mute.

This is as good a one-word description of depression as any. Churchill's metaphor has always struck me as slightly misleading in one sense (although it's excellent in others): depression is not a thing, not even a black one. It is a lack - of motivation, energy, joy, imagination. You don't wake up and feel depressed; you wake up depressed and feel terrible, but the depression itself is hidden, only evident in retrospect, just as you don't tend to notice how quiet it is until a noise breaks the silence.

Neural Correlates of 80s Hip Hop

A ground-breaking new study reveals the neurological basis of seminal East Coast hip-hop pioneers Run-D.M.C.

The study is Diffusion tensor imaging of the hippocampus and verbal memory performance: The RUN DMC Study, and it actually has nothing to do with hip-hop, but it does have one of the best study acronyms I have ever seen.

RUN DMC stands for the "Radboud University Nijmegen Diffusion tensor and Magnetic resonance imaging Cohort study".

Or maybe it does relate to rapping. Because the paper is about verbal memory, and if there's one thing a rapper needs, it's a good memory for words, otherwise they'd forget their lyrics and... OK no, it doesn't relate to hip-hop.

It is however a very nice piece of research. They took no fewer than 503 elderly people - making this by far the single biggest neuroimaging study I have ever read. They used DTI to measure the quality of white-matter tracts in the brain and correlated this with verbal memory function. DTI is an extremely clever technique which allows you to measure the integrity of white matter pathways.

The theory behind the study is that in elderly people, white matter often shows degeneration. This is thought to be caused by vascular disease - problems with the blood flow to the brain, such as cerebral small-vessel disease, which means, essentially, a series of mild strokes that often go unnoticed at the time but build up to cause brain damage, specifically white matter disruption.

The symptoms of this are extremely varied and can range from cognitive and memory impairment, to depression, to motor problems (clumsiness), all depending on where in the brain it happens.

All of the people in this study had cerebral small-vessel disease as defined on the basis of symptoms and the presence of visible white matter lesions on the basic MRI scan. The authors found that the integrity of the white matter tracts in the area of the hippocampus, as measured with DTI, correlated with performance on a simple word learning task:


The healthier the hippocampal white matter, the better people did on the task. This makes sense as the hippocampus is a well known memory centre. This is only a correlation, and doesn't prove that the hippocampal damage caused the memory problems, but it seems entirely plausible. The authors controlled for things like age, gender, and the size of the hippocampus, as far as possible.

Should we all be worried about our white matter when we get older? Quite possibly - but luckily, the risk factors for vascular disease are quite well understood, and many of them are things you can change by having a healthy lifestyle.

Smoking is bad news, as are hypertension (high blood pressure), obesity, and high cholesterol. Diabetes is also a risk factor. So you should quit smoking, eat well, and ensure that you're getting tested and if necessary treated for hypertension and diabetes. All of which, of course, is a good idea from the point of view of general health as well.




van Norden AG, de Laat KF, Fick I, van Uden IW, van Oudheusden LJ, Gons RA, Norris DG, Zwiers MP, Kessels RP, & de Leeuw FE (2011). Diffusion tensor imaging of the hippocampus and verbal memory performance: The RUN DMC Study. Human Brain Mapping. PMID: 21391278

Depressed Or Bereaved? (Part 1)

Part 2 is now out here.

My cat died on Tuesday. She may have been a manipulative psychopath, but she was a likeable one. She was 18. On that note, here's a paper about bereavement.

It's been recognized since forever that clinical depression is similar, in many ways, to the experience of grief. Freud wrote about it in 1917, and it was an ancient idea even then. So psychiatrists have long thought that symptoms which would indicate depression in someone who wasn't bereaved can be quite normal and healthy as a response to the loss of a loved one. You can't go around diagnosing depression purely on the basis of the symptoms, out of context.

On the other hand, sometimes grief does become pathological - it triggers depression. So equally, you can't just decide to never diagnose depression in the bereaved. How do you tell the difference between "normal" and "complicated" grief, though? This is where opinions differ.

Jerome Wakefield (of Loss of Sadness fame) and colleagues compared two methods. They looked at the NCS survey of the American population, and took everyone who'd suffered a possible depressive episode following bereavement. There were 156 of these.

They then divided these cases into "complicated" grief (depression) vs "uncomplicated" grief, first using the older DSM-III-R criteria, and then with the current DSM-IV ones. Both have a bereavement exclusion for the depression criteria - don't diagnose depression if it's bereavement - but both also have criteria for complicated grief, i.e. grief which is depression: exclusions to the exclusion.

The systems differ in two major ways: the older criteria were ambiguous, but at the time they were generally interpreted to mean that you needed two features out of a possible five; prolonged duration was one of the five, with anything over 12 months considered "prolonged". In DSM-IV, however, you only need one criterion, and anything over 2 months counts as prolonged.
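To make the difference concrete, here's a minimal sketch of the two decision rules as just described. The thresholds come from the post's description, not from the DSM text itself, and `features_present` is a placeholder count of the non-duration features a given case shows:

```python
# A sketch of the two classification rules described in the post, NOT the
# actual DSM wording. `features_present` is a placeholder count of the
# non-duration features of "complicated" grief that a given case shows.

def complicated_dsm3r(features_present: int, duration_months: float) -> bool:
    """DSM-III-R, as generally interpreted at the time: two or more of five
    features, where 'prolonged duration' (over 12 months) is one of the five."""
    n = features_present + (1 if duration_months > 12 else 0)
    return n >= 2

def complicated_dsm4(features_present: int, duration_months: float) -> bool:
    """DSM-IV: a single feature suffices, and anything over 2 months
    already counts as 'prolonged'."""
    n = features_present + (1 if duration_months > 2 else 0)
    return n >= 1

# A 3-month episode with one other feature: 'uncomplicated' under DSM-III-R,
# 'complicated' under DSM-IV - exactly the broadening the paper measured.
print(complicated_dsm3r(1, 3), complicated_dsm4(1, 3))  # False True
```

Cases like the one in the example are where the two systems disagree, which is why DSM-IV classifies so many more episodes as "complicated".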

What happened? DSM-IV classified many more cases as complicated than the older criteria - 80% vs 45%. No surprise there, because the criteria are obviously a lot broader. But which was better? To evaluate them, they compared the "complicated" vs "normal" episodes on six hallmarks of clinical depression - melancholic features, seeking medical treatment, and so on.

They found that "complicated" cases were more severe under both criteria but the difference was much more clear cut using DSM-III-R.

Wakefield et al are not saying that the DSM-III-R criteria were perfect. However, it was better at identifying the severe cases than the DSM-IV, which is worrying because DSM-IV was meant to be an improvement on the old system.

Hang on though. DSM-V is coming soon. Are they planning to put things back to how they were, or invent an even better system? No. They're planning to, er, get rid of the bereavement criteria altogether and treat bereavement just like non-bereavement. Seriously. In other words they are planning to diagnose depression purely on the basis of the symptoms, out of context.

Which is so crazy that Wakefield has written another paper all about it (he's been busy recently), which I'm going to cover in an upcoming post. So stay tuned.

Wakefield JC, Schmitz MF, & Baer JC (2011). Did narrowing the major depression bereavement exclusion from DSM-III-R to DSM-IV increase validity? The Journal of Nervous and Mental Disease, 199 (2), 66-73. PMID: 21278534

Paxil: The Whole Truth?

Paroxetine, aka Paxil aka Seroxat, is an SSRI antidepressant.

Like other SSRIs, its reputation has see-sawed over time. Hailed as miracle drugs in the 1990s and promoted for everything from depression to "separation anxiety" in dogs, they fell from grace over the past decade.

First, concerns emerged over withdrawal symptoms and suicidality especially in young people. Then more recently their antidepressant efficacy came into serious question. Paroxetine has arguably the worst image of all SSRIs, although whether it's much different to the rest is unclear.

Now a new paper claims to provide a definitive assessment of the safety and efficacy of paroxetine in adults (age 18+). The lead authors are from GlaxoSmithKline, who invented paroxetine. So it's no surprise that the text paints GSK and their product in a favourable light, but the data warrant a close look and the results are rather interesting - and complicated.

They took all of the placebo-controlled trials of paroxetine for any psychiatric disorder - because it wasn't just trialled in depression, but also in PTSD, anxiety, and more. They excluded studies with fewer than 30 people; this makes sense, though it's somewhat arbitrary - why not 40, or 20? Anyway, they ended up with 61 trials.

First they looked at suicide. In a nutshell paroxetine increased suicidal "behaviour or ideation" in younger patients (age 25 or below) relative to placebo, whether or not they were being treated for depression. In older patients, it only increased suicidality in the depression trials, and the effect was smaller. I've put a red dot where paroxetine was worse than placebo; this doesn't mean the effect was "statistically significant", but the numbers are so small that this is fairly meaningless. Just look at the numbers.

This is not very new. It's been accepted for a while that broadly the same applies when you look at trials of other antidepressants. Whether this causes extra suicides in the real world is a big question.

When it comes to efficacy, however, we find some rather startling info that's not been presented together in one article before, to my knowledge. Here's a graph showing the effect of paroxetine over-and-above placebo in all the different disorders, expressed as a proportion of the improvement seen in the placebo group.

Now I should point out that I just made this measure up. It's not ideal. If the placebo response is very small, then a tiny drug effect will seem large by comparison, even if what this really means is that neither drug nor placebo do any good.

However, the flip side of that coin is that it controls for the fact that rating scales for different disorders might be more likely to show change than others. The d score is a more widely used standardized measure of effect size - though it has its own shortcomings - and I'd like to know the d scores here, but the data they provide don't allow us to easily calculate them. You could do it from the GSK database, but it would take ages.
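To make the comparison concrete, here's a sketch of both measures - the ad-hoc placebo-relative measure made up in this post, and Cohen's d. All the numbers below are invented for illustration; they're not from the GSK trials:

```python
# Hedged sketch: the post's ad-hoc "proportion of placebo improvement"
# measure vs Cohen's d. The change scores, SDs, and sample sizes below
# are made up; they are not from the paroxetine data.
import math

def placebo_relative_effect(drug_change: float, placebo_change: float) -> float:
    """Drug improvement over and above placebo, as a fraction of the
    placebo-group improvement (the measure invented in this post)."""
    return (drug_change - placebo_change) / placebo_change

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference, using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical HAMD change scores: drug arm improves 10 points, placebo 8.
print(placebo_relative_effect(10.0, 8.0))       # 0.25, i.e. 25% of placebo
print(cohens_d(10.0, 7.0, 100, 8.0, 7.0, 100))  # ~0.29
```

Note how the first measure blows up as the placebo improvement shrinks, which is exactly the weakness conceded above, while d is anchored to the variability of the scores instead.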

Anyway as you can see paroxetine was better, relative to placebo, against PTSD, PMDD, obsessive-compulsive disorder, and social anxiety, than it was against depression measured with the "gold-standard" HAMD scale! In fact the only thing it was worse against was Generalized Anxiety Disorder. Using the alternative MADRS depression scale, the antidepressant effect was bigger, but still small compared to OCD and social anxiety.

This is rather remarkable. Everyone calls paroxetine "an antidepressant", yet at least in one important sense it works better against OCD and social anxiety than it does against depression!

In fact, is paroxetine an antidepressant at all? It works better on MADRS and very poorly on the HAMD; is this because the HAMD is a better scale of depression, and the MADRS actually measures anxiety or OCD symptoms?

That's a lovely neat theory... but in fact the HAMD-17 has two questions about anxiety, scoring 0-4 points each, so you can score up to 8 (or 12 if you count "hypochondriasis", which is basically health anxiety, so you probably should) out of a total maximum of 52. The MADRS has one anxiety item with a max score of 6 out of a total of 60. So the HAMD is more "anxious" than the MADRS.
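Spelled out as arithmetic, the anxiety "weighting" of each scale looks like this (a trivial sketch; the item counts and maxima are the ones quoted above):

```python
# The anxiety "weighting" arithmetic from the paragraph above, spelled out.
hamd_anxiety_max = 8            # two HAMD-17 anxiety items, 0-4 points each
hamd_plus_hypochondriasis = 12  # counting the hypochondriasis item too
hamd_total = 52
madrs_anxiety_max = 6           # one MADRS anxiety item
madrs_total = 60

print(hamd_anxiety_max / hamd_total)           # ~0.15
print(hamd_plus_hypochondriasis / hamd_total)  # ~0.23
print(madrs_anxiety_max / madrs_total)         # 0.1
```

So roughly 15-23% of the HAMD's maximum score is anxiety, against 10% of the MADRS's, which is what sinks the neat theory.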

This is more than just a curiosity. Paroxetine's antidepressant effect was tiny in those aged 25 or under on the HAMD - the treatment benefit was just 9% of the placebo effect - but on the MADRS in the same age group, the benefit was 35%! So what is the HAMD measuring, and why is it different to the MADRS?

Honestly, it's hard to tell because the Hamilton scale is so messy. It measures depression and the other distressing symptoms which commonly go along with it. The idea, I think, was that it was meant to be a scale of the patient's overall clinical severity - how seriously they were suffering - rather than a measure of depression per se.

Which is fine. Except that most modern trials carefully exclude anyone with "comorbid" symptoms like anxiety, and on the other hand, recruit people with symptoms quite different to the depressed inpatients that Dr Max Hamilton would have seen when he invented the scale in 1960.

Yet 50 years later the HAMD-17, unmodified, is still the standard scale. It's been repeatedly shown to be multi-factorial (it doesn't measure one thing), no-one even agrees on how to interpret it, and a "new" scale, the HAMD-6 - which consists of simply chucking out 11 questions and keeping the 6 that actually measure depression - has been shown to be better. Yet everyone still uses the HAMD-17 because everyone else does.

Link: I recently covered a dodgy paper about paroxetine in adolescents with depression; it wasn't included in this analysis because this was about adults.

Carpenter DJ, Fong R, Kraus JE, Davies JT, Moore C, & Thase ME (2011). Meta-analysis of efficacy and treatment-emergent suicidality in adults by psychiatric indication and age subgroup following initiation of paroxetine therapy: a complete set of randomized placebo-controlled trials. The Journal of Clinical Psychiatry. PMID: 21367354

Amy Bishop, Neuroscientist Turned Killer

Across at Wired, Amy Wallace has a long but riveting article about Amy Bishop, the neuroscience professor who shot her colleagues at the University of Alabama last year, killing three.

It's a fascinating article because of the picture it paints of a killer and it's well worth the time to read. Yet it doesn't really answer the question posed in the title: "What Made This University Scientist Snap?"

Wallace notes the theory that Bishop snapped because she was denied tenure at the University, a serious blow to anyone's career and especially to someone who apparently believed she was destined for great things. However, she points out that the timing doesn't fit: Bishop was denied tenure several months before the shooting. And she shot at some of the faculty who voted in her favor, ruling out a simple "revenge" motive.

But even if Bishop had snapped the day after she found out about the tenure decision, what would that explain? Thousands of people are denied tenure every year. This has been going on for decades. No-one except Bishop has ever decided to pick up a gun in response.

Bishop had always displayed a streak of senseless violence; in 1986, she killed her 18-year-old brother with a shotgun in her own kitchen. She was 21. The death was ruled an accident, but probably wasn't. It's not clear what it was, though: Bishop had no clear motive.

Amy had said something that upset her father. That morning they’d squabbled, and at about 11:30 am, Sam, a film professor at Northeastern University, left the family’s Victorian home to go shopping... Amy, 21, was in her bedroom upstairs. She was worried about “robbers,” she would later tell the police. So she loaded her father’s 12-gauge pump-action shotgun and accidentally discharged a round in her room. The blast struck a lamp and a mirror and blew a hole in the wall...

The gun, a Mossberg model 500A, holds multiple rounds and must be pumped after each discharge to chamber another shell. Bishop had loaded the gun with number-four lead shot. After firing the round into the wall, she could have put the weapon aside. Instead, she took it downstairs and walked into the kitchen. At some point, she pumped the gun, chambering another round.

...[her mother] told police she was at the sink and Seth was by the stove when Amy appeared. “I have a shell in the gun, and I don’t know how to unload it,” Judy told police her daughter said. Judy continued, “I told Amy not to point the gun at anybody. Amy turned toward her brother and the gun fired, hitting him.”

Years later Bishop, possibly with the help of her husband, sent a letter-bomb to a researcher who'd sacked her, Paul Rosenberg. Rosenberg avoided setting off the suspicious package and police disarmed it; Bishop was questioned, but never charged.

Wallace argues that Bishop's "eccentricity", or instability, was fairly evident to those who knew her but that in the environment of science, it went unquestioned because science is full of eccentrics.

I'm not sure this holds up. It's certainly true that science has more than its fair share of oddballs. The "mad scientist" trope is a stereotype but it has its basis in fact and it has done at least since Newton; many say that you can't be a great scientist and be entirely 'normal'.

But the problem with this, as a theory for why Bishop wasn't spotted sooner, is that she was spotted sooner - as unhinged, albeit not as a potential killer - by a number of people. Rosenberg sacked her, in 1993, on the grounds that her work was inadequate, and said that "Bishop just didn’t seem stable". And in 2009, the reason Bishop was denied tenure in Alabama was partially that one of her assessors referred to her as "crazy", more than once; she filed a complaint on that basis.

Bishop also published a bizarre paper in 2009 written by herself, her husband, and her three children, of "Cherokee Lab Systems", a company which was apparently nothing more than a fancy name for their house. There may be a lot of eccentrics in science, but that's really weird.

So I think that all of these attempts at an explanation fall short. Amy Bishop is a black swan; she is the first American professor to do what she did. Hundreds of thousands of scientists have been through the same academic system and only one ended up shooting their colleagues. If there is an explanation, it lies within Bishop herself.

Whether she was suffering from a diagnosable mental illness is unclear. Her lawyer has said so, but he would; it's her only defence. Maybe we'll learn more at the trial.

H/T: David Dobbs for linking to this.

The Mystery of "Whoonga"


According to a disturbing BBC news story, South African drug addicts are stealing medication from HIV+ people and using it to get high:

'Whoonga' threat to South African HIV patients

"Whoonga" is, allegedly, the street name for efavirenz (aka Stocrin), one of the most popular antiretroviral drugs. The pills are apparently crushed, mixed with marijuana, and smoked for their hallucinogenic effects.

This is not, in fact, a new story; Scientific American covered it 18 months ago and the BBC themselves did in 2008 (although they didn't name efavirenz).

Edit 16.00: In fact the picture is even messier than I first thought. Some sources, e.g. Wikipedia and the articles it links to, mostly from South Africa, suggest that "whoonga" is actually a 'brand' of heroin and that the antiretrovirals may not be the main ingredient, if they're an ingredient at all. If this is true, then the BBC article is misleading. Edit: see the Comments for more on this...

Why would an antiviral drug get you high? This is where things get rather mysterious. Efavirenz is known to enter the brain, unlike most other HIV drugs, and psychiatric side-effects including anxiety, depression, altered dreams, and even hallucinations are common in efavirenz use, especially with high doses (1,2,3), but they're usually mild and temporary. But what's the mechanism?

No-one knows, basically. Blank et al found that efavirenz causes a positive result on urine screening for benzodiazepines (like Valium). This makes sense given the chemical structure:
Efavirenz is not a benzodiazepine, because it doesn't have the defining diazepine ring (the one with two Ns). However, as you can see, it has a lot in common with certain benzos such as oxazepam and lorazepam.

However, while this might well explain why it confuses urine tests, it doesn't by itself go far to explaining the reported psychoactive effects. Oxazepam and lorazepam don't cause hallucinations or psychosis, and they reduce anxiety, rather than causing it.

They also found that efavirenz caused a false positive for THC, the active ingredient in marijuana; this was probably caused by the glucuronide metabolite. Could this metabolite have marijuana-like effects? No-one knows at present.

Beyond that there's been little research on the effects of efavirenz in the brain. This 2010 paper reviewed the literature and found almost nothing. There were some suggestions that it might affect inflammatory cytokines or creatine kinase, but these are not obvious candidates for the reported effects.

Could the liver be responsible, rather than the brain? Interestingly, the 2010 paper says that efavirenz inhibits three liver enzymes: CYPs 2C9, 2C19, and 3A4. All three are involved in the breakdown of THC, so, in theory, efavirenz might boost the effects of marijuana by this mechanism - but that wouldn't explain the psychiatric side effects seen in people who are taking the drug for HIV and don't smoke weed.

Drugs that cause hallucinations generally either agonize 5HT2A receptors or block NMDA receptors. Off the top of my head, I can't see any similarities between efavirenz and drugs that target those systems like LSD (5HT2A) or ketamine or PCP (NMDA), but I'm no chemist and anyway, structural similarity is not always a good guide to what drugs do.

If I were interested in working out what's going on with efavirenz, I'd start by looking at GABA, the neurotransmitter that's the target of benzos. Maybe the almost-a-benzodiazepine-but-not-quite structure means that it causes some unusual effects on GABA receptors? No-one knows at present. Then I'd move on to 5HT2A and NMDA receptors.

Finally, it's always possible that the users are just getting stoned on cannabis and mistakenly thinking that the efavirenz is making it better through the placebo effect. Stranger things have happened. If so, it would make the whole situation even more tragic than it already is.

Cavalcante GI, Capistrano VL, Cavalcante FS, Vasconcelos SM, Macêdo DS, Sousa FC, Woods DJ, & Fonteles MM (2010). Implications of efavirenz for neuropsychiatry: a review. The International Journal of Neuroscience, 120 (12), 739-45. PMID: 20964556

The Brain's Sarcasm Centre? Wow, That's Really Useful

A team of Japanese scientists have found the most sarcastic part of the brain known to date. They also found the metaphor centre of the brain and, well, it's kind of like a pair of glasses.

The paper is Distinction between the literal and intended meanings of sentences and it's brought to you by Uchiyama et al. They took 20 people and used fMRI to record neural activity while the volunteers read 4 kinds of statements:

  • Literally true
  • Nonsensical
  • Sarcastic
  • Metaphorical
The neat thing was that the statements themselves were the same in each case. The preceding context determined how they were to be interpreted. So for example, the statement "It was bone-breaking" was literally true when it formed part of a story about someone in hospital describing an accident; it was metaphorical in the context of someone describing how hard it was to do something difficult; and it was nonsensical if the context was completely unrelated ("He went to the bar and ordered:...").

Here's what they found. Compared to the literally-true and the nonsensical statements, which were a control condition, metaphorical statements activated the head of the caudate nucleus, the thalamus, and an area of the medial PFC they dub the "arMPFC" but which other people might call the pgACC or something even more exotic; names get a bit vague in the frontal lobe.


The caudate nucleus, as I said, looks like a pair of glasses. Except without the nose bit. The area activated by metaphors was the "lenses". Kind of.

Sarcasm however activated the same mPFC region, but not the caudate:

Sarcasm also activated the amygdala.

*

So what? This is a very nice fMRI study. 20 people is a lot, the task was well-designed and the overlap of the mPFC blobs in the sarcasm-vs-control and the metaphor-vs-control tasks was impressive. There's clearly something going on there in both cases, relative to just reading literal statements. Something's going on in the caudate and thalamus with metaphor but not sarcasm, too.

But what can this kind of study tell us about the brain? They've localized something-about-metaphor to the caudate nucleus, but what is it, and what does the caudate actually do to make that thing happen?

The authors offer a suggestion - the caudate is involved in "searching for the meaning" of the metaphorical statement in order to link it to the context, and work out what the metaphor is getting at. This isn't required for sarcasm because there's only one, literal, meaning - it's just reversed, the speaker actually thinks the exact opposite. Whereas with both sarcasm and metaphor you need to attribute intentions (mentalizing or "Theory of Mind").

That's as plausible an account as any but the problem is that we have no way of knowing, at least not from imaging studies, if it's true or not. As I said this is not the fault of this study but rather an inherent challenge for the whole enterprise. The problem is - switch on your caudate, metaphor coming up - a lot like the challenge facing biology in the aftermath of the Human Genome Project.

The HGP mapped the human genome, and like any map it told us where stuff is, in this case where genes are on chromosomes. You can browse it here. But by itself this didn't tell us anything about biology. We still have to work out what most of these genes actually do; and then we have to work out how they interact; and then we have to work out how those interactions interact with other genes and the environment...

Genomics people call this, broadly speaking, "annotating" the genome, although this is not perhaps an ideal term because it's not merely scribbling notes in the margins, it's the key to understanding. Without annotation, the genome's just a big list.

fMRI is building up a kind of human localization map, a blobome if you will, but by itself this doesn't really tell us much; other tools are required.

Uchiyama HT, Saito DN, Tanabe HC, Harada T, Seki A, Ohno K, Koeda T, & Sadato N (2011). Distinction between the literal and intended meanings of sentences: A functional magnetic resonance imaging study of metaphor and sarcasm. Cortex. PMID: 21333979

 