
The Brain's Sarcasm Centre? Wow, That's Really Useful

A team of Japanese scientists have found the most sarcastic part of the brain known to date. They also found the metaphor centre of the brain and, well, it's kind of like a pair of glasses.

The paper is Distinction between the literal and intended meanings of sentences and it's brought to you by Uchiyama et al. They took 20 people and used fMRI to record neural activity while the volunteers read 4 kinds of statements:

  • Literally true
  • Nonsensical
  • Sarcastic
  • Metaphorical
The neat thing was that the statements themselves were the same in each case. The preceding context determined how they were to be interpreted. So for example, the statement "It was bone-breaking" was literally true when it formed part of a story about someone in hospital describing an accident; it was metaphorical in the context of someone describing how hard it was to do something difficult; and it was nonsensical if the context was completely unrelated ("He went to the bar and ordered:...").

Here's what they found. Compared to the literally-true and the nonsensical statements, which were a control condition, metaphorical statements activated the head of the caudate nucleus, the thalamus, and an area of the medial PFC they dub the "arMPFC" but which other people might call the pgACC or something even more exotic; names get a bit vague in the frontal lobe.


The caudate nucleus, as I said, looks like a pair of glasses. Except without the nose bit. The area activated by metaphors was the "lenses". Kind of.

Sarcasm, however, activated the same mPFC region, but not the caudate:

Sarcasm also activated the amygdala.

*

So what? This is a very nice fMRI study. 20 people is a lot, the task was well-designed and the overlap of the mPFC blobs in the sarcasm-vs-control and the metaphor-vs-control tasks was impressive. There's clearly something going on there in both cases, relative to just reading literal statements. Something's going on in the caudate and thalamus with metaphor but not sarcasm, too.

But what can this kind of study tell us about the brain? They've localized something-about-metaphor to the caudate nucleus, but what is it, and what does the caudate actually do to make that thing happen?

The authors offer a suggestion - the caudate is involved in "searching for the meaning" of the metaphorical statement in order to link it to the context, and work out what the metaphor is getting at. This isn't required for sarcasm because there's only one, literal, meaning - it's just reversed, the speaker actually thinks the exact opposite. Whereas with both sarcasm and metaphor you need to attribute intentions (mentalizing or "Theory of Mind").

That's as plausible an account as any, but the problem is that we have no way of knowing, at least not from imaging studies, whether it's true or not. As I said, this is not the fault of this study but rather an inherent challenge for the whole enterprise. The problem is - switch on your caudate, metaphor coming up - a lot like the challenge facing biology in the aftermath of the Human Genome Project.

The HGP mapped the human genome, and like any map it told us where stuff is, in this case where genes are on chromosomes. You can browse it here. But by itself this didn't tell us anything about biology. We still have to work out what most of these genes actually do; and then we have to work out how they interact; and then we have to work out how those interactions interact with other genes and the environment...

Genomics people call this, broadly speaking, "annotating" the genome, although this is not perhaps an ideal term because it's not merely scribbling notes in the margins, it's the key to understanding. Without annotation, the genome's just a big list.

fMRI is building up a kind of human localization map, a blobome if you will, but by itself this doesn't really tell us much; other tools are required.

Uchiyama HT, Saito DN, Tanabe HC, Harada T, Seki A, Ohno K, Koeda T, & Sadato N (2011). Distinction between the literal and intended meanings of sentences: A functional magnetic resonance imaging study of metaphor and sarcasm. Cortex. PMID: 21333979

Fat Genes Make You Happy?

Does being heavier make you happier?

An interesting new paper from a British/Danish collaboration uses a clever trick based on genetics to untangle the messy correlation between obesity and mental health.

They had a huge (53,221) sample of people from Copenhagen, Denmark. The study measured people's height and weight to calculate their BMI, and asked them some simple questions about their mood, such as "Do you often feel nervous or stressed?"

Many previous studies have found that being overweight is correlated with poor mental health, or at least with unhappiness ("psychological distress"). And this was exactly what the authors found in this study, as well.

Being very underweight was also correlated with distress; perhaps these were people with eating disorders or serious medical illnesses. But if you set that small number of people aside, there was a nice linear correlation between BMI and unhappiness. When they controlled for various other variables like income, age, and smoking, the effect of BMI became smaller but it was still significant.
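
For the statistically minded, here's a rough sketch of what "controlling for" those variables can look like in practice. It's a toy simulation - the variable names, effect sizes and even the choice of logistic regression are my own illustrative assumptions, not the paper's actual model - but it shows how the BMI coefficient shrinks once covariates like age and income go into the model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data, loosely inspired by the study design (the real study has n = 53,221).
rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(20, 80, n)
income = rng.normal(0, 1, n)
smoker = rng.integers(0, 2, n)
# BMI is partly driven by age and income...
bmi = 25 + 0.05 * age - 0.5 * income + rng.normal(0, 4, n)
# ...which also affect distress, so the unadjusted BMI coefficient picks up some of their effect.
logit_p = -3 + 0.05 * bmi + 0.02 * age - 0.3 * income + 0.4 * smoker
distress = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df = pd.DataFrame(dict(distress=distress, bmi=bmi, age=age, income=income, smoker=smoker))

unadjusted = smf.logit("distress ~ bmi", data=df).fit(disp=0)
adjusted = smf.logit("distress ~ bmi + age + income + smoker", data=df).fit(disp=0)
print(unadjusted.params["bmi"], adjusted.params["bmi"])  # the adjusted coefficient is smaller
```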

But that's just a correlation, and as we all know, "correlation doesn't imply causation". Actually, it does; something must be causing the correlation, it didn't just magically appear out of nowhere. The point is that we shouldn't make simplistic assumptions about what the causal direction is.

It would be easy to make these assumptions. Maybe being miserable makes you fat, due to comfort eating. Or maybe being fat makes you miserable, because overweight is considered bad in our society. Or both. Or neither. We don't know.

Finding this kind of correlation and then speculating about it is where a lot of papers finish, but for these authors, it was just the start. They genotyped everyone for two different genetic variants known, from lots of earlier work, to consistently affect body weight (FTO rs9939609 and MC4R rs17782313).

They confirmed that they were indeed associated with BMI; no surprise there. But here's the surprising bit: the "fat" variants of each gene were associated with less psychological distress. The effects were very modest, but then again, their effects on weight are small too (see the graph above; the effects are in terms of z scores and anything below 0.3 is considered "small".)

The picture was very similar for the other gene.

This allows us to narrow down the possibilities about causation. Being depressed clearly can't change your genotype. Nothing short of falling into a nuclear reactor can change your genotype. It also seems unlikely that genotype was correlated with something else which protects against depression. That's not impossible; it's the problem of population stratification, and it's a serious issue with multi-ethnic samples, but this paper only included white Danish people.

So the authors' conclusion is that being slightly heavier causes you to be slightly happier, even though overall, weight is strongly correlated with being less happy. This seems paradoxical, but that's what the data show.
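
The paper's trick is Mendelian randomization (it's in the title of the reference below): because nothing downstream can reach back and change your genotype, the genotype works as a kind of natural experiment. Here's a toy simulation of the simplest version, the Wald ratio; the effect sizes and the confounder are invented purely to show the logic, they're not the paper's numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Toy model: each "fat" allele nudges BMI up; BMI in turn slightly lowers distress;
# an unmeasured confounder pushes BMI and distress in the same direction, which is
# what makes the raw BMI-distress correlation misleading.
genotype = rng.binomial(2, 0.4, n)              # 0, 1 or 2 risk alleles
confounder = rng.normal(0, 1, n)
bmi = 25 + 0.3 * genotype + 1.0 * confounder + rng.normal(0, 3, n)
distress = 2.0 - 0.05 * bmi + 0.5 * confounder + rng.normal(0, 1, n)

# Naive regression of distress on BMI is biased by the confounder.
naive = np.polyfit(bmi, distress, 1)[0]

# Wald ratio: (effect of genotype on distress) / (effect of genotype on BMI).
beta_gy = np.polyfit(genotype, distress, 1)[0]
beta_gx = np.polyfit(genotype, bmi, 1)[0]
wald = beta_gy / beta_gx

print(f"naive slope {naive:.3f}, Mendelian randomization estimate {wald:.3f} (true effect -0.05)")
```

The naive slope gets dragged towards zero by the confounder, while the ratio of the two genotype regressions roughly recovers the true (negative) effect of BMI on distress.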

That conclusion would fall apart, though, if these genes directly affect mood, and also, separately, make you fatter. The authors argue that this is unlikely, but I wonder. Both FTO and MC4R are active in the brain: they influence weight by making you eat more. If they can affect appetite, they might also affect mood. A quick PubMed search only turns up a couple of rather speculative papers about MC4R and its possible links to mood, so there's no direct evidence for this, but we can't rule it out.

But this paper is still an innovative and interesting attempt to use genetics to help get beneath the surface of complex correlations. It doesn't explain the observed correlation between BMI and unhappiness - it actually makes it more mysterious. But that's a whole lot better than just speculating about it.

Lawlor DA, Harbord RM, Tybjaerg-Hansen A, Palmer TM, Zacho J, Benn M, Timpson NJ, Smith GD, & Nordestgaard BG (2011). Using genetic loci to understand the relationship between adiposity and psychological distress: a Mendelian Randomization study in the Copenhagen General Population Study of 53,221 adults. Journal of Internal Medicine. PMID: 21210875

XMRV - Innocent on All Counts?

A bombshell has just gone off in the continuing debate over XMRV, the virus that may or may not cause chronic fatigue syndrome. Actually, 4 bombshells.

A set of papers out today in Retrovirology (1,2,3,4) claim that many previous studies claiming to have found the virus haven't actually been detecting XMRV at all.

Here's the rub. XMRV is a retrovirus, a class of bugs that includes HIV. Retroviruses are composed of RNA, but they can insert themselves into the genetic material of host cells as DNA. This is how they reproduce: once their DNA is part of the host cell's chromosomes, that cell ends up making more copies of the virus.

But there are lots of retroviruses out there, and there used to be others that are now extinct. So bits of retroviral DNA are scattered throughout the genomes of animals. These are called endogenous retroviruses (ERVs).

XMRV is extremely similar to certain ERVs found in the DNA of mice. And mice are the most popular laboratory mammals in the world. So you can see the potential problem: laboratories all over the world are full of mice, but mouse DNA might show up as "XMRV" DNA on PCR tests.

Wary virologists take precautions against this by checking specifically for mouse DNA. But most mouse-contamination tests are targeted at mouse mitochondrial DNA (mtDNA). In theory, a test for mouse mtDNA is all you need, because mtDNA is found in all mouse cells. In theory.

Now the four papers (or are they the Four Horsemen?) argue, in a nutshell, that mouse DNA shows up as "XMRV" on most of the popular tests that have been used in the past, that mouse contamination is very common - even some of the test kits are affected! - and that tests for mouse mtDNA are not good enough to detect the problem.

  • Hue et al say that "Taqman PCR primers previously described as XMRV-specific can amplify common murine ERV sequences from mouse suggesting that mouse DNA can contaminate patient samples and confound specific XMRV detection." They go on to show that some human samples previously reported as infected with XMRV, are actually infected with a hybrid of XMRV and a mouse ERV which we know can't infect humans.
  • Sato et al report that PCR testing kits from Invitrogen, a leading biotech company, are contaminated with mouse genes including an ERV almost identical to XMRV, and that this shows up as a false positive using commonly used PCR primers "specific to XMRV".
  • Oakes et al say that in 112 CFS patients and 36 healthy controls, they detected "XMRV" in some samples but all of these samples were likely contaminated with mouse DNA because "all samples that tested positive for XMRV and/or MLV DNA were also positive for the highly abundant IAP long terminal repeat [found only in mice] and most were positive for murine mitochondrial cytochrome oxidase sequences [found only in mice]"
  • Robinson et al agree with Oakes et al: they found "XMRV" in some human samples, in this case prostate cancer cells, but they then found that all of the "infected" samples were contaminated with mouse DNA. They recommend that in future, samples should be tested for mouse genes such as the IAP long terminal repeat or cytochrome oxidase, and that researchers should not rely on tests for mouse mtDNA.
They're all open-access so everyone can take a peek. For another overview see this summary published alongside them in Retrovirology.

I lack the technical knowledge to evaluate these claims; no doubt plenty of people will be rushing to do that before long. (Update: The excellent virologyblog has a more technical discussion of these studies.) But there are a couple of things to bear in mind.

Firstly, these papers cast doubt on tests using PCR to detect XMRV DNA. However, they don't have anything to say about studies which have looked for antibodies against XMRV in human blood, at least not directly. There haven't been many of these, but the paper which started the whole story, Lombardi et al (2009), did look for, and found, anti-XMRV immunity, and also used various other methods to support the idea that XMRV is present in humans. So this isn't an "instant knock-out" of the XMRV theory, although it's certainly a serious blow.

Secondly, if the 'mouse theory' is true, it has serious implications for the idea that XMRV causes chronic fatigue syndrome and also for the older idea that it's linked to prostate cancer. But it still leaves a mystery: why were the samples from CFS or prostate cancer patients more likely to be contaminated with mouse DNA than the samples from healthy controls?

Smith RA (2010). Contamination of clinical specimens with MLV-encoding nucleic acids: implications for XMRV and other candidate human retroviruses. Retrovirology. DOI: 10.1186/1742-4690-7-112

Autism and Old Fathers

A new study has provided the strongest evidence yet that the rate of autism in children rises with the father's age: Advancing paternal age and risk of autism. But questions remain.

The association between old fathers and autism has been known for many years, and the most popular explanation has been genetic: sperm from older men are more likely to have accumulated DNA damage, which might lead to autism.

As I've said before, this might explain some other puzzling things such as the fact that autism is more common in the wealthy; it might even explain any recent increases in the prevalence of autism, if people nowadays are waiting longer to have kids.

But there are other possibilities. It might be that the fathers of autistic people tend to have mild autistic symptoms themselves (which they do), and this makes them likely to delay having children, because they're socially anxious and so take longer to get married, or whatever. It's not implausible.

The new study aimed to control for this, by looking at parents who had two or more children, at least one of them with autism, and at least one without it. Even within such families, the autistic children tended to have older fathers when they were born - that is to say, they were born later. See the graphs below for details. This seems to rule out explanations based on the characteristics of the parents.

However, there's another objection, the "experienced parent" theory. Maybe if parents have already had one neurotypical child, they're better at spotting the symptoms of autism in subsequent children, by comparison with the first one.

The authors tried to account for this as well, by controlling for the birth-order ("parity") of the kids. They also controlled for the mother's age amongst several other factors such as year of birth and history of mental illness in the parents. The results were still highly significant: older fathers meant a higher risk of autism. As if that wasn't enough, they also did a meta-analysis of all the previous studies and confirmed the same thing.

So overall, this is a very strong study, but there's a catch. The study population included over a million children (1,075,588) born in Sweden between 1983 and 1992. Of these, there was a total of 883 diagnosed cases of autism. That's a rate of 0.08%. In other words, although older fathers raised the risk of autism by quite a lot relatively speaking, the absolute rate was still tiny.
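
To put rough numbers on "relatively vs absolutely" (the doubling below is a purely illustrative assumption, not a figure from the paper):

```python
cases, population = 883, 1_075_588
baseline_rate = cases / population
print(f"baseline rate: {baseline_rate:.4%}")  # about 0.08%

# Purely illustrative assumption: suppose the oldest fathers carried double the relative risk.
assumed_relative_risk = 2.0
print(f"hypothetical high-risk group: {baseline_rate * assumed_relative_risk:.4%}")  # still well under 0.2%
```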

The most recent estimates of autism prevalence in Britain have put the figure somewhere between 1% and 2%, e.g. Baird et al (2006) and Baron-Cohen et al (2009), with American studies, using slightly different methods, generally coming in just below 1%. So the Swedish figure is more than 10 times lower than modern estimates. Whether this reflects different criteria for diagnosis, national differences, or increased prevalence over time is debatable, but it does raise the question of whether these findings still apply today.

The only way to know for sure would be to do a randomized controlled trial - get half your volunteer men to wait 10 years before having children - but I don't think that's going to happen any time soon...

Hultman CM, Sandin S, Levine SZ, Lichtenstein P, & Reichenberg A (2010). Advancing paternal age and risk of autism: new evidence from a population-based study and a meta-analysis of epidemiological studies. Molecular Psychiatry. PMID: 21116277

Genes To Brains To Minds To... Murder?

A group of Italian psychiatrists claim to explain How Neuroscience and Behavioral Genetics Improve Psychiatric Assessment: Report on a Violent Murder Case.

The paper presents the horrific case of a 24-year-old woman from Switzerland who smothered her newborn son to death immediately after giving birth in her boyfriend's apartment. After her arrest, she claimed to have no memory of the event. She had a history of multiple drug abuse, including heroin, from the age of 13.


Forensic psychiatrists were asked to assess her case and try to answer the question of whether "there was substantial evidence that the defendant had an irresistible impulse to commit the crime." The paper doesn't discuss the outcome of the trial, but the authors say that in their opinion she exhibits a pattern of "pathological impulsivity, antisocial tendencies, lack of planning...causally linked to the crime, thus providing the basis for an insanity defense."

But that's not all. In the paper, the authors bring neuroscience and genetics into the case in an attempt to provide
a more “objective description” of the defendant’s mental disease by providing evidence that the disease has “hard” biological bases. This is particularly important given that psychiatric symptoms may be easily faked as they are mostly based on the defendant’s verbal report.
So they scanned her brain, and did DNA tests for 5 genes which have been previously linked to mental illness, impulsivity, or violent behaviour. What happened? Apparently her brain has "reduced gray matter volume in the left prefrontal cortex" - but that was compared to just 6 healthy control women. You really can't do this kind of analysis on a single subject, anyway.

As for her genes, well, she had genes. On the famous and much-debated 5HTTLPR polymorphism, for example, her genotype was long/short; while it's true that short is generally considered the "bad" genotype, something like 40% of white people, and an even higher proportion of East Asians, carry it. The situation was similar for the other four genes (STin2 (SCL6A4), rs4680 (COMT), MAOA-uVNTR, DRD4-2/11, for gene geeks).

I've previously posted about cases in which a well-defined disorder of the brain led to criminal behaviour. There was the man who became obsessed with child pornography following surgical removal of a tumour in his right temporal lobe. There are the people who show "sociopathic" behaviour following fronto-temporal degeneration.

However this woman's brain was basically "normal", at least as far as a basic MRI scan could determine. All the pieces were there. Her genotype was also normal in that lots of normal people carry the same genes; it's not (as far as we know) that she has a rare genetic mutation like Brunner syndrome in which an important gene is entirely missing. So I don't think neurobiology has much to add to this sad story.

*

We're willing to excuse perpetrators when there's a straightforward "biological cause" for their criminal behaviour: it's not their fault, they're ill. In all other cases, we assign blame: biology is a valid excuse, but nothing else is.

There seems to be a basic difference between the way in which we think about "biological" as opposed to "environmental" causes of behaviour. This is related, I think, to the Seductive Allure of Neuroscience Explanations and our fascination with brain scans that "prove that something is in the brain". But when you start to think about it, it becomes less and less clear that this distinction works.

A person's family, social and economic background is the strongest known predictor of criminality. Guys from stable, affluent families rarely mug people; some men from poor, single-parent backgrounds do. But muggers don't choose to be born into that life any more than the child-porn addict chose to have brain cancer.

Indeed, the mugger's situation is a more direct cause of his behaviour than a brain tumour. It's not hard to see how a mugger becomes, specifically, a mugger: because they've grown up with role-models who do that; because their friends do it or at least condone it; because it's the easiest way for them to make money.

But it's less obvious how brain damage by itself could cause someone to seek child porn. There's no child porn nucleus in the brain. Presumably, what it does is to remove the person's capacity for self-control, so they can't stop themselves from doing it.

This fits with the fact that people who show criminal behaviour after brain lesions often start to eat and have (non-criminal) sex uncontrollably as well. But that raises the question of why they want to do it in the first place. Were they, in some sense, a pedophile all along? If so, can we blame them for that?

Rigoni D, Pellegrini S, Mariotti V, Cozza A, Mechelli A, Ferrara SD, Pietrini P, & Sartori G (2010). How neuroscience and behavioral genetics improve psychiatric assessment: report on a violent murder case. Frontiers in Behavioral Neuroscience, 4. PMID: 21031162

Genes for ADHD, eh?

The first direct evidence of a genetic link to attention-deficit hyperactivity disorder has been found, a study says.
Wow! That's the headline. What's the real story?

The research was published in The Lancet, and it's brought to you by Williams et al from Cardiff University: Rare chromosomal deletions and duplications in attention-deficit hyperactivity disorder.

The authors looked at copy-number variations (CNVs) in 410 children with ADHD, compared to 1156 healthy controls. A CNV is simply a catch-all term for when a large chunk of DNA is either missing ("deletions") or repeated ("duplications"), compared to normal human DNA. CNVs are extremely common - we all have a handful - and recently there's been loads of interest in them as possible causes for psychiatric disorders.

What happened? Out of everyone with high quality data available, 15.6% of the ADHD kids had at least one large, rare CNV, compared to 7.5% of the controls. CNVs were especially common in children with ADHD who also suffered mental retardation (defined as having an IQ less than 70) - 36% of this group carried at least one CNV. However, the rate was still elevated in those with normal IQs (11%).

A CNV could occur anywhere in the genome, and obviously what it does depends on where it is - which genes are deleted, or duplicated. Some CNVs don't cause any problems, presumably because they don't disrupt any important stuff.

The ADHD variants were very likely to affect genes which had been previously linked to either autism, or schizophrenia. In fact, no less than 6 of the ADHD kids carried the same 16p13.11 duplication, which has been found in schizophrenic patients too.

So...what does this mean? Well, the news has been full of talking heads only too willing to tell us. Pop-psychologist Oliver James was on top form - by his standards - making a comment which was reasonably sensible, and only involved one error:
Only 57 out of the 366 children with ADHD had the genetic variant supposed to be a cause of the illness. That would suggest that other factors are the main cause in the vast majority of cases. Genes hardly explain at all why some kids have ADHD and not others.
Well, there was no single genetic variant, there were lots. Plus, unusual CNVs were also carried by 7% of controls, so the "extra" mutations presumably only account for 7-8%. James also accused The Lancet of "massive spin" in describing the findings. While you can see his point, given that James's own output nowadays consists mostly of a Guardian column in which he routinely over/misinterprets papers, this is a bit rich.
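
The arithmetic behind that "7-8%" is easy enough to check, using the numbers quoted above:

```python
# Figures quoted in the post and in Oliver James's comment.
adhd_with_cnv, adhd_total = 57, 366
control_rate = 0.075  # 7.5% of controls carried at least one large, rare CNV

adhd_rate = adhd_with_cnv / adhd_total
excess = adhd_rate - control_rate
print(f"ADHD CNV rate {adhd_rate:.1%}, control rate {control_rate:.1%}, excess {excess:.1%}")
# roughly 15.6% vs 7.5%, an excess of about 8%
```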

The authors say that
the findings allow us to refute the hypothesis that ADHD is purely a social construct, which has important clinical and social implications for affected children and their families.
But they've actually proven that "ADHD" is a social construct. Yes, they've found that certain genetic variants are correlated with certain symptoms. Now we know that, say, 16p13.11-duplication-syndrome is a disease, and that its symptoms include (but aren't limited to) attention deficit and hyperactivity. But that doesn't tell us anything about all the other kids who are currently diagnosed with "ADHD", the ones who don't have that mutation.

"ADHD" is evidently an umbrella term for many different diseases, of which 16p13.11-duplication-syndrome is one. One day, when we know the causes of all cases of attention deficit and hyperactivity symptoms, the term "ADHD" will become extinct. There'll just be "X-duplication-syndrome", "Y-deletion-syndrome" and (because it's not all about genes) "Z-exposure-syndrome".

When I say that "ADHD" is a social construct, I don't mean that people with ADHD aren't ill. "Cancer" is also a social construct, a catch-all term for hundreds of diseases. The diseases are all too real, but the concept "cancer" is not necessarily a helpful one. It leads people to talk about Finding The Cure for Cancer, for example, which will never happen. A lot of cancers are already curable. One day, they might all be curable. But they'll be different cures.

So the fact that some cases of "ADHD" are caused by large rare genetic mutations, doesn't prove that the other cases are genetic. They might or might not be - for one thing, this study only looked at large mutations, affecting at least 500,000 bases. Given that even a deletion or insertion of just one base in the wrong place could completely screw up a gene, these could be just the tip of the iceberg.

But the other problem with claiming that this study shows "a genetic basis for ADHD" is that the variants overlapped with the ones that have recently been linked to autism, and schizophrenia. In other words, these genes don't so much cause ADHD, as protect against all kinds of problems, if you have the right variants.

If you don't, you might get ADHD, but you might get something else, or nothing, depending on... we don't know. Other genes and the environment, presumably. But "7% of cases of ADHD associated with mutations that also cause other stuff" wouldn't be a very good headline...

Williams NM, et al (2010). Rare chromosomal deletions and duplications in attention deficit hyperactivity disorder: a genome-wide analysis. The Lancet.

A Tale of Two Genes

An unusually gripping genetics paper from Biological Psychiatry: Pagnamenta et al.

The authors discuss a family where two out of the three children were diagnosed with autism. In 2009, they detected a previously unknown copy number variant mutation in the two affected brothers: a 594 kb deletion knocking out two genes, called DOCK4 and IMMP2L.

Yet this mutation was also carried by their non-autistic mother and sister, suggesting that it wasn't responsible for the autism. The mother's side of the family, however, have a history of dyslexia or undiagnosed "reading difficulties"; all of the 8 relatives with the mutation "performed poorly on reading assessment".

Further investigation revealed that the affected boys also carried a second, entirely separate, novel deletion, affecting the gene CNTNAP5. Their mother and sister did not. This mutation came from their father, who was not diagnosed with autism but apparently had "various autistic traits".

Perhaps it was the combination of the two mutations that caused autism in the two affected boys. The mother's family had a mutation that caused dyslexia; the father's side had one that caused some symptoms of autism but was not, by itself, enough to cause the disorder per se.

However, things aren't so clear. There were cases of diagnosed autism spectrum disorders in the father's family, although few details are given and DNA was only available from one of the father's relatives. So it may have been that the autism was all about the CNTNAP5, and this mutation just has a variable penetrance, causing "full-blown" autism in some people and merely traits in others (like the father).

In order to try to confirm whether these two mutations do indeed cause dyslexia and autism, they searched for them in several hundred unrelated autism and dyslexia patients as well as healthy controls. They detected the DOCK4 deletion in 1 out of 600 dyslexics (and in his dyslexic father, but not his unaffected sister), but not in 2000 controls. 3 different CNTNAP5 mutations were found in the affected kids from 3 out of 143 autism families, although one of them was also found in over 1000 controls.

This is how psychiatric genetics is shaping up: someone finds a rare mutation in one family, they follow it up, and it's only carried by one out of several hundred other cases. So there are almost certainly hundreds of genes "for" disorders like autism, and it only takes a mutation in one (or two) to cause autism.

Here's another recent example: they found PTCHD1 variants in a full 1% of autism cases. It seems to me that autism, for example, is one of the things that happens when something goes wrong during brain development. Hundreds of genes act in synchrony to build a brain; it only takes one playing out of tune to mess things up, and autism is one common result.

Mental retardation and epilepsy are the other main ones, and we know that there are dozens or hundreds of different forms of these conditions each caused by a different gene or genes. The million dollar question is what it is that makes the autistic brain autistic, as opposed to, say, epileptic.

The "rare variants" model has some interesting implications. The father in the Pagnamenta et al. study had never been diagnosed with anything. He had what the authors call "autistic traits", but presumably he and everyone just thought of those as part of who he was - and they could have been anything from shyness, to preferring routine over novelty, to being good at crosswords.

Had he not carried the CNTNAP5 mutation, he'd have been a completely different person. He might well have been drawn to a very different career, he'd probably never have married the woman he did, etc.

Of course, that doesn't mean that it's "the gene for being him"; all of his other 23,000 genes, and his environment, came together to make him who he was. But the point is that these differences don't just pile up on top of each other; they interact. One little change can change everything.

Link: BishopBlog on why behavioural genetics is more complicated than some people want you to think.

Pagnamenta, A., Bacchelli, E., de Jonge, M., Mirza, G., Scerri, T., Minopoli, F., Chiocchetti, A., Ludwig, K., Hoffmann, P., & Paracchini, S. (2010). Characterization of a Family with Rare Deletions in CNTNAP5 and DOCK4 Suggests Novel Risk Loci for Autism and Dyslexia. Biological Psychiatry, 68 (4), 320-328. DOI: 10.1016/j.biopsych.2010.02.002

Autism And Wealth

We live in societies where some people are richer than others - though the extent of wealth inequality varies greatly around the world.

In general, it's sad but true that poor people suffer more diseases. Within a given country almost all physical and mental illnesses are more common amongst the poor, although this isn't always true between countries.

So if a certain disease is more common in rich people within a country, that's big news because it suggests that something unusual is going on. Autism spectrum disorders (ASDs) have long been known to show this pattern, at least in some countries, but this has often been thought to be a product of diagnostic ascertainment bias. Maybe richer and better-educated parents are more likely to have access to services that can diagnose autism. This is a serious issue because autism often goes undiagnosed and diagnosis is rarely clear-cut.

An important new PLoS paper from Wisconsin's Durkin et al suggests that, while ascertainment bias does happen, it doesn't explain the whole effect in the USA: richer American families really do have more autism than poorer ones. The authors made use of the ADDM Network which covers about 550,000 8-year-old children from several sites across the USA. (This paper was also blogged about at the C6-H12-O6 blog.)

ADDM attempts to count the number of children with autism based on

abstracted data from records of multiple educational and medical sources to determine the number of children who appear to meet the ASD case definition, regardless of pre-existing diagnosis. Clinicians determine whether the ASD case definition is met by reviewing a compiled record of all relevant abstracted data.
Basically, this allowed them to detect autism even in kids who haven't got a formal diagnosis, based on reports of behavioural problems at school etc. indicative of autism. Clearly, this is going to underestimate autism somewhat, because some autistic kids do well at school and don't set off any alarm bells, but it has the advantage of reducing ascertainment bias.

What happened? The overall prevalence of autism was 0.6%. This is a lot lower than recent estimates in 5-9 year olds in the UK (1.5%), but the UK estimates used an even more detailed screening technique which was less likely to leave kids undetected.

The headline result: autism was more common in kids of richer parents. This held true within all ethnic groups: richer African-American or Hispanic parents were more likely to have autistic children compared to poorer people of the same ethnicity. So it wasn't a product of ethnic disparities.

Crucially, the pattern held true in children who had never been diagnosed with autism, although the effects of wealth were quite a bit smaller:

The difference in the slope of the two lines suggests that there is some ascertainment bias, with richer parents being more likely to get a diagnosis for their children, but this can't explain the whole story. There really is a correlation with wealth.

So what does this mean? This is a correlation - the causality remains to be determined. There are two obvious possibilities: to put it bluntly, either being rich makes your kids autistic, or having autistic kids makes you rich.

How could being rich make your children autistic? There could be many reasons, but a big one is paternal age: it's known that the risk of autism rises with the age of the father, maybe because the sperm of older men accumulates more genetic damage, and this damage can cause autism. In general richer people wait longer to have kids (I think, although I can't actually find the data on this) so maybe that's the cause.

How could having autistic kids make you richer? Well, unfortunately I don't think it does directly, but maybe being the kind of person who is likely to have an autistic child could. Autism is highly heritable, so the parents of autistic children are likely to carry some "autism genes". These could give them autistic traits, or indeed autism, and autistic traits, like being intensely interested in complex intellectual matters, can be a positive advantage in many relatively well paid professions like scientific research, or computing. Marginal Revolution's Tyler Cowen recently wrote a book all about that. I hope I will not offend too many when I say that in my experience it's rare to meet a scientist, IT person or, say, neuroscience blogger, who doesn't have a few...

Durkin, M., Maenner, M., Meaney, F., Levy, S., DiGuiseppi, C., Nicholas, J., Kirby, R., Pinto-Martin, J., & Schieve, L. (2010). Socioeconomic Inequality in the Prevalence of Autism Spectrum Disorder: Evidence from a U.S. Cross-Sectional Study. PLoS ONE, 5 (7). DOI: 10.1371/journal.pone.0011551

The Hunt for the Prozac Gene

One of the difficulties doctors face when prescribing antidepressants is that they're unpredictable.

One person might do well on a certain drug, but the next person might get no benefit from the exact same pills. Finding the right drug for each patient is often a matter of trying different ones until one works.

So a genetic test to work out whether a certain drug will help a particular person would be really useful. Not to mention really profitable for whoever patented it. Three recent papers, published in three major journals, all claim to have found genes that predict antidepressant response. Great! The problem is, they were different genes.

First up, American team Binder et al looked at about 200 variants in 10 genes involved in the corticosteroid stress response pathway. They found one, in a gene called CRHBP, that was significantly associated with poor response to the popular SSRI antidepressant citalopram (Celexa), using the large STAR*D project data set. But this was only true of African-Americans and Latinos, not whites.

Garriock et al used the exact same dataset, but they did a genome-wide association study (GWAS), which looks at variants across the whole genome, unlike Binder et al who focussed on a small number of specific candidate genes. Sadly no variants were statistically significantly correlated with response to citalopram, although in a GWAS, the threshold for genome-wide significance is very high due to multiple comparisons correction. Some were close to being significant, but they weren't obviously related to CRHBP, and most weren't anything to do with the brain.
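
For context, the conventional genome-wide significance threshold comes from a Bonferroni-style correction over roughly a million effectively independent common variants, which is why single hits are so hard to come by:

```python
alpha = 0.05
independent_tests = 1_000_000   # the usual rough figure for common variants across the genome
genome_wide_threshold = alpha / independent_tests
print(genome_wide_threshold)    # 5e-08, the standard genome-wide significance cut-off
```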

Uher et al did another GWAS of response to escitalopram and nortriptyline in a different sample, the European GENDEP study. Escitalopram is extremely similar to citalopram, the drug in the STAR*D studies; nortriptyline however is very different. They found one genome-wide significant hit. A variant in a gene called UST was associated with response to nortriptyline, but not escitalopram. No variants were associated with response to escitalopram, although one in the gene IL11 was close. There were some other nearly-significant results, but they didn't overlap with either of the STAR*D studies.

Finally, one of the STAR*D studies found a variant significantly linked to tolerability (side effects) of citalopram. GENDEP didn't look at this.

*

The UST link to nortriptyline finding is the strongest thing here, but for citalopram / escitalopram, no consistent pharmacogenetic results emerged at all. What does this mean? Well, it's possible that there just aren't any genes for citalopram response, but that seems unlikely. Even if you believe that antidepressants only work as placebos, you'd expect there would be genes that alter placebo responses, or at the very least, that affect side-effects and hence the active placebo improvement.

The thing is that the "antidepressant response" in these studies isn't really that: it's a mix of many factors. We know that a lot of the improvement would have happened even with placebo pills, so much of it isn't a pharmacological effect. There are probably genes associated with placebo improvement, but they might not be the same ones that are associated with drug improvement and a gene might even have opposite effects that cancel out (better drug effect, worse placebo effect). Some of the recorded improvement won't even be real improvement at all, just people saying they feel better because they know they're expected to.

If I were looking for the genes for SSRI response, not that I plan to, here's what I'd do. To stack the odds in my favour, I'd forget people with a moderate or partial response, and focus on those who either do really well, or get no benefit at all, from a certain drug. I'd also want to exclude people who respond really well, but not due to the specific effects of the drug.

That would be hard but one angle would be to only include people whose improvement is specifically reversed by acute tryptophan depletion, which reduces serotonin levels thus counteracting SSRIs. This would be a hard study to do, though not impossible. (In fact there are dozens of patients on record who meet my criteria, and their blood samples are probably still sitting in freezers in labs around the world... maybe someone should dig them out).

Still, even if you did find some genes that way, would they be useful? We'd have had to go to such lengths to find them, that they're not going to help doctors decide what to do with the average patient who comes through the door with depression. That's true, but they might just help us to work out who will respond to SSRIs, as opposed to other drugs.

Binder EB, Owens MJ, Liu W, Deveau TC, Rush AJ, Trivedi MH, Fava M, Bradley B, Ressler KJ, & Nemeroff CB (2010). Association of polymorphisms in genes regulating the corticotropin-releasing factor system with antidepressant treatment response. Archives of General Psychiatry, 67 (4), 369-79. PMID: 20368512

Uher, R., Perroud, N., Ng, M., Hauser, J., Henigsberg, N., Maier, W., Mors, O., Placentino, A., Rietschel, M., Souery, D., Zagar, T., Czerski, P., Jerman, B., Larsen, E., Schulze, T., Zobel, A., Cohen-Woods, S., Pirlo, K., Butler, A., Muglia, P., Barnes, M., Lathrop, M., Farmer, A., Breen, G., Aitchison, K., Craig, I., Lewis, C., & McGuffin, P. (2010). Genome-Wide Pharmacogenetics of Antidepressant Response in the GENDEP Project American Journal of Psychiatry DOI: 10.1176/appi.ajp.2009.09070932

Garriock, H., Kraft, J., Shyn, S., Peters, E., Yokoyama, J., Jenkins, G., Reinalda, M., Slager, S., McGrath, P., & Hamilton, S. (2010). A Genomewide Association Study of Citalopram Response in Major Depressive Disorder Biological Psychiatry, 67 (2), 133-138 DOI: 10.1016/j.biopsych.2009.08.029

Drunk on Alcohol?

When you drink alcohol and get drunk, are you getting drunk on alcohol?

Well, obviously, you might think, and so did I. But it turns out that some people claim that the alcohol (ethanol) in drinks isn't the only thing responsible for their effects - they say that acetaldehyde may be important, perhaps even more so.

South Korean researchers Kim et al report that it's acetaldehyde, rather than ethanol, which explains alcohol's immediate effects on cognitive and motor skills. During the metabolism of ethanol in the body, it's first converted into acetaldehyde, which then gets converted into acetate and excreted. Acetaldehyde build-up is popularly renowned as a cause of hangovers (although it's unclear how true this is), but could it also be involved in the acute effects?

Kim et al gave 24 male volunteers a range of doses of ethanol (in the form of vodka and orange juice). Half of them carried a genetic variant (ALDH2*2) which impairs the breakdown of acetaldehyde in the body. About 50% of people of East Asian origin, e.g. Koreans, carry this variant, which is rare in other parts of the world.

As expected, compared to the others, the ALDH2*2 carriers had much higher blood acetaldehyde levels after drinking alcohol, while there was little or no difference in their blood ethanol levels.

Interestingly, though, the ALDH2*2 group also showed much more impairment of cognitive and motor skills, such as reaction time or a simulated driving task. On most measures, the non-carriers showed very little effect of alcohol, while the carriers were strongly affected, especially at high doses. Blood acetaldehyde was more strongly correlated with poor performance than blood alcohol was.

So the authors concluded that:

Acetaldehyde might be more important than alcohol in determining the effects on human psychomotor function and skills.
So is acetaldehyde to blame when you spend half an hour trying and failing to unlock your front door after a hard night's drinking? Should we be breathalyzing drivers for it? Maybe: this is an interesting finding, and there's quite a lot of animal evidence that acetaldehyde has acute sedative, hypnotic and amnesic effects, amongst others.

Still, there's another explanation for these results: maybe the ALDH2*2 carriers just weren't paying much attention to the tasks, because they felt ill, as ALDH2*2 carriers generally do after drinking, as a result of acetaldehyde build-up. No-one's going to be operating at peak performance if they're suffering the notorious flush reaction or "Asian glow", which includes skin flushing, nausea, headache, and increased pulse...

Kim SW, Bae KY, Shin HY, Kim JM, Shin IS, Youn T, Kim J, Kim JK, & Yoon JS (2009). The Role of Acetaldehyde in Human Psychomotor Function: A Double-Blind Placebo-Controlled Crossover Study. Biological Psychiatry. PMID: 19914598

Good News for Armchair Neuropathologists

Ever wanted to crack the mysteries of the brain? Dreamed of discovering the cause of mental illness?

Well, now, you can - or, at any rate, you can try - and you can do it from the comfort of your own home, thanks to the new Stanley Neuropathology Consortium Integrative Database.

Just register (it's free and instant) and you get access to a pool of data derived from the Stanley Neuropathology Consortium brain collection. The collection comprises 60 frozen brains - 15 each from people with schizophrenia, bipolar disorder, and clinical depression, and 15 "normals".

In a Neuropsychopharmacology paper announcing the project, administrators Sanghyeon Kim and Maree Webster point out that

Data sharing has become more important than ever in the biomedical sciences with the advance of high-throughput technology and web-based databases are one of the most efficient available resources to share datasets.
The Institute's 60 brains have long been the leading source of human brain tissue for researchers in biological psychiatry. Whenever you read about a new discovery relating to schizophrenia or bipolar disorder, chances are the Stanley brains were involved. The Institute provide slices of the brains free of charge to scientists who request them, and they've sent out over 200,000 to date.

Until now, if you wanted to find out what these scientists discovered about the brains, you'd have to look up the results in the many hundreds of scientific papers where the various results were published. If you knew where to look, and if you had a lot of time on your hands. The database collates all of the findings. That's a good idea. To ensure that they get all of the results, the Institute have another good idea:
Coded specimens are sent to researchers with the code varying from researcher to researcher to ensure that all studies are blinded. The code is released to the researcher only when the data have been collected and submitted to the Institute.
The data we're provided about the brains is quite exciting, if you like molecules, comprising 1749 markers from 12 different parts of the brain. Markers include levels of proteins, RNA, and the number and shape of various types of cells.

It's easy to use. While waiting for my coffee to brew, I compared the amount of the protein GFAP76 in the frontal cortex between the four groups. There was no significant difference. I guess GFAP76 doesn't cause mental illness - darn. So much for my Nobel Prize winning theory. But I did find that levels of GFAP76 were very strongly correlated with levels of another protein, "phosphirylated" (I think they mean "phosphorylated") PRKCA. You read it here first.
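
If you want to play along at home, the two quick checks I just described boil down to a one-way ANOVA across the four groups and a simple correlation between two markers. Here's a sketch with simulated stand-in numbers, since the real measurements live behind the (free) database registration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Stand-in values for one marker, 15 brains per diagnostic group.
groups = {g: rng.normal(100, 15, 15)
          for g in ["schizophrenia", "bipolar", "depression", "control"]}

# Four-group comparison, as in the GFAP check above.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA across the four groups: p = {p_anova:.3f}")

# Correlation between two markers across all 60 brains.
marker_a = np.concatenate(list(groups.values()))
marker_b = 0.8 * marker_a + rng.normal(0, 10, marker_a.size)  # built to correlate, for illustration
r, p_corr = stats.pearsonr(marker_a, marker_b)
print(f"marker-marker correlation: r = {r:.2f}, p = {p_corr:.2g}")
```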

In the paper, Kim and Webster used the Database to find many differences between normal brains and diseased brains, including increased levels of dopamine in schizophrenia, and increased levels of glutamate in depression and bipolar. And decreased GAD67 proteins in the frontal cortex in bipolar and schizophrenia. And decreased reelin mRNA in the frontal cortex and cerebellum in bipolar and schizophrenia. And...

This leaves open the vital questions of what these differences mean, as I have complained before. And the problem with giving everyone in the world the results of 1749 different tests, and letting us cross-correlate them with each other and look for differences between 4 patient groups, is that you're making possible an awful lot of comparisons. With only 15 brains per group, none of the results can be considered anything more than provisional, anyway - what we really need are lots more brains.
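
Just to give a sense of the scale of the problem (this is rough arithmetic of mine, not anything from the paper):

```python
from math import comb

markers = 1749
pairwise_correlations = comb(markers, 2)   # about 1.5 million marker pairs to correlate
group_contrasts = markers * comb(4, 2)     # each marker compared between every pair of the 4 groups
total = pairwise_correlations + group_contrasts

print(pairwise_correlations, group_contrasts, total)
# A Bonferroni correction over all of that would demand p < ~3e-8 per test.
print(0.05 / total)
```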

But this database is still a welcome move. This kind of data pooling is the only sensible approach to doing modern science, and it's something people are advocating in other fields of neuroscience as well. It just makes sense to share results rather than leaving everyone to do their own thing in near-isolation from each other, now that we have the technology to do so. In fact, I'd say it's a... no-brainer.

Kim, S., & Webster, M. (2009). The Stanley Neuropathology Consortium Integrative Database: a Novel, Web-Based Tool for Exploring Neuropathological Markers in Psychiatric Disorders and the Biological Processes Associated with Abnormalities of Those Markers. Neuropsychopharmacology, 35 (2), 473-482. DOI: 10.1038/npp.2009.151

Predicting Antidepressant Response with EEG

One of the limitations of antidepressants is that they don't always work. Worse, they fail in an unpredictable way. Some people benefit from some drugs, and others don't, but there's no way of knowing in advance what will happen in any particular case - or of telling which pill is right for which person.

As a result, drug treatment for depression generally involves starting with a cheap medication with relatively mild side-effects, and if that fails, moving onto a series of other drugs until one helps. But since it can take several weeks for any new drug to work, this can be a frustrating process for patients and doctors alike.

Some means of predicting the antidepressant response would thus be very useful. Many have been proposed, but none have entered widespread clinical use. Now, a pair of papers(1,2) from UCLA's Andrew Leuchter et al make the case for prediction using quantitative EEG (QEEG).

EEG, electroencephalography, is a crude but effective way of recording electrical activity in the brain via electrodes attached to the head. "Quantitative" EEG just means using EEG to precisely measure the level of certain kinds of activity in the brain.
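
To give a flavour of the kind of quantity QEEG deals in - this is emphatically not the ATR formula discussed below, just a generic band-power calculation on simulated data, using Welch's method:

```python
import numpy as np
from scipy.signal import welch

# Simulate two minutes of one frontal channel at 256 Hz: a 10 Hz "alpha" rhythm plus noise.
fs = 256
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(0)
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.standard_normal(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

theta = band_power(4, 8)
alpha = band_power(8, 12)
total = band_power(1, 30)
print(f"relative theta {theta / total:.2f}, relative alpha {alpha / total:.2f}")
```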

Leuchter et al's system is straightforward: it uses six electrodes on the front of the head. The patient simply relaxes with their eyes closed for a few minutes while neural activity is recorded.

This procedure is performed twice, once just before antidepressant treatment begins and then again a week later. The claim is that by examining the changes in the EEG signal after one week of drug treatment, the eventual benefit of the drug can be predicted. It's not an implausible idea, and if it did work, it would be rather helpful. But does it?

Leuchter et al say: yes! The first paper reports that in 73 depressed patients who were given the antidepressant escitalopram 10mg/day, QEEG changes after one week predicted clinical improvement six weeks later. Specifically, people who got substantially better at seven weeks had a higher "Antidepressant Treatment Response Index" (ATR) at one week than people who didn't: 59.0 ± 10.2 vs 49.8 ± 7.8, which is highly significant (p < 0.001).
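
As a sanity check on that p-value: the post doesn't quote the group sizes, so splitting the 73 patients roughly in half is my own assumption, but the quoted means and standard deviations alone get you comfortably past the 0.001 mark:

```python
from scipy.stats import ttest_ind_from_stats

# ATR at one week: improvers 59.0 +/- 10.2, non-improvers 49.8 +/- 7.8.
# Group sizes assumed (roughly half of the 73 patients each), purely for illustration.
t, p = ttest_ind_from_stats(mean1=59.0, std1=10.2, nobs1=37,
                            mean2=49.8, std2=7.8, nobs2=36,
                            equal_var=False)
print(f"t = {t:.1f}, p = {p:.1e}")  # comfortably below 0.001
```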

In the companion paper, the authors examined patients who started on escitalopram and then either kept taking it or switched to a different antidepressant, bupropion. They found that patients who had a high ATR after a week of escitalopram tended to do well if they stayed on it, while patients who had a low ATR to escitalopram did better when they switched to the other drug.

These are interesting results, and they follow from ten years of previous work (mostly, but not exclusively, from the same group) on the topic. Because the current study didn't include a placebo group, we can't say that the QEEG predicts antidepressant response as such, only that it predicts improvement in depression symptoms. But even this is pretty exciting, if it really works.

In order to verify that it does, other researchers need to replicate this experiment. But they may find this a little difficult. What is the Antidepressant Treatment Response Index used in this study? It's derived from an analysis of the EEG signal, and we're told that you get it from a formula given in the paper.

Some of the terms in that formula are common parameters that any EEG expert will understand. But "A", "B", and "C" are not. They're constants, which are not given in the paper. They're secret numbers. Without knowing what those numbers are, no-one can calculate the "ATR" even if they have an EEG machine.

Why keep them secret? Well...

"Financial support of this project was provided by Aspect Medical Systems. Aspect participated in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation and review of the manuscript."
Aspect is a large medical electronics company who developed the system used here. Presumably, they want to patent it (or already have). We're told that
"To facilitate independent replication of the work reported here, Aspect intends to make available a limited number of investigational systems for academic researchers. Please contact Scott Greenwald, Ph.D... for further information."
All very nice of them, but if they'd told us the three magic numbers, academics could start trying to independently replicate these results tomorrow. As it is, anyone who wants to do so will have to get Aspect's blessing, which, with the best will in the world, means they will not be entirely "independent".



Leuchter AF, Cook IA, Gilmer WS, Marangell LB, Burgoyne KS, Howland RH, Trivedi MH, Zisook S, Jain R, Fava M, Iosifescu D, & Greenwald S (2009). Effectiveness of a quantitative electroencephalographic biomarker for predicting differential response or remission with escitalopram and bupropion in major depressive disorder. Psychiatry Research. PMID: 19709754

Leuchter AF, Cook IA, Marangell LB, Gilmer WS, Burgoyne KS, Howland RH, Trivedi MH, Zisook S, Jain R, McCracken JT, Fava M, Iosifescu D, & Greenwald S (2009). Comparative effectiveness of biomarkers and clinical indicators for predicting outcomes of SSRI treatment in Major Depressive Disorder: Results of the BRITE-MD study. Psychiatry research PMID: 19712979

 