
Who Gets Autism?

According to a major new report from Australia, the social and family factors associated with a higher risk of autism are associated with a lower risk of intellectual disability - and vice versa. But why?


The paper is from Leonard et al and it's published in PLoS ONE, so it's open access if you want to take a peek. The authors used a database system in the state of Western Australia which allowed them to find out what happened to all of the babies born between 1984 and 1999 who were still alive as of 2005. There were 400,000 of them.

The records included information on children diagnosed with either an autism spectrum disorder (ASD), intellectual disability aka mental retardation (ID), or both. They decided to only look at singleton births i.e. not twins or triplets.

In total, 1,179 of the kids had a diagnosis of ASD. That's 0.3% or about 1 in 350, much lower than more recent estimates, but these more recent studies used very different methods. Just over 60% of these also had ID, which corresponds well to previous estimates.

There were about 4,500 cases of ID without ASD in the sample, a rate of just over 1%; the great majority of these (90%) had mild-to-moderate ID. They excluded an additional 800 kids with ID associated with a "known biomedical condition" like Down's Syndrome.
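(If you want to check the arithmetic, here's a minimal sketch in Python using the rounded figures quoted above; the paper's exact denominators differ slightly.)

```python
# Back-of-envelope prevalence check, using the rounded figures quoted
# above (the paper's exact denominators differ slightly).
births = 400_000   # singleton births, Western Australia, 1984-1999
asd = 1_179        # children with an ASD diagnosis
id_only = 4_500    # ID without ASD (approximate)

print(f"ASD: {asd / births:.2%}, about 1 in {births // asd}")  # ~0.29%, 1 in 339
print(f"ID without ASD: {id_only / births:.2%}")               # ~1.1%
print(f"ASD with ID: roughly {0.6 * asd:.0f} children")        # just over 60% of ASD cases
```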

So what did they find? Well, a whole bunch, and it's all interesting. Bullet point time.

  • Between 1984 and 1999, rates of ID without ASD fell and rates of ASD rose, although there was a curious sudden fall in the rates of ASD without ID just before the end of the study. In 1984, "mild-moderate ID" without autism was by far the most common diagnosis, with 10 times the rate of anything else. By 1999, it was exactly level with ASD+ID, and ASD without ID was close behind. Here's the graph; note the logarithmic scale:

  • Boys had a much higher rate of autism than girls, especially when it came to autism without ID. This has been known for a long time.
  • Second- and third-born children had a higher rate of ID, and a lower rate of ASD, compared to firstborns.
  • Older mothers were more likely to have children with autism - both with and without ID, though the trend was bigger for autism with ID - but less likely to have children with ID. For fathers, the pattern was the same and the effect was even bigger. In short, older parents are more likely to have autistic children but less likely to have kids with ID.

  • Richer parents had a strongly reduced likelihood of ID. Rates of ASD with ID were completely flat across income groups, but rates of ASD without ID were raised in the richer groups, though the trend was not linear (the middle groups were highest) and the effect was small.
To summarize: the risk factors for autism were in most cases the exact opposite of those for ID. The more “advantaged” parental traits like being richer, and being older, were associated with more autism, but less ID. And as time went on, diagnosed rates of ASD rose while rates of ID fell (though only slightly for severe ID).

Why is this? The simplest explanation would be that there are many children out there for whom it's not easy to determine whether they have ASD or ID. Which diagnosis any such child gets would then depend on cultural and sociological factors - broadly speaking, whether clinicians are willing to give (and parents willing to accept) one or the other.

The authors note that autism has become a less stigmatized condition in Australia recently. Nowadays, they say, a diagnosis of ASD may be preferable to a diagnosis of "just" plain old ID, in terms of access to financial support amongst other things. However, it is also harder to get a diagnosis of ASD, as it requires going through a more extensive and complex series of assessments.

Clearly some parents will be better able to achieve this than others. In other countries, like South Korea, autism is still one of the most stigmatized conditions of childhood, and we'd expect that there, the trend would be reversed.

The authors also note the theory that autism rates are rising because of some kind of environmental toxin causing brain damage, like mercury or vaccinations. However, as they point out, this would probably cause more of all neurological/behavioural disorders, including ID; at the least it wouldn't reduce the rates of any.

These data clearly show that rates of ID fell almost exactly in parallel with rates of ASD rising, in Western Australia over this 15 year period. What will the vaccine-vexed folks over at Age of Autism make of this study, one wonders?

Leonard H, Glasson E, Nassar N, Whitehouse A, Bebbington A, Bourke J, Jacoby P, Dixon G, Malacova E, Bower C, & Stanley F (2011). Autism and intellectual disability are differentially related to sociodemographic background at birth. PLoS ONE, 6 (3). PMID: 21479223

The Tree of Science

How do you know whether a scientific idea is a good one or not?


The only sure way is to study it in detail and know all the technical ins and outs. But good ideas and bad ideas behave differently over time, and this can provide clues as to which ones are solid; useful if you're a non-expert trying to evaluate a field, or a junior researcher looking for a career.

Today's ideas are the basis for tomorrow's experiments. A good idea will lead to experiments which provide interesting results, generating new ideas, which will lead to more experiments, and so on.

Before long, it will be taken for granted that it's true, because so many successful studies assumed it was. The mark of a really good idea is not that it's always being tested and found to be true; it's that it's an unstated assumption of studies which could only work if it were true. Good ideas grow onwards and upwards, in an expanding tree, with each exciting new discovery becoming the boring background of the next generation.

Astronomers don't go around testing whether light travels at a finite speed as opposed to an infinite one; rather, if it were infinite, their whole set-up would fail.

Bad ideas generate experiments too, but they don't work out. The assumptions are wrong. You try to explain why something happens, and you find that it doesn't happen at all. Or you come up with an "explanation", but next time, someone comes along and finds evidence suggesting the "true" explanation is the exact opposite.

Unfortunately, some bad ideas stick around, for political or historical reasons or just because people are lazy. What tends to happen is that these ideas are, ironically, more "productive" than good ideas: they are always giving rise to new hypotheses. It's just that these lines of research peter out eventually, meaning that new ones have to take their place.

As an example of a bad idea, take the theory that "vaccines cause autism". This hypothesis is, in itself, impossible to test: it's too vague. Which vaccines? How do they cause autism? What kind of autism? In which people? How often?

The basic idea that some vaccines, somewhere, somehow, cause some autism, has been very productive. It's given rise to a great many, testable, ideas. But every one which has been tested has proven false.

First there was the idea that the MMR vaccine causes autism, linked to a "leaky gut" or "autistic enterocolitis". It doesn't, and it's not linked to that. Then along came the idea that actually it's mercury preservatives in vaccines that cause autism. It doesn't. No problem - maybe it's aluminium? Or maybe it's just the Hep B vaccine? And so on.

At every turn, it's back to square one after a few years, and a new idea is proposed. "We know this is true; now we just need to work out why and how...". Except that turns out to be tricky. Hmm. Maybe, if you keep ending up back at square one, you ought to find a new square to start from.

Brain Scans Prove That The Brain Does Stuff

According to the BBC (and many others)...

Libido problems 'brain not mind'

Scans appear to show differences in brain functioning in women with persistently low sex drives, claim researchers.

The US scientists behind the study suggest it provides solid evidence that the problem can have a physical origin.

The research in question (which hasn't been published yet) has been covered very well over at The Neurocritic. Basically the authors took some women with a diagnosis of "Hypoactive Sexual Desire Disorder" (HSDD), and some normal women, put them in an fMRI scanner and showed them porn. Different areas of the brain lit up.

So what? For starters, we have no idea if these differences are real, because the study included only 7 normal women - although, strangely, it had a full 19 women with HSDD. Maybe they had difficulty finding women with healthy appetites in Detroit?

Either way, a study is only as big as its smallest group, so this was tiny. We're also not told anything about the stats they used, so for all we know they could have used the kind that give you "results" if you use them on a dead fish.
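To put a rough number on the "smallest group" point, here's a sketch of a power calculation. The effect size is my assumption for illustration (generously large, by fMRI standards), not a figure from the study:

```python
# Rough power calculation for an unbalanced two-group comparison.
# The effect size is an illustrative assumption, not from the study.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
d = 0.8  # "large" by Cohen's conventions - generous for fMRI group differences

# 7 controls vs 19 patients (ratio = nobs2 / nobs1)
power_unbalanced = analysis.solve_power(effect_size=d, nobs1=7,
                                        ratio=19 / 7, alpha=0.05)
# The same 26 subjects split evenly, 13 vs 13
power_balanced = analysis.solve_power(effect_size=d, nobs1=13,
                                      ratio=1.0, alpha=0.05)

print(f"7 vs 19:  power = {power_unbalanced:.2f}")  # roughly 0.4
print(f"13 vs 13: power = {power_balanced:.2f}")    # roughly 0.5
```

Even with a large true effect, a design like this misses it more often than not.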

But let's grant that the results are valid. This doesn't tell us anything we didn't already know. We know the women differ in their sexual responses - because that's the whole point of the study. And we know that this must be something to do with their brain, because the brain is where sexual responses, and every other mental event, happen.

So we already know that HSDD "has a physical origin", but only in the sense that everything does; being a Democrat or a Republican has a physical origin; being Christian or Muslim has a physical origin; speaking French as opposed to English has a physical origin; etc. etc.
None of which is interesting or surprising in the slightest.

The point is that the fact that something is physical doesn't stop it being also psychological. Because psychology happens in the brain. Suppose you see a massive bear roaring and charging towards you, and as a result, you feel scared. The fear has a physical basis, and plenty of physical correlates like raised blood pressure, adrenaline release, etc.

But if someone asks "Why are you scared?", you would answer "Because there's a bear about to eat us", and you'd be right. Someone who came along and said, no, your anxiety is purely physical - I can measure all these physiological differences between you and a normal person - would be an idiot (and eaten).

Now sometimes anxiety is "purely physical" i.e. if you have a seizure which affects certain parts of the temporal lobe, you may experience panic and anxiety as a direct result of the abnormal brain activity. In that case the fear has a physiological cause, as well as a physiological basis.

Maybe "HSDD" has a physiological cause. I'm sure it sometimes does; it would be very weird if it didn't in some cases because physiology can cause all kinds of problems. But fMRI scans don't tell us anything about that.

Link: I've written about HSDD before in the context of flibanserin, a drug which was supposed to treat it (but didn't). Also, as always, British humour website The Daily Mash hit this one on the head.

Genes for ADHD, eh?

The first direct evidence of a genetic link to attention-deficit hyperactivity disorder has been found, a study says.
Wow! That's the headline. What's the real story?

The research was published in The Lancet, and it's brought to you by Williams et al from Cardiff University: Rare chromosomal deletions and duplications in attention-deficit hyperactivity disorder.

The authors looked at copy-number variations (CNVs) in 410 children with ADHD, compared to 1156 healthy controls. A CNV is simply a catch-all term for when a large chunk of DNA is either missing ("deletions") or repeated ("duplications"), compared to normal human DNA. CNVs are extremely common - we all have a handful - and recently there's been loads of interest in them as possible causes for psychiatric disorders.

What happened? Out of everyone with high quality data available, 15.6% of the ADHD kids had at least one large, rare CNV, compared to 7.5% of the controls. CNVs were especially common in children with ADHD who also suffered mental retardation (defined as having an IQ less than 70) - 36% of this group carried at least one CNV. However, the rate was still elevated in those with normal IQs (11%).
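As a rough sanity check on that headline comparison, here's a sketch reconstructing approximate counts from the percentages above. The 57/366 comes from the 15.6% figure (and matches James's "57 out of the 366" quote below); the control count is my approximation from the 7.5%, not the paper's own table:

```python
# Rough case-control comparison of large, rare CNV carrier rates.
# Counts reconstructed from the percentages in the text; the control
# figure (87/1156) is approximate, not from the paper's tables.
from scipy.stats import fisher_exact

adhd_cnv, adhd_n = 57, 366
ctrl_cnv, ctrl_n = 87, 1156  # approximate

table = [[adhd_cnv, adhd_n - adhd_cnv],
         [ctrl_cnv, ctrl_n - ctrl_cnv]]
odds_ratio, p = fisher_exact(table)
print(f"odds ratio ~ {odds_ratio:.1f}, p = {p:.1e}")  # OR ~ 2.3, p well below 0.05

# Excess carrier rate: the fraction of ADHD cases plausibly
# attributable to these large rare CNVs.
print(f"excess rate ~ {adhd_cnv/adhd_n - ctrl_cnv/ctrl_n:.1%}")  # ~8%
```

That excess rate of about 8% is where the "7-8%" point below comes from.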

A CNV could occur anywhere in the genome, and obviously what it does depends on where it is - which genes are deleted, or duplicated. Some CNVs don't cause any problems, presumably because they don't disrupt any important stuff.

The ADHD variants were very likely to affect genes which had been previously linked to either autism, or schizophrenia. In fact, no less than 6 of the ADHD kids carried the same 16p13.11 duplication, which has been found in schizophrenic patients too.

So...what does this mean? Well, the news has been full of talking heads only too willing to tell us. Pop-psychologist Oliver James was on top form - by his standards - making a comment which was reasonably sensible, and only involved one error:
Only 57 out of the 366 children with ADHD had the genetic variant supposed to be a cause of the illness. That would suggest that other factors are the main cause in the vast majority of cases. Genes hardly explain at all why some kids have ADHD and not others.
Well, there was no single genetic variant, there were lots. Plus, unusual CNVs were also carried by 7.5% of controls, so the "extra" mutations presumably only account for about 8% of cases. James also accused The Lancet of "massive spin" in describing the findings. While you can see his point, given that James's own output nowadays consists mostly of a Guardian column in which he routinely over- or misinterprets papers, this is a bit rich.

The authors say that
the findings allow us to refute the hypothesis that ADHD is purely a social construct, which has important clinical and social implications for affected children and their families.
But they've actually proven that "ADHD" is a social construct. Yes, they've found that certain genetic variants are correlated with certain symptoms. Now we know that, say, 16p13.11-duplication-syndrome is a disease, and that its symptoms include (but aren't limited to) attention deficit and hyperactivity. But that doesn't tell us anything about all the other kids who are currently diagnosed with "ADHD", the ones who don't have that mutation.

"ADHD" is evidently an umbrella term for many different diseases, of which 16p13.11-duplication-syndrome is one. One day, when we know the causes of all cases of attention deficit and hyperactivity symptoms, the term "ADHD" will become extinct. There'll just be "X-duplication-syndrome", "Y-deletion-syndrome" and (because it's not all about genes) "Z-exposure-syndrome".

When I say that "ADHD" is a social construct, I don't mean that people with ADHD aren't ill. "Cancer" is also a social construct, a catch-all term for hundreds of diseases. The diseases are all too real, but the concept "cancer" is not necessarily a helpful one. It leads people to talk about Finding The Cure for Cancer, for example, which will never happen. A lot of cancers are already curable. One day, they might all be curable. But they'll be different cures.

So the fact that some cases of "ADHD" are caused by large rare genetic mutations, doesn't prove that the other cases are genetic. They might or might not be - for one thing, this study only looked at large mutations, affecting at least 500,000 bases. Given that even a deletion or insertion of just one base in the wrong place could completely screw up a gene, these could be just the tip of the iceberg.

But the other problem with claiming that this study shows "a genetic basis for ADHD" is that the variants overlapped with the ones that have recently been linked to autism, and schizophrenia. In other words, these genes don't so much cause ADHD, as protect against all kinds of problems, if you have the right variants.

If you don't, you might get ADHD, but you might get something else, or nothing, depending on... we don't know. Other genes and the environment, presumably. But "7% of cases of ADHD associated with mutations that also cause other stuff" wouldn't be a very good headline...

Williams, N. M., et al (2010). Rare chromosomal deletions and duplications in attention deficit hyperactivity disorder: a genome-wide analysis. The Lancet.

Rowe No No

Neuroskeptic readers will know that I'm no fan of the American Psychiatric Association's DSM-IV system of psychiatric diagnosis. And judging by the draft version, DSM-V is going to achieve an even lower place in my affections. The way things are going, I see it slotting in there just below pinworms, and just above celery.

But while there are many good reasons to criticize the DSM - see my numerous scribblings or try these books - there are plenty of bad reasons too. Psychologist and author Dorothy Rowe has just provided some in a recent Guardian article. I don't propose to spend much time on this confused piece, but one sentence is nonetheless instructive, as it exemplifies the danger of facile psychological explanations in psychiatry:

The people who come to the attention of psychiatrists and psychologists are feeling intense, often severe mental distress. Each of us has our own way of expressing anxiety and distress, but when under intense mental distress our typical ways become exaggerated. We become self-absorbed and behave in ways that the people around us find disturbing. Believing that when we're anxious it's best to keep busy can mean that our intense mental distress drives us into manic activity.
No it doesn't. No-one who has experienced mania or hypomania, or known someone who has, or... actually let's just say that no-one except Dorothy Rowe would be able to take that seriously as an account of mania.

Mania is when you write a letter to every one of your relatives proposing a grand family reunion. On a cruise ship in Hawaii. You'll pay for everything. Actually, you're broke. Mania is being literally unable to stop talking, because there are just so many interesting things to say. Actually, you're ranting at strangers on public transport.

The point is that when you're manic, these things don't seem weird, because mania is a mental state in which everything seems incredibly exciting and important, and you think you can do anything. It's like being on crack, without the link to reality of knowing that actually, you're not Jesus, you're on crack. Not all manic episodes are this extreme, and by definition hypomania is less dramatic, but the essential feeling is the same. That's what makes mania, mania.

You can be "manically" busy of course, or have a Manic Monday, but that's a figure of speech. Maybe some people's strategy for dealing with anxiety is by making themselves "manically" busy. If so, fair enough, but that's not mania. Mania is not a strategy; it's a mental state, and psychologically irreducible: you don't become manic about something, you just become manic.

It can certainly be triggered by things - stress, sleep deprivation, and crossing time zones are notorious - but it's not an understandable psychological response to them, it's a state that happens to result. If you drink some beer and get drunk, you're not drunk about beer, you're just drunk.

So Rowe's account of mania is spectacularly wrong. But take a look at the very next sentence:
A tendency to blame yourself and feel guilty can transmute into depression.
Now this sounds much more plausible. The very influential cognitive-behavioural accounts of depression propose that self-critical tendencies are a major risk factor for depression. Even if you're not familiar with CBT, you'll recognize that depressed people tend to blame themselves and feel guilty or inadequate all the time. That's got to be their underlying problem, right? It's common sense.

But is it? Rowe thinks so, but she's just completely missed the point of mania, and depression is the flip side of the coin that is bipolar disorder. The two states are fundamentally linked, polar opposites. So what are the chances that Rowe's right about one, when she's so wrong about the other? Not very good, if you ask me. Yet her explanation of depression seems much more plausible than her account of mania. Why?

I think it's because when you're depressed, you seek psychological explanations all the time: depressed people worry, ruminate and obsess endlessly about their "problems", and think that what they're feeling is a normal response to them. Of course I'm depressed, who wouldn't be in my situation?

This makes it very easy for psychologists to come along and offer a reappraisal which is in fact only slightly different: you're looking at things too negatively. Things aren't really as bad as you think, it's not really your fault, things really can and will improve. This is, certainly, often very helpful, and it's almost always true - because things generally aren't as bad as you think they are when you're depressed. Depression makes you see things negatively, just as mania makes you see them positively. That's kind of the point.

But this cognitive approach implicitly accepts the depressive notion that depression would have been an appropriate response to what you thought your situation was. It says that your feelings of depression were based on a mistake, but it does not dispute that depression is a healthy emotional state.

So the nature of depression means that it cries out for psychological explanations. But this doesn't mean that these explanations are in fact any more sensible than they would be if applied to mania. Depression may well be as much a psychologically irreducible, abnormal mental state as mania is. This is certainly not to say that cognitive theories of depression aren't useful or that CBT doesn't work. But we must be careful not to over-psychologize depression, however tempting it may be.

Clever New Scheme

CNS Response are a California-based company who offer a high-tech new approach to the personalized treatment of depression: "referenced EEG" (rEEG).

This is not to be confused with qEEG, which I have written about previously. What is rEEG? It involves taking an EEG recording of resting brain activity and sending it - along with a cheque, naturally - to CNS Response, who compare it to their database of over 1,800 psychiatric patients who likewise had EEGs taken before they started on various drugs. They look to see which drugs worked best in people with an EEG profile similar to yours, and give you a fancy report with their recommendations.

That's not completely implausible. It could work. Does it? CNS Response and some academic collaborators have just published a paper saying yes: The use of referenced-EEG (rEEG) in assisting medication selection for the treatment of depression. How solid is it? Well, it would be wrong to say that there are many problems with this study. But then if you run off a cliff and plummet into a volcano, you've only made one mistake.

Depressed patients were randomized to one of two groups: treatment-as-usual, which generally meant the common antidepressants bupropion, citalopram, or venlafaxine, vs. rEEG-guided personalized drug treatment. The trial was pretty large, with 114 patients randomized, and pretty long, 12 weeks. The patients had failed to respond to at least one antidepressant (mean: 1.5) during the current episode, so they were slightly "treatment-resistant", though not extremely so.

What happened? The rEEG-guided group did better on the QIDS16SR self-report scale, and on most other measures. Not enormously - take a look at the graph, and notice that the vertical axis doesn't start at zero - but better.

Great, they did better. But why? The problem with this study is that the rEEG-guided group got a very different set of drugs to the control group. No less than 55% of them got stimulants, either methylphenidate (Ritalin) or dexamphetamine (speed). These drugs make you feel good. That's why they're illegal, and that's why people pay good money for them on the street.

It's debatable whether stimulants are clinically useful as antidepressants in the long term, but they've got a good chance of making you feel nice for a few weeks, and make you say you feel better on a rating scale. Plus there's nothing like a pep pill to drive active placebo effects.

The authors say that "Almost all of the studies with depression not associated with medical disorders have reported minimal or no antidepressant effect of stimulants", and refer to some 1980s studies - yet their own trial has just shown that they do work in more than 50% of patients, and the latest Cochrane meta-analysis finds stimulants do work in the short term...

The other big names in the rEEG group were MAOIs (selegiline or tranylcypromine). These are often effective in treatment-resistant depression. Not necessarily more so than other drugs, but remember that these patients had already failed at least one SSRI(*). Yet the control group were, it seems, almost all given SSRIs - either citalopram, or venlafaxine, which is effectively an SSRI at low doses, e.g. the average dose used here, 141 mg. (It does other stuff, but only at higher doses of 225 mg or 300 mg.)

In summary, there were two groups in this trial and they got entirely different sets of drugs. One group also got rEEG-based treatment personalization. That group did better, but that might have nothing to do with the rEEG: they might have done equally well if they'd just been assigned to stimulants or MAOis etc. by flipping a coin. We cannot tell, from these data, whether rEEG offered any benefits at all.

What's curious is that it would have been very simple to avoid this issue. Just give everyone rEEG, but shuffle the assignments in the control group, so that everyone was guided by someone else's EEG. So you'd give control Patient 2 the drugs that Patient 1 should have got, and vice versa; swap 3 and 4, 5 and 6, etc.
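In code terms, the proposed control condition is just a pairwise swap of the rEEG recommendations. A sketch, with illustrative names (not anything from the paper):

```python
# Sketch of the proposed control: every control patient receives the
# rEEG-guided recommendation generated for a different patient, by
# swapping recommendations pairwise. Names are illustrative only.
def shuffled_assignments(recommendations):
    """Swap consecutive pairs: 1<->2, 3<->4, ... (an odd last one stays put)."""
    swapped = list(recommendations)
    for i in range(0, len(swapped) - 1, 2):
        swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
    return swapped

reeg_recs = ["stimulant", "MAOI", "SSRI", "stimulant"]
print(shuffled_assignments(reeg_recs))
# ['MAOI', 'stimulant', 'stimulant', 'SSRI']
```

Both groups would then draw from exactly the same pool of drugs; the only thing differing between them would be whether the recommendation was matched to your own EEG.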

This would be a genuinely controlled test of the personalized rEEG system, because both groups would get the same kinds of drugs. It would have been a lot easier too. For one thing it wouldn't require the additional step of deciding what drugs to give the control group. The authors decided to follow the STAR*D treatment protocol in this study, which is not unreasonable, but that must have been a bit of a hard decision.

Second, it would allow the trial to be double-blind: in this study the investigators knew which group people were in, because it was obvious from the drug choice. Thirdly, it wouldn't have meant they had to exclude people whose rEEG recommended they get the same treatment that they would have got in the control group... and so on.

Hmm. Mysterious. Anyway, we may be hearing more about CNS Response soon, so watch this space.

(*) - Technically, some of them had failed an SSRI and some had failed "2 or more classes of antidepressants", but one of those classes will almost certainly have been an SSRI, because they're the first-line treatment.

DeBattista, C., Kinrys, G., Hoffman, D., Goldstein, C., Zajecka, J., Kocsis, J., Teicher, M., Potkin, S., Preda, A., & Multani, G. (2010). The use of referenced-EEG (rEEG) in assisting medication selection for the treatment of depression. Journal of Psychiatric Research. DOI: 10.1016/j.jpsychires.2010.05.009

Happiness Is Not A Fish You Can Eat

Wouldn't it be nice if you could improve your mental health just by eating more fish?

Well, yes, it would... except for people who hate fish, who would be doomed to misery. But is it true? A new paper from Finnish researchers Suominen-Taipale et al looks at this issue: Fish Consumption and Omega-3 Polyunsaturated Fatty Acids in Relation to Depressive Episodes: A Cross-Sectional Analysis. The results are complex, but essentially, negative.

The authors looked at a large sample (total n=6,500) of Finnish people from the general population, and asked them questions about their diet and their mood. They found that people who reported eating more fish were less likely to self-report depressive symptoms. However, this should be taken with a pinch of salt, a slice of lemon and a light cheese sauc...sorry. This should be taken with a pinch of salt, because it was only true in men, and it was only statistically significant using some measures of fish eating, not others.

Also, there was zero correlation between blood levels of omega-3 fatty acids and depression, even in men. Omega-3s are considered to be the good stuff in (oily) fish, and are currently being promoted as good for your brain, your mood, your IQ, etc. by health food fans.

To be fair, it's completely plausible that eating lots of them is good for you, because they are known to be involved in nerve cell function. And there are many papers finding them to be a good thing. But this study is not one of them. (The same authors also have another paper out finding no correlation between fish eating or omega 3 and "psychological distress", but it's largely overlapping data.)

The authors conclude that fish may be beneficial to mental health in men, albeit not through omega-3 fatty acids. They suggest that fish may instead provide some kind of high-quality protein or minerals. However, the other explanation is that fish is just correlated with depression because eating fish is a marker for some other lifestyle factors:

The observed association between high fish consumption and reduced risk for depressive episodes in the men may indicate complex associations between depression and lifestyle which we were not able to take into account. Diet and fish consumption may be a proxy for factors that have effect on mental well-being particularly in men. A plausible explanation is that fish consumption in men is a surrogate marker for some underlying but yet unidentified lifestyle factors that protect against depression.
I think it's fair to say that the jury is still out on the benefits of omega 3's. As a vegetarian I don't eat fish and I have a history of depression and take not one, but two, antidepressants, so maybe I'm living proof that a lack of fish is a bad thing. I don't think so, though, as I took omega 3 supplements for a few months and felt no different. I gave up because they were costing me £30 a box.

Suominen-Taipale, A., Partonen, T., Turunen, A., Männistö, S., Jula, A., & Verkasalo, P. (2010). Fish Consumption and Omega-3 Polyunsaturated Fatty Acids in Relation to Depressive Episodes: A Cross-Sectional Analysis. PLoS ONE, 5 (5). DOI: 10.1371/journal.pone.0010530

In the Brain, Acidity Means Anxiety

According to Mormon author and fruit grower "Dr" Robert O. Young, pretty much all diseases are caused by our bodies being too acidic. By adopting an "alkaline lifestyle" to raise your internal pH (lower pH being more acidic), you'll find that

if you maintain the saliva and the urine pH, ideally at 7.2 or above, you will never get sick. That’s right you will NEVER get sick!
Wow. Important aspects of the alkaline lifestyle include eating plenty of the right sort of fruits and vegetables, ideally ones grown by Young, and taking plenty of nutritional supplements. These don't come cheap, but when the payoff is being free of all diseases, who could complain?

Young calls his amazing theory the Alkavorian Approach™, aka the New Biology. Almost everyone else calls it quack medicine and pseudoscience. Because it is quack medicine and pseudoscience. But a paper just published in Cell suggests an interesting role for pH in, of all things, anxiety and panic - The amygdala is a chemosensor that detects carbon dioxide and acidosis to elicit fear behavior.

The authors, Ziemann et al, were interested in a protein called Acid Sensing Ion Channel 1a, ASIC1a, which as the name suggests, is acid-sensitive. Nerve cells expressing ASIC1a are activated when the fluid around them becomes more acidic.

One of the most common causes of acidosis (a fall in body pH) is carbon dioxide, CO2. Breathing is how we get rid of the CO2 produced by our bodies; if breathing is impaired, for example during suffocation, CO2 levels rise, and pH falls as CO2 is converted to carbonic acid in the bloodstream.
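The textbook way to quantify this is the Henderson-Hasselbalch equation for the bicarbonate buffer. A quick illustration with standard physiology-textbook constants (nothing here comes from the Ziemann et al paper):

```python
# Henderson-Hasselbalch for the bicarbonate buffer system:
#   pH = 6.1 + log10( [HCO3-] / (0.03 * pCO2) )
# pCO2 in mmHg, [HCO3-] in mEq/L; constants are standard textbook
# values, used only to illustrate the CO2-pH link.
from math import log10

def blood_ph(pco2_mmhg, hco3_meq_l=24.0):
    return 6.1 + log10(hco3_meq_l / (0.03 * pco2_mmhg))

print(f"normal pCO2 (40 mmHg): pH = {blood_ph(40):.2f}")  # 7.40
print(f"raised pCO2 (80 mmHg): pH = {blood_ph(80):.2f}")  # 7.10 - acidosis
```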

In previous work, Ziemann et al found that the amygdala contains lots of ASIC1a. This is intriguing, because the amygdala is a brain region believed to be involved in fear, anxiety and panic, although it has other functions as well. It's long been known that breathing air with added CO2 can trigger anxiety and panic, especially in people vulnerable to panic attacks.

What's unclear is why this happens; various biological and psychological theories have been proposed. Ziemann et al set out to test the idea that ASIC1a in the amygdala mediates anxiety caused by CO2.

In a number of experiments they showed that mice genetically engineered to have no ASIC1a (knockouts) were resistant to the anxiety-causing effects of air containing 10% or 20% CO2. Also, unlike normal mice, the knockouts were happy to enter a box with high CO2 levels - normal mice hated it. Injections of a weakly acidic liquid directly into the amygdala caused anxiety in normal mice, but not in the knockouts.

Most interestingly, they found that knockout mice could be made to fear CO2 by giving them ASIC1a in the amygdala. Knockouts injected in the amygdala with a virus containing ASIC1a DNA, which caused their cells to start producing the protein, showed anxiety (freezing behaviour) when breathing CO2. But it only worked if the virus was injected into the amygdala, not nearby regions.

This is a nice series of experiments which shows convincingly that ASIC1a mediates acidosis-related anxiety, at least in mice. What's most interesting, however, is that it also seems to be involved in other kinds of anxiety and fear. The ASIC1a knockout mice were slightly less anxious in general; injections of an alkaline solution prevented CO2-related anxiety, but also reduced anxiety caused by other scary things, such as the smell of a cat.

The authors conclude by proposing that amygdala pH might be involved in fear more generally:
Thus, we speculate that when fear-evoking stimuli activate the amygdala, its pH may fall. For example, synaptic vesicles release protons, and intense neural activity is known to lower pH.
But this is, as they say, speculation. The link between CO2, pH and panic attacks seems more solid. As the authors of another recent paper put it:
We propose that the shared characteristics of CO2/H+ sensing neurons overlap to a point where threatening disturbances in brain pH homeostasis, such as those produced by CO2 inhalations, elicit a primal emotion that can range from breathlessness to panic.
Ziemann, A., Allen, J., Dahdaleh, N., Drebot, I., Coryell, M., Wunsch, A., Lynch, C., Faraci, F., Howard III, M., & Welsh, M. (2009). The Amygdala Is a Chemosensor that Detects Carbon Dioxide and Acidosis to Elicit Fear Behavior. Cell, 139 (5), 1012-1021. DOI: 10.1016/j.cell.2009.10.029

B-Movie Medicine

We all know about movies that are so bad, they're good. But could the same thing apply to doctors?

As I described last week, Desiree Jennings is a young woman from Virginia who developed horrible symptoms, including muscle spasms and convulsions, after getting a flu vaccine. It looked a bit like a form of brain damage called dystonia.

Numerous neurologists concluded that her illness was mostly or entirely psychogenic. A certain Dr Rashid Buttar, however, said that she was suffering from neurological damage caused by toxins in the flu vaccine.

Buttar gave her chelation therapy to flush the toxins out. Within 15 minutes, she was cured. Biologically speaking, this is ludicrous. It's flat-out impossible that chelation could reverse brain damage in 15 minutes, even if Jennings did have brain damage in the first place.

But Buttar's treatment worked, amazingly well by all accounts. This is not surprising, because the illness was psychological in nature, and Dr Buttar's treatment was, psychologically, very effective. Jennings was admitted to Dr Buttar's private clinic; she had IV lines put into her arm; Dr Buttar attached the chelation treatment to the IV drip and, in a textbook example of how to produce a placebo effect:

I told her "Now the magic should start", prepared her for what I expected to happen. (interview with Dr Buttar, 05:30 onwards)
The magic did indeed happen, precisely because Dr Buttar convinced Jennings that it would.

*

What would have happened to Jennings if there were no Dr Buttars in the world? Her doctors would have run scans and tests to check if Jennings had any neurological damage. The results would have been normal. Jennings would probably have interpreted this as "We don't know what's wrong with you", although experts would have suspected that the symptoms were most likely psychogenic.

At some point, someone would have had to raise that possibility with her. But the point about psychogenic illness is that it's not "faking", "acting" or "made up" - the patient believes they are ill. The symptoms don't feel psychogenic. This is why people often interpret the suggestion that symptoms are psychogenic as saying "you're not really ill" and hence "you're either lying, or crazy". Of course, patients suffering from psychogenic illness are neither, and they know it.

So, without complementary and alternative medicine, Jennings might have ended up believing herself to be suffering from an illness so obscure that doctors were unable to diagnose it, and hence, unable to cure it. A hopeless situation. A worse thing for someone with psychogenic symptoms to believe is hard to imagine.

Dr Buttar's treatment was psychologically very powerful - precisely because he believed in it, so he was able to convince Jennings to believe in it. A doctor who realized that Jennings' symptoms were psychogenic would have found it much harder to achieve the same result. In order to do so, they would have to lie to her, by pretending to believe in a treatment which they knew was just a placebo. This is hard - the doctor would need to be an excellent actor as well as a medic - not to mention ethically tricky.

Interestingly, 100 years ago, this problem wouldn't have arisen. Doctors knew much less about diagnosis and there were few laboratory tests or scans in those days, so there was usually no way to prove that some symptoms were organic and others were psychogenic. Everyone got the same treatment. Of course, the treatments back then were less good at treating organic illnesses, but that wouldn't necessarily have made them any worse as placebos. Ironically, as mainstream medicine gets better and better at diagnosing and treating disease, it may be getting worse at dealing with psychogenic symptoms.

[BPSDB]

The Needle and the Damage (Not) Done

You may already have heard about Desiree Jennings.


If not, here's a summary, although for the full story you should consult Steven Novella or Orac, whose expert analyses of the case are second to none. Desiree Jennings is a 25 year old woman from Ashburn, Virginia who developed horrible symptoms following a seasonal flu vaccination in August. As she puts it:
In a matter of a few short weeks I lost the ability to walk, talk normally, and focus on more than one stimuli at a time. Whenever I eat I know, without fail, that my body will soon go into uncontrollable convulsions coupled with periods of blacking out.
For some weeks the problems were so bad that she was almost completely disabled, and feared the damage was permanent. Vaccines had destroyed her life. You can see a video here - American TV has covered the story in a lot of detail (the fact that she is quite... photogenic can't have put them off). Desiree and the media described her illness as dystonia, a neurological condition characterised by uncontrollable muscle contractions. Dystonia is caused by damage to certain motor pathways in the brain.

However, Desiree Jennings does not have dystonia. The symptoms look a bit like dystonia to the untrained eye, but they're not it. This is the unanimous opinion of dystonia experts who've seen the footage of Jennings. A blogger discovered that it was also seemingly the view of the neurologist who originally examined her.

So what's wrong with her? The answer, according to experts, is that her symptoms are psychogenic - "neurological" or "medical" symptoms caused by psychological factors rather than organic brain damage. It's important to be clear on what exactly this implies. It doesn't mean that Jennings is "making up" or "faking" the symptoms or that they're a "hoax". The symptoms are as "real" as any others, the only thing psychological about them is the cause. Nor are psychogenic symptoms delusions - Jennings isn't mentally ill or "crazy".

Almost certainly, she is in her right mind, and she sincerely believes that she is a victim of brain damage caused by the flu shot. The belief is false, but it's not crazy - in 1976 one flu vaccine may have caused neurological disorders and today many, many otherwise sane people believe that vaccines cause all kinds of damage. (It could well be that this belief is actually driving Jennings' symptoms, but we can't know that - there could be other psychological factors at work.)

*

One of the hallmarks of psychogenic symptoms is that they improve in response to psychological factors. Neurologist blogger Steven Novella predicted that:
I predict that they will be able to “cure” her, because psychogenic disorders can and do spontaneously resolve. They will then claim victory for their quackery in curing a (non-existent) vaccine injury.
They being anti-vaccination group Generation Rescue who were swift to offer Jennings their support and, er, expertise. And this is exactly what seems to be happening: Dr Rashid Buttar, a prominent anti-vaccine doctor who treats "vaccine damage" cases, began giving Jennings (amongst other things) chelation therapy to flush out toxic metals from her body, on the theory that her dystonia was caused by mercury in the vaccine. It worked! Dr. Buttar tells us - 15 minutes after the chelation solution started entering her body through an IV drip, all of the symptoms had disappeared (on the podcast it's about 6:00 onwards).

It's completely implausible that mercury in the vaccine could have caused dystonia, and even if it somehow did, it's impossible that chelation could reverse mercury-induced brain damage so quickly. If you are unfortunate enough to get mercury poisoning the neurological damage is permanent; flushing out the mercury wouldn't cure you. There's now no question that Jennings is a textbook case of psychogenic illness.

*

On this blog I've often written about the mysterious "placebo effect". A few weeks ago, I said -
People seem more willing to accept the mind-over-matter powers of "the placebo" than they are to accept the existence of psychosomatic illness.
We certainly seem to talk about placebos more than we talk about psychosomatic or psychogenic illness. There are 20 million Google hits for "placebo", just 1.6 million for "psychosomatic", and 500,000 for "psychogenic". (Even "placebo -music -trial" gives 8.7 million, which excludes all of the many placebo-controlled clinical trials and also hits about the band.)

Why? One important factor is surely that it's very difficult to prove that any given illness is "psychosomatic". Even if a patient has symptoms with no apparent medical cause, leading to suspicions that they're psychogenic, there could always be an organic cause waiting to be discovered. Just as we can never prove that there were no WMDs in Iraq, we can never prove that a given illness is purely psychological in origin.

But occasionally, there are cases where the psychogenic nature of an illness is so patent that there can be little doubt, and this is one of them. Watch the videos, listen to the account of the cure, and marvel at the mysteries of the mind.

[BPSDB]

Is Freud Back in Fashion? No.

Freudian psychoanalysis is the key to treating depression, especially the post-natal kind (depression after childbirth). That's according to a Guardian article by popular British psychologist and author Oliver James. He says that recent research has proven Freud right about the mind, and that psychoanalysis works better than other treatments, like cognitive-behavioural therapy (CBT).

Neuroskeptic readers have encountered James before. He's the person who thinks that Britain is the most mentally-ill country in Europe. I disagree, but that's at least a debatable point. This time around, James's claims are just plain wrong.

So, some corrections. We've got a lot to cover, so I'll keep it brief:

"10% [of new mothers] develop a full-blown depression...which therapy should you opt for? [antidepressants] rule out breastfeeding" - No, they don't. Breast-feeding mothers are able to use antidepressants when necessary, according to the British medical guidelines and others:

Limited data on effects of SSRI exposure via breast milk on weight gain and infant development are encouraging. If a woman has been successfully treated with a SSRI in pregnancy and needs to continue therapy after delivery, there is no need to change the drug, provided the infant is full term, healthy and can be adequately monitored...
James's statement is a dangerous mistake, which could lead to new mothers worrying unduly, or even stopping their medication.

"People given chalk pills but told they are antidepressants are almost as likely to claim to feel better as people given the real thing."
- This is true in many cases, although it's a little more complicated than that; more importantly, it refers to trials on general adult clinical depression, not post-natal depression, which might be completely different.

There's actually only one trial comparing an antidepressant to chalk placebo pills in post-natal depression. The antidepressant, Prozac, worked remarkably well, much better than in most general adult trials. This was a small study, and we really need more research, but it's encouraging.

"Regarding the talking therapies, in one study depressed new mothers were randomly assigned to eight sessions of CBT, counselling, or to psychodynamic psychotherapy. Eighteen weeks later, the ones given dynamic therapy were most likely to have recovered (71%, versus 57% for CBT, 54% counselling)."

This is cherry-picking. In the trial in question the dynamic (psychoanalytic) therapy was slightly better than the other two when depression was assessed in one way, which is what James quotes. The difference was not statistically significant. And using another depression measurement scale, it was no better at all. Take a look, it's hardly impressive:

Plus, after 18 weeks, none of the three psychotherapies was any better than the control, which consisted of doing precisely nothing at all.

"Studies done in the last 15 years have largely confirmed Freud's basic theories. Dreams have been proven to contain meaning." - Nope. Freud believed that dreams exist to fulfil our fantasies, often although not always sexual ones. We dream about what we'd like to do. Except we don't actually dream about it, because we'd find much of it shameful, so our minds hide the true meaning behind layers of metaphor and so forth. "Steep inclines, ladders and stairs, and going up or down them, are symbolic representations of the sexual act..."

If you believe that, good for you, and some people still do, but there has been no research over the past 15 years supporting this (although this is quite interesting). There was never any research, really - just anecdotes.

"Early childhood experience has been shown to be a major determinant of adult character." Nope. The big story over the past decade is that contra Freud, "shared environment", i.e. family life and child rearing make almost no contribution to adult personality, which is determined by a combination of genes and "individual environment" unrelated to family background. One could argue about the merits of this research but to say that modern psychology is moving towards a Freudian view is absurd. The opposite is true.

"And it is now accepted by almost all psychologists that we do have an unconscious and that it can contain material that has been repressed because it is unacceptable to the conscious mind." Nope. Some psychologists do still believe in "repressed memory" theory, but it's highly controversial. Many consider it a dangerous myth associated with "recovered memory therapy" which has led to false accusations of sexual abuse, Satanic rituals, etc. Again, they may be wrong, but to assert that "almost all" psychologists accept it is bizarre.

"Although slow to be tested, the clinical technique [of Freudian psychoanalysis] has now also been demonstrated to work. The strongest evidence for its superiority over cognitive, short-term treatments was published last year..."

First off, the trial referred to was not about post-natal depression, and it didn't test cognitive therapy at all. It compared long-term psychodynamic therapy, vs. short-term psychodynamic therapy, vs. "solution-focused therapy" in the treatment of various chronic emotional problems. No CBT was harmed in the making of this study.

After 1 year, long-term dynamic therapy was the worst of the three. At 2 years, they were the same. At 3 years, long-term dynamic therapy was the best. Although all these differences were small. Short-term dynamic therapy was no better than solution-focused therapy, which is rather a point against psychoanalysis since solution-focused therapy is firmly non-Freudian. And amusingly, the "short-term" dynamic therapy was actually twice as long as the dynamic therapy in the first study discussed above, which James praised! (20 weekly sessions vs 10). (Edit 23.10.09)

*

James ends by slagging off CBT and its practitioners, and suggesting that we need a "Campaign for Real Therapy", i.e. not CBT, something he has suggested before. This is the key to understanding why James wrote his muddled piece.

The British government is currently pouring hundreds of millions into the IAPT campaign which aims to "implement National Institute for Health and Clinical Excellence (NICE) guidelines for people suffering from depression and anxiety disorders". NICE guidelines essentially only recommend CBT, so this is effectively a campaign to massively expand CBT services. CBT is widely seen as the only psychotherapy which has been proven to work, in Britain and increasingly elsewhere too.

Oliver James, like quite a lot of people, doesn't like this. And in that, he has a point. There are serious debates to be had over whether CBT is really better than other therapies, and whether we really need lots more of it. There are also serious debates to be had over whether antidepressants are really effective and whether they are over-used. But these are all extremely complex questions. There are no easy answers, no short cuts, no panaceas, and James's brand of sectarian polemic is exactly what we don't need.

[BPSDB]

St John's Wort - The Perfect Antidepressant, If You're German

The herb St John's Wort is as effective as antidepressants while having milder side effects, according to a recent Cochrane review, St John's wort for major depression.

Professor Edzard Ernst, a well-known enemy of complementary and alternative medicine, wrote a favorable review of this study in which he comments that given the questions around the safety and effectiveness of antidepressants, it is a mystery why St John's Wort is not used more widely.

When Edzard Ernst says a herb works, you should take notice. But is St John's Wort (Hypericum perforatum) really the perfect antidepressant? Curiously, it seems to depend whether you're German or not.

The Cochrane review included 29 randomized, double-blind trials with a total of 5,500 patients. The authors only included trials where all patients met DSM-IV or ICD-10 criteria for "major depression". 18 trials compared St John's Wort extract to placebo pills, and 19 compared it to conventional antidepressants. (Some trials did both.)

The analysis concluded that overall, St John's Wort was significantly more effective than placebo. The magnitude of the benefit was similar to that seen with conventional antidepressants in other trials (around 3 HAMD points). However, this was only true when studies from German-speaking countries were examined.

Out of the 11 Germanic trials, 8 found that St John's Wort was significantly better than placebo and the other 3 were all very close. None of the 8 non-Germanic trials found it to be effective and only one was close.
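Just how unlikely is that split? A quick back-of-envelope test on the positive-vs-null counts above (my own calculation, not part of the review):

```python
# 2x2 table of trial outcomes by country group, from the counts above:
# Germanic: 8 of 11 trials positive; non-Germanic: 0 of 8 positive.
from scipy.stats import fisher_exact

table = [[8, 3],   # Germanic: positive, not positive
         [0, 8]]   # non-Germanic: positive, not positive
_, p = fisher_exact(table)
print(f"Fisher exact p = {p:.3f}")  # ~0.003 - hard to put down to chance
```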


Edzard Ernst, by the way, is German. So were the authors of this review. I'm not.

The picture was a bit clearer when St John's Wort was directly compared to conventional antidepressants: it was almost exactly as effective, and was significantly worse in only one small study. This was true in both Germanic and non-Germanic studies, and was true whether older tricyclics or newer SSRIs were considered.

Perhaps the most convincing result was that St John's Wort was well tolerated. Patients did not drop out of the trials because of side-effects any more often than when they were taking placebo (OR=0.92), and were much less likely to drop out versus patients given antidepressants (OR=0.41). Reported side effects were also very few. (It can be dangerous when combined with certain antidepressants and other medications however.)
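For those not fluent in odds ratios, here's roughly what those numbers mean in practice. The 20% baseline dropout rate is an illustrative assumption, not a figure from the review:

```python
# Convert an odds ratio into a risk, given an assumed baseline rate.
# The 20% baseline side-effect dropout rate is illustrative only.
def risk_from_or(odds_ratio, baseline_risk):
    odds = odds_ratio * baseline_risk / (1 - baseline_risk)
    return odds / (1 + odds)

print(f"OR 0.41 vs a 20% antidepressant dropout rate: "
      f"{risk_from_or(0.41, 0.20):.1%}")  # ~9.3% on St John's Wort
print(f"OR 0.92 vs a 20% placebo dropout rate: "
      f"{risk_from_or(0.92, 0.20):.1%}")  # ~18.7% on St John's Wort
```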

So, what does this mean? If you look at it optimistically, it's wonderful news. St John's Wort, a natural plant product, is as good as any antidepressant against depression, and has far fewer side effects - maybe no side effects at all. It should be the first-line treatment for depression, especially because it's cheap (no patents).

But from another perspective this review raises more questions than answers. Why did St John's Wort perform so differently in German vs. non-German studies? The authors admit that:

Our finding that studies from German-speaking countries yielded more favourable results than trials performed elsewhere is difficult to interpret. ... However, the consistency and extent of the observed association suggest that there are important differences in trials performed in different countries.
The obvious, cynical explanation is that there are lots of German trials finding that St John's Wort didn't work, but they haven't been published because St John's Wort is very popular in German-speaking countries and people don't want to hear bad news about it. The authors downplay the possibility of such publication bias:
We cannot rule out, but doubt, that selective publication of overoptimistic results in small trials strongly influences our findings.
But we really have no way of knowing.

The more interesting explanation is that St John's Wort really does work better in German trials because German investigators tend to recruit the kind of patients who respond well to St John's Wort. The present review found that trials including patients with "more severe" depression found slightly less benefit of St John's Wort vs placebo, which is the opposite of what is usually seen in antidepressant trials, where severity correlates with response. The authors also note that it's been suggested that so-called "atypical depression" symptoms - like eating too much, sleeping a lot, and anxiety - respond especially well to St John's Wort.

So it could be that for some patients St John's Wort works well, but until studies examine this in detail, we won't know. One thing, however, is certain - the evidence in favor of Hypericum is strong enough to warrant more scientific interest than it currently gets. In most English-speaking psychopharmacology circles, it's regarded as a flaky curiosity.

The case of St John's Wort also highlights the weaknesses of our current diagnostic systems for depression. According to DSM-IV someone who feels miserable, cries a lot and comfort-eats icecream has the same disorder - "major depression" - as someone who is unable to eat or sleep with severe melancholic symptoms. The concept is so broad as to encompass a huge range of problems, and doctors in different cultures may apply the word "depression" very differently.

[BPSDB]

Ernst, E. (2009). Review: St John's wort superior to placebo and similar to antidepressants for major depression but with fewer side effects. Evidence-Based Mental Health, 12 (3), 78-78. DOI: 10.1136/ebmh.12.3.78

Linde, K., Berner, M. M., & Kriston, L. (2008). St John's wort for major depression. Cochrane Database of Systematic Reviews (4).

Biases, Fallacies and other Distractions

One of the pitfalls of debate is the temptation to indulge in tearing down an opponent's arguments. It's fun, if you're stuck behind a keyboard but still feeling the primal urge to bash something's head in with a rock. Yet if you're interested in the truth about something, the only thing that should concern you is the facts, not the arguments that happen to be made about them.

Plenty has been written about arguments and how they can be bad: sins against good sense are called "fallacies" and there are many lists of them. Some of the more popular fallacies have become household names - ad hominem attacks, the appeal to authority, and everyone's favorite, the straw man argument.

Likewise, cognitive psychologists have done much to name and catalogue the various ways in which our minds can deceive us. Under the blanket name of "biases", many of these are well known - there's confirmation bias, cognitive dissonance, rationalization, and so on.

There's a reason why so much has been said about fallacies and biases. They're out there, and they're a problem. When you set your mind to it, you can find them almost anywhere - no matter who you are. This, for example, is written by someone who believes that HIV does not cause AIDS. By most standards, this makes him a kook. And he probably is a kook, about AIDS, but he’s not stupid. He makes some perfectly sensible points about cognitive dissonance and the psychology of science. And here, he offers further words of wisdom:

I have no satisfactory answer to offer, unfortunately, for how AIDStruthers could be brought to useful mutual discussion.
...
Here’s a criterion for whether a discussion is genuinely substantive or not, directed at clarification and increased understanding: no personal comments adorn the to-and-fro. If B appears not to understand what A is saying, then A looks for other ways of presenting the case, A doesn’t simply keep repeating the same assertions spiced with “Why can’t you…?”, and the like. [Added 28 December: Another hallmark of the non-substantive comments is that the commentator not only keeps harping on the same thing but does so by return e-mail, leaving no time to consider what s/he is replying to; see Burun's admission of suffering from that failing.]
...
One lesson from experience is that the aim of Rethinkers cannot be to convince the AIDStruthers. It soon becomes a sheer waste of time to attempt to argue substance with them; a waste of time because you can’t learn anything from them, and they are incapable of learning anything from you. Rethinkers and Skeptics should address the bystanders, onlookers, the unengaged “silent majority”. There seem always to be with us some people who cheerfully continue to believe that the Earth is only about 6,000-10,000 years old, and many other things that most of us judge to be utterly disproved by factual evidence.
That could have come straight from the pen of such pillars of scientific respectability as Carl Sagan or Orac - until you remember that by "Rethinkers" and "Skeptics" he means people who don't believe that HIV causes AIDS, while "AIDStruthers" is his term for those who do, that is, almost every medical and scientific professional.

The lesson here is that you don't have to be right in order to notice that people who disagree with you are irrational, or that much of the opposition to your belief is dogmatic. The sad fact is that stubbornness and a tendency to dogmatism are a part of human nature and it's very hard to escape from them; likewise, it's very hard to make a complex argument without saying something at least technically fallacious (that witty aside? Ad hominem attack!)

The point is that none of this matters. If something is true, then it's true even if everyone who believes it is a dogmatic maniac. So it's certainly true even if the only people you know who believe it are idiots. What's the chance that you've argued with the smartest Christian ever, or the best informed opponent of homeopathy? In which case - the fallacies and biases of the people you have argued with certainly don't matter. In an argument, the only thing of importance is what the facts are, and the way to find out is to look at the evidence.

If you're taking the time to name and shame the fallacies in someone's reasoning or to diagnose their biases, then you're not talking about the evidence - you're talking about your opponent(s). Why are you so fascinated by him...? To spend time lamenting the irrationality of your opponents is unhealthy. The only people who have a reason to care about other people’s fallacies and biases are psychologists. Daniel Kahneman got half a Nobel Prize for his work on cognitive biases - it's his thing. But if your thing is HIV/AIDS, or evolution, or vaccines and autism, or whatever, then it's far from clear that you have any legitimate interest in your opponent's flaws. In all likelihood, they are no more flawed than anyone else - or even if they are, their real problem is not that they're making ad hominem attacks (or whatever), but that they're wrong.

So when barely-coherent columnist Peter Hitchens writes in the Daily Mail about wind farms:

If visitors from another galaxy really are going round destroying wind turbines, then it is the proof we have been waiting for that aliens are more intelligent than we are.

The swivel-eyed, intolerant cult, which endlessly shrieks – without proof – that global warming is man-made, has produced many sad effects.

The point is not that people who believe that global warming is man-made are not a cult. They're not, but even if they were, it wouldn't matter. The swiveliness of their eyes or the pitch of their shrieks is not obviously relevant either.

Of course, if you're out to have fun bashing heads, or writing columns for the Daily Mail, then go ahead. Learn the names of as many fallacies and biases as you can (including the Latin names if possible - that's always extra impressive) and go nuts. But if you're serious about establishing or discussing the truth about something, then there is only one set of biases and fallacies you ought to care about – your own.

[BPSDB]

 