St John's Wort - The Perfect Antidepressant, If You're German

The herb St John's Wort is as effective as antidepressants while having milder side effects, according to a recent Cochrane review, St John's wort for major depression.

Professor Edzard Ernst, a well-known enemy of complementary and alternative medicine, wrote a favorable review of this study in which he comments that given the questions around the safety and effectiveness of antidepressants, it is a mystery why St John's Wort is not used more widely.

When Edzard Ernst says a herb works, you should take notice. But is St John's Wort (Hypericum perforatum) really the perfect antidepressant? Curiously, it seems to depend whether you're German or not.

The Cochrane review included 29 randomized, double-blind trials with a total of 5500 patients. The authors only included trials in which all patients met DSM-IV or ICD-10 criteria for "major depression". 18 trials compared St John's Wort extract to placebo pills, and 19 compared it to conventional antidepressants. (Some trials did both.)

The analysis concluded that overall, St John's Wort was significantly more effective than placebo. The magnitude of the benefit was similar to that seen with conventional antidepressants in other trials (around 3 HAMD points). However, this was only true when studies from German-speaking countries were examined.

Out of the 11 Germanic trials, 8 found that St John's Wort was significantly better than placebo, and the other 3 were all very close. None of the 8 non-Germanic trials found it to be effective, and only one was close.
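Just to get a feel for how stark that split is, here's a quick back-of-envelope check (not part of the review's own analysis), treating each trial as a simple positive/negative outcome. It's crude - it ignores trial size and the continuous outcome measures - but it shows the Germanic/non-Germanic difference is unlikely to be chance:

```python
# Rough sketch: is the Germanic / non-Germanic split itself likely by chance?
# Counts as summarised above: 8 of 11 Germanic trials significantly positive,
# vs 0 of 8 non-Germanic trials. Crude: ignores trial size and effect magnitude.
from scipy.stats import fisher_exact

table = [[8, 3],   # Germanic trials: positive, not positive
         [0, 8]]   # non-Germanic trials: positive, not positive
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact test p = {p_value:.4f}")
```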


Edzard Ernst, by the way, is German. So were the authors of this review. I'm not.

The picture was a bit clearer when St John's Wort was directly compared to conventional antidepressants: it was almost exactly as effective, and was significantly worse in only one small study. This was true in both Germanic and non-Germanic studies, and whether the comparator was an older tricyclic or a newer SSRI.

Perhaps the most convincing result was that St John's Wort was well tolerated. Patients did not drop out of the trials because of side effects any more often than when they were taking placebo (OR=0.92), and were much less likely to drop out than patients given antidepressants (OR=0.41). Reported side effects were also few. (It can be dangerous when combined with certain antidepressants and other medications, however.)
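For anyone unused to odds ratios: an OR below 1 means dropout was less likely on St John's Wort than on the comparator. The counts in this sketch are invented purely to show the arithmetic; they are not the review's data, which are only summarised here by the OR itself:

```python
# Illustrative only: how a dropout odds ratio is computed from a 2x2 table.
# These counts are made up; the review reports OR = 0.41 vs antidepressants.
dropped_sjw, stayed_sjw = 20, 480   # hypothetical St John's Wort arm
dropped_ad,  stayed_ad  = 45, 455   # hypothetical antidepressant arm

odds_sjw = dropped_sjw / stayed_sjw
odds_ad = dropped_ad / stayed_ad
print(f"OR = {odds_sjw / odds_ad:.2f}")   # < 1 means fewer dropouts on the herb
```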

So, what does this mean? If you look at it optimistically, it's wonderful news. St John's Wort, a natural plant product, is as good as any antidepressant against depression, and has far fewer side effects, maybe no side effects at all. It should be the first-line treatment for depression, especially because it's cheap (no patents).

But from another perspective this review raises more questions than answers. Why did St John's Wort perform so differently in German vs. non-German studies? The authors admit that:

Our finding that studies from German-speaking countries yielded more favourable results than trials performed elsewhere is difficult to interpret. ... However, the consistency and extent of the observed association suggest that there are important differences in trials performed in different countries.

The obvious, cynical explanation is that there are lots of German trials finding that St John's Wort didn't work, but they haven't been published because St John's Wort is very popular in German-speaking countries and people don't want to hear bad news about it. The authors downplay the possibility of such publication bias:

We cannot rule out, but doubt, that selective publication of overoptimistic results in small trials strongly influences our findings.

But we really have no way of knowing.
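To see how much damage selective publication of small positive trials could do in principle, here's a toy simulation (all numbers arbitrary): the true drug-placebo difference is zero, but if only the significantly positive trials get "published", the pooled literature still suggests a benefit.

```python
# Toy simulation of publication bias: the true drug-placebo difference is zero,
# but pooling only the "published" (significantly positive) trials suggests a benefit.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
published_effects = []
for _ in range(1000):                 # 1000 small two-arm trials
    drug = rng.normal(0, 8, size=30)  # true effect = 0, SD chosen arbitrarily
    placebo = rng.normal(0, 8, size=30)
    t, p = ttest_ind(drug, placebo)
    if p < 0.05 and drug.mean() > placebo.mean():   # only positive results "published"
        published_effects.append(drug.mean() - placebo.mean())

print(f"{len(published_effects)} of 1000 trials 'published'")
print(f"mean published effect: {np.mean(published_effects):.1f} points (true effect = 0)")
```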

The more interesting explanation is that St John's Wort really does work better in German trials because German investigators tend to recruit the kind of patients who respond well to St John's Wort. The present review found that trials including patients with "more severe" depression found slightly less benefit of St John's Wort vs placebo, which is the opposite of what is usually seen in antidepressant trials, where severity correlates with response. The authors also note that it's been suggested that so-called "atypical depression" symptoms - like eating too much, sleeping a lot, and anxiety - respond especially well to St John's Wort.

So it could be that for some patients St John's Wort works well, but until studies examine this in detail, we won't know. One thing, however, is certain - the evidence in favor of Hypericum is strong enough to warrant more scientific interest than it currently gets. In most English-speaking psychopharmacology circles, it's regarded as a flaky curiosity.

The case of St John's Wort also highlights the weaknesses of our current diagnostic systems for depression. According to DSM-IV someone who feels miserable, cries a lot and comfort-eats ice cream has the same disorder - "major depression" - as someone who is unable to eat or sleep with severe melancholic symptoms. The concept is so broad as to encompass a huge range of problems, and doctors in different cultures may apply the word "depression" very differently.


Ernst, E. (2009). Review: St John's wort superior to placebo and similar to antidepressants for major depression but with fewer side effects. Evidence-Based Mental Health, 12(3), 78. DOI: 10.1136/ebmh.12.3.78

Linde, K., Berner, M. M., & Kriston, L. (2008). St John's wort for major depression. Cochrane Database of Systematic Reviews, (4).

In Science, Popularity Means Inaccuracy

Who's more likely to start digging prematurely: one guy with a metal-detector looking for an old nail, or a field full of people with metal-detectors searching for buried treasure?

In any area of science, there will be some things which are more popular than others - maybe a certain gene, a protein, or a part of the brain. It's only natural and proper that some things get a lot of attention if they seem to be scientifically important. But Thomas Pfeiffer and Robert Hoffmann warn in a PLoS ONE paper that popularity can lead to inaccuracy - Large-Scale Assessment of the Effect of Popularity on the Reliability of Research.

They note two reasons for this. Firstly, popular topics tend to attract interest and money. This means that scientists have much to gain by publishing "positive results" as this allows them to get in on the action -

In highly competitive fields there might be stronger incentives to “manufacture” positive results by, for example, modifying data or statistical tests until formal statistical significance is obtained. This leads to inflated error rates for individual findings... We refer to this mechanism as “inflated error effect”.

Secondly, in fields where there is a lot of research being done, the chance that someone will, just by chance, come up with a positive finding increases -

The second effect results from multiple independent testing of the same hypotheses by competing research groups. The more often a hypothesis is tested, the more likely a positive result is obtained and published even if the hypothesis is false. ... We refer to this mechanism as “multiple testing effect”.
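The multiple testing effect is easy to put numbers on in the simplest case: if k groups independently test a hypothesis that is in fact false, each at a 0.05 significance threshold, the chance that at least one of them gets a publishable "positive" result is 1 - 0.95^k. A quick sketch:

```python
# Chance that at least one of k independent tests of a FALSE hypothesis
# comes out "significant" at alpha = 0.05 (the multiple testing effect).
alpha = 0.05
for k in (1, 5, 10, 20, 50):
    p_at_least_one = 1 - (1 - alpha) ** k
    print(f"{k:2d} competing groups -> P(at least one false positive) = {p_at_least_one:.2f}")
```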
But does this happen in real life? The authors say yes, based on a review of research into protein-protein interactions in yeast. (Happily, you don't need to be a yeast expert to follow the argument.)

There are two ways of trying to find out whether two proteins interact with each other inside cells. You could do a small-scale experiment specifically looking for one particular interaction: say, Protein B with Protein X. Or you can do "high-throughput" screening of lots of proteins to see which ones interact: Does Protein A interact with B, C, D, E... Does Protein B interact with A, C, D, E... etc.

There have been tens of thousands of small-scale experiments into yeast proteins, and more recently, a few high-throughput studies. The authors looked at the small-scale studies and found that the more popular a certain protein was, the less likely it was that reported interactions involving it would be confirmed by high-throughput experiments.

The second and third of the above graphs show the effect: increasing popularity leads to a falling percentage of confirmed results. The first graph shows that interactions which were replicated by lots of small-scale experiments tended to be confirmed, which is what you'd expect.
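In outline, the analysis boils down to grouping the small-scale reported interactions by how well-studied the proteins involved are, and asking what fraction of them reappear in the high-throughput reference data. Here's a minimal sketch of that bookkeeping with made-up data; the paper's actual datasets and popularity measure are more sophisticated:

```python
# Sketch of the bookkeeping behind "confirmation rate falls with popularity".
# small_scale: interactions reported by small-scale studies (hypothetical);
# high_throughput: the reference set from screening studies (hypothetical).
# Popularity is crudely measured as how many small-scale reports mention a protein.
from collections import Counter, defaultdict

small_scale = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("E", "F")]
high_throughput = {("A", "B"), ("E", "F")}

popularity = Counter(p for pair in small_scale for p in pair)

confirmed = defaultdict(list)
for pair in small_scale:
    pop = max(popularity[p] for p in pair)   # popularity of the better-known partner
    hit = pair in high_throughput or tuple(reversed(pair)) in high_throughput
    confirmed[pop].append(hit)

for pop in sorted(confirmed):
    rate = sum(confirmed[pop]) / len(confirmed[pop])
    print(f"popularity {pop}: {rate:.0%} of reported interactions confirmed")
```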

Pfeiffer and Hoffmann note that high-throughput studies have issues of their own, so using them as a yardstick to judge the truth of other results is a little problematic. However, they say that the overall trend remains valid.

This is an interesting paper which provides some welcome empirical support to the theoretical argument that popularity could lead to unreliability. Unfortunately, the problem is by no means confined to yeast. Any area of science in which researchers engage in a search for publishable "positive results" is vulnerable to the dangers of publication bias, data cherry-picking, and so forth. Even obscure topics are vulnerable, but when researchers are falling over themselves to jump on the latest scientific bandwagon, the problems multiply exponentially.

A recent example may be the "depression gene", 5HTTLPR. Since a landmark paper in 2003 linked it to clinical depression, there has been an explosion of research into this genetic variant. Literally hundreds of papers appeared - it is by far the most studied gene in psychiatric genetics. But a lot of this research came from scientists with little experience or interest in genes. It's easy and cheap to collect a DNA sample and genotype it. People started routinely looking at 5HTTLPR whenever they did any research on depression - or anything related.

But wait - a recent meta-analysis reported that the gene is not in fact linked to depression at all. If that's true (it could well be), how did so many hundreds of papers appear which did find an effect? Pfeiffer and Hoffmann's paper provides a convincing explanation.

Link - Orac also blogged this paper and put a characteristic CAM angle on it.

Pfeiffer, T., & Hoffmann, R. (2009). Large-Scale Assessment of the Effect of Popularity on the Reliability of Research. PLoS ONE, 4(6). DOI: 10.1371/journal.pone.0005996

Everyone is Mentally Ill

There's been a lot of interest over the idea that an "Artificial brain is 10 years away", which is what Professor Henry Markram told the ultra-hip TED conference in Oxford the other day.

That's an amazing idea. But Markram said something else even more astonishing, which, for some reason, has not got nearly as much attention:

"There are two billion people on the planet affected by mental disorder," he told the audience.
Two billion people. One in three.

This was presumably a throw-away remark, something he said in order to emphasise the importance of understanding the brain. But this makes it even more amazing: we have reached the point where no-one bats an eyelid at the idea that mental illness affects one in three people worldwide.

Well, if this is what we believe now, I think we need to stop beating about the bush with numbers like one in four or one in three, and admit that we are now using "mental illness" as a synonym for "the human condition".

After all, once you pass the point where one in two people have something, you are saying that it's normal and not having it is weird. As I've written before, if you take the evidence seriously, more than 50% of people are indeed "mentally" ill at some point. So let's just say that everyone is mentally ill and have done with it.

Or we could reassess what we mean by "mental illness" and stop medicalizing human suffering. Hey, we can dream.

The Onion Does China

The Onion turns its satirical eye on China, with hilarious if not entirely PC results -
Here's a screenshot for posterity, because their "special issues" tend to go back to normal pretty quickly.

I always think it's a little odd that the Chinese government don't have anyone whose default assumption is that they're in the right. Whenever a Western or a Western-aligned country does something morally... questionable, you can count on conservatives to defend it. Whereas countries with a history of Western exploitation generally enjoy the benefit of the liberal doubt. But China, almost uniquely, gets it from left, right, and centre equally.

I remember a colleague's astonishment when a Chinese post-doc expressed the opinion that Tibet was part of China and should remain so. This was an idea that she'd just never heard before, and she clearly thought it was entirely bizarre. Yet it was only 40 years ago that many French people were of the opinion that L’Algérie est française et le restera ("Algeria is French and will remain so") - Algeria! And there are still people in Northern Ireland who might kill you if you suggest that that province doesn't belong to Britain.

More on Suppressed Clinical Trials

We read in the BMJ that a German agency refuses to rule on drug’s benefits until Pfizer discloses all trial results. The drug is reboxetine (Edronax), which readers will recall was recently deemed to be the worst new antidepressant by an Oxford team.


The agency, the IQWiG, are an independent organization, but they were commissioned by the German federal government to report on the benefits of three antidepressants: reboxetine, mirtazapine, and bupropion. Their decision will have major implications in terms of which drugs are available through the German public healthcare system. However, they decided to suspend judgement on reboxetine because the manufacturers, Pfizer, did not provide them with the results of all available trials.

Ten relevant trials for reboxetine that could definitely be included were identified from the literature search in bibliographic databases, publicly accessible drug approval documents, and clinical trial registries. However, 3 of these trials could not be analysed with regard to the antidepressive effect of reboxetine because the publications only contained data on partial populations ...

Furthermore, 6 potentially relevant trials were identified that could not be included because no full publication existed and the manufacturer of reboxetine (Pfizer) refused to provide full information on all trials with reboxetine...

[and] Due to insufficient cooperation from the manufacturer of reboxetine, it remained unclear whether additional unpublished trials exist. It may well be that the identified data represent an even smaller portion of available evidence.

IQWiG have pulled no punches. It's not every day that a healthcare agency puts out a press release headlined "Pfizer conceals study data: Drug manufacturer hinders the best possible treatment of patients with depression". And to be fair to Pfizer, they are hardly behaving worse than their rivals. Essex Pharmaceuticals, the manufacturers of mirtazapine, also failed to provide all the data on their drug. GlaxoSmithKline, who make bupropion, did in fact play ball in this case, but this is the very same company who have been widely accused of suppressing data on Seroxat-induced suicides in kids.

So the problem is not unique to Pfizer, or to reboxetine. The problem is with the system which allows drug manufacturers to conduct as many trials as they like and only publish the results they want. The resulting publication bias is damaging to every field of medicine. One solution is for scientific journals to only publish trials that were pre-registered before they started, so that everyone knows about trials before they happen. But maybe this isn't enough. The IQWiG note that the US has a law mandating that clinical trial results must be registered and reported to the FDA, and call for the EU to adopt similar legislation.

Stafford, N. (2009). German agency refuses to rule on drug's benefits until Pfizer discloses all trial results. BMJ, 338. DOI: 10.1136/bmj.b2521

Antidepressants and Neurogenesis in Humans

How do antidepressants work? Some people will tell you that it’s all about neurogenesis. The theory goes that antidepressants increase the rate at which new neurones are created in a region called the dentate gyrus of the hippocampus, and that, somehow, this boom in the number of new hippocampal cells alleviates depression.

To date, however, all of the research linking antidepressants and neurogenesis has involved animals. It was generally assumed that if drugs altered neurogenesis in mice, the same thing happened in humans – but this was an assumption, and clearly a pretty big one. Now a new report from a New York-based team claims that antidepressants do enhance neurogenesis in people - Antidepressants increase neural progenitor cells in the human hippocampus.

The authors took post-mortem brain samples from three groups of people – those with no history of depression, those with depression who were not on antidepressants when they died, and depressed people who were on antidepressants. They counted the number of neural progenitor cells (NPCs) in the hippocampus using a stain which specifically marks these cells (anti-nestin).

Although, as in most post-mortem studies, the sample size was small (n=19 total), depressed people taking antidepressants when they died had much higher NPC numbers, indicating greater neurogenesis, compared to the other two groups. (Control: 360±246; untreated: 1119±752; treated: 17229±3443).

The picture above illustrates this; the brown cells are NPCs, and there are evidently more of them in the antidepressant-taking person on the right compared to the control on the left. The authors presumably picked these images because they look different, so, pinch of salt. But still, as an antidepressant user myself, it's nice to see what might well be going on inside my skull at this moment.
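For a rough sense of how far apart those groups are on the reported numbers, here is the arithmetic of a Welch-style comparison using the means and SDs quoted above. The per-group sample sizes are assumptions (the paper had only 19 subjects in total, seven of them antidepressant-treated), so treat this as an illustration of the calculation, not a re-analysis:

```python
# Rough Welch t-statistic from the reported means and SDs alone.
# Group sizes are ASSUMED for illustration (n=7 treated, n=6 untreated).
import math

def welch_t(m1, sd1, n1, m2, sd2, n2):
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return (m1 - m2) / se

# treated (mean 17229, SD 3443) vs untreated depressed (mean 1119, SD 752), as quoted above
t = welch_t(17229, 3443, 7, 1119, 752, 6)
print(f"Welch t ≈ {t:.1f}")   # a very large separation on these numbers
```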

The dentate gyrus of the hippocampus, the area where neurogenesis happens, was also larger in the antidepressant-treated group.

Is this evidence for the neurogenesis theory? Not exactly. It’s fairly good evidence that some antidepressants do boost hippocampal neurogenesis in humans, in accordance with the animal data. But we really don’t know what that means. It could just be a side effect, and nothing to do with how they work. I’ve previously written about some recent animal experiments finding that antidepressants have effects on behaviour even when neurogenesis is completely blocked. And notably, five of the seven antidepressant-treated patients in this study died from suicide. So, to put it bluntly, the drugs didn’t work very well, despite sending neurogenesis through the roof...

Boldrini, M., Underwood, M., Hen, R., Rosoklija, G., Dwork, A., John Mann, J., & Arango, V. (2009). Antidepressants increase neural progenitor cells in the human hippocampus. Neuropsychopharmacology. DOI: 10.1038/npp.2009.75

Do The Drugs Work? It's Complicated

Over at Comment is Free a week ago, Ed Halliwell proclaimed that "The Drugs Don't Work". The drugs being antidepressants. On this blog I've often written about antidepressants and the evidence that they work, or don't, so I was interested to see what he had to say.

Halliwell begins by noting that antidepressant prescriptions are rising. This, he declares, is a bad thing because antidepressants just don't work very well - "....A recent review found the SSRIs barely more effective than a placebo pill. Still, the NHS bill for prescribing them runs into hundreds of millions of pounds a year. It's a crazy situation, and the tide may be turning..."

This invokes the famous Kirsch et al 2008 PLoS review of antidepressants. This analysis concluded that six weeks' treatment with antidepressants was only slightly better than placebo for depression. But slightly better is still better than nothing. Kirsch et al is evidence that the drugs do, in fact, work.

This despite the fact that the analysis included "suppressed" unpublished drug company data unfavorable to antidepressants. So, almost uniquely in medicine, there can be none of the publication bias which plagues all clinical trials. In other words, an exceptionally high standard of evidence shows that the drugs do work. A bit. And in fact they probably work better than that, because Kirsch et al's paper was biased against antidepressants, as a series of classic posts by P J Leonard and Robert Waldmann pointed out.
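To make "slightly better" concrete: what matters is the size of the drug-placebo difference relative to the spread of HAMD scores. A back-of-envelope conversion, with both numbers below chosen purely for illustration rather than taken from Kirsch et al:

```python
# Illustration: converting a drug-placebo difference on the HAMD into a
# standardised effect size (Cohen's d). Both numbers are assumed, not Kirsch et al's.
hamd_difference = 2.0   # assumed mean drug-placebo difference in HAMD points
hamd_sd = 8.0           # assumed standard deviation of HAMD change scores

d = hamd_difference / hamd_sd
print(f"Cohen's d ≈ {d:.2f}  (small, but not zero)")
```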

Halliwell then says that instead of popping pills to ease our troubled minds, we should turn to "...simple, socially based steps everyone can take to improve their wellbeing. These include building good relationships, lifelong learning, being kind to others and exercise – not rocket science, but somehow we seem to have forgotten them."

Now I don't know what drugs you would need to take to think that "building good relationships" is "simple" - Ecstasy washed down with alcopops might do it. But once you sobered up and read a novel, or watched a play, or just remembered your last breakup, you would realize that relationships can actually be quite complicated. Not to mention being kind to others and lifelong learning, which are so simple that everyone has a PhD in Being Nice.

But such nonsense aside, the actual hard evidence that these kinds of things can treat clinical depression as opposed to just "improving wellbeing" is weak. There is some evidence that exercise can treat depression, for example, but it's often of poor quality, with no placebo, and publication bias could be rampant.

Indeed, as luck would have it, two Cochrane Reviews were published today. One was about antidepressants for the treatment of depression in primary care; the other was about exercise for depression.

Respectively, they concluded that - "Both tricyclic antidepressants and SSRIs are effective for depression treated in primary care" (not massively effective I hasten to add, but it's something) while the exercise one concluded that "Exercise seems to improve depressive symptoms in people with a diagnosis of depression, but when only methodologically robust trials are included, the effect sizes are only moderate and not statistically significant..."

I'm not endorsing these conclusions. Neuroskeptic readers know that I've long been critical of antidepressant trials and sometimes even Cochrane Reviews. I study antidepressants for a living, and I honestly don't know if they work for the millions of people who take them. (I'm fairly sure that they work in severe depression, but this isn't a very common disease.) But I do know that it's a really complicated issue. And I know that simplistic pro- or anti- medication rhetoric helps no-one and insults the intelligence of all.

But there's more here. Ed Halliwell has a history. He was the lead author of "In The Face of Fear", a deeply flawed Mental Health Foundation report from a couple of months back. The main message of the report was that anxiety is currently on the rise in Britain. We're getting more scared and anxious. There is an epidemic of fear - right now.

This claim was supported by two things - a completely unscientific opinion poll (which didn't really show much of an increase at all), and a reference to government mental health survey data from 1993 and 2007. These indeed show an increase in reported prevalence of anxiety disorders.

Now, whether surveys such as these give meaningful data is questionable, but quite apart from that, the MHF report was guilty of a much more glaring error. As I said at the time, it simply failed to mention that we also have data from 2000. And in 2000, rates of anxiety disorders were almost exactly the same as in 2007, and in some cases higher.

Now, the 1993, 2000 and 2007 figures are right next to each other in the government report (page 41 of this publicly accessible pdf): unless Halliwell has some kind of visual defect rendering him unable to see the middle of three columns of numbers, he must have seen this. The best available scientific data is that rates of anxiety have been stable for the last 10 years.

Stable, but very high. And the irony - and the tragedy - is that these massive reported rates of depressive and anxiety disorders in the British population (17.6% at any one time, and that is only for some disorders) are why antidepressants are so widely prescribed. Not these figures alone, but rather the more general belief that mental illness is very common - a belief promoted, inter alia, by organisations such as the MHF.

This belief - the "one in four" myth - is music to the ears of the very drug companies that Halliwell and others lambast for pushing pills. When tens of millions of people are told that they are ill and need treatment, can you blame them for turning first to pills instead of a wholesale reconstruction of human life and society?

Picturing the Brain

You may well have already heard about neuro images, a new blog from Neurophilosophy's Mo. As the name suggests, it's all about pictures of the brain. All of them are very pretty. Some are also pretty gruesome.

But images are, of course, more than decoration. There are dozens of ways of picturing the brain, each illuminating different aspects of neural function. Neuropathologists diagnose diseases by examining tissue under the microscope; using various stains you can visualize normal and abnormal cell types -

FDG-PET scans reveal metabolic activity in different areas, which can be used to diagnose tumors amongst much else -

Egas Moniz, better known as the inventor of "psychosurgery", pioneered cerebral angiography, a technique for visualizing the blood vessels of the brain using x-rays (this is the view from below) -

And so on. However, for all too many cognitive neuroscientists - e.g. fMRI researchers - the only kind of brain images that matter are MRI scans, traditionally black-and-white with "activity" depicted on top in colour -

fMRI is a powerful technique. But there is much more to the brain than that. Even a casual glance down a microscope reveals that brain tissue is composed of a rich variety of cells, the most numerous of which, glia, do not transmit neural signals - they are not "brain cells" at all. And there are many different types of brain cells, which inhabit distinct layers of the cerebral cortex - the cortex has at least six layers in most places, and different things happen in each one.

The brain, in other words, is a living organ, not a grey canvas across which activity patterns occasionally flash. Of course, no-one denies this, but all too many neuroscientists forget it because in their day-to-day work all they see of the brain is what an MRI scan reveals. This is especially true for those scientists who came to fMRI from a psychology background, many of whom have never studied neurobiology.

Maybe researchers should have to spend a week with a scalpel cutting up an actual brain before they're allowed to use fMRI - this might help to guard against the kind of simplistic "Region X does Y" thinking that plagues the field.

Does Self-Help Harm?

I love the BBC, but their online science and health articles have an unfortunate tendency to be, well, rubbish. At least, the headlines do. A while back I wrote about their proclamation that "Homeopathy 'eases cancer therapy'". The problem with that one was that the only treatments which worked turned out to not actually be homeopathic.

So when I saw the headline "Self-help 'makes you feel worse'", I suspected that whatever research they were reporting on might not have been about self-help at all. Call me a pessimist. But I was right. Go me. Bear with me, though, because the study in question raises some fascinating psychological issues.

The paper is Wood et al's Positive Self-Statements: Power for Some, Peril for Others. The authors aimed to study positive self-statements, the repetition of which is apparently recommended by many self-help books. One example they give is "I’m powerful, I’m strong, and nothing in this world can stop me". I would hope no-one actually believes that, because that would make them floridly manic, but you get the gist. Now, there can't be 92,000 books dedicated to telling people just to do that, so there's a little more to self-help than that. But positive affirmations are indeed popular.

Wood et al note that repeating such positive statements might not make everyone feel better. It could have the opposite effect in some people. If you believe yourself to be, say, unloveable, then repeating a "positive" phrase, such as "I am loveable", might make you think to yourself "No I'm not really, I'm horrible...", and feel worse. People with low self-esteem, the people who are most likely to seek self-help, would seem to be most at risk of this.

To test whether such a negative effect in fact occurred, they took some psychology undergraduates and told half of them to think to themselves "I am loveable" when they heard a bell ring, which happened every 15 seconds for 4 minutes. And as they predicted, the students who reported low self-esteem to begin with ended up feeling worse. Except they didn't report "feeling" worse; rather, they answered some questions in a more negative way:

Mayer and Hanson’s (1995) Association and Reasoning Scale (ARS), which includes questions such as, ‘‘What is the probability that a 30-year-old will be involved in a happy, loving romance?’’ Judgments tend to be congruent with mood, so optimistic answers suggest happy moods.

In a follow-up experiment, the authors tested the possibility that the reason why the low-self-esteem group "felt" worse after the positive statements was that they felt themselves unable to succeed in the task - only thinking happy thoughts - and perceived themselves as failing:

‘‘If I’m supposed to think about how I’m lovable and I keep thinking about how I’m not lovable, the ways in which I’m not lovable must be important. I must not be very lovable . . . .’’

So they found that the negative effect of the statements was only present when the students were asked to ‘‘focus only on ways and times in which the statement ["I am loveable"] is true’’, and did not occur when they were "allowed" to focus on ways the statement ‘‘may be true of you and/or ways in which [it] may not be true of you.’’

Fair enough. But there's a crucial limitation with this study, and it's one which also looms large in the study of psychotherapy. The problem is that when people buy a self-help book and decide to start repeating positive statements to themselves, they are doing more than just thinking some words. They are, or at least they believe that they are, taking positive steps which have the power to change their lives. They're turning over a new leaf - taking matters into their own hands. It's change they can believe in. Yes, they can!

Now, this (ugh) "empowering" sense of acting to improve things could bring about all kinds of positive changes. In which case, self-help books might "work" even if the specific techniques, taken in isolation, are useless or even harmful.

This is directly relevant to psychotherapy. Say you want to run a placebo-controlled trial of a certain kind of therapy in the treatment of depression. You recruit some depressed patients, flip some coins to randomize them to get therapy or placebo... but what "placebo" intervention do you use?

You might decide that the "empowering" feeling of doing something positive about your problems is a mere "placebo effect", so your control group should also experience it. In which case, they should be given some kind of meaningful therapy. Presumably it would have to be a different kind from the "real" therapy group, or it wouldn't be a trial, but then what do you use?

On the other hand, many psychotherapists would reply that this "placebo effect" is exactly what they spend a lot of time trying to produce - it's an integral part of the therapeutic process, and so the control group should not be given it. They should be given something much less involved, like non-specific "supportive talking", or nothing at all ("waiting list").

Now, this is an ongoing debate, and I'll be writing more about it in the future, but the lesson is: whenever you read about a "placebo-controlled" trial of any psychotherapy, it's worth thinking about what the "placebo" was.

Apologies to Savage Chickens for "borrowing" the wonderful cartoon. I couldn't resist...

Wood, J., Elaine Perunovic, W., & Lee, J. (2009). Positive Self-Statements: Power for Some, Peril for Others. Psychological Science. DOI: 10.1111/j.1467-9280.2009.02370.x

 