A neat little study in BMC Medicine investigates how newspapers report on clinical research. The authors systematically compared the tone and accuracy of write-ups of clinical trials of herbal remedies with those of trials of pharmaceuticals. The results might surprise you.
The research comes from a Canadian group, and most of the hard slog was done by two undergrads, who read through and evaluated 105 trials and 553 newspaper articles about those trials. (They didn't get named as authors on the paper, which seems a bit mean, so let's take a moment to appreciate Megan Koper and Thomas Moran.) The aim was to take all English language newspaper articles about clinical trials printed between 1995 and 2005 (as found on LexisNexis). Duplicate articles were weeded out and every article was then rated for overall tone (subjective), the number of risks and benefits reported, whether it reported on conflicts of interest or not, and so forth. The trials themselves were also rated.
As the authors say:

This type of study, comparing media coverage with the scientific research it covers, is a well recognized method in media studies. Is the tone of reporting different for herbal remedy versus pharmaceutical clinical trials? Are there differences in the sources of trial funding and the reporting of that issue? What about the reporting of conflicts of interest?

There was a range of findings. Firstly, newspapers were generally poor at reporting important facts about trials, such as conflicts of interest and methodological flaws. No great surprise there. They also tended to understate risks, especially with regard to herbal trials.
The most novel finding was that newspaper reports of herbal remedy trials were quite a lot more likely to be negative in tone than reports of pharmaceutical trials. The graphs here show this: out of 201 newspaper articles about pharmaceutical clinical trials, not one was negative in overall tone, and most were actively positive about the drug, while the herbs got a harsh press, with roughly as many negative articles as positive ones. (Rightmost two bars.)
This might partly be explained by the fact that slightly more of the herbal remedy trials found a negative result, but the difference in this case was fairly small (leftmost two bars). The authors concluded that
Those herbal remedy clinical trials that receive newspaper coverage are of similar quality to pharmaceutical clinical trials ... Despite the overall positive results and tone of the clinical trials, newspaper coverage of herbal remedy clinical trials was more negative than for pharmaceutical clinical trials.

Bet you didn't see that coming - the media (at any rate in Britain) are often seen as reporting uncritically on complementary and alternative medicine. These results suggest that this is a simplification, but remember that this study only considered articles about specific clinical trials - not general discussions of treatments or diseases. The authors remark:
[The result] is contrary to most published research on media coverage of CAM. Those studies consider a much broader spectrum of treatments and the media content is generally anecdotal rather than evidence based. Indeed, journalists are displaying a degree of skepticism rare for medical reporting.

So, it's not clear why journalists are so critical of trials of herbs when they're generally fans of CAM the rest of the time. The authors speculate:
It is possible that once confronted with actual evidence, journalists are more critical or skeptical. It may be considered more newsworthy to debunk commonly held beliefs and practices related to CAM, to go against the trend of positive reporting in light of evidence. It is also possible that journalists who turn to press releases of peer-reviewed, high-impact journals have subtle biases towards scientific method and conventional medicine. Also, journalists turn to trusted sources in the biomedical community for comments on clinical trials, both herbal and pharmaceutical, potentially leading to a biomedical bias in reporting trial outcomes.

If you can forgive the slightly CAM-ish language ("biomedical", indeed), these are some good suggestions - but we don't really know. This is the problem with this kind of study (as the authors note): the fact that a story is "negative" about herbs could mean a lot of different things. We also don't know how many other articles there were about herbs which didn't mention clinical trials, and because this study only considered articles referring to primary literature, not meta-analyses (I think), it leaves out a lot of material. Meta-analyses are popular with journalists and are often more relevant to the public than single trials are.
Still, it's a paper which challenged my prejudices (like a lot of bloggers I have a bit of a persecution complex about the media being pro-CAM) and a nice example of empirical research on the media.
Tania Bubela, Heather Boon, Timothy Caulfield (2008). Herbal remedy clinical trials in the media: a comparison with the coverage of conventional pharmaceuticals. BMC Medicine, 6, 35. DOI: 10.1186/1741-7015-6-35