I’m sure many readers of BioethicsBytes are already familiar with the TED Talks phenomenon but, as far as I’m aware, this is the first time we’ve directly recommended one of their videos on this site. TED events, and later online videos, invite participants to give “the talk of their lives” in 18 minutes or less. The result is a collection of pithy and thought-provoking presentations on a variety of topics.
Ben Goldacre is a medically-qualified writer who has devoted much of his time to drawing the public’s attention to examples of pseudoscience and inappropriate uses of science, originally via his regular Bad Science column in the Guardian newspaper and later in his first book of the same name (see here for a review of the book Bad Science).
In 2012, Goldacre turned his penetrating gaze on the pharmaceutical industry. The results, now available in his second book Bad Pharma, have brought to a wider audience concerns about the ‘tricks’ played by drug companies to make their products seem more successful than is warranted.
This TED Talk, What doctors don’t know about the drugs they prescribe, was delivered in Washington DC in April 2012. Within the constraints of a 13.5-minute talk, Goldacre gives a number of examples of “publication bias”, which he describes as “a cancer at the core of evidence-based medicine”. Companies developing a new product have been under no obligation to publish (in peer-reviewed journals) studies that show their product in an unflattering light. They have been at liberty to cherry-pick only the research that might be interpreted to support claims that the new drug is (a) safe and (b) an improvement on existing alternatives. Data that were not favourable could be brushed under the carpet (or go “missing in action”, as Goldacre terms it).
Even if you compare the studies that drug companies elect to report to the authorities responsible for granting a licence to a new medicine (in the USA this is the Food and Drug Administration, the FDA) with those that were also published in peer-reviewed journals (the “gold standard” for scientific rigour), a shocking skew emerges. In one meta-analysis of research on new antidepressants, it turned out that 37 out of 38 studies with positive findings had also made it into peer-reviewed articles, whereas just 3 out of 36 negative studies had been published in this way.
The blame here must be partially shared by the journals themselves; in the recent past there were few outlets for negative findings and, even now, many journals would rather carry good-news stories than give over valuable pages to reports of novel compounds that don’t work.
This omission, argues Goldacre, is serious. It can lead, and has led, to patient deaths. For example, research in the early 1980s showed that a drug for treating abnormal heart rhythms led to more deaths in the study group than among the control patients (who received a placebo), and this contributed to a decision to stop development of the medicine. The information, however, was not shared with the rest of the scientific community via peer-reviewed journals. As a consequence, other companies actively developing similar compounds were not adequately warned about the risks, and the same accumulation of deaths was mirrored in other trials.
Goldacre also shares an example from his own experience as a general practitioner, when he investigated all of the published data on a new antidepressant he was about to prescribe to a patient who had proven unresponsive to other medication. He later found out that the one study showing the drug, reboxetine, to be more effective than placebo was countered by six unpublished studies that did not find the new medicine to be better than the dummy pills. Similarly, the data showing reboxetine was better than the best available alternatives were, in fact, swamped by unpublished studies showing that it was not. I rather suspect that other GPs are not as conscientious as Goldacre in checking the published accounts of a new drug’s efficacy, but even he had been misled by the lack of availability of all of the evidence.
Along the way, Goldacre makes reference to a study on the anti-influenza drug Tamiflu in the freely available journal PLoS Medicine (see The Imperative to Share Clinical Study Reports: Recommendations from the Tamiflu Experience, and the accompanying editorial Open Clinical Trial Data for All? A View from Regulators). He also cites a 2012 paper in the journal Nature (subscription required) in which researchers found that they could replicate only 6 out of 53 previously published experiments on the underlying biology of cancer (Drug development: Raise standards for preclinical cancer research). The authors of this latter paper argue that a significant contribution to the failure of cancer therapies in clinical trials actually stems from shoddy or mistaken work. As they note, “Some non-reproducible preclinical papers had spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis.”
Using this TED Talk in bioethics education: Aside from the general value of any serious-minded individual knowing about the issues discussed in the talk, there is particular merit in students of biology, philosophy or critical thinking recognising that there are some negative aspects to “how science works”. The video is only 13.5 minutes long and there is definitely value in showing it to a class in its entirety. Alternatively, watching the talk could be set as a homework assignment.
If time is only available to watch particular sections, I recommend discussion of the flaws in preclinical cancer research (2:05-3:03) and the section on the analysis of antidepressant trials (6:50-10:01, especially 8:23 onwards). These two clips offer evidence of two different, but related, problems in research.
The difficulties in replicating basic research are not considered (at least not by the authors of the Nature article) to be evidence of research fraud. Instead, they are taken as a warning to the scientific community to ensure that they have adequately checked and double-checked their own findings, and as a call on journals and funders to be more willing to accept negative and imperfect data as legitimate outputs of good science, rather than requiring unrealistically clear results (I have discussed their paper more fully in a post over at Journal of the Left-Handed Biochemist).
In contrast, Goldacre is clear that the deliberate cherry-picking of trials to publish only those that support the intended outcome is research misconduct.