When clinical trials of a potential drug are conducted, the data they produce are enormously complex and difficult to analyze. There is therefore great value in having researchers who can make sense of these data and turn them into something people can understand. Journals want articles that will get noticed, so they prefer papers that report ‘positive’ results – that is, results showing the new drug is better than the standard treatment. This puts pharmaceutical companies in a dilemma: if they simply publish all their results, readers will see that the new drug often isn’t actually any better.
What are ‘positive’, ‘negative’, and ‘neutral’ results? – Aron Govil
A ‘positive’ result is one where the effect you were looking for appears to be present; a ‘negative’ (or ‘neutral’) result is one where it does not. Researchers rarely expect an experiment to give a negative result: if you run an experiment that should work, you expect it to come out positive. The problem is that experiments sometimes give apparently positive results when there is in fact no real effect – these are called “false positives”. For example, I might hypothesize that smoking causes lung cancer and test this by administering cigarettes to mice to see if they get cancer. If some of the mice develop lung cancer, I record a positive result. But unless I know how often mice normally get lung cancer (the background rate), and unless I compare my smoking mice against an untreated control group, I can’t tell whether the smoking caused those cancers at all. Even if the proportion of diseased animals in my sample were ten times the normal rate, a single experiment would still give me nothing more than ‘a positive result’; it is only by doing many experiments that we get an idea of how reliable or significant our results are.
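To make the background-rate point concrete, here is a minimal simulation sketch (the tumour rate, group size, and number of repeats are made-up numbers, not real figures): the exposure does nothing at all, yet most experiments still record tumours and so look ‘positive’ unless they are set against a control group.

```python
# A minimal sketch with hypothetical numbers: the exposure has zero true effect,
# so tumours occur only at the background rate, yet most experiments still
# record at least one tumour and therefore look "positive".
import numpy as np

rng = np.random.default_rng(0)

background_rate = 0.05   # hypothetical spontaneous tumour rate in mice
n_mice = 40              # hypothetical number of mice per experiment
n_experiments = 1000     # repeat the experiment many times

# Tumour counts per experiment, driven purely by the background rate.
tumours = rng.binomial(n_mice, background_rate, size=n_experiments)

apparently_positive = np.mean(tumours > 0)
print(f"Experiments recording at least one tumour: {apparently_positive:.0%}")
```

With these numbers roughly 87% of experiments “find” cancer in the exposed mice, even though the exposure does nothing – which is why a result only becomes meaningful once it is compared against the background rate or a control group.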
False positives can easily creep into research via poor experimental design, flawed statistical analyses, or plain bad luck. One reason to worry when pharmaceutical companies play down their less-than-positive results is that doing so increases the risk of false positives contaminating subsequent research. When lots of researchers waste time investigating drugs that later turn out not to work, there are fewer resources left for genuine breakthroughs. It can even undermine public confidence in science and medicine – imagine how you might feel about your doctor if she prescribed you a drug based on research evidence that later turned out to be unreliable!
What’s this got to do with whether I kicked my dog?
Aron Govil says we now live in an age where much research is conducted as large numbers of experiments run across many conditions, and because individual experiments are rarely designed with enough power to detect small effects, many of them are underpowered. When lots of underpowered studies are run, the risk increases that false positives will be mistaken for real findings. Because pharmaceutical companies have access to more data than anyone else, they can sometimes take advantage of this by ‘cherry picking’ their published research: selecting only the positive results and leaving the less-than-positive ones unpublished, which creates a misleading picture of how effective their drugs are. As an analogy, imagine someone who kicks their dog most mornings but only ever tells people about the mornings they didn’t!
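As a sketch of how that cherry-picking distorts the picture (all figures here are hypothetical), the following simulation tests a drug with no true benefit in many small trials and ‘publishes’ only the favourable ones that happen to cross p < 0.05:

```python
# A minimal sketch of cherry-picking, with hypothetical numbers: a drug with no
# true benefit is tested in many small trials, but only trials that happen to
# look both favourable and "significant" are published.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

n_trials, n_per_arm = 200, 20   # many small, underpowered trials
true_effect = 0.0               # the drug genuinely does nothing

published = []
for _ in range(n_trials):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    _, p = ttest_ind(treated, control)
    # Only favourable, "significant" trials make it into print.
    if p < 0.05 and treated.mean() > control.mean():
        published.append(treated.mean() - control.mean())

print(f"Trials run:       {n_trials}")
print(f"Trials published: {len(published)}")
if published:
    print(f"Mean published effect: {np.mean(published):.2f} (true effect is 0)")
```

Typically only a handful of the two hundred trials get ‘published’, all of them positive and with a sizeable average effect, even though the drug does nothing – a misleading picture created purely by selective reporting.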
How common is ‘cherry-picked’, ‘negative-study’, or ‘failed-trial’ reporting?
Outcome reporting bias has been identified in a number of medical fields. In one review, 89% of the studies reported positive results, but only 51% had sufficient power to detect an effect of the size they reported – in other words, almost half the studies were underpowered. When researchers plan their experiments, they sometimes choose ‘semi-biased’ designs that maximize the chance of getting a positive result, even though they don’t know in advance whether the effect will be big enough to be worth reporting.
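To see what ‘sufficient power’ means in practice, here is a minimal simulation with an assumed effect size and sample size (not figures taken from the review above), estimating how often a small two-arm trial would detect a real but modest effect:

```python
# A minimal power check with hypothetical numbers: estimate, by simulation, how
# often a small two-arm trial detects a modest true effect at p < 0.05.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)

true_effect = 0.3     # hypothetical standardised effect size
n_per_arm = 30        # hypothetical number of patients per arm
n_simulations = 5000

detections = 0
for _ in range(n_simulations):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    _, p = ttest_ind(treated, control)
    detections += p < 0.05

print(f"Estimated power: {detections / n_simulations:.0%}")
# Around 20% with these numbers: a real effect of this size would usually be missed.
```

With these assumed numbers the power comes out at roughly 20%, far below the conventional 80% target – which is what it means for a trial to be underpowered.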
Conclusion:
So, it is possible that positive results are more likely to be published in scientific journals than negative ones. This would be unfortunate, because false positives waste a great deal of time and money on research that ultimately leads nowhere.
The fact that all this research is reported in academic journals makes the problem worse, because journals have a strong incentive to publish interesting research that is likely to be read. Industry-funded research may involve a conflict of interest, although it often seems to produce reliable results nonetheless.