One of the more productive ways to use the methods of science is to examine the scientific process itself. A “meta-science” study (like a recent one examining brain-scan research) can help tell us when research approaches aren’t producing reliable data and can potentially show what we might need to change to get those approaches to work.
Now, someone has applied a bit of meta-science to an area of research where we shouldn’t expect to see improvements: homeopathy. A group of Austrian researchers looked into why a reasonable fraction of the clinical trials on homeopathy produce positive results. The biggest factor, the researchers found, is that the trials that show homeopathy is ineffective are less likely to get published.
A method to the madness
There are plenty of ways to test potential treatments, but over the years, problems have been identified in almost all of them. That’s left the double-blind, randomized clinical trial as the most trusted method of getting rid of some of the biases that make other approaches less reliable. But even in double-blind trials, problems can creep in. There’s always a bias toward publishing positive results—ones where the treatments have an effect.
As a result, we can’t always be sure whether we are seeing positive results because a treatment works or because negative results simply aren’t getting published. This has been a notable issue with some of the fad “cures” for COVID-19.
To deal with that issue, the field has settled on preregistering clinical trials. In these cases, the design of the trial, the outcomes being measured, and other details are placed in a public database before the trial even starts. Many research journals agreed that preregistration would be a requirement for later publication, meaning that anyone who hoped to publish results in the future would have a compelling reason to preregister. But unregistered trials can usually still get published in lower-profile journals.
This can help us identify when only positive results are being published. And that’s one of the analyses that was done by the Austrian researchers.
With and without
To get started, the team of researchers scanned a set of clinical-trial registration databases for trials involving homeopathy. The researchers also searched the published literature on the topic and, where possible, matched a publication to the preregistered trial that produced it. In some cases, publications were the results of trials that hadn’t been preregistered; in others, a preregistered trial produced no publications.
A few trends were clear. One is that a growing fraction of papers on homeopathy trials is the product of preregistered trial designs—the number has grown to roughly 75 percent in the two decades since preregistration started. The second trend is that roughly half of the preregistered trials don’t result in publication. Some of these trials undoubtedly don’t go to completion for a variety of mundane, uninteresting reasons; the rate is not very different from what you see in studies of actual medicine. Still, these gaps leave plenty of room for a bias against publishing negative results.
Is there any indication of this bias? That’s where we get to the new paper’s strongest results. If you do a meta-analysis of all the publications resulting from trials that weren’t preregistered, homeopathic treatments outperformed placebo by a statistically significant margin. If you look at the publications that resulted from trials that had been preregistered, there was no statistical difference between homeopathy and placebo.
In other words, when researchers have to commit to a study design, their results don’t show homeopathy to be effective. But when researchers can write up whatever results they choose, homeopathy suddenly looks good.
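To see why selective publication alone can make an inert treatment look effective, consider a toy simulation (this is an illustrative sketch, not part of the Austrian team’s analysis; the trial counts and publication rates are made-up assumptions). Every simulated trial tests a treatment with zero true effect, but positive “significant” results always get published while the rest mostly stay in the file drawer:

```python
import random
import statistics

random.seed(42)

def run_trial(n=50, true_effect=0.0):
    """Simulate one placebo-controlled trial of a treatment with no real effect."""
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    placebo = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(placebo)
    # Standard error of the difference in means, used for a z-like score.
    se = (statistics.variance(treated) / n + statistics.variance(placebo) / n) ** 0.5
    return diff, diff / se

trials = [run_trial() for _ in range(1000)]

# Selective publication: positive "significant" results (z > 1.96) always
# appear in print; everything else is published only 20% of the time.
published = [d for d, z in trials if z > 1.96 or random.random() < 0.2]
all_effects = [d for d, z in trials]

print(f"mean effect, all trials:     {statistics.mean(all_effects):+.3f}")
print(f"mean effect, published only: {statistics.mean(published):+.3f}")
```

Pooling every trial gives an average effect near zero, as it should; pooling only the “published” subset gives a clearly positive average. That is the signature the preregistration comparison is designed to expose, since registries record the trials the literature never sees.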
Beyond the nonsense
The researchers also noted an odd feature of the data. In the past, one explanation offered for the apparent success of homeopathy trials is a strong placebo effect, generated by the extensive personal interaction between people seeking treatments and the practitioners. But the researchers behind the recent paper were able to find only a single preregistered trial with a protocol that included these interactions.
Beyond that, these results are exactly what you expect, given that there’s no reason for homeopathy to do anything. The first paper referenced in the new study is entitled “Proposed mechanisms for homeopathy are physically impossible.”
But the new study is significant in ways that go beyond debunking obvious nonsense. Worries about publication biases apply to actual fields in science and medicine, and the paper provides a good indication that one of the tools we’ve developed to help us analyze bias can work as it was intended.