Is there really a perfect way to study humans?
Doctors, patients, and researchers are frustrated when a new healthcare approach works well in a study, but then has disappointing results in the “real world.”
A recent New York Times (NYT) article discussed new research that suggests workplace wellness programs don’t work. According to the study, they don’t increase employees’ exercise or decrease how much they spend on healthcare – even though previous studies indicate otherwise.
The article explains that this difference stems from the test itself: the new study was a randomized trial, whereas previous studies were observational studies. In a randomized trial, researchers assign volunteers by luck of the draw into groups, treat the groups differently, then look for differences.
In contrast, an observational study does not include a control group or an intervention; researchers just watch to see what happens over time.
As the article explains, if we only looked at the results of the observational study, we would think the wellness program worked. But the randomized trial showed that the program didn’t cause the differences. The likely explanation is that the people who chose to participate were already inclined to exercise more and spend less. The program didn’t change anything.
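To see how self-selection can fool an observational study, here is a small hypothetical simulation (not based on the actual study data). Each imaginary person has an underlying "health motivation" that drives both their exercise and, in the observational setting, whether they opt into the wellness program; the program itself does nothing. The observational comparison shows a big difference, while randomizing the same people shows essentially none.

```python
import random

random.seed(0)

# Hypothetical population: "motivation" drives both exercise and,
# observationally, whether a person joins the wellness program.
people = [random.random() for _ in range(10_000)]  # motivation, 0..1

def exercise_hours(motivation):
    # Exercise depends only on motivation; the program has zero real effect.
    return 2 + 6 * motivation

# Observational "study": motivated people self-select into the program.
joined = [m for m in people if m > 0.5]
skipped = [m for m in people if m <= 0.5]
obs_diff = (sum(map(exercise_hours, joined)) / len(joined)
            - sum(map(exercise_hours, skipped)) / len(skipped))

# Randomized trial: a coin flip, not motivation, decides group membership.
flips = [random.random() < 0.5 for _ in people]
treated = [m for m, f in zip(people, flips) if f]
control = [m for m, f in zip(people, flips) if not f]
rct_diff = (sum(map(exercise_hours, treated)) / len(treated)
            - sum(map(exercise_hours, control)) / len(control))

print(f"observational difference: {obs_diff:.2f} hours/week")
print(f"randomized difference:    {rct_diff:.2f} hours/week")
```

The observational comparison reports roughly a three-hour weekly difference even though the program does nothing, while the randomized comparison correctly comes out near zero. That gap is entirely the selection effect the article describes.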
This article shows the importance of doing randomized, controlled trials when we need to know if an intervention changes an outcome. But these trials are hard to do, because the intervention has to be under our control.
For example, a randomized, controlled trial of whether running marathons makes people healthier wouldn’t be practical, because we can’t randomly assign people to start running marathons.
Our best chance would be to do an observational study, where we watch what happens to marathoners’ health over time. We can compare the runners to a group of people matched to them for things like age, weight, smoking, drinking, and all kinds of other signs of general healthiness, but that still doesn’t make the comparison as accurate as if we did a randomized trial.
It’s possible to make a randomized trial even stronger by a procedure called “double blinding.” In this design, neither the people who get the intervention nor the people measuring the outcome know which group any individual belongs to.
For example, let’s say we’re doing a randomized double blind study of whether one medicine or another works better to lower blood pressure. All participants would get pills that looked exactly the same, but some people would be getting one medicine and some the other.
Neither the people taking the pills nor the people measuring the blood pressure would know who was getting which medicine. Doing it this way prevents biases that arise when people unconsciously expect certain results.
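The allocation procedure behind such a trial can be sketched as follows. This is a hypothetical illustration, not the protocol of any real trial: identical-looking pills carry neutral codes, a sealed key held by a third party maps codes to medicines, and neither participants nor staff see that key until the measurements are done.

```python
import random

random.seed(1)

# Hypothetical double-blind allocation sketch. Pills look identical and are
# labeled only "A" or "B"; a sealed key (held by a third party) maps codes
# to the two blood-pressure medicines.
participants = [f"P{i:03d}" for i in range(1, 7)]

codes = ["A", "B"]
random.shuffle(codes)
sealed_key = {codes[0]: "medicine_1", codes[1]: "medicine_2"}

# Each participant is randomized to a code; staff and participants
# see only the code, never the medicine name.
assignments = {p: random.choice(["A", "B"]) for p in participants}

# Blood pressure is measured without knowledge of the key
# (values here are made up for illustration).
measured = {p: round(random.gauss(130, 10), 1) for p in participants}

# Only after all data are collected is the key "unsealed" for analysis.
for p in participants:
    print(p, assignments[p], sealed_key[assignments[p]], measured[p])
```

The design choice that matters is the separation: randomization happens over codes, measurement happens over codes, and the code-to-medicine key stays with someone uninvolved in either step, so nobody's expectations can leak into the results.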
Of course, it’s not always possible to use blinding. In the randomized trial the NYT looked at, for example, there was no way to prevent participants from knowing if they were doing the wellness activities or not. But for many tests of interventions like new drugs, the randomized, double blind study is considered the gold standard to find out whether the new approach works.
Sometimes when a trial result doesn’t hold up in real life, we’re tempted to think the clinical trials gave the wrong result. Usually, the explanation is that real life is messy. Sometimes other variables overwhelm whatever effect an intervention showed in a clinical trial. This doesn’t mean the trial was wrong – it just means that the results weren’t the same under different circumstances.
Recognizing this problem, the FDA is starting a new program to include “real world evidence” – the experiences patients have outside the controlled conditions of a randomized study – in order to learn more about how effective new approaches really are.
It’s important to remember that well-designed clinical trials give us the best chance to find out whether new healthcare approaches work. Those are the advances that will translate into improved treatments and outcomes for us all.
-By Dr. Stacey Berg, professor of pediatrics and medical ethics and health policy at Baylor College of Medicine