John Oliver has weighed in on junk science, and it’s a must-watch. But behind the comedy, there are two serious questions: One, just how dangerous is a junk study? And two, how can you sort the scientific gold from the utter trash?
Junk science is more than just fuel for dumb Facebook posts. In the wrong hands, it can kill. In 1998, Andrew Wakefield released a study claiming that autism was caused by vaccines. How could such a thing be possible? It wasn’t. Wakefield’s data was collected using at best questionable and at worst outright unethical methods, because lawyers saw a chance to sue vaccine manufacturers.
But the damage was done: In 2016, measles, a disease declared eliminated in the U.S. in 2000, is back with a vengeance. And all because people didn’t know how to read a study. So what should you be looking for?
Good Studies Don’t Make Grand Claims
When approaching any scientific study, especially one that tells you what you want to hear, it pays to take a step back and ask if it passes a gut check. We all want to hear that we can eat ice cream with abandon, that we don’t have to exercise, and that people who disagree with us politically are dimwits who can’t screw in a lightbulb, but that doesn’t mean any of those conclusions are accurate. This is doubly true if the study is being used by somebody with an agenda to push; they’re unlikely to look closely, as long as the “science” tells them what they want to hear.
Wakefield is just the most obvious example. Anti-GMO advocates are still citing a study that claimed rats eating GMO corn grew tumors, but conveniently forget to mention that the study was retracted for a number of reasons, not the least of which was that the researchers used a strain of rat bred to grow tumors in the first place.
Good Studies Aren’t Self-Reported
Everybody’s had a moment where they forget something but have to put it on a form anyway. So you guess, or you make up an answer. It’s human, right? Who could it hurt? Now compound that with, say, being in a room with a scientist. Maybe it’s some dude who just gets on your nerves for some reason, and you want to screw with him. Maybe you want to impress a particularly attractive member of the team. Or maybe you just decide that no, university post-doc, I will not divulge my real biggest fear… not for ten bucks, anyway.
In short, self-reporting is garbage. It’s been shown, over and over, to be garbage. If you see a study that relies on “surveys” and doesn’t verify its data against anything else, it’s not worth reading.
Good Studies Use Large Sample Sizes Over Long Periods Of Time
Generally speaking, when you want to study something, you need as many data points as possible. That’s why a good study uses a large group of test subjects, also called a sample: the bigger the sample, and the longer you follow it, the more reliable the results. Unfortunately, big samples are expensive to put together, and long-term studies even more so, which is why they’re relatively rare. A study with a smaller sample should acknowledge that limitation, so look closely for it.
Good Studies Call Bullsh*t On Themselves
A well-written study owns up to possible flaws and is skeptical of its own results. This is why really important studies rarely trumpet massive, world-changing findings. For example, a widely shared study showing that dietary cholesterol had little impact on heart health in a cohort of Finnish men stopped short of saying that everything we know about cholesterol is wrong, because it didn’t track dietary changes over time. It’s still a compelling study, as it followed a thousand men, a third of whom had a genetic predisposition to high cholesterol, for twenty-one years, but the team regards its results only as “meriting further research.”
Good scientists will even go back and check their own results, even if it means admitting their study was crap. The man responsible for the belief that gluten is bad for you suspected his own results were garbage, so he redid the study in much, much more detail and found that, unsurprisingly, they were exactly that.
A Good Study Only Goes So Far
Finally, there’s the simple problem of human error, which takes all forms. Take any study that uses the body-mass index, or BMI. The BMI is a widely accepted medical tool, which is strange because it was developed by a 19th-century Belgian statistician who had no appreciable medical training. Even Ancel Keys, the man who coined the modern term for it, basically said it was just as crappy as other methods for studying obesity, but it was the best they had. This has fed the “obesity paradox,” a lengthy scientific argument over whether obesity is always a bad thing and what the medical definition of “obese” should even be in the first place.
The nature of science means constantly questioning assumptions and ideas, and, inevitably, sometimes those ideas fall apart. Before the 1980s, it was widely believed that stomach ulcers were caused by stress, but Barry Marshall was skeptical, and he ran experiments showing that the bacterium H. pylori caused ulcers instead. That got him and his collaborator Robin Warren a Nobel Prize, if you’re wondering how scientists view this.
So, as you’re reading any study, even a well-done one, remember that science is built on assumptions. Carefully tested, well-thought-out assumptions, but assumptions nevertheless, and that sometimes, it all comes crashing down.