Why You Need To Be Critical Of Nutrition Science Articles
“We found chocolate is the healthiest type of food!” — study funded by Big Chocolate.
One of my favorite things is reading scientific papers and pulling out interesting findings to share with people. I also love reading articles where journalists cite scientific, peer-reviewed studies. A lot of my articles on here are inspired by scientific studies.
There’s something really fascinating about taking the latest study published in a big scientific journal, and interpreting it in a new way for other people to read. Science communication is vital in a happy, healthy, democratic community, and anything that fosters that relationship is a good thing in my book.
That being said, there’s a dirty secret not many people seem to be talking about: some scientific studies are funded by industry.
What does that mean? It means that someone working in the chocolate industry, or the beer industry, or the sugar industry, thought it would be a good idea to get some scientists to prove their product was healthy, or that a competitor’s was unhealthy.
So they fund these studies to find out if their product really is the new superfood.
Nothing wrong with that in theory, of course.
There are two reasons why you need to be wary of these types of articles.
1. People like new, sexy, interesting scientific results.
Academia should be completely unbiased and rational. It's a science, with no room for anything but pure logic. But academia is run by humans, and humans are flawed.
When scientists perform an experiment and get a boring result, it doesn’t tend to get published and distributed.
For example, if someone runs a study to test whether having lasers pointed at your eyeballs helps you retain information, but gets the (boring) result that, surprise surprise, it doesn't, it probably won't be published, because the headline "Lasers in your eyeballs don't help you study" isn't going to excite anyone. That's old news. We already knew that.
But that means the next person to come up with that idea will do a quick Google, believe they’re the first ones to think of it, and waste a lot of time and money testing the idea out because the boring results weren’t distributed.
Logic dictates that publishing negative results is a good thing, because it helps people avoid repeating the same mistakes. But the humans doing the science treat it like a bad thing.
Negative results are dull. They don’t show us anything new. So if there’s a study that shows chocolate is bad for you, nobody will be interested in publishing that finding because people don’t care.
Journals don’t want to showcase dull, old news; they want fresh, exciting takes. So they generally refuse to publish things that aren’t “cool.”
Instead, these results sit in a drawer, unread, indefinitely. This is known as the File Drawer Effect.
It’s not so bad on its own — it can just mean people repeat failed experiments over and over again, because the earlier failures weren’t published. It’s a waste of resources and time, but no real harm is done.
As long as you bear in mind that for every “chocolate is the new superfood!” study you read, there are probably five that show it isn’t the new superfood, that just never got published.
And it doesn’t end there.
2. Researchers unconsciously favor the people paying them.
Let's have a thought experiment. Let's say you're a food scientist. You were just funded by Big Chocolate to find out if eating a piece of chocolate per day has long-term, positive effects on health.
It’s a lot of money. And you love your job.
So, you do the experiment and, wow, you find out that chocolate is healthy and good for you!
That's great. Big Chocolate is more likely to fund you in the future since they're happy with these results. You're likely to get published, because this is new and exciting information.
It's possible that no "bad science" was done here. But it's unlikely. One study (not funded by any large industry) found that nutritional studies funded by industry were 4–8 times more likely to find positive effects than studies without industry funding.
Results get published. You’re thrilled with another publication. The public rejoices as they eat more chocolate without feeling like they’re being unhealthy. Big Chocolate gets a bit richer. You go about your life without feeling like you’ve done anything wrong.
Pointing these findings out to scientists won't do much good, because we like to think of ourselves as rational and logical. Tell anyone that number and they might not disagree with it, but they'll think to themselves: that doesn't apply to me.
But the numbers don’t lie. For whatever reason, whether intentional or not, when an industry funds a study, the results are likely to favor that industry. And that’s not good science.
This is what happens when people expect science to be unbiased, infallible. As a practice, science is free of prejudice. But scientists are human. Like most people, they like money, interesting results, influencing others.
Researchers don't think they're being influenced. Companies don't outright pay for good results. Nor are bad results purposefully buried.
But the numbers don't lie, and the trend is impossible to ignore. When industries pay for studies to be done, the outcome is far more likely to be positive. And the negative results aren't likely to be published, because they're not very interesting to readers.
This can be as big as actually skewing numbers, or it can just mean publishing certain positive results and withholding other negative ones, because they’re not “significant enough to warrant publication.”
Next time you read an article in the paper claiming that alcohol increases longevity, do yourself a favor and dig up the study behind it before you start downing the beer.
Check: were the results accurately represented in the article you read? Were the possibilities of mistakes downplayed? And most importantly, who funded the study?
Science is a pure form of research. But scientists are human, and fallible. We can all work towards better science: researchers by performing more unbiased experiments and insisting on publishing all results, and readers by digging into the whys behind the science.