A major problem with educational or informative content, such as on YouTube or social media, is an overreliance on studies. I see this a lot in anything health or fitness related. Studies are much less useful than their presumed authority would suggest. The assumption is that because studies are perceived as objective rather than mere opinion, they must be more useful, but studies have four major problems:
1. Interpreting the study, or more specifically, misinterpreting it. How a study is interpreted is subjective, so it’s impossible to be entirely unbiased when using studies.
2. Garbage-in-garbage-out. This is where the methodology of the study or the sample does not provide useful information even if the study appears to pass scientific rigor. Examples include data mining, or academic fraud, in which data is manipulated outright. A notable example is an influential and now-retracted study by Dan Ariely, ironically, about dishonesty. A study becomes popular and widely cited because the results seem plausible and confirm people’s preexisting biases or beliefs about society, only to fail to replicate later.
3. Inapplicability. This is where it’s hard to interpret what the results mean on a practical/individual level. A study of resistance training among 80-year-olds is going to be less useful for 20-year-olds. A study composed of untrained subjects may not generalize to subjects who have training experience.
Or mouse models. I recall a study shared on Twitter about how mice fed high-fructose diets did not gain weight, with the implication that this could work for humans. If only humans could be turned into mice, obesity would be cured. From Google: “[Residents of] Chiapas, Mexico drink an average of 2.2 liters of Coca-Cola per day, which is more than any country in the world per capita.” Mexico has among the highest obesity rates in the world. All the evidence in humans suggests that sugar promotes obesity, not protects against it.
4. Confirmation bias. In the context of meta-analysis, this means omitting studies that go against one’s preexisting beliefs. If you want to show that low-carb or low-fat diets work, simply ignore the opposing studies. I remember a video in which the health guru John McDougall mentioned a study from the ’70s about how eating bread purportedly led to weight loss among a sample of college students. This study may be 100% true insofar as those people lost weight eating bread, but what about all the studies in which nothing happened or subjects gained weight? And was this study ever replicated?
Studies typically involve making inferences from large groups of people. Take the typical headline: “Drinking wine is associated with a 1% increased incidence of death over a decade.” Although perhaps interesting or true in the aggregate, this is not that useful at a practical or individual level. How does one quantify a 1% increase over a 10-year period? And what does it mean for how one ought to change one’s drinking habits, if at all?
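To make that concrete, here is a back-of-the-envelope sketch. The 5% baseline ten-year mortality is a made-up number, and I’m reading the headline’s “1%” as a relative increase; both are assumptions for illustration only.

```python
# Back-of-the-envelope: what "1% increased incidence of death over a
# decade" could mean in absolute terms. The 5% baseline ten-year
# mortality is a made-up assumption, and the headline's 1% is read
# here as a relative increase; both are for illustration only.
baseline = 0.05                       # hypothetical 10-year mortality, no wine
with_wine = baseline * 1.01           # the headline's "1% increase"

absolute_diff = with_wine - baseline  # 0.0005
print(f"absolute risk difference: {absolute_diff:.4%} over 10 years")
print(f"about 1 extra death per {round(1 / absolute_diff):,} drinkers per decade")
```

Framed as one extra death per two thousand drinkers per decade, it becomes much less obvious that anyone should change anything; and if the headline instead meant an absolute 1-percentage-point increase, the numbers look entirely different. The headline alone can’t tell you which.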
A study may show that, on average, people who train a certain way get slightly stronger, or lose a small amount of weight on a certain diet. But because individuals are not averages, it’s hard, if not impossible, to predict where any given person will fall on the spectrum of possible outcomes.
For a study to be prescriptive, it has to be overwhelmingly conclusive with a very large effect size (e.g. smoking appreciably lowers life expectancy), which is a bar very few studies can clear. Thus, studies are more useful as description, such as averages over large groups of people, than as prescription at an individual level, i.e. what one ought to do.
Other times, the study is correct but misinterpreted. For example, there was a famous study purporting that bulking (eating at a large calorie surplus) produces more ‘lean mass’ compared to maintaining. This helped popularize bulking, starting in the early 2000s, in contexts such as bodybuilding and strength sports. But lean mass includes everything that is not fat, not just muscle, so it’s intellectually dishonest to conflate muscle mass with lean mass; yet you see this a lot on YouTube and social media.
Other studies fail to control for important variables. A study may claim, “Humanities majors earn less money than STEM majors,” which, although broadly true, does not take into account the ranking of the school, which matters at an individual level.
It’s hard to overemphasize how important this is. Consider a contrived study that tests the perceived satiety/fullness of different macros. The participants are all fed a high-protein meal first, followed by a high-carb meal and finally a high-fat meal. The participants overwhelmingly (yielding that coveted p < .01 value) report the high-fat meal as the most satiating. Case closed? Hardly: perhaps they were merely full from the first two meals (see the sketch below). Or remember all the hype about the Twinkie Diet? The professor was simply eating at a deficit; there is nothing special about Twinkies insofar as weight loss is concerned.
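Back to the contrived satiety study: below is a toy simulation (all numbers made up) where satiety ratings depend only on how full the subject already is, never on the macros, yet the fixed meal order still manufactures a huge effect for the high-fat meal.

```python
# Toy simulation: a fixed meal order produces a "significant" result
# even though macros have zero effect on satiety in this model.
import random
import statistics

random.seed(0)
n = 30  # hypothetical participants

def rate(meal_position):
    # Rating depends only on accumulated fullness from earlier meals.
    carryover = 2.0 * meal_position
    return 5.0 + carryover + random.gauss(0, 1)

protein = [rate(0) for _ in range(n)]  # always served first
fat     = [rate(2) for _ in range(n)]  # always served last

# Paired t-statistic for fat vs. protein ratings.
diffs = [f - p for f, p in zip(fat, protein)]
se = statistics.stdev(diffs) / n ** 0.5
print(f"t = {statistics.mean(diffs) / se:.1f}")  # enormous t, so p << .01
```

Counterbalancing or randomizing the meal order would erase the effect entirely; the design, not the fat, produced the p-value.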
Not to mention wishy-washiness, which is a major problem too. It’s not uncommon in a nutrition study for a “high-fat diet” to be defined as only, say, 40% of calories from fat, compared to 30% for the “low-fat diet”. A 10-percentage-point gap between “high fat” and “low fat” seems so small as to be nearly meaningless. When I think of “high fat,” what comes to mind is more like 80-90%.
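For scale, here’s what that 10-point gap amounts to in actual food, assuming a hypothetical 2,000 kcal/day intake and roughly 9 kcal per gram of fat:

```python
# Grams of fat implied by the study definitions, under an assumed
# 2,000 kcal/day intake (fat is ~9 kcal per gram).
KCAL_PER_DAY = 2000
KCAL_PER_GRAM_FAT = 9

for label, fraction in [('"high-fat" (40%)', 0.40), ('"low-fat" (30%)', 0.30)]:
    grams = KCAL_PER_DAY * fraction / KCAL_PER_GRAM_FAT
    print(f"{label}: {grams:.0f} g of fat per day")  # ~89 g vs ~67 g
```

A gap of roughly 22 g of fat per day, about a tablespoon and a half of oil, is hardly two different dietary worlds.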
So what is the alternative? I think n=1 studies are more useful. You either run trials on yourself to see what works best, such as for dieting or fitness (as someone like Tim Ferriss does), or you copy the protocol of someone who is similar to you, e.g. similar age, weight, or other pertinent variables. Best of all, you can construct the trial any way you want, so no wishy-washiness: you can make a high-fat diet that is actually high in fat. So does a bread diet work? Try it and find out.
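For what it’s worth, here is a minimal sketch of what such an n=1 trial might look like: alternate two diets week by week, log your morning weight, and compare the average weekly change under each condition. The diets and weights below are entirely hypothetical.

```python
# Minimal n=1 sketch: weekly A/B alternation with daily morning weigh-ins.
# All data below is hypothetical.
weeks = [
    ("high_fat", [82.0, 81.8, 81.7, 81.5, 81.4, 81.2, 81.1]),
    ("bread",    [81.1, 81.2, 81.1, 81.3, 81.2, 81.4, 81.3]),
    ("high_fat", [81.3, 81.1, 81.0, 80.8, 80.7, 80.6, 80.4]),
    ("bread",    [80.4, 80.5, 80.4, 80.6, 80.5, 80.7, 80.6]),
]

changes = {}
for diet, w in weeks:
    changes.setdefault(diet, []).append(w[-1] - w[0])  # kg change per week

for diet, deltas in changes.items():
    avg = sum(deltas) / len(deltas)
    print(f"{diet}: {avg:+.2f} kg/week on average over {len(deltas)} weeks")
```

Alternating blocks, rather than running each diet once, guards against the trend being driven by whatever else happened to change that month.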