Having academic credentials is no assurance of quality

Writing in 2009, Eliezer Yudkowsky discusses the Shangri-La Diet, a diet that involves the added consumption of oil:

Once upon a time, Seth Roberts (a professor of psychology at Berkeley, on the editorial board of Nutrition) noticed that he’d started losing weight while on vacation in Europe. For no apparent reason, he’d stopped wanting to eat.

Some time later, The Shangri-La Diet swept… the econoblogosphere, anyway. People including some respectable economists tried it, found that it actually seemed to work, and told their friends.

It didn’t work even though the ‘theory’ behind it seemed plausible:

I tried it. It didn’t work for me.

Now here’s the frustrating thing: The Shangri-La Diet does not contain an obvious exception for Eliezer Yudkowsky. On the theory as stated, it should just work. But I am not the only person who reports trying this diet (and a couple of variations that Roberts recommended) without anything happening, except possibly some weight gain due to the added calories.

Fast-forward to 2023, and he’s still trying the oil diet.

No shit it did not work, at least not long term. If the oil diet actually worked, people would be singing its praises from the rooftops. This does not mean that popular, much-praised diets are effective, but it’s reasonable to assume that unpopular diets are ineffective too.

There is so much demand for weight-loss solutions, and there are so few barriers to entry or other market frictions, that if someone actually had a real solution, it would quickly become the only solution. We would not need thousands of hours of YouTube content, much of it of dubious value, about macro counting or time-restricted ‘eating windows’. We would not have to debate the merits, or lack thereof, of five or so competing diets; there would be just ‘The Diet’. Overall, the data shows that all diets perform about equally poorly, so it’s reasonable to assume that no diet, regardless of popularity, works well, owing to the stubborn properties of human biology.

He writes, “Though it’s not like Roberts is a standard pseudoscientist, he’s an academic in good standing.” (He was; he died in 2014.) Umm…no. Wikipedia lists Seth Roberts as a “professor of psychology at Tsinghua University in Beijing and emeritus professor of psychology at the University of California, Berkeley.” What expertise could he possibly have had in human biology, or nutrition for that matter, that would qualify him to offer health advice? (Though given the lousy track record of diets and weight-loss pills overall, it’s not like the experts are much better in this regard.)

The dirty secret is that having letters after one’s name is no assurance of the quality of research or the veracity of any claims brought forth. After obtaining tenure, professors have the discretion to study anything, even fields completely unrelated to their putative expertise. Peer review can be bypassed by posting preprints or by writing books, in the latter case leveraging one’s credentials for branding purposes even if the content is nonsense or wrong. Because tenure is a lifetime appointment and an expensive investment on the part of the university, the extreme vetting involved in granting it is meant to prevent exactly this and to ensure years of productive research, but it’s no guarantee.

Just as the humanities have problems with bullshit papers that do not replicate (the so-called replication crisis), postmodernist verbiage, and outright fraud, STEM is not immune to these problems either.

CV fluffing/padding is common (“publish or perish”). I saw a math paper in which the author, a professor in Spain, had pretty much copied a bunch of formulas from a handbook of special functions and published that. It’s not that his papers are bad…they are useful as reference guides, but it shows how easy it is to pad one’s publication count by regurgitating existing material with small modifications to make it look like original research.

Or multiple authors are used, sometimes as many as half a dozen for a five-page paper, so each can add the publication to his or her CV. I have seen this many times, especially in computer science. As a recent example, the paper “Auditing Elon Musk’s Impact on Hate Speech and Bots” weighs in at five pages and has 6(!) authors. That seems a tad excessive. Another example is the paper “Bot or Human? Detecting ChatGPT Imposters with A Single Question,” which runs 12 pages and has four authors. All you need to do is find some collaborators, run some simulations in R or Python for a positive result, and then divide up the authorship credit.
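
To make that last point concrete, here is a toy Python sketch, purely hypothetical and not drawn from either paper above, of how cheaply a “positive result” can be manufactured: simulate pure noise over and over and stop at the first run that happens to clear p < 0.05, which chance alone delivers roughly one time in twenty.

```python
# Toy sketch: manufacturing a "significant" result from pure noise.
# Hypothetical illustration only; not the method of any real paper.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

for run in range(1, 101):
    # Both groups are drawn from the SAME distribution: no real effect exists.
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    p = ttest_ind(a, b).pvalue
    if p < 0.05:  # about 1 in 20 runs clears this bar by luck alone
        print(f"run {run}: p = {p:.3f} -> a publishable 'positive result'")
        break
```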

It would seem that computer science (CS) is especially vulnerable to fluff compared to other STEM subjects; a lot of these papers read like the bullshit-jobs equivalent of STEM. I think it’s because CS has the intellectual cachet of being part of STEM (CS literally has the word ‘science’ in it), but without the steep learning curve of having to understand the complexities of human biology or molecular structures, or having to solve gnarly equations as in math or physics. It’s more a matter of running simulations or considering hypotheticals, which need not be made precise or follow strict rules, unlike other STEM subjects.

I think this makes a good case for eliminating tenure, or at least shows why tenure is a bad deal for universities and incentivizes unproductive or unethical behavior. Academics do their highest-quality work in the hope of getting tenure; once it’s secured, quality and output tend to take a cliff dive.