Psychology does have a bullshit problem, but STEM is not immune either

I saw this going viral: “I’m so sorry for psychology’s loss, whatever it is.”

Psychology is unloved, and deservedly so. It’s hard to find anyone, psychologists included, who will vigorously or unconditionally defend the field. I think even psychologists acknowledge that there are too many flimsy, weak, or underpowered studies, and too many unsupported generalizations about the predictive power of such studies.

I think the known extent of academic misconduct is just the tip of the iceberg. It would not surprise me if the replication rate is much lower than 40-50%, more like 5% or even close to zero. Basically, researchers are making it up as they go along or grasping in the dark. Save for maybe a handful of studies, no one has any idea how human behavior works, and only a faint idea of how the brain works. One example is ‘persuasion techniques’, another bullshit area that needs to be unmasked.

On the other hand, let us assume half (or some percentage) of studies do not replicate. This tells us nothing about the implications of those that do. I think there is a tendency to throw the baby out with the bathwater here. The science of IQ, for example, is highly reproducible and predictive of ‘real world’ outcomes, more so than any other topic in psychology, such as personality. To paraphrase Dr. Jordan Peterson, “To disregard the science of IQ means you must disregard the rest of psychology, because IQ is the most statistically robust.” Other research, such as priming, possibly also replicates.

But although it’s easy to beat up on psychology and the humanities for flimsy studies and ‘postmodernist gibberish’, STEM, especially computer science, has its own bullshit-papers problem, as discussed in my post Having academic credentials is no assurance of quality. ArXiv is full of seven-page computer science papers with seven co-authors that feel like pure resume padding, light on any actual science or rigor. Stuff like “Hate Speech and the Algorithms of Twitter” or “Using Machine Learning to Detect Covid Misinformation,” as shown below. Same for math papers in which common formulas are rederived.

Here is a screenshot of what I am talking about. Yes, these papers are technically considered part of “STEM”, as if that somehow cordons them off from the bullshit of the social sciences or the humanities:

The first paper, “Auditing Elon Musk’s Impact on Hate Speech and Bots,” weighs in at just six pages and has six co-authors. It also feels like a citation ring. It would not surprise me if it’s the same couple dozen or so authors citing each other and co-authoring each other’s papers.

Clicking on one of the six authors, “Fessler, D,” shows the same six authors on a second paper, “No Love Among Haters: Negative Interactions Reduce Hate Community Engagement” (2023).

The first sentence of the abstract begins, “While online hate groups pose significant risks to the health of online platforms and safety of marginalized groups…” This feels like the sort of thing that belongs in a social science journal or pre-print, not arXiv.

The modus operandi seems to be:

-find as many co-authors as possible to divide the work and share the publication credit
-gather data with an API
-run some R/Python analysis on said data
-look for anything statistically significant that affirms politically correct narratives, such as hate speech
-write up the paper (be sure to include as many citations as possible, as you expect others to do the same for your paper, too)

This is a lot cheaper and more scalable than running experiments on human subjects, as the sketch below shows.
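To make the pattern concrete, here is a minimal Python sketch of that pipeline. Everything in it is hypothetical: the metric names, the before/after framing, and the synthetic data (a real version would pull posts from a platform API). The point is just the shape of the workflow: run many uncorrected comparisons and write up whichever one clears p < 0.05.

```python
# Minimal sketch of the "gather data, test everything, report what sticks"
# pipeline. All names and numbers here are hypothetical; the data is
# synthetic noise with no real effect in it.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

# Step 1: "gather data with an API" -- stand-in for something like
# pd.read_json("https://api.example.com/v1/posts") (hypothetical endpoint).
n = 5000
df = pd.DataFrame({
    "period": rng.choice(["before", "after"], size=n),
    "toxicity": rng.normal(0.30, 0.10, size=n),    # some ML-derived label
    "bot_score": rng.normal(0.20, 0.05, size=n),
    "engagement": rng.normal(12.0, 4.0, size=n),
})

# Step 2: test every metric before vs. after. With enough metrics and
# subgroups, something will clear p < 0.05 by chance alone, since no
# correction for multiple comparisons is applied.
hits = []
for col in ["toxicity", "bot_score", "engagement"]:
    before = df.loc[df["period"] == "before", col]
    after = df.loc[df["period"] == "after", col]
    t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)
    if p_value < 0.05:
        hits.append((col, round(p_value, 4)))

# Step 3: write up whichever comparison "worked".
print(hits)
```

Three metrics keep the sketch short; scale it up to dozens of metrics and subgroups and a ‘significant’ finding is nearly guaranteed.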

Just as ‘useless/worthless degrees’ are part and parcel of the humanities (which are not so worthless, given that the data still show a college wage premium), STEM is rife with those bullshit ‘become a front-end developer in 6 weeks’ bootcamp advertisements I often see. Yeah right. Sure, for $10,000 you may get a certificate showing you completed a course, but does this imply competency at coding? Who knows.

The fact that arXiv publishes this stuff is an example of ‘mission creep’, and it hurts the site’s ‘brand’ by diluting it with math-light pseudo-social-science fluff. Submissions to arXiv should be limited to math, physics, theoretical computer science, and maybe quantitative finance. The aforementioned examples would be better suited to something like SSRN, which focuses on the social sciences rather than math/physics, though its brand is weaker than arXiv’s. Having published some math papers on arXiv myself, I can attest that arXiv at least has a limited form of peer review, namely the endorsement system and algorithmic filtering/screening of papers, which SSRN does not have.

In trying to diagnose the problem, I see two causes. First, the incentives, in all fields, favor quantity over quality. A huge resume/CV full of items looks more impressive to the media, tenure committees, outsiders, etc. than a sparse one. It’s not uncommon for academics to list minor media appearances and op-ed articles on an academic CV to pad its length. Second, peer review is, in theory, supposed to flag bullshit, but reviewers cannot replicate experiments as readily as they can check theorems or ‘logic flow’ in math. They can only check that the methodology seems ‘sound’ (the smell test), and evidently this is not good enough.