Scott’s three-part meritocracy series:
part 1: TARGETING MERITOCRACY
part 3: DON’T BLAME GRIGGS
Skimming through the comments, here is what I gleaned:
- Companies and recruiters default to Ivy League applicants, not necessarily because such applicants are more meritorious, but because it’s the safer choice: there is less culpability than if they took a risk on an applicant from a lesser-known school, even if that applicant was prima facie more qualified. As the saying goes, “Nobody ever got fired for buying IBM.”
- It’s not so much about merit as about signaling merit. The assumption is that the highest scorers will be the most competent, but this is not always the case. The evidence that the LSAT predicts law school performance, for example, is weak or uncertain.
Regarding the Alice and Carol hypothetical:
Alice and Carol are both programmers, and are up for a promotion to management. Alice is smarter, works harder, and produces better code. She gets along well with everyone and is consistently rated as the highest performer in the group. By contrast, Carol is consistently a mediocre programmer. She’s not awful—certainly in no danger of being fired. But she’s not as smart, she doesn’t work nearly as hard, and her code is acceptable rather than excellent.
On the other hand, Carol has a real knack for management. When she’s in a group project, she naturally takes the lead and others feel comfortable deferring to her. (Alice is more likely to just do an unfair share of the work herself.) Looking at the two of them, we can confidently predict that—even though Alice is the better programmer—Carol would be the better manager.
So, is it more meritocratic to promote Alice or Carol?
One solution is to promote Carol to management but also give Alice a raise; however, Alice may still feel cheated.
- Meritocracies may widen the divide between elites and everyone else, but such a divide is the price of having competent people in important positions. It’s basic economics: why does a coder make more money than a dishwasher? Anyone can wash dishes, but perhaps only 5% of the population (those with an IQ above 120) is smart enough to code, and coding is harder than washing dishes. So unless coders are paid more, people capable of doing either job would rationally choose dishwashing: same pay, less effort.
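The rational-choice argument above can be sketched as a toy calculation (all wages and effort costs here are hypothetical illustrations, not data):

```python
# Toy model of the compensating-differential argument.
# All numbers are hypothetical, chosen only to illustrate the logic.

def net_utility(wage, effort_cost):
    """Utility of a job = pay minus the disutility of its effort."""
    return wage - effort_cost

# Suppose coding requires much more effort than dishwashing.
dish_effort, code_effort = 10, 40

# If both jobs pay the same, a worker capable of either rationally washes dishes.
equal_pay = 50
assert net_utility(equal_pay, dish_effort) > net_utility(equal_pay, code_effort)

# A wage premium larger than the extra effort cost flips the choice.
coder_pay = 90  # premium of 40 exceeds the extra effort cost of 30
assert net_utility(coder_pay, code_effort) > net_utility(equal_pay, dish_effort)
```

The premium only has to exceed the extra effort cost; the market does not need anyone to deliberately engineer the wage gap.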
- Defining merit is subjective, and this problem is compounded by the fact that those who are in power can set standards for what is considered ‘merit’, which keeps the powerful in power but possibly hurts others who hope to advance.
As Scott writes:
Different considerations certainly apply to surgeons versus senators, and talking about appointment to a vague “ruling class” probably confuses things pretty badly. I’m much more willing to listen to arguments for a randomly selected Congress than I am for a randomly selected surgical staff. Maybe the problem is that, aside from a few elected officials, nobody ever notices that they’re choosing people for the ruling class at all.
It’s much easier to quantify a ‘good programmer’ or a ‘good doctor’ (based on test scores) than a ‘good politician’ (which is far more subjective).
- Utilitarian arguments can be made for choosing less meritorious candidates. If Mike is a 10% better coder than Bob but has 20% worse ‘people skills’, the company may be better off with Bob. Then again, ‘people skills’ can themselves be counted as merit, which is part of the subjectivity problem.
- Meritocracies can sometimes become ‘guilds’ that erect increasingly high barriers to entry. This comment from the Griggs article stood out:
In economic terms, several critics of the American Medical Association, including Nobel Memorial Prize winning economist Milton Friedman as well as his wife, Rose Friedman, have asserted that the organization acts as a guild and has attempted to increase physicians’ wages and fees by influencing limitations on the supply of physicians and competition from non-physicians.
As the Griggs article notes, college degrees have not been legally challenged the way tests have:
What matters isn’t the fine print of the decision, but how likely someone is to get sued. Once employers started asking for college degrees but didn’t get sued for it, then other employers knew that asking for college degrees was safe and the popularity of doing so snowballed.
This comment stood out:
HR departments like three things: the ability to trashcan a bunch of applicants, the ability to cover their ass if a hire goes bad, and the ability to avoid litigation. Credentialism immediately lets them screen out a bunch of applicants. So when justifying their time and effort to upper management they get to say things like “we looked through 300 applicants before showing the hiring team the top 10 possibilities”.
Credentialism is an easy, litigation-proof way of screening applicants.
Like most social science debates, part of the frustration is that there seems to be no resolution, even after skimming hundreds of comments. My take: credentialism is a costly and time-consuming means of signaling merit. In that sense it is part of the meritocracy, but it also obstructs it by creating hoops that job-seekers must jump through. Opportunity and upward mobility could be increased by reducing credentialism (for example, by using tests instead of diplomas). The result would still be a meritocracy (and society would still divide into an ‘elite’ and everyone else), but it would at least be a fairer one.