AI risk as rebranded climate change

This is a good article, “Stuff I still don’t get about AI,” and I agree:

And if we stick with the climate change analogy for a second, how worried should we be about polarisation? How likely is it that caring about AI risk gets thought of as a left-wing thing, or becomes associated with some other political identity that has powerful opponents? The fact that Bernie Sanders is now talking about AI risk is interesting – it means this is really going mainstream, but Bernie isn’t exactly my ideal candidate for an AI risk spokesperson.

‘AI risk’ feels like climate change 2.0 but for the cool kids, now that the former has fallen out of favor. Or the rapture 2.0. The brand of climate change has been damaged by the perceived hypocrisy and cringeness of its advocates (Greta Thunberg, Soros, private jets to Davos, etc.). By comparison, robots, Elon Musk, LLMs, and GPUs are cool (even if Elon flies private, he’s not lecturing the world about how he’s morally superior). It combines ‘the left’ and ‘the right’ in a way millenarianism or climate change does not.

There are two major concerns: the destruction of humanity, and, more modestly, job loss. The first prediction is unfalsifiable. It’s a hypothetical that does not rise to the level of anything that can be quantified or tested in any rigorous sense. People who argue that the apocalypse is coming have shifted the burden of proof onto everyone else to disprove it, while hogging the spotlight in the process.

Job loss is a more reasonable concern. But the evidence is mixed here too. A favorite example I give is self-checkout machines. These have been around forever and were touted as a way to eliminate jobs. But 15 years later, the machines are hardly fully automated. They are prone to malfunctioning. Not only that, but stores, understandably, are paranoid about theft.

Food stores have tiny profit margins, around 3%, so even a small amount of theft can be very detrimental. This is why there are always so many clerks overseeing the machines and why the machines are prone to throwing errors if item weights are not perfectly calibrated or anything else is amiss, which necessitates employees at the cost of convenience. Food stores cannot take a chance or trust that the machines will work perfectly or detect theft well enough.

The same applies to AI: it works well, but not well enough to eliminate humans completely. Humans will act as a sort of quality control for AI, much as they do for self-checkout machines. There will also need to be humans who teach AI tools and workflows to other humans, just as computers gave rise to programmers as well as programming books, teachers, and other occupations involving computers.
