Eliezer Yudkowsky was recently on the podcast “Bankless”, in which, as the episode title conveys in no uncertain terms, he argues that humanity is going to perish due to AI.
Bankless is a crypto podcast. It should change its name to ‘Penniless’ or ‘Moneyless’, because that is the likely outcome for anyone who invests too much in crypto or actually believes that crypto is a substitute for banking. But I suppose if AI is going to kill everyone, then having money is the least of anyone’s concerns.
I think his argument is lunacy, and he does not come anywhere close to meeting any sort of burden of proof, such as explaining the mechanism or chain of events that must arise for the AI to replicate, let alone kill everyone; he asserts only that it will happen after the AI somehow learns to make copies of itself. And the AI does not even want to kill people, but does so accidentally. So the onus is on humans to build AI that does not kill people, though by his account it’s almost certainly too late anyway.
The argument is unfalsifiable, especially in its reliance on the appeal-to-ignorance fallacy. I would go so far as to say Mr. Yudkowsky does not actually believe it. Maybe he believes that AI may pose some danger to society and that efforts are needed to contain it, which is reasonable enough, but not the alarmist “we’re all going to die soon” aspect of it. The hosts did not do a good enough job of pushing back against the nonsense, but I don’t think they were qualified to, lacking the technical background. They were happy to have such an important guest, and their main focus is crypto anyway.
It’s not even an original argument, given that it’s just an iteration of the ‘grey goo’ disaster scenario, popularized by the 2002 Michael Crichton novel Prey, or the ‘paperclip maximizer’ described by Nick Bostrom in 2003, except it’s GPU clusters and matrices instead of nanobots. Other jargon is updated too, such as preference curves.