Dwarkesh Patel on AGI: Separating AI hype from reality

I saw this viral post by Dwarkesh Patel, “Why I don’t think AGI is right around the corner.” It’s a reasonable take on AI, in contrast to the hype that is typical of the topic. There is a middle ground of people, myself and Freddie deBoer among them, who believe AI will lead to improvements in some aspects of life, but will not liberate people from work or do anything grandiose like that. Nor will it bring about the end of humanity.

Instead we will continue to see incremental improvements at the sorts of things AI is already good at, like text and image generation, parsing data, generating code, and so on. But the utopian dream of AI solving all of humanity’s problems while people bask in infinite prosperity is unfounded.

There is no consensus as to what ‘artificial general intelligence’ means or entails. The definition is hopelessly vague and imprecise, which means the goalposts are constantly being moved. Google, for example, defines it this way: “Artificial General Intelligence (AGI) refers to the hypothetical ability of a machine to understand, learn, and apply its intelligence to any intellectual task that a human being can perform. It’s a type of AI that aims to replicate the cognitive abilities of the human brain across a wide range of tasks, unlike current AI systems which are typically designed for specific tasks.”

The thing is, this already exists. AI already beats or matches humans at many things. An obvious example is writing essays, which college students struggle with but AI does with ease. Or strategy games, such as Go. All from the same class of systems, suggesting a generalized type of intelligence rather than specialization. Whether this constitutes actual understanding or learning in a deeper sense is debatable. But if someone from 20 years ago could demo today’s chatbots, they would probably deem them close enough to a ‘general intelligence’.

My own experience is that AI does a reasonably good job at many things, but it is not a substitute for expertise. It’s like having an assistant who sometimes hallucinates or obstinately refuses to do certain things. But in today’s economy and society, the top dollars and status are awarded only to the best. No one cares if you’re merely competent at something. The people who have the most social status and earn the most money are not the median or average performers in their respective domains. Otherwise, there is Fiverr.

There is a huge gap between ‘top people’ and the average. In math, a world-class researcher gets tenure at a top university and publishes influential papers; a ‘merely competent’ person may teach math at a community college. A scratch golfer is objectively a very good golfer, yet still a far cry from a pro in terms of pay or ability. This has always been the problem with almost anything in life: the distribution of status is extremely lopsided.

What happened when computers became ubiquitous? Did wealth inequality go down due to automation? No, all the smart people got computer jobs and became wealthy that way. Now smart people are getting AI jobs and being paid similarly handsomely. If history is any guide, AI will not be a force for egalitarianism; rather, the usual differences in human capital, and the disparities they engender, will remain intact.

Smart, well-connected people will simply adapt to whatever the latest technology or hype cycle is, because they already start with a major advantage. People who are already smart and talented can quickly learn new skills and adapt; that is what makes them smart and talented in the first place. It’s those in the ‘fat middle’ of the ability distribution who will have the hardest time adapting. Look at how rural Midwestern towns failed to adapt to automation and offshoring.

It always strikes me as wishful thinking when technology is promised to disrupt the most successful people or businesses. Look at how Microsoft, Alphabet, Meta, and other huge tech companies pivoted to AI instead of being replaced, including Microsoft taking a large stake in OpenAI. True, Apple disrupted Research in Motion, Google disrupted Yahoo, and Meta disrupted MySpace, but there is no rule set in stone that it must happen.

As for liberating people from work, one thing that stands out is how, rather than eliminating work, AI only shifts it around. People are doing the same amount of work, just different types of work. The quantity of work still fills the allotted time despite AI. Rather than writing code from scratch, coders are setting up runtime environments for AI or modifying and combining AI-generated code. Managing the GPUs that AI runs on is another type of job. App development is still time-consuming and hard, even with AI. Consider the viral video below of a girl using AI. She’s still doing work at the computer:

In fact, she has two computers, as if one were not enough.

To entertain the possibilities of AGI, consider the ‘black box researcher’ thought experiment, in which a distributed computing entity is fed the corpus of knowledge about human biology and eventually spits out the formula for a cure for cancer. Could this become a reality over a long enough horizon? Who knows. I am skeptical. Breakthroughs are typically very subtle, which is why they are hard to find in the first place. Part of the reason AI struggles so much at ‘textbook math’ is that it does not know which transformation or rule to apply, such as when computing an integral. The possibilities are endless, and the answer needs to be exact rather than ‘good enough’. Learning from context does not work as well in this setting.
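To make that concrete (my own illustration, not from Patel’s post): two integrals that look nearly identical call for entirely different rules, and picking the wrong one yields an answer that is simply wrong rather than approximately right.

\[
\int x\,e^{x^{2}}\,dx = \tfrac{1}{2}e^{x^{2}} + C \quad (\text{substitution } u = x^{2}),
\qquad
\int x\,e^{x}\,dx = (x-1)\,e^{x} + C \quad (\text{integration by parts}).
\]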

Even if AI eventually leads to less work and more free time for people, it doesn’t address the deeper challenge: how to live a meaningful and fulfilling life. In fact, those who seem to lead the most fulfilling lives—like top YouTubers, podcasters, and CEOs—are often incredibly busy. When people retire or win the lottery, the initial excitement often fades, leaving behind a void of unstructured time that can feel empty.

Some take it a step further and argue that AI will conspire to kill its creators or accidentally destroy humanity in an attempt to commandeer sufficient resources to solve a problem. Both sides of the alarmist/utopian coin invoke the same ‘wait and see’ argument, in which any skepticism is dismissed as a lack of imagination or too short a time horizon. So AI is supposedly moving rapidly, but when pressed with the obvious rebuttal that incremental change does not mean an entire overhaul of society, suddenly we’re told to wait.

I remember in 2023—when AI made major breakthroughs—there was talk that AGI was just a couple of years away. Now, two years later, while the technology has improved, AGI still feels out of reach. How long are we supposed to wait before concluding that the earlier forecasts were wrong? In the end, maybe no one knows, and that is fine. It doesn’t have to be ‘all or nothing’: we can enjoy the progress and fruits of AI while admitting that no one really has any clue.