Why everyone is probably wrong about AI

I saw this viral post by Dwarkesh Patel, “Why I don’t think AGI is right around the corner.” It is a reasonable take on AI, in contrast to the hype that is typical of the topic. There is a middle ground of people, myself and Freddie deBoer among them, who believe AI will lead to improvements in some aspects of life but will not liberate people from work or anything grandiose like that. Nor will AI bring about the end of humanity.

Instead, we will continue to see steady progress at the sorts of things AI is good at: text and image generation, parsing data, generating code, and so on. But the utopian dream of AI solving all of humanity’s problems while people bask in infinite prosperity is unfounded.

There is no consensus as to what ‘Artificial General Intelligence’ (AGI) means or entails. The definition is hopelessly vague, which means the goalposts are constantly being moved. For example, Google defines it as “…the hypothetical ability of a machine to understand, learn, and apply its intelligence to any intellectual task that a human being can perform. It’s a type of AI that aims to replicate the cognitive abilities of the human brain across a wide range of tasks, unlike current AI systems which are typically designed for specific tasks.”

The thing is, this already exists. AI already beats or matches humans at many tasks. An obvious example is writing essays, which college students struggle with but AI does with ease. Or solving math problems. And all from the same program, suggesting a generalized type of intelligence rather than specialization. Whether this constitutes actual understanding or learning in a deeper sense is debatable. But if someone from 20 years ago could demo today’s chatbots, they would probably deem them close enough to a ‘general intelligence’.

My own experience is that AI does a reasonably good job at many things but is not a substitute for expertise. It’s like having an assistant who sometimes hallucinates or obstinately refuses to do certain things. Or like a smarter, more multifaceted Google.

But still, AI can be surprisingly limited. Many have noticed that popular commercial AI software, such as ChatGPT, leaves telltale artifacts of its usage, a famous example being the overuse of em-dashes. I tested this recently and noticed that it’s still overusing them. One would assume that if AI were so ‘smart’ it would adapt by using them more sparingly, but evidently not. There is also significant rate-limiting of features, among other restrictions that soon become obvious.
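
For anyone curious how such a spot check might look, here is a minimal sketch in Python that counts em-dashes in a handful of saved chatbot responses. The sample responses and the numbers they produce are purely hypothetical, for illustration only.

```python
# Minimal sketch: measure how often the em-dash character appears in a
# batch of chatbot responses. The sample responses below are made up.

EM_DASH = "\u2014"  # the em-dash character


def em_dash_rate(responses):
    """Return (em-dashes per response, em-dashes per 100 words)."""
    total_dashes = sum(r.count(EM_DASH) for r in responses)
    total_words = sum(len(r.split()) for r in responses)
    per_response = total_dashes / len(responses) if responses else 0.0
    per_100_words = 100 * total_dashes / total_words if total_words else 0.0
    return per_response, per_100_words


if __name__ == "__main__":
    # Hypothetical responses pasted in by hand.
    samples = [
        "The answer is nuanced\u2014context matters\u2014so let's break it down.",
        "Here are three points to consider before deciding.",
    ]
    per_resp, per_100 = em_dash_rate(samples)
    print(f"{per_resp:.2f} em-dashes per response, {per_100:.2f} per 100 words")
```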

Conversely, others argue AI will conspire to kill its creators or accidentally destroy humanity in an attempt to marshal sufficient resources to solve a problem. Either side of the alarmist-or-utopian coin invokes the same ‘wait and see’ argument, in which any skepticism is dismissed as a lack of imagination or too short a time horizon.

When pressed with the rebuttal that incremental progress in certain domains does not mean an entire overhaul of society, the person making grandiose claims shifts the burden of proof onto the skeptic, who must now prove why the world will not end or why AGI will not happen (assuming we can even agree on what AGI is). This is impossible to do, unless the frontier of progress is limited by the known physical laws of the universe, which I suppose provide a hard limit on anything.

All you have to do is look back at the predictions about virtual reality during the early nineties, or space exploration during the sixties, to see that people’s track record at this sort of forecasting is really poor. Robots replacing humans has been a staple of fiction forever, and even now it feels distant. Robots are still clumsy and expensive. Their success on assembly lines failed to spill over to housework or waiting tables, where dexterity matters much more. Futurists such as Buckminster Fuller envisioned a more interconnected world, but except for the internet, humanity is more divided than ever along longstanding cultural and ideological lines.

I remember in 2023 when AI made major breakthroughs. There was talk that AGI was just a couple of years away. Now, two years later, the technology has improved, but AGI still feels out of reach. How long are we supposed to wait before we can conclude that maybe the earlier forecasts were wrong? In the end, maybe no one knows. And that is fine. It doesn’t have to be ‘all or nothing’. We can enjoy the fruits of AI and at the same time admit that maybe no one has any clue about how this will all end.