Similar to Freddie deBoer, I identify as an ‘AI realist’ or an ‘AI incrementalist’. I don’t identify as a skeptic, though, as that has become a loaded word implying disbelief (e.g. bigfoot). AI is real; however, I’m doubtful of far-reaching narratives that AI will precipitate humanity’s extinction or the collapse of society (I criticized Eliezer Yudkowsky here and here about this). Or, conversely, that AI will lead to ‘the singularity’ or ‘AGI’ (however that is defined; good luck getting a clear answer), become ‘aware’, or liberate people from work. For doomers and utopians alike, AI takes on a kind of eschatological role, similar to religion. The details are always hazy when pressed, and the anticipated rapture-like moment is always “a couple years out.” When life continues as usual, the deadline is quietly pushed outward.
In agreement with the Infinite Scroll piece “AI Hype and the Search for Meaning,” AI fills a void in people’s lives for meaning or change. Rather than having to take the initiative, the hope is that AI will somehow engender the change that people are too unmotivated or disempowered to bring about themselves. I don’t want to dismiss AGI entirely. It’s always possible something may happen, hence why I am invested accordingly (tech stocks). But we needn’t create drastic scenarios. Instead, three or four years after AI burst into the public consciousness, there is enough data that we can simply look at the evidence and assess what it can–and cannot–actually do.
AI is great for assisting humans. It can show the steps of solving a difficult math problem, and in some cases, actually render proofs of particularly difficult problems. It can create content with just text prompts, which is the typical use case for the average consumer, whether it’s producing a short video, a picture, or writing an essay. This is sometimes derided as ‘AI slop,’ which, although annoying or an eyesore, hardly rises to the level of an existential threat to humanity either. Then there is the whole ‘vibe coding’ aspect. AI can now generate entire programs from little more than prompts. There’s growing evidence that vibe coding will hurt the profit margins of leading software companies. This is already reflected in declining share prices across parts of the sector, in what the financial media has called the “SaaS apocalypse,” amid fears of margin compression and customer defection to cheaper, AI-generated alternatives.
I too can relate: AI does a great job some of the time, but not reliably enough to depend on entirely. I may feed text to ChatGPT, and maybe 10-20% of the time it suggests a rewrite that is better than my own, or it chooses a word or phrasing that sounds more pleasing. For math, about 10-20% of the time it suggests an approach to a problem that never occurred to me. It’s like having an assistant or a second set of eyes. It costs nothing to ask AI for a second opinion; I can choose to ignore the suggestion or use it. For vibe coding, the ratio flips: I keep 80-90% of the AI’s output, then spend hours refining the last 10-20% that AI can’t quite get right. In either case, human discernment is required.
Sector-specific disruption (e.g. SaaS stocks) is much more likely than AI fundamentally reshaping the economy (e.g. ‘post-scarcity’ or ‘post-labor’ economies). Instead, we see AI exert significant influence on certain vulnerable sectors and industries of the economy, but society, as a whole, goes on as usual. People still go to their jobs, save for the handful of possible AI-related layoffs. Despite the sharp decline in many SaaS stocks–down 25% or more from their highs in some cases–the Nasdaq 100 (QQQ) and the S&P 500 remain near or at record levels, indicative of an economy where weakness in some sectors is offset by strength in others (e.g. Google, Walmart/retail, datacenters, and semiconductor companies). This is what is expected of a dynamic economy.
Moreover, with the so-called ‘affordability crisis’ a focal issue, my post from June 2023, “AI is not going to make things cheaper (hidden fees and costs will persist),” aged well. All the evidence instead points towards things becoming more expensive despite AI, with ‘post-scarcity’ being more wishful thinking than anything grounded in reality.
Sure, vibe coding may mean cheaper software. But any savings are likely to be offset by rising costs elsewhere, such as healthcare, housing, tuition, dining out, and other labor-intensive goods and services–where prices continue to climb and people have few good alternatives (good luck ‘vibe coding’ a medical procedure). AI will not suddenly make the phone or home internet bill cheaper. Same for surging insurance premiums and rent. Companies have shown a remarkable ability to pass on costs in clever or subtle ways (e.g. the ‘trinity’ of shrinkflation, tip-inflation, and enshittification), where even if prices are not nominally rising much, customers get the impression they are still being ripped off or paying more indirectly.
AI won’t make advertising cheaper. Super Bowl ad spending in 2026 hit another record. Google and Meta continue to post record earnings from paid advertising, such as those super-pricey mobile ads. Such costs are passed on to consumers. When you pay $8 for a Starbucks coffee or an overpriced $30 Chipotle meal, you’re paying for the labor and advertising, not the raw goods.
As I argued in 2023–2024–and so far correctly–AI has not led to widespread job destruction. Predictions of imminent mass unemployment have become a familiar refrain among the chattering classes and tech CEOs, yet the promised timeline is always 12 to 16 months away, as the long-awaited ‘white-collar job apocalypse’ repeatedly fails to arrive.
Here are pundits in 2023 and again in 2026. The only difference is the year; the alarmist language is the same:
- 2023: AI could affect many white-collar, high-paid jobs — CNBC, July 2023.
- 2023: AI automation to drive mass white-collar job losses in 12-18 months, says Andrew Yang — The Economic Times.
- 2023: ChatGPT could disrupt white-collar jobs, experts warn — Business Insider, 2023.
- 2023: AI may replace millions of white-collar jobs, economists say — Fortune, 2023.
- 2023: The AI revolution is coming for white-collar workers — The Washington Post, 2023.
- 2026: Microsoft AI chief says AI will replace most white-collar work in 12–18 months — TechRadar, 2026.
- 2026: AI could eliminate half of entry-level white-collar jobs, CEO warns — Yahoo Finance, 2026.
- 2026: AI is displacing jobs, but some firms use it to justify layoffs — Business Insider, 2026.
- 2026: Sanders warns AI revolution threatens tens of millions of jobs — The Guardian, 2026.
- 2026: AI’s next disruption could bring widespread white-collar job shifts — Fox News affiliate report, 2026.
Ironically, the biggest growth industry in AI may be commentary about it. Since 2023, there has been a surge in media coverage–especially from outlets like The Atlantic, The Economist, and The New York Times–publishing articles on AI’s implications. There are likely thousands of people whose job it is to write, podcast, or otherwise opine about AI.
It’s much easier to think of jobs that will not be automated by AI than of jobs that are genuinely at risk. A favorite example is law, but lawyers still have to represent their clients in court. Or doctors? Good luck performing surgery without surgeons, performing a biopsy with AI, or practicing pediatrics without a family doctor. Even with robotic surgery, an expert still has to oversee the robot. Attempts at ‘remote medicine’ have been mixed: it had its moment during Covid but failed to catch on; many people evidently want or need to see a doctor in person. Food preparation is hopelessly stuck in the 20th century; good luck automating that. The same goes for nursing, elder care, teaching, baristas–just about anything hands-on that requires interaction with people. None of these will be automated anytime soon.
The same goes for status, or the lack thereof–signifiers of status such as big salaries, nice cars, or social media clout, or anything else where more people compete for a relatively finite quantity of some resource. These are positional goods, defined by relative scarcity. AI has not diminished the influence of elite schools or elite jobs. Salaries at Jane Street, Citadel, and other top hedge funds are bigger than ever, and getting a job at those firms is harder than ever too. Same for ‘FAANG’ jobs: salaries keep ballooning despite AI. These companies have also adapted to interviewees possibly using AI to cheat. That was a big concern a year ago (e.g. Cluely, which blew up as the hottest AI start-up but is now largely forgotten), but we don’t hear about it as much. Pundits underestimated the ability of companies to adapt, such as by deploying counter-AI tools to negate AI-powered cheating tools.
So why have white-collar jobs continued to thrive? As I argued in 2025, a meaningful gap still exists between a finished, production-ready product and what AI systems can reliably deliver on their own. There are also edge cases necessitating human intervention, and coordination or adversarial problems that AI cannot solve. This is a common complaint with vibe coding: the vibe-coded product is 95% complete, but getting that last 5% to work to the coder’s specifications is time-consuming and hard. When something goes wrong or unforeseen circumstances arise, you want an expert onsite to handle it. When Cloudflare suffers one of its many outages, the expectation is that humans are going to fix it, not an AI.
In the end, AI will not erase class divisions or soften disparities of individual talent. It’s hard, if not impossible, to find examples of AI actually making things less competitive. If vibe coding is yet another skill, it stands to reason that some individuals will be much better at it than others. Nothing has changed in terms of individual differences in human capital and the inequalities that flow from them, contrary to the hope of AI having a ‘leveling effect’. Indeed, large companies are now paying top dollar for top vibe-coding talent, as OpenAI evidently did when it acquired the ‘OpenClaw’ tool and its founder for presumably a lot of money. Why didn’t OpenAI just clone OpenClaw for far less money? Because there are no substitutes for branding, virality, and top talent.
I don’t want to pick only on the AI-utopians. The doomers and bubble-callers have been even more wrong. If the media is any indication, AI is the greatest bubble ever and should have popped years ago, and yet it continues to defy the doubters. Private valuations of leading AI companies keep going up, such as Anthropic, which a couple weeks ago raised $30 billion at a $380 billion valuation. Meanwhile, Bitcoin crashed. The actual bubble was in crypto–and, to a lesser extent, SaaS companies, which for years rode a wave of high profit margins and little competition–not AI, tech stocks, or the US economy, as so many wrongly predicted. This goes to show how hard it is to predict things. Perhaps the most accurate prediction about AI is that most people will be wrong.