In my earlier posts I argued that AI will not lead to meaningful economic change, or even boost productivity much. It will not liberate people from work, or even lessen the overall workload much. My prediction has generally been that AI will make continued, incremental progress at the sorts of things it’s already good at, but that there will be no fundamental change or paradigm shift in which it renders humans obsolete or surpasses them, except perhaps at those specific tasks.
First, the tasks where AI does excel are not the sort of problems of pressing commercial need. An example is image and video generation, sometimes disparagingly referred to as ‘AI slop’. Even though it’s supposed to look realistic, it ironically stands out for appearing fake, much the way a McDonald’s is an eyesore in an otherwise historic neighborhood.
The market for image and short-video generation was never that big to begin with, so congrats, AI just disrupted a tiny industry. Getty Images at its peak had a market capitalization of just a few billion dollars. Adobe is a far bigger company, but still tiny relative to other sectors. Before the advent of slop, brands typically outsourced this menial work on sites like Fiverr. The same goes for writing: although LLMs can compose decent short-form text, this has never been a big market either. The combined market capitalization of every newspaper company in the US doesn’t even equal one Figma.
For example, in Ethan Mollick’s post “GPT-5: It Just Does Stuff,” he shows how GPT-5 generated a picture of an “otter on a plane using a laptop”. I am not sure what applicability this has for any actual serious work, but cool, I guess.
He gives a second example, a city-builder:
Let me show you what ‘just doing stuff’ looks like for a non-coder using GPT-5 for coding. For fun, I prompted GPT-5 “make a procedural brutalist building creator where i can drag and edit buildings in cool ways, they should look like actual buildings, think hard.” That’s it. Vague, grammatically questionable, no specifications.
A couple minutes later, I had a working 3D city builder.
Not a sketch. Not a plan. A functioning app where I could drag buildings around and edit them as needed. I kept typing variations of “make it better” without any additional guidance. And GPT-5 kept adding features I never asked for: neon lights, cars driving through streets…
So basically it recreated SimCity, albeit an unplayable version.
xAI’s “Grok 4” launched a few weeks ago. Since then, Elon has been retweeting examples of slop (ahem, artwork) its users have generated, to the annoyance of Roon and others:
I guess I need you baby ♥️ Grok Imagine 🩵💫✨ pic.twitter.com/mdRlK2RQZp
— Ofelia (@OfeliaLamensky) August 9, 2025
also frankly the owner posting grokslop
— roon (@tszzl) August 8, 2025
Again, I am not sure what economic value this unlocks. If Elon wants to convince a skeptical public, or the serious developers who will pay $200/month for a premium plan, that Grok has practical uses, he’s not doing a great job of selling it. If anyone should know and be able to demonstrate Grok’s full capabilities, it’s him, yet he falls short by just retweeting slop.
Regarding coding, again I observe that AI does not mean less effort or fewer hours; it only shifts the work around. In the post “How I Code with AI on a budget/free,” the author shows his workflow:
This sounds pretty overwhelming. If AI is supposed to be an out-of-the-box solution, self-contained within the language model, this is the opposite: he needs over a dozen ancillary programs just to get it to ‘work’. Other times, although AI can generate a large percentage of the code, there will be some difficult parts or errors that consume a lot of time to rectify in order to get the program up to par. By comparison, something as old as self-hosted WordPress is truly out of the box: you just install it, activate a theme and a few plugins, change some parameters, and you have a fully functional blogging platform.
A major problem, or bottleneck, is the finitude of attention. AI excels at generating content, but the world population, and hence the pool of attention, grows far more slowly than the volume of content LLMs generate, so the much harder problem is promotion, or standing out, and AI does the opposite. People do not need more slop in a market overflowing with it, or programs for the sake of creating programs; rather, they need ways to get users and to stand out.
This is not to say AI is useless. I use the free version of GPT daily for small tasks. The way I see it, AI is more like an assistant: instead of obsoleting jobs, it makes people better at their jobs, save for rare exceptions such as ‘term paper writers’ or (some) graphic designers, much of whose work was already outsourced. Data scientists, for example, can use AI to generate Python code for creating graphics, as in the sketch below, but they still have to decide which data is worth analyzing and how to connect or interpret the findings.
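Here is a minimal sketch of the kind of boilerplate an LLM can hand a data scientist on request. The file name and column names (revenue.csv, quarter, revenue) are hypothetical placeholders; the point is that choosing the data and interpreting the chart remain human judgment calls.

```python
# Hypothetical boilerplate of the sort an LLM can generate in seconds.
# "revenue.csv" and its columns are placeholders, not real data.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("revenue.csv")      # deciding WHICH data matters is still the human's job
df.plot(x="quarter", y="revenue", kind="bar", legend=False)
plt.title("Quarterly revenue")
plt.ylabel("USD (millions)")
plt.tight_layout()
plt.savefig("revenue.png")           # so is interpreting what the chart means
```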
But work-specific tasks also tend to be conceptually hard, in ways where more computational power doesn’t necessarily help. The issue is not insufficient computation, but overcoming some conceptual barrier that stands in the way of turning an un-scalable problem into a scalable one. An example is the proof of the four-color theorem, which works by enumerating all the relevant configurations of planar maps. The hardest part is not checking for counterexamples, which is merely constrained by computational resources, but working out the difficult math to determine a suitable finite ‘search space’ in the first place.
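To make that distinction concrete, here is a toy Python sketch, with no connection to the actual four-color machinery: a claim about infinitely many integers collapses to a finite check once the right analytic reduction is found. The reduction (residues mod 30 repeat) is the conceptual step; the computation left over is trivial.

```python
# Toy example: "n**5 ≡ n (mod 30) for every integer n" covers infinitely many
# cases, but since n**5 mod 30 depends only on n mod 30, checking the 30
# residues settles all of them. Finding that reduction is the hard part;
# the remaining computation is trivial.
assert all(pow(n, 5, 30) == n % 30 for n in range(30))
print("verified for all integers n")
```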
An expert coder is not paid just to ‘write code’. Many libraries bypass the need to write much code at all. The difficulty lies in debugging, combining those libraries, and assembling everything into the finished product one has in mind. This is why these jobs carry titles like ‘engineer’ or ‘developer’ instead of just ‘coder’.
This is how human ingenuity can thrive alongside AI. I have two personal examples. First, the BTC hedging method: I observed that shorting BTC is a great hedge against tech stocks like NVDA (a sketch of how such a hedge might be sized follows below). This has worked well even after many years, despite AI and near-limitless computational power. A second example is the math paper, in which five analytic methods bypassed having to use a computer to find new results. This turned a brute-force exercise into a purely analytic one, which I think is more interesting from a mathematical perspective, as it reveals the underlying properties of why something works.
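For illustration, here is a minimal sketch of how one might size such a hedge using an ordinary least-squares beta. This is a simplified stand-in rather than my exact method, and the return series are synthetic; in practice one would use real NVDA and BTC daily closes.

```python
# Simplified sketch of sizing a short-BTC hedge against a tech stock.
# Synthetic return series stand in for real NVDA and BTC daily closes.
import numpy as np

rng = np.random.default_rng(0)
btc_ret = rng.normal(0.001, 0.04, 500)                    # stand-in BTC daily returns
nvda_ret = 0.6 * btc_ret + rng.normal(0.001, 0.02, 500)   # correlated stock returns

# OLS beta: dollars of BTC to short per dollar of NVDA held
beta = np.cov(nvda_ret, btc_ret)[0, 1] / np.var(btc_ret, ddof=1)
hedged = nvda_ret - beta * btc_ret

print(f"hedge ratio (beta): {beta:.2f}")
print(f"unhedged vol: {nvda_ret.std():.4f}, hedged vol: {hedged.std():.4f}")
```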
The typical objection is ‘just wait’. Some ‘extremely smart and credible people’ have argued that by 2027 AI will overtly surpass humans at all tasks by using its computational power to recursively self-improve. This is sometimes called the ‘AI takeoff’, which I suppose is the latest rebranding of the singularity after that, too, failed to happen.
There are many similarities to the 2012 Mayan apocalypse prophecies, which, it goes without saying, were wrong (at the very least, the people who subjected us to their nonsense should apologize). The difference is that today’s AI prophets have better intellectual credentials and the patina of actual science and mainstream credibility, and are not limited to fringe talk-radio cranks. Like the singularity, when the takeoff doesn’t happen, assuming we can even agree on what that means, it will be pushed back or rebranded. It’s like a hamster wheel: the hamster runs and the wheel turns, yet there is no displacement, even though the path locally slopes upward.
Otherwise, the evidence is weak, with lots of handwaving as to how the takeoff actually happens. Just as many of those same smart, well-credentialed people were ‘certain’ about economic collapse in 2020 during Covid, or in 2025 due to Trump’s tariffs, considerable skepticism is warranted. Look back at how robots were predicted in the ’50s to replace workers. Or flying cars. Yet no one predicted the World Wide Web or smartphones. Very little about technology is ever certain.