Don’t beat yourself up for missing out on AI or Nvidia hype

In 1997 IBM made headlines worldwide after its supercomputer, Deep Blue, beat chess grandmaster Garry Kasparov in a six-game match. In 2006 IBM researcher David Ferrucci began work on Watson as a successor to Deep Blue, with high hopes that it would revolutionize healthcare, but like Deep Blue, Watson was a solution looking for a problem. Healthcare companies didn’t need Watson, just as the general public didn’t need a better chess-playing computer. Outside of chess, humans were not at risk of being made obsolete by Deep Blue.

Neural networks, which gained traction in the ’90s, were a successor to the ‘expert systems’ that dominated the ’80s and earlier. The idea was to replicate human learning for general purposes, rather than build an AI that is smart (an expert) but otherwise specialized and limited. Machine learning, which built on neural networks, later saw use in self-driving cars. DeepMind, a research laboratory founded in 2010, was acquired by Google in 2014. Similar to Deep Blue, it made headlines in 2016 after its AlphaGo program beat professional Go player and world champion Lee Sedol in a five-game match. As with Watson and Deep Blue, commercial use was limited. The public had no need or use for it, and beating humans at a board game is a narrow application of intelligence, even if the methods employed to achieve the feat were very advanced.

During the ’90s and 2000s, AI was overshadowed by the World Wide Web: search engines, advertising, and online storefronts, and later social networks, ‘the cloud’, and apps. Whereas the Hypertext Transfer Protocol, the backbone of what is colloquially called ‘the internet’, saw immediate and widespread commercial adoption, no one seemed to have much use for AI beyond chess or dog-like robots. MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), founded in 2003, was more noteworthy for its leaky architecture than for any earth-shattering innovations to come out of it.

Overall, AI has been around in various forms for at least half a century and has always failed to deliver on its loftiest promises, or at the very least has never been ready for prime time. It has progressed in fits and starts for so long that one could be forgiven for overlooking or dismissing it. The question of what it meant for a computer to be intelligent, although perhaps interesting as an intellectual exercise, didn’t matter outside of academia or Isaac Asimov novels.

AI was often hyped as the Next Big Thing that, although a ‘thing’, never seemed to have a big impact. That all changed in 2023. A switch was flipped in March 2023 with the launch of GPT-4, after which it suddenly became impossible not to talk about AI. Eliezer Yudkowsky became a household name as his podcast interviews warning of ‘AI doom’ were watched by millions and indirectly helped generate hype for GPT-4. ‘Beff Jezos’, a buff parody of Amazon founder Jeff Bezos, sought the conversion of life into thermodynamic energy. The possibility of mass unemployment due to AI suddenly became a centerpiece of discussion among the chattering classes, not just academics.

GPT-4 was the perfect recipe of performance and hype: it jolted AI out of its multi-decade lull and put it front and center. Unlike earlier attempts at AI, GPT-4 convincingly replicates human intelligence across a wide range of tasks, and it saw widespread consumer adoption, much like AOL and the World Wide Web before it. This also fueled a boom in semiconductor stocks, notably Nvidia, which supplies the hardware needed for AI workloads such as ‘training’.

Don’t beat yourself up for missing out on AI or Nvidia. Fifty years is a long time to wait for a technology to hit critical mass or become a commercial success, and it’s near-impossible to predict when, if ever, that will happen. The Apple II, released in 1977, was an instant commercial hit and propelled Apple to a successful IPO in 1980. Graphical user interfaces caught on instantly in the late ’80s and early ’90s, quickly rendering command-line operating systems obsolete. Television became commonplace in the US in the ’50s during the post-war expansion of the middle class, but the first working electronic television system was demonstrated in 1927, so roughly thirty years. Segway or VR? The jury is still out.

If 50 years is too long, try waiting 110. Electric cars date back to the 1890s–1900s and saw some limited use alongside cars powered by internal combustion engines, but it was Ford’s Model T, introduced in 1908, that made the latter technology the clear-cut winner for much of the remainder of the 20th century. General Motors’ EV1 line of electric cars, produced from 1996 to 1999, met an ignominious end, immortalized in the documentary Who Killed the Electric Car?, but in the 2010s Tesla came along and turned electric cars into a trillion-dollar business after the industry had otherwise been written off for dead.