To say that the news cycle has been slow lately is an understatement. Tax cuts are happening. Even though the far-right is lukewarm about them, preferring a wall instead, ‘mom and pop’ Trump voters, who are a much bigger demographic, are enthusiastic about the tax cuts.
Even though smart people such as Elon Musk, Bret Weinstein, and Eliezer Yudkowsky are worried about AI (specifically, advanced artificial general intelligence), AI does not worry me at all. The problem is that alarmists have a very poor track record of being right despite all the media attention they get. It’s no surprise they get so much attention, because doom and gloom sells more advertising and gets more pageviews than optimism. ‘Smart people’ who are elevated as ‘experts’ were wrong about:
-The economy (many economists and pundits predicted Trump and Brexit would be bad for stocks and the economy; the S&P 500 is up 25% since Trump won, and Brexit turned out to be a ‘big nothing’ aside from the nationalistic symbolism of it)
-Y2K (biggest scam of the decade)
-2014 Ebola outbreak (it subsided, as prior outbreaks did)
-SARS, bird flu (same as above)
-Malthusian trap (this was a big deal in the ’70s)
-GMOs in food (no evidence GMOs are harmful, yet liberals like Taleb keep pushing the anti-Monsanto lie)
-The 2010-2012 Greek debt contagion (was no contagion; S&P 500 has doubled since then)
-College bubble, stock market bubble, Bay Area real estate bubble, car loan bubble, credit card bubble, etc. (give it a rest already. Just because something is going up a lot does not mean it’s unsustainable or that it will end in crisis)
-Bitcoin (they called it a fad/bubble in 2011-2017; glad I ignored them)
-Trump is Hitler 2.0/madman/evil/insane/etc. (quite the opposite: voters like him; foreign leaders want to work with him; the economy and stock market are confident in Trump)
-Predictions since 2008 of debt crisis, hyperinflation, and dollar collapse (none of those things happened)
Much of modern AI is about creating programs that are more advanced, rather than eliminating the role of the programmer altogether. This is an important distinction because it means AI is only as smart as the people who are coding it. Also, progress has been incredibly slow in emulating anything that even remotely resembles generalized, rather than specialized, human-like thinking and reasoning capability. AI doomsayers say that at some point a metaphorical switch will be flipped that makes AI suddenly become sentient, but this seems not much different from Millenarianism, in that a bold, world-altering proclamation is made that is unsubstantiated by any sort of empirical evidence. We’re supposed to take their word that the AI apocalypse is coming. The AI apocalyptos are requiring that optimists prove a negative: that AI doom won’t happen. That is impossible to do, but it doesn’t make the apocalyptos right either.