AI Skeptics, AI Doomers and AI Optimists
How powerful and positive is AI technology? Reactions to AI key off this question
Whenever a new technology arises, we react to the shock of the new with a mixture of curiosity, fear, puzzlement, hope, scorn, and awe. As the once-new thing becomes part of our lives over time, our awe may fade into familiarity and our curiosity into indifference. If it’s as useful as a smartphone, it becomes an addictive part of us.
AI is the latest new thing, and we see a pattern of different reactions as people try to make sense of the rise of AI technology, even as it constantly evolves, changes, and improves. We can group these attitudes towards AI into several categories, based on two questions:
How powerful and important do you think AI technology is? Is AI over-hyped, or is it a huge technology paradigm shift, on par with the printing press and the industrial revolution?
How positive is the impact of AI? Is it a force for good that benefits mankind, or are the negatives, risks, and harms the real consequence of adopting it?
Answers to these two questions divide AI observers into four camps.
AI Skeptics
AI Skeptics believe AI is over-hyped, not as great as others claim, and even “not real,” and they also see AI’s impact as both limited and negative.
Classic AI skepticism is best expressed by the “It can’t do X” template, where “X” has been everything from answering SAT questions and composing symphonies to writing code, identifying objects in videos, and proving new theorems.
Gary Marcus, in his 2019 book “Rebooting AI: Building Artificial Intelligence We Can Trust,” is a great example of AI Skepticism, opining that AI based on statistical inference is not good enough to understand the world like a human, and pointing out manifest flaws in the AI of 2019. Unfortunately, some of its “AI can’t do this or that” examples have been superseded by progress with LLMs.
Some of the skepticism of current statistical deep-learning-based AI derives from the symbolic-processing AI world, as in a recent talk by Prof. Miguel Hernan titled “Without causal inference, AI is FI (fake intelligence).” This may be a bit of goal-post moving on the question of what AI is, declaring GPT-4 a ‘stochastic parrot’ because it is not rooted in certain kinds of causal or logical semantics.
AI Skeptics also believe that even though AI’s powers and progress may be overstated, AI’s negative consequences and risks may be understated. As Gary Marcus put it in an article on GPT-5 irrational exuberance, “AI doesn’t have to be all that smart to cause a lot of harm.”
For decades, while AI struggled with even toy problems, it was easy and correct to be skeptical of progress, and even to believe that AI would never break certain barriers. Now that AI is breaking those barriers, this position is much less tenable.
AI Optimists
The flip-side of the AI Skeptics are the AI Optimists, who believe that AI is one of the most powerful technology milestones ever achieved, and that it will be a great force for good in the world.
A great example of the AI Optimist viewpoint is Marc Andreessen’s essay “Why AI Will Save the World,” which I discussed in my prior article “The Case for AI Optimism.” He declared “AI is quite possibly the most important – and best – thing our civilization has ever created,” and sees AI as “A way to make everything we care about better.”
The case for AI’s importance and benefit is rooted in the fact that, as an informational technology, it will accelerate all technology progress. As Andreessen puts it: “Scientific breakthroughs and new technologies and medicines will dramatically expand, as AI helps us further decode the laws of nature and harvest them for our benefit.”
Technology futurist and optimist Ray Kurzweil, maker of many staggering future predictions, has pointed out that technology advances along an exponential curve. Based on that model of progress, he has predicted rapid advances in the coming years in books such as The Singularity Is Near, forecasting Turing-test-passing AGI by 2029 and the Singularity by 2045; his friend and fellow futurist Peter Diamandis has made similar predictions.
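To see why exponential models yield such dramatic forecasts, here is a toy sketch of compounding capability. The two-year doubling period and the “1x today” baseline are illustrative assumptions chosen for this example, not figures taken from Kurzweil’s books:

```python
# Toy illustration of an exponential model of technology progress.
# The 2-year doubling period and the "1x today" baseline are
# illustrative assumptions, not figures from Kurzweil's books.

def capability(years_from_now: float, doubling_period: float = 2.0) -> float:
    """Capability relative to today, assuming a fixed doubling period."""
    return 2 ** (years_from_now / doubling_period)

for years in (2, 10, 20, 40):
    print(f"In {years:2d} years: {capability(years):,.0f}x today's capability")

# Prints:
# In  2 years: 2x today's capability
# In 10 years: 32x today's capability
# In 20 years: 1,024x today's capability
# In 40 years: 1,048,576x today's capability
```

The point of the model is that a steady doubling rate, modest over any two-year span, compounds into a million-fold change over forty years.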
Critics note the lack of consideration of the risks of AI in these predictions. This critique leads to the position of the AI Doomers.
AI Doomers
The AI Doomers believe that as AI gets better, we will lose control of it, and AI will take over the world and pose a grave threat to humanity. This is the stuff of Terminator movies and science fiction, but it is also a seriously argued position.
The extreme example of an AI Doomer is Eliezer Yudkowsky. He feels that AI is such a risk that we need to shut AI development down, saying:
the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”
He assumes artificial intelligence will yield destructive agency, saying “The likely result of humanity facing down an opposed superhuman intelligence is a total loss.” This describes AI as if it were our competitor, not our tool or servant.
The more measured version of the AI Doomer is Geoffrey Hinton, who recently quit Google to more openly warn about the risks of AI.
The AI Doomer arguments are based on uncertain future scenarios, and any argument pro or con ends up resting on a number of assumptions and hypotheticals, highly sensitive to uncertain technology outcomes.
While some AI Doomer arguments are based on fallacies, one good argument for worrying about AI Safety and AI Alignment is that the very uncertainty of what AI will bring is itself an indicator of risk.
AI Minimal-Positivists
The AI Minimal-Positivists believe AI is just a tool, albeit a very useful tool. We ought not think AI will dramatically change or threaten humanity. We will adapt and improve from it, just as we have used and adapted and improved with other technologies. AI isn’t as special as some make it out to be.
Perhaps there isn’t a catchy name for this category because it tends not to get in the news. It’s more newsworthy to be a contrarian, or predict extreme outcomes like a singularity or human extinction, than to see AI as a useful but not life-altering technology.
Someone who might fit the AI Minimal-Positivist category is the leading AI researcher Yann LeCun, who heads up Meta’s AI research. He has been skeptical of the power of LLMs like ChatGPT, noting their failures in reasoning, and says that “Artificial intelligence is not yet as smart as a dog.”
He critiques language models as capturing only a limited part of human knowledge and experience, stating that “most of human knowledge has nothing to do with language.” Rather, LeCun sees much human understanding as embedded in our visual and spatial experiences, so he is working on visual and multi-modal AI models like I-JEPA to address that gap.
He is also critical of the overstated claims of AI Doomers.
“A fear that has been popularized by science fiction [is] that if robots are smarter than us, they are going to want to take over the world … there is no correlation between being smart and wanting to take over.”
The Bootleggers
Then we have Sam Altman, who has at times made statements that could place him as either an AI Optimist or an AI Doomer.
Do his calls for AI regulation, telling us that we need government to regulate AI model development for our sake, fall into Marc Andreessen’s ‘bootlegger’ category? That is, is he being just a little too cute in suggesting that AI Safety is super important and the risks are enormous, but that we can trust OpenAI to get it right?
There is reason to take Sam Altman at face value, but even if you do, his preferred AI regulation approach of licensing large foundation AI models would lead to the cartelization of the AI industry and dominance by a few Big Tech players. The bootleggers win.
Summary
Where do I personally see AI, and what quadrant do I fit in? I’m overall an AI Optimist who believes AI’s impact on the world will be huge and mostly for the good. AI will accelerate technology for mankind’s benefit in a host of ways. AI Changes Everything.
I would temper that with some caveats: extreme scenarios tend to get attention, but they overstate realistic outcomes, both positive and negative. AI is indeed just a tool, albeit perhaps the most powerful tool we have ever created, and it will neither save us nor destroy us.
AI won’t save mankind from our own flaws that lead to division, crime, addiction, wars, and accidents. Yet neither will AI make any of those things worse. If we use it properly, AI can be a force for much good in the world.
We tend to overestimate progress in the short term and underestimate progress in the long term. Because of this, we are sometimes surprised by sudden breakthroughs or by a seeming ‘lack of progress.’ As a result, technology goes through hype cycles, and AI is definitely in a hype cycle now. This is the Summer of AI, but the field endured several winters in past decades of slow progress.
Self-driving cars were the hyped-up, amazing, soon-to-come thing in 2013. Now, 10 years later, we are still waiting for self-driving cars to hit the road. One day in the near future, probably within 10 years, we will order some groceries online, and zero humans will interact with us as robots pluck the items from the warehouse, hand them off to an autonomous delivery truck or drone, and deliver them to our doorstep. We won’t think much of it once the shock of the new wears off and it becomes a mundane part of living.
Whether optimists, skeptics, or doomers, let us not forget that what we do with technology is a human choice. We need not be slaves to any technology or process; we decide.
Human ideals, human culture, human politics, and human civilization remain up to us, based on the decisions we collectively make. In the end, our future is not a prediction, but a choice. As Alan Kay once put it, “The best way to predict the future is to invent it.”