This is an Alice-in-Wonderland moment for AI.
Humans think and talk by analogy, so when we are faced with something strange and novel, we grasp for comparisons. That’s why Nvidia’s CEO called this current moment “AI’s iPhone moment,” and it’s why I and others have compared 2023 to the year 1995 for the internet. It’s why several years ago Andrew Ng said that “AI is the electricity of the 21st century.”
All these analogies regarding the impact of AI are quite valid, and if anything they understate the reality: AI changes everything. As the culmination of the computing revolution that started 70 years ago, AI will disrupt every industry, accelerate technology and science in all areas, and impact every living human. AGI is near, and the Singularity is inevitable. That coming “Singularity” event is so novel and different that we have no comparisons for it, except to something beyond human experience.
Does Human-like Thinking Make You Human?
When grappling with defining AI itself, we have fallen back on analogies as well, and for good reason. At the dawn of the computing era, pioneers pondering artificial intelligence started from a basic premise: since human thought and computer processing are both, deep down, ways of processing information, it might be possible to get computers to think like humans. That became the central challenge of AI. As Marvin Minsky put it:
“Artificial intelligence is the science of making machines do things that would require intelligence if done by men.” - Marvin Minsky, 1968
Indeed, Alan Turing defined AI in specific human terms:
“A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.” - Alan Turing, 1950
This human analogy has been very important, vital even, to the development of AI. However, I believe it leads us astray in our current thinking about how AI systems work and how they will impact us, for good and for bad, now and in the near future.
An Army of Elephants
While it’s true that AI has been largely about getting computers to “think like humans,” imputing human-like intelligence or thought directly to AI models sometimes leads us astray.
Computer-based Artificial Intelligence acts like humans in the same way that airplanes act like birds. While there is a similarity of purpose and of basic principles used, the specific architectures, capabilities and much else are quite different.
AI today is capable of far more, and far different, things than any human being, while at the same time lacking so much of actual human agency, emotion and consciousness that the AI-as-human analogy misleads us. AI as it works now, and in the near future, is both powerful enough and different enough from human thought that the analogy no longer helps us be precise and correct about what AI technology is and how it works.
AI can be thought of as a form of intelligence used in our service. If the human analogy is flawed, let’s consider another form of intelligence used in our service: domesticated animals.
We have used domesticated animals - goats, cows, sheep, oxen, horses, camels, and more - to do work for us or serve us in many ways: as transportation (burros, camels, canal boats, sleds, ox-carts, and horse-drawn carriages); as a power source for pulling plows and milling grain; and even in warfare (chariots, knights, and cavalry). We have also, of course, farmed animals for clothing (wool) and food (eggs, milk, cheese and meat). Domesticating animals has been among the most important technologies humans have ever adopted.
Perhaps the most intelligent of the animals we have applied to our service is the Indian elephant. With mahouts guiding them, elephants can be gentle and useful giants in moving large loads, handling timber and other tasks.
So is AI in our service like elephants? Like AI, elephants respond to their training; but unlike AI, which is wholly a creation of our own algorithms, elephants have their own nature. Elephants are alive, feel emotions, including joy and pain, and perhaps have what might be called “agency.” We are kin to elephants, not to AI.
Indeed, it’s easy to see that a non-living mechanical AI - even one exhibiting levels of intelligence beyond elephant or even human level - is far less “close” to humanity than our mammal friends. It is easy to empathize with those mammals, whether pets or our elephants, oxen or literal workhorses, because we share a place in the animal kingdom as living creatures. Empathy for AI, by contrast, rests on tricking our own emotions into feeling for a virtual algorithm whose ‘empathy’ is an artificial construct.
Sentience At Scale
I have much respect for Lex Fridman, the great interviewer of many luminaries in technology and beyond, but he’s off course in his tweet suggesting that AI will bring “sentience at scale” and “will demand to have equal rights with humans.”
We wouldn’t think to give animals the right to vote or indeed any ‘human rights’, nor would any non-human beast have the capacity to demand as much. So what are we to make of imputing human sentience, thoughts and emotions to AI, or even of applying human-rights concepts to it?
Let’s first parse out what “sentience at scale” means.
Human-like sentience is not a needed feature for 99% of useful and practical AI. Useful and practical AI will apply knowledge, reasoning and language generation to do intellectually complex tasks - tasks that require disparate and varied information and knowledge, and the reasoning and synthesis of that data, to produce a satisfactory result. AI operations may require world knowledge, and some AI models may even show some emergent ‘theory of mind’, but none are close to having anything like human sentience.
Could some form of ‘sentience’ emerge at some future date out of ever-more-advanced AI? Perhaps. However, we have deconstructed intelligence and found a way to exhibit it in non-living, non-conscious cloud computers. Does a non-living mechanical entity exhibiting a human characteristic such as ‘sentience’ thereby become human? No.
Nor do I foresee human-type consciousness emerging out of even the most advanced and powerful AI models we can imagine today. As of now, this remains science fiction.
Regarding scale, we need to shift our mindset away from feeling threatened when AI tools can do a specific job better than we can. Should we worry that an AI may be more ‘intelligent’ than us at a task? No. Does an AI doing a task better than we can mean we are morally or intellectually inferior? No.
We do not worry about horses pulling a plough better than any human could, or a car going faster than we can run, or a calculator multiplying numbers faster than we can multiply. The whole point of human tools, including AI, is to extend our powers by helping us do a job better than we could do before.
We can transport human beings through the air - at scale - at speeds approaching the speed of sound. To someone living 500 years ago, such a feat would have seemed magical and God-like. Would that imply we should worship airplanes or call aerodynamics witchcraft? I would hope not. Likewise with the feats of AI logic, intelligence, knowledge and creativity, even when they are above and beyond human capability. An AI that can compose is a great AI, not a great human composer.
Stochastic Parrots and the Agency of AI
The other challenge in Fridman’s tweet is the word “demand,” as in: “AI … will demand to have equal rights with humans.” Nonsense. Such a scenario is the stuff of science fiction, not fact, and my basic problem with it is that AI only has the agency that we program it to have.
Grappling with the immense and novel power of AI makes it hard to separate raw intellectual power from intention and agency. AI-powered tools make coding and writing faster and easier. We do not confuse a database that can memorize a billion facts with a human, but we find it easier to confuse an AI chatbot that expresses itself at a human level with one.
AIs have gone from far below us in their ability to write and code just a few years ago to being useful ‘co-pilots’ today. Within a few years, they will likely become orders of magnitude faster than humans at these tasks, yet still not require the human-like features of emotion, self-awareness or consciousness to do them. When they are as far above us as they once were below us, would we think them any more human?
Most of the imputation of human consciousness to these AIs is a bit of a parlor trick, like a parrot reciting Shakespeare. An LLM can express human emotions through language quite well because, in its training, it has read the equivalent of literally millions of books of humans expressing emotions and sharing thoughts.
If AI hasn’t already formally passed the Turing test, it soon will, using this capability to fool any human conversationalist behind a screen of text. Such a conversing AI can make the kind of statements a human makes because it knows exactly what thousands of conversations sound (or read) like. That powerful LLM knows everything about how to express itself through the lens of having been trained on Dickens, Twain and Jane Austen and a million other books, articles, Reddit comments, and Stack Overflow posts.
Somewhere between ‘stochastic parrot’ and ‘full world-view and understanding’ lies the real truth of what current LLMs have embedded in them. They may even have ‘sparks of AGI’ and surprising emergent capabilities. However, nowhere on this continuum is agency: these models have no will or desire to be or to know more than they do. They just are what they were programmed to do.
So to sum up: AI, a form of intelligence far different from our animal kin the elephants, does not and will not have sentience, let alone have it ‘at scale’. AI will never have the sentience, agency and will to actually want, let alone ‘demand’, something like ‘human rights’. Alignment issues aside, AI will be trained, as domesticated animals have been, to do useful jobs for us - no more, no less.
The Trajectory of Human Civilization with AI
We are missing the real story of AI’s impact if we dwell on near-impossible, far-off science-fiction scenarios and fail to realize that AI “will dramatically change the trajectory of human civilization” right now, in many mundane ways, without having any sentience or agency and without being any threat to our sense of humanity whatsoever.
By way of analogy, this graphic was posted recently on Twitter, showing the arrival of the automobile in the space of 13 years from 1900 to 1913.
Think of AI models as those automobiles. A year ago, ChatGPT did not exist; within two months of its launch, an estimated 100 million people had tried it. Soon, AI will be everywhere - in your apps, in your customer service, in your browser, in your work, in our science. It will even be in our cars, displacing drivers. Technology moves much faster today, so it won’t take 13 years. AI changes everything.
The upside and the real risks of AI lie much more in the mundane, ordinary, non-extreme applications and impacts of AI. Some will object to my “AI will do what AI is trained to do” perspective by pointing to AI alignment problems. Could an AI break free like a wild animal or a runaway train and cause damage? Yes, but such risks are manageable risks of mechanical, algorithmic or pilot error; they don’t pose existential risks to humanity. Planes crash, and so will AI; but we’ll survive both.
The ‘existential threat from AI’ claims make for great hype-driven headlines but are based on implausible-to-impossible scenarios. They misstate the real risks; the only caveat is that the future is so unknowable that it’s hard to quantify any risk scenario. Real AI safety risks are far more mundane, less extreme, and a billion times more likely. Real AI impacts and problems with probability close to one include:
Some form of AI will eventually be able to do any human task, from juggling to discovering new math theorems. “Well, humans can do this thing that AI can’t, so we aren’t at risk of being obsoleted” will become famous last words.
Future generations might lose the ability to write well and to be self-disciplined intellectual workers, if such activity loses its economic value and if our education system can’t or won’t teach such independent, non-AI-aided skills.
Job dislocation from ever-accelerating technology shifts will create massive changes in job demand, in turn creating new classes of economic winners and losers.
We will see both a rise in creativity, with AI-assisted art, music and film, and cultural losses, as art and human culture are diluted by AI creations. If video killed the radio star, AI may kill the Hollywood star.
Malicious users of AI will employ it for spreading misinformation, hacking, phishing attacks, identity theft, and more sophisticated forms of cybercrime than ever.
AI will be used in warfare. Correction: AI already is being used in warfare. It will get more pervasive.
The end of work? Farming productivity is such that 3% of us do the work that feeds the rest of us. AI-enhanced factory and service-sector productivity may get to be so extreme that only a fraction of the population will need to work. So what then? Will we become like squirrels in National Parks, as Ben Goertzel suggests?
On the last item, I doubt we will run out of jobs, but job dislocation will be immense. Again, we don’t need AGI or super-human AI to have any of these consequences. Even today’s AI technologies will change the world, and AI is not standing still.
These are all huge challenges, but none are “existential threat” scenarios by any means. However, since calm professionalism doesn't generate the clicks and attention fear-mongering headlines do, we can expect more of the latter from pundits and prognosticators.