Prometheus Unbound
Three things coincided in the past 24 hours: a visit to Rockefeller Center in NYC, reading the Stanford 2023 AI Index report, and reading this remarkable off-kilter tweet by Chloe21e8, a narrative on the inevitability of AI supremacy birthed by Capitalism:
4.Singularity 1.0 already occurred with the arrival of Capitalism as a parasite on unsuspecting man, Kubrick’s monolith from the sky, zero to one, a superdarwinian ratchet propelling us on-rails towards our Long Singularity’s final act. The stageplay began millennia ago, and you’re strapped to chairs with no intermission—where you’ll be left when the curtains close and the lights go off.
I called the screed “The Whig Theory of History meets Kurzweil’s Singularity.” I can’t tell if the Twitter account is satire, a meta-level AI experiment, or both, but a narrative that asserts Capitalism as a “thermodynamic God” brings back echoes of Prometheus.
Prometheus is the thermodynamic God. The Titan god of fire in Greek mythology, Prometheus is said to have defied the Olympian gods by stealing fire from them and giving it to humanity in the form of technology, birthing human knowledge and civilization.
Capital in its basic form is stored labor, which in its useful form takes the shape of tools. Mankind became mankind by using tools, and the first invention that crossed the threshold into human technology was fire. It was perhaps the trinity of fire, language, and stone tools that made Homo sapiens mankind.
Every invention builds on what came before, an elaboration and extension. Looking back from the mountaintop of progress, it seems quite inevitable that whatever has been discovered or invented would, sooner or later, have been, but that’s post hoc rationalization.
And what of the future? The logical conclusion of the inevitability of progress is that it will continue either toward infinity forever or toward some endpoint, or both. Hence the Singularity: when the curve of technology goes vertical.
What the Singularity Is and Is Not
While AGI has been associated with the Singularity, reaching AGI and reaching the Singularity are not the same thing.
AGI is artificial intelligence that is human-capable as a technology. I have already expressed my belief that AGI will arrive within 6 years, by 2029. It will happen sooner than many expect.
The Singularity is when the cost of that human-level intelligence reaches a price point of zero, making infinite technology expansion possible. AGI will happen first, and then it will go on to become pervasive, cheap, and universal.
Before we reach zero, we will see the price of intelligence plummet, as it is already doing on an exponential curve. We are in the pre-Singularity exponential curve right now, dealing with increasing waves of future shock. AGI will be the final capping technology that takes us hurtling towards the Singularity at an ever-faster pace.
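To make the shape of that curve concrete, here is a minimal sketch in Python. The starting cost and halving time are purely illustrative assumptions, not measured figures; the point is only how quickly an exponentially falling cost approaches zero.

```python
# Illustrative sketch only: the *shape* of an exponentially falling cost of
# intelligence. Starting cost and halving time are made-up assumptions.

def cost_of_intelligence(years: float, start_cost: float = 1.0,
                         halving_time: float = 1.5) -> float:
    """Cost after `years`, if it halves every `halving_time` years."""
    return start_cost * 0.5 ** (years / halving_time)

for year in range(0, 11, 2):
    print(f"year {year:2d}: {cost_of_intelligence(year):.4f} of today's cost")
```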
The Rise Of LLMs
In just the past few years, a revolution has occurred in artificial intelligence with the rise of large language models. Leaders like OpenAI's Greg Brockman and Ilya Sutskever, Google's Jeff Dean and Demis Hassabis, and Anthropic's Dario Amodei have led their teams to pioneer new techniques to train neural networks on massive datasets to generate human-like language. Using large clusters of computers and algorithms that can understand patterns in huge amounts of text data, these researchers have built models like GPT-3 and others that can produce paragraphs of coherent prose on almost any topic.
The techniques behind these LLMs are not radically new; rather, the researchers were able to apply them at an unprecedented scale. Trained on datasets of tens of terabytes of internet text, these models have learned the complexities and nuances of language in a generalized way. The more data they ingest, the more powerful their language generation becomes.
These models have become more powerful as they become more general, thanks to "transfer learning": they can apply knowledge learned in one domain to new areas, much as humans do.
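As an illustration of what transfer learning looks like in practice, here is a minimal sketch using the Hugging Face transformers library. The model name, the two-sentence toy dataset, and the labels are assumptions for illustration only; the idea is that a model pretrained on general text is reused for a new domain, with only a small classification head trained from scratch.

```python
# Minimal transfer-learning sketch. Assumptions: model name, toy dataset,
# labels. A model pretrained on general web text is adapted to a new task.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2  # new head for the new domain
)

texts = ["The reactor output is stable.", "The reactor output is fluctuating."]
labels = torch.tensor([0, 1])  # toy domain-specific labels

batch = tokenizer(texts, padding=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One illustrative gradient step: pretrained weights adapt to the new domain.
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
print(f"loss after one step: {outputs.loss.item():.3f}")
```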
Today, large language models can generate persuasive essays, coherent works of fiction, sophisticated code, and more. They can also produce content that is nonsensical, false, or misleading, but we know how to teach them the difference so they become grounded.
We also know how to extend LLMs to become multi-modal, true foundation models spanning many domains and data types. We can connect these foundation models together to create a collective intelligence that can solve more complex and varied problems than a single LLM prompt-and-reply invocation can. We also know how to build on foundation-model ecosystems to create AI agents that can seemingly do anything autonomously. All of these will be elements of AI’s next act, to play out in the coming years.
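As a sketch of what connecting models into an agent could look like, here is a minimal, hypothetical control loop in Python. The call_model function is a stand-in for whatever LLM API one might use; nothing here is a real vendor API, only the shape of the plan-act-observe cycle.

```python
# Hypothetical sketch of an agent loop built on top of a foundation model.
# `call_model` is a placeholder for a real LLM call, not an actual API.
from typing import Callable

def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call (hosted API, local model, etc.)."""
    return f"[model response to: {prompt[:40]}...]"

def run_agent(goal: str, call: Callable[[str], str], max_steps: int = 5) -> str:
    """Plan, act, observe, repeat: several model calls chained into one task."""
    scratchpad = f"Goal: {goal}\n"
    for step in range(max_steps):
        plan = call(f"{scratchpad}\nWhat is the next single sub-task?")
        result = call(f"Do this sub-task and report the result: {plan}")
        scratchpad += f"Step {step}: {plan} -> {result}\n"
        if "DONE" in result:
            break
    return call(f"{scratchpad}\nSummarize the final answer.")

print(run_agent("Draft a one-page market summary", call_model))
```

The point is the control flow: multiple prompt-and-reply invocations, stitched together with shared memory, can tackle tasks that no single call can.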
AI Escape Velocity
The researchers, leaders and institutions that have gotten this far are not quitting now. If anything, everyone is doubling down on AI acceleration. AI has achieved a kind of ‘escape velocity’ where:
AI as a business opportunity, and competition among firms, drives AI progress forward.
AI itself is such a powerful information technology that technological progress in AI (and elsewhere) will accelerate.
There are too many players and competitors for any monopolistic institution to slow it down. Even government regulation won’t stymie progress too much.
The Stanford AI Index report for 2023 is replete with ‘up and to the right’ charts: more AI publications; more AI research citations; more models; more parameters in models; more industry activity; more people in the field. The massive adoption of ChatGPT, which reached more than 100 million users within two months of launch, was a ‘hockey stick’ moment for AI, a technology inflection point like the one the Internet had in 1995.
The Race to AGI
This problem of navigating intelligently in the real world, and doing it safely, seems to be the hardest problem in AI right now. Self-driving cars turned out to be harder than some AI enthusiasts thought 10 years ago; they have been stuck on the threshold of being reliable enough for some years and are still not there yet. The release of PaLM-E by Google last month is a sign that multi-modal embodied AI foundation models are not far behind pure LLMs in being able to scale.
When we see AGI, I believe we will see it in the form of embodied intelligence, not in pure LLMs.
We are already close to LLMs capable of passing the Turing test, but as AIs pass it, the test looks more like a stochastic-parrot parlor trick than an adequate vetting of what AGI means.
We can move the goalposts, but we can only move them so far. When an AI-enabled robot can engage in any physical or mental task that a human can do, and do it autonomously while understanding and interacting with a real-world environment, then we have an AI that takes in the full range of reasoning, sensing, thinking, and action that humans go through. That’s AGI.
Such a technology is not far away: make a multi-modal (visual, audio, tactile, language) multi-trillion-parameter model, like a next-generation PaLM-E, and build a full iterative agent architecture on top of it so it can operate autonomously. It will have full environment awareness while passing the Turing test and more.
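A rough sketch of that kind of iterative agent architecture is a sense-think-act loop like the one below. Every function in it (get_observation, multimodal_model, execute) is a hypothetical placeholder rather than an existing API.

```python
# Hypothetical sense-think-act loop for an embodied agent built on a
# multimodal foundation model. All functions are placeholders/assumptions.
import time

def get_observation() -> dict:
    """Placeholder: grab camera, audio, and proprioception readings."""
    return {"image": b"...", "audio": b"...", "joint_state": [0.0, 0.1]}

def multimodal_model(observation: dict, goal: str, memory: list[str]) -> str:
    """Placeholder for a PaLM-E-style model mapping sensors + goal to an action."""
    return "move_arm(0.2, 0.1)"

def execute(action: str) -> None:
    """Placeholder: send the chosen action to the robot's actuators."""
    print(f"executing: {action}")

def embodied_agent(goal: str, steps: int = 3) -> None:
    memory: list[str] = []
    for _ in range(steps):
        obs = get_observation()                        # sense
        action = multimodal_model(obs, goal, memory)   # think
        execute(action)                                # act
        memory.append(action)                          # remember what was done
        time.sleep(0.1)

embodied_agent("tidy the desk")
```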
What does AI mean for humanity as a whole?
Current and near-term AI can do many things: Improve search engines, translate between languages, diagnose diseases, generate educational content and business reports, read and analyze any document or piece of code, write poems and songs, make images and videos, and much more. These capabilities will ripple through and impact every industry.
AI’s impact on technology is itself accelerating technology change. AI’s ability to write software is driving the marginal cost of new apps to zero. AI assists in drug discovery, chemistry, physics, and any other scientific endeavor, especially where it can help with simulation or with generating new ideas from large bodies of text. Product design and innovation are accelerated by AI’s new ability to generate 3D renderings via natural-language interfaces.
Many companies are betting that AI techniques will give them a competitive advantage, and that advantage is too great for any company or professional worker to pass up. Get on board or get left behind. That’s why the rush to get on board is upon us; FOMO is real. That too will fuel yet more innovation.
Some things will change, but life may not look as different as some might think. We will still need to live in the same kinds of houses, eat food, wear clothes.
Or will we? Maybe not now, but soon, robot tailors and cooks might change what we wear and eat. AGI-powered robot janitors will help us fix things around the house, and an AI-powered robot butler, janitor, and grounds-keeper will keep the homes of the well-to-do in the kind of exceptional care once reserved for an English Lord.
Those who are well-off can afford their own, while the less well-off might rent robots that arrive via AI taxi to do house-cleaning and chores. As in the home, so too on the factory floor, where already highly automated factories will become completely so. Every repetitive job will be automated and assigned to AI. AI will serve as a co-pilot for CxO positions and might even fill the whole role.
It’s a myth that blue-collar labor will be exempt from AI’s impact. Physical-labor fields are exempt for now only because the most advanced AI is not embodied; it cannot control its environment. Being a lawyer is easier for AI than being an electrician, but both will be automated eventually.
I have also mentioned previously that we don’t need AGI for AI to change the world. What matters more than AGI is the total cost of (high-quality) intelligence, intellectual creation, and insight. That cost is plummeting and will continue to fall. The continued advance of AI will not drive a single step change but a cascade of changes, bringing ongoing economic and social disruption before, during, and after AGI.
While technology change has been accelerating up to this point and shows little sign of letting up, the friction of social adaptation to technology may cause technology change to slow from exponential increase. It might reach some terminal velocity where the rate of change stays the same.
We could just as well cling to ‘old ways’ out of choice, habit or cultural self-preservation. We may get to a point where society will not be able to absorb technology as quickly as it can be generated. That will not stop change, only slow and channel it.
Since nothing is literally free, we can expect pre-Singularity technology acceleration so rapid that it comes close enough to feel like the Singularity to us.
Here’s the standard save-humanity disclaimer: we must ensure this progress benefits humanity as a whole, not just a few, and that we’re thoughtful about the impact of these intelligent machines on jobs, privacy, and security. We just don’t know what that means. We cannot see through the Singularity vortex.
AI progress will accelerate. AGI is near. The Singularity is inevitable and will follow. Prometheus is unbound.