I was met with the news this morning that Elon Musk, Steve Wozniak, and many AI researchers, professors, and AI business and thought leaders have signed a letter calling for a 6-month pause on powerful AI development.
Posted on FutureOfLife.org, the letter “Pause Giant AI Experiments: An Open Letter” outlines several concerns with the current pace of AI development, which it describes as “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” The signatories follow up these concerns with a call for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
They say:
Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?
These are great questions and raise legitimate concerns. Let’s address them:
Propaganda and untruth: Such things were and are already possible, but the cost of generating words is plummeting and faking being human is easier than ever. Agitprop from Turing-passing AIs programmed to misinform is a real possibility. We need ways to detect AI-generated “deep fakes” and other forms of misinformation. AI is a technology that can be used or misused, but it is not unique in that regard, and even pre-existing weaker forms of AI can and will be misused.
Jobs - “automate away all the jobs”: As I have noted in prior articles, the GPT-4-level AI we already have is good enough to cause massive disruption to jobs and industries. Technology is now advancing faster than careers can adapt, and that mismatch will be deeply disruptive. Job dislocation is perhaps the biggest negative consequence of AI we can expect to see in the coming decade. Many of the jobs disrupted will be professional ones, not menial work like fast-food or retail checkout. This doesn’t even require super-intelligent AI, just cheap-enough AI to displace human intellectual work.
“Nonhuman minds that might eventually outnumber, outsmart”: AI is the culmination of information technologies. Smartphones, PCs, and IoT sensors already outnumber us, and data centers hold vastly more data than all of humanity knows. We haven’t worried about that. Should we worry that AI insights-on-demand will be super-cheap and abundant? So long as it is serving us well, no. We will need to think of AI as a very advanced tool, not a competitor.
It’s breathless hyperbole to speak of “obsolete and replace us” as if AI will up and decide to ‘replace’ humanity. It can’t and it won’t do anything like that; AI is a tool and remains a tool, albeit an incredibly powerful one, with no agency of its own. Some AI Worriers claim that “AI Could Defeat All Of Us Combined,” but I don’t agree with them. Those scenarios are far-off science fiction and are not grounded in the reality of current or near-future AI technology.
Their concern seems to be that with the ‘competitive race’ going on, with OpenAI and Microsoft forging ahead with GPT-4-based AI and beyond, things will get out of control.
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
My first reaction to this? “Well that’s not going to happen.”
A moratorium is likely to be impractical, futile, and unenforceable. Such an effort reminds me of “15 days to stop the spread” in March 2020 to fight Covid-19. It wasn’t 15 days and it didn’t really stop the spread.
The Open Letter really wants “at least” 6 months and also says, “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” Who decides that confidence level? These are subjective matters likely to devolve into political arguments over “Safety vs Progress” between AI Enthusiasts and AI Worriers (see below).
OpenAI’s Planning for AGI
This Pause Open Letter is in many ways a response to OpenAI’s “Planning for AGI and beyond”, written just last month by Sam Altman and describing the caution OpenAI intends to exercise as it progresses further in AI. In it, he made clear that OpenAI would lean more into AI safety and put the brakes on more powerful AI: go slower and more deliberately. But he also said:
Importantly, we think we often have to make progress on AI safety and capabilities together. It’s a false dichotomy to talk about them separately; they are correlated in many ways. Our best safety work has come from working with our most capable models. That said, it’s important that the ratio of safety progress to capability progress increases.
His point is that training more powerful LLMs might not be bad, if the end result is higher quality, more aligned, and better able to satisfy AI safety concerns. So maybe a moratorium is the wrong approach; maybe ‘safety-first AI’ is the right approach.
We think a slower takeoff is easier to make safe, and coordination among AGI efforts to slow down at critical junctures will likely be important (even in a world where we don’t need to do this to solve technical alignment problems, slowing down may be important to give society enough time to adapt).
It’s hard to have ‘coordination’ without openness, and it’s hard to have that when OpenAI is closing off information about its models due to “competitive concerns.”
OpenAI also mentions in a footnote how “we were wrong in our original thinking about openness, and have pivoted from thinking we should release everything … to thinking that we should figure out how to safely share access to and benefits of the systems.” This is the “Trust me, bro” version of AI safety. Open-source software is safer than closed-source, open-source AI research has democratized progress, and opening up the data sources gives us confidence that there aren’t skeletons lurking in the black-box Foundation Model.
So where does that leave us? In Formula 1, they wave the yellow flag at the drivers when there is a crash or other hazard on the track, telling them to slow down. Elon Musk and the Pause Open Letter co-signers have thrown a yellow flag on the race track. They want this:
AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
These are noble and correct goals. Every AI maker should commit to following these kinds of principles in what they develop.
I find their call to involve “policymakers” a dreary attempt to involve uninformed politicians, who will most likely end up doing the wrong thing for the wrong reasons at the wrong time … but that’s just my opinion. We shall see.
Prediction - GPT-4 Remains Best-in-class for 2 years
Prediction: The GPT-5 release could be as much as 2 years away. The Open Letter folks may get their wish in some sense. It doesn’t mean AI will slow down, it just means LLM releases might take a pause.
Here’s why: OpenAI is openly saying they will prioritize safety over speed in deployment. Microsoft wanted to push GPT-4 out the door so they could challenge Google in search; they have done that, and will be content to sit on that lead unless and until Google catches up. Google, meanwhile, is stuck in a vise, since chat-based search harms their core search business model - the innovator’s dilemma! Bard ain’t that good, yet. Other players, like Anthropic with Claude, explicitly want to be fast followers. So the incentives are not so much to race ahead to AGI as to keep things safe.
What might throw this prediction off is if Google gets very aggressive with trying to leapfrog OpenAI/Microsoft.
AI will still advance greatly near-term by leveraging GPT-4: creating the plug-in ecosystem and building apps and interfaces around GPT-4-level AI will be incredible progress and profound change on its own.
Postscript: The 4 camps of AI Attitudes
We are seeing different reactions to the rise of AI, and I believe you can group those reactions into 4 camps, based on how powerful you think AI will be and how positive you think the changes it brings will be.
AI Cynics - AI is over-hyped and really isn’t as great as others claim. In fact, some of those claims are a fraud. It can’t do X.
AI Enthusiasts - AI is the most powerful technology ever, and it will be a powerful force for good in the world. It will accelerate tech and lead to the Singularity soon.
AI Worriers - AI safety and alignment are serious concerns as AI gets better. AI will take over the world, and in doing so it poses a grave threat to humanity.
AI Min-Positivists - AI is just a tool, albeit a very useful tool. We ought not think AI will dramatically change or threaten humanity. We will adapt to it and improve with it, just as we have with other technologies. AI isn’t as special as some make it out to be.