Biden Kicks Off AI Regulation
Biden Administration signs sweeping executive order on Artificial Intelligence
Biden’s Announcement
On Monday, the Biden White House publicly announced an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. President Biden himself spoke on the matter from the White House, with members of Congress and industry leaders in attendance, highlighting the importance of AI to this administration.
We face a genuine inflection point in history. One where the decisions we make in the very near term are going to set the course for the next decades. … As Artificial Intelligence expands the boundaries of human possibility and tests the bounds of human understanding, this landmark executive order is a testament to what we stand for: safety, security, trust, openness.
- President Biden
Unpacking the Executive Order
The executive order is a sprawling, 20,000-word document, initiating regulations across many (19?) agencies and impacting many parts of the Federal bureaucracy. Before we dive in, an aside: the Federal Government has defined Artificial Intelligence:
The term “artificial intelligence” or “AI” has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
The regulations are set up with several key principles in mind: First and foremost, Artificial Intelligence must be safe and secure. Also, development of responsible AI requires a commitment to supporting American workers, advancing equity and civil rights, and protecting Americans’ privacy and civil liberties. This is clearly a ‘safety first’ approach.
“The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected.”
Since AI potentially touches many issues and considerations, the Biden administration’s approach is to initiate a response in AI reporting, regulation, and funding for collaboration and innovation across a number of agencies.
AI Safety:
The biggest regulatory impact comes from the “AI must be safe and secure” directive. To meet this goal, they propose standards for AI Safety and Security that require developers of the most powerful AI systems to share their safety test results and other critical information with the U.S. government. The National Institute of Standards and Technology will set the standards for extensive red-team testing of AI models to ensure safety before public release. DHS will set up an AI Safety and Security Board.
They establish criteria, based on compute capacity, for which large AI models fall under the reporting requirements; in practice, this would be the next generation of AI models beyond GPT-4:
“any model that was trained using a quantity of computing power greater than 10²⁶ integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10²³ integer or floating-point operations”
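To make the threshold concrete, here is a minimal illustrative sketch (not an official tool) that checks whether a training run would cross the order’s reporting thresholds. The GPT-4 training-compute figure used in the example is an outside estimate, not an official number:

```python
# Reporting thresholds from the executive order's text:
GENERAL_THRESHOLD_FLOPS = 1e26  # 10^26 operations for any model
BIO_THRESHOLD_FLOPS = 1e23      # 10^23 operations if trained primarily on biological sequence data


def requires_reporting(training_flops: float, primarily_bio_data: bool = False) -> bool:
    """Return True if a training run exceeds the order's compute threshold."""
    threshold = BIO_THRESHOLD_FLOPS if primarily_bio_data else GENERAL_THRESHOLD_FLOPS
    return training_flops > threshold


# GPT-4-scale training (~2e25 FLOPs by outside estimates) stays under the bar:
print(requires_reporting(2e25))                           # False
# A 10x larger run would trigger reporting:
print(requires_reporting(2e26))                           # True
# The bar is far lower for models trained on biological sequence data:
print(requires_reporting(5e23, primarily_bio_data=True))  # True
```

The two-tier structure means a biological-sequence model triggers reporting at one-thousandth the compute of a general-purpose model.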
More AI Safety measures:
Developing strong new standards for biological synthesis screening to protect against the risks of using AI to engineer dangerous biological materials.
Establishing standards and best practices for detecting AI-generated content and authenticating official content.
Establishing an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.
Privacy Regulations:
The order calls on Congress to pass bipartisan data privacy legislation, in particular protecting kids, and directs federal funding for privacy-preserving research and technologies to accelerate the development and use of privacy-preserving techniques.
Further, it calls for developing guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems, and seeks to strengthen privacy guidance for federal agencies to account for AI risks.
Advancing Equity and Civil Rights:
In this area, they are looking to address ‘algorithmic discrimination,’ and to that end, they direct agencies to “Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination.” They will also develop best practices for the use of AI in law enforcement and the justice system.
Workers, Consumers, Patients, and Students:
They seek to advance the responsible use of AI in healthcare, and they direct the Department of Health and Human Services to establish a safety program to report on and remedy unsafe healthcare practices involving AI.
The Biden administration addresses the job-displacement risks of AI with proposals to invest in workforce training and development “accessible to all,” report on AI impacts, and develop best practices. Much of it, such as assuring the “ability to bargain collectively,” is aimed at advancing pre-existing agendas more than meeting the specific challenges of AI.
For students, they call for creating resources to support educators deploying AI-enabled educational tools.
AI Innovation, Competition, and US Leadership:
Overall, the order promotes responsible AI innovation through funding AI-related education, training, development, research, and capacity. They will pilot the National AI Research Resource to aid AI researchers, and expand grants for AI research.
They want the US Government itself to deploy AI effectively and to modernize Federal AI infrastructure. So they are issuing guidance for Government use of AI and will help agencies acquire AI products.
They are also calling for AI hires in the Federal Government, staffing up the people who will help implement the AI agenda across the various agencies. And they are using this as an opportunity to expand high-skill immigration.
Finally, they are doing this in the context of global coordination and recognition of the work in other countries, calling for international cooperation and AI standards.
Issues
Peel away the boilerplate, and there are multiple components at play. My takeaways:
Sprawling: At the top level, this “all of Government” approach has led to a 20,000-word Executive Order. A ‘regulate-first’ bias leads to a ‘kitchen sink’ result.
Opportunistic: Some elements, such as on jobs, immigration, and civil rights, are grafting pre-existing policies into the context of AI.
Bureaucratic: Other elements are AI-specific but bureaucracy generic: Directing more reports, more funding, more panels and standards. At one level, this is harmless, but at another level, it incentivizes worry, feeding an industry.
Heavy-handed: The main regulatory ‘bite’ in this order is the frontier AI model reporting requirements. It requires that companies prove AI is safe before it is allowed to be used, invoking the Defense Production Act, a law intended as a wartime power. Companies “must notify the federal government when training the model, and must share the results of all red-team safety tests.”
What other technology is regulated so harshly on a ‘guilty until proven innocent’ basis? This is information technology, not a chemical-biological weapon. Do we know what the risks and benefits of AI are? Biden’s Commerce Secretary Gina Raimondo was asked that question in a CNBC interview, and admitted “We don’t know.”
Other Reactions
Reactions have ranged from supportive to highly critical. Former President Obama predictably praised it.
Some Big Tech companies have been urging AI regulation, and some in Big Tech have been supportive. But Joseph Nelson, CEO of Roboflow, said “Skepticism is warranted.”
Academic reactions, per Scientific American: “Biden’s Executive Order on AI Is a Good Start, Experts Say, but Not Enough.”
“Artificial intelligence needs to be governed because of its power,” says Emory University School of Law professor Ifeoma Ajunwa, who researches ethical AI. “AI tools,” she adds, “can be wielded in ways that can have disastrous consequences for society.”
Matt Laslo of Wired says it sounds scary but lacks bite: “Joe Biden’s new executive order is billed as the biggest governmental AI plan ever. Unless he can convince a dysfunctional US Congress and overseas rivals to play along, its effects will be limited.”
One reaction, in “Now comes the hard part,” expresses skepticism that the Biden administration has the expertise to move quickly on AI:
“The EO is a little bit paradoxical in that it gives pretty aggressive timelines for agencies to craft this guidance, [while] at the same time acknowledging that they don’t have enough expertise in the government to address AI,” Parker told FedScoop.
Steven Sinofsky says Regulating AI by Executive Order is the Real AI Risk:
The President’s Executive Order on Artificial Intelligence is a premature and pessimistic political solution to unknown technical problems and a clear case of regulatory capture at a time when the world would be best served by optimism and innovation
He points out, correctly, that a risk-averse approach to prior technology revolutions would have stunted development and deprived us of many technology advances.
James Pethokoukis writes in an op-ed that Biden threatens to stifle US tech innovation with a too hasty AI power grab:
Without a firm understanding of possible harms, we shouldn’t risk slowing a technology with vast potential to make America richer, healthier, more militarily secure and more capable of dealing with problems such as climate change and future pandemics.
… the order suggests nothing more than a wholesale abandonment of the light-regulatory approach toward American digital markets that created a world where all the most important Internet players are American companies.
The reaction from some e/acc accelerationists, including Marc Andreessen, had a Texas flair to it, protesting the Government poking its nose into AI models and compute:
The context for non-Texans: it’s a play on the Gonzales battle flag from the Texas War of Independence.
Conclusions
AI is history in fast-forward. Less than a year ago, ChatGPT stunned the world and kick-started massive AI hype. That led to claims of imminent AI takeover, and calls for a pause, amping up a ‘moral panic’ over AI Safety. That led to political leaders leaning in on the “do something” chorus. So here we are, only 11 months after ChatGPT, and the politicians did something - the first U.S. AI regulations have landed.
It’s good that AI Safety is on the radar. It’s good to fund efforts in both innovation and understanding how to develop safe and secure AI. The problem comes in demanding reporting and proof of AI Safety. The order talks of “responsible AI” and “AI Safety” but we don’t know what that means yet.
Had Biden simply funded an AI Safety and Alignment lab, putting money into the kind of effort OpenAI recently announced, we’d be better off. We’d learn first, act next. That would be more thoughtful than having the Commerce Department keep track of our compute or demanding red-team reporting.
We don’t know whether these requirements will prove useless or will lead to the unintended consequences of heavy regulation. With such uncertainty about AI capabilities, a safety-first approach may stunt or distort the path of innovation. We don’t know exactly, but the historically likely outcome in this situation favors large incumbents, who will benefit from a regulatory burden that newcomers and startups can less afford to meet. It will also likely fail to address all the real AI safety risks.
Imposing such a heavy-handed AI regulatory regime carries risks that may be greater than the harms it avoids: stifling innovation, and handing permanent advantage to the bigger incumbent players in AI. Stifling the very AI development that could solve AI safety, security, and alignment challenges may be self-defeating, harming progress in AI Safety itself.
AI developing faster than Government regulations can keep up with is a feature not a bug. So many benefits await us from AI, so let’s not kill the AI revolution in the crib. A ‘light touch’ is better.