On Tuesday, a Senate Judiciary Subcommittee held a hearing on AI oversight and regulation, titled ‘Oversight of A.I.: Rules for Artificial Intelligence.’ This hearing brought OpenAI CEO Sam Altman to Washington to testify, and it put AI regulation on the table for Congress.
To kick off the hearing, Senator Richard Blumenthal played an AI-generated speech that convincingly imitated him. He used ChatGPT to write a one-minute speech and voice-cloning software to have an AI-Blumenthal deliver it. AI truly has arrived in DC.
The hearing showed a consensus on AI's importance and potential impact, and a bipartisan desire to find ways to grapple with AI’s risks. Democratic Senator Blumenthal stated, “We want to avoid mistakes that were committed on social media,” signaling a preference for government oversight over letting technology companies avoid liability for technology risks. He even went so far as to say that an AI-dominated future "is not necessarily the future that we want".
Republican Senator Hawley’s opening remarks framed this AI leap as eclipsing the invention of the internet in importance, a sign that the breakthrough’s impact is becoming more widely recognized.
We could be looking at one of the most significant technological innovations in human history. My question is what kind of innovation is it going to be? Will it be like the printing press … or like the atom bomb? The answer has not yet been written. … Will we strike that balance between technological innovation and our ethical and moral responsibility to humanity, to liberty, to the freedom of this country? - Senator Josh Hawley
OpenAI CEO Sam Altman testifies
OpenAI CEO Sam Altman was up-front about the need to mitigate AI risks. While saying “We believe it can be a printing press moment” for AI, he sees many risks in the technology and is in favor of government regulation.
“I think if this technology goes wrong, it can go quite wrong…we want to be vocal about that. But we have been clear-eyed at mitigating those risks. … We want to work with the government to prevent that from happening.” - Sam Altman
Altman acknowledged that people are anxious about how AI could change the way we live and said that government intervention will be critical. Here are some key takeaways from his testimony across a number of AI issues and risks:
Altman supports government licensing of AI models:
Altman proposed having a “combination of licensing and testing requirements” that could be applied to the “development and release of AI models above a threshold of capabilities.” When asked about independent testing labs for AI models, Altman agreed, saying “Independent audits are important. A lot of disclosures about inaccuracies would be valuable.”
Gary Marcus, also testifying, went further in suggesting a CERN-like international body for AI Safety, and clinical-type trials by independent auditors for AI models.
Altman emphasized OpenAI’s work on AI safety
As OpenAI CEO, Altman has always paired his expression of concerns over AI Safety with touting OpenAI’s record on it. He mentioned GPT-4’s six months of testing and the red-teaming done for AI Safety.
AI manipulation of opinion and voters at scale is an “area of greatest concern”
When Senator Hawley asked about AI faking and targeted misinformation to influence voters, Altman said it was a top concern. “We’re going to face an election next year, and these models are getting better.” Altman described AI fakery as “like Photoshop but on steroids,” and he suggested we should try to “make clear whether people are talking to an AI.”
Gary Marcus raised the issue of data transparency on this point: “Systems can manipulate. We need transparency about [what data an AI model is trained on].” It has been shown that interacting with opinionated ChatGPT models can shift people’s opinions. Just as Google search-engine bias is insidious because it is invisible, AI bias is insidious when it is not exposed for what it is.
AI’s impact on jobs
Senator Blumenthal described AI’s impact on jobs as his “biggest nightmare” with AI. Altman acknowledged the effect AI could have on the economy, including the likelihood that AI will replace some jobs, but was more cautiously optimistic. He pointed out that AI’s impact on jobs is difficult to predict, that many jobs will get better, and that AI is “good at tasks but not jobs.” He added, “I am optimistic jobs will get better” in the AI future.
Altman was challenged on copyright and compensation issues
Republican Senator Blackburn of Tennessee asked about copyrighted songs, describing how OpenAI’s Jukebox was able to create a Garth Brooks sound-alike song for her.
Who owns the rights to that AI-generated technology, based on the works of recording artists? How do you compensate the artist? If it takes a part of a Garth Brooks song to create a song, there should be compensation to that artist.
When the Senator explicitly asked for a promise not to train AI models on copyrighted works, Altman was vague and ducked the question. He agreed that content creators deserve control over how their creations are used, but didn’t have a specific answer for how OpenAI is ensuring that. This is a huge legal sticking point for generative AI, since it all rests on prior human artistic and intellectual creation.
OpenAI and Microsoft could benefit from AI regulation
While Sam Altman’s pro-regulation stance may sound altruistic and certainly tickles the ears of politicians eager to act, the real effect of government regulation has historically been to benefit incumbent players in an industry. Licensing and regulation of models could close the gate on open-source models and other free-ranging developments that might challenge OpenAI and Microsoft.
Prospects for US Regulation of AI
The United States trails the EU in AI regulation; the EU has already moved forward with implementing AI safety rules. This hearing is a sign that’s about to change.
Both Republicans and Democrats are looking for a proactive approach to regulating the rise of AI. Most Senators feel that Section 230 liability protections let us down when it came to dealing with social media platform bias and misinformation, and they don’t want to repeat that mistake with fast-moving powerful AI technology.
The hearing testimony and comments from Senators showed the elements of what future Federal regulation of AI might look like:
Liability for AI harms: Senators from both parties are eager to make sure tech companies are held liable for AI harms, moving away from the Section 230 approach. Sam Altman said that Section 230 doesn’t apply to AI models, and he felt it was appropriate for AI model makers to be liable for AI risks and harms.
Transparency and ‘ingredient labeling’ on AI models: As Gary Marcus put it, “We need greater transparency about the data, how the models work, and access to what goes into those systems.” This could take the form of required public disclosure of the datasets used to train generative AI models.
Licensing bodies: As Sam Altman put it, “I would form a new agency that licenses any efforts above a certain scale of capabilities.” Safety review before deployment would be required, perhaps via independent audits. Senator Graham drew comparisons with the regulation of nuclear power, and others with the regulation of genetics research.
Copyright and intellectual property: Senator Blackburn and others highlighted the need for protections for content creators in generative AI. Regulators might forbid training AI models on copyrighted works without authorization and agreed compensation, which would have a wide impact on LLMs that have effectively been trained by scraping the entire internet.
Misinformation concerns: Legislators have every reason to be worried about the potential impact of deep fake videos and audio, as they themselves might be the target of the next AI deep fake manipulation or misinformation attack. It’s unclear how to handle this. Most ‘deep fake’ images and voice cloning are made with home-brew local open-source models, not GPT-4.
Global bodies: It’s expensive to tailor models to different nations’ rules, and the EU is already regulating, so there is corporate support for global regulatory coordination. CERN and the IAEA were raised as potential models to follow.