AI Risks and AI Regulations
Less is More - Less AI regulation yields more AI innovation and progress
Executive Summary - TL;DR
There are several possible approaches to AI regulation, ranging from laissez-faire to outright bans on AI, with different levels of AI regulation in between.
The EU is taking a regulation-heavy approach. The EU Parliament has passed the AI Act, which regulates AI applications based on risks from their application and use, with high-risk uses subject to licensing.
The driver for AI regulation has been various perceived AI risks.
AI ‘misinformation’ concerns are subjective. Existential AI risks are not grounded in current AI reality. Neither are valid reasons to regulate AI.
AI Safety concerns are valid, but AI can also save lives. Technology improvements to AI and more safety built in can mitigate AI safety risks.
AI technology poses risks, but AI regulation poses risks as well.
The number one risk of AI regulation is that it stifles or stalls AI progress and in so doing, slows down the development of highly beneficial AI technology.
Our approach should be to support the freedom to build and use AI technologies.
Only intervene with limited regulation in cases of real existing AI harms, and encourage AI development progress that leads to better technology and human flourishing long-term.
EU Parliament passes the AI Act
Earlier in June, the EU Parliament passed a set of AI regulations called the AI Act. The goal of these regulations, billed as the world’s first comprehensive AI law, is to put ‘guardrails’ around AI. It is based on the fear that AI might create some harm.
The approach is to look at the application and the risk of harms from that application. As a result, they divided AI uses into various categories:
Unacceptable risk AI uses include: Social credit scores; facial recognition and bio-metrics in real-time in public spaces; cognitive behavioral manipulation of people. These uses are banned.
High-risk uses of AI include eight specific categories: biometric identification; managing critical infrastructure; education and training; employment or recruiting; access to essential private and public services and benefits; law enforcement; migration and border control management; legal interpretation and assistance. These AI systems will have to be registered in an EU database.
Also considered high-risk are AI systems with product safety concerns, that fall under the EU’s product safety legislation, such as medical devices, aviation, cars and toys.
Limited risk AI uses include spam filters and generative systems for images, audio, and video content. AI systems should comply with transparency requirements but are otherwise left unregulated.
These EU rules were first proposed in 2021, before ChatGPT and new generative AI burst on the scene, but the new AI played a role in the final version. In particular, the regulations propose these transparency and safety requirements for generative AI models:
Disclose when content is generated by AI.
Design AI models to prevent them from generating illegal content.
Publish summaries of copyrighted data used for training.
The EU Parliament is just one step in the process. The next step is for the EU Commission and the member states to finalize and pass laws, but EU leaders expect this to be complete this year.
AI Regulation Alternatives
This raises the question: Is this AI regulation needed and the right approach?
When considering the broad scope of regulation, you could consider alternative approaches depending on how much you lean in on Government regulation, from ‘do nothing’ to outright bans. And how much you lean in on AI regulation will depend on how much you buy into concerns about AI risks:
Outright bans on AI: The ‘pause AI for 6 months’ letter seemed to such such a high level of risk that we needed to stop AI development.
A regulation-heavy approach on AI: License and require permission and guardrails before deploying AI models and applications, taking an AI Safety first approach to ensure no unsafe uses arise (or are limited). The EU AI Act seems to be the model for this approach.
A ‘light tough’ approach: Pass minimal regulations that keeps boundaries on limiting existing, known real harms, but minimally regulates AI to encourage innovation. This is the approach the US Government took to the internet in the 1990s, and it was largely successful in promoting the blossoming of the internet.
Do nothing: The Laissez-Faire approach to AI is for Governments to not add any regulations at all. If AI is mostly a technology for good, then the status quo is sufficient. Current laws around copyrights, consumer safety, fraud, etc. can handle actual AI harms that arise.
The Risks Driving Calls for AI Regulation
Arguments in favor of each type of AI regulation have been based on perceptions and views on the risks of AI. The fears around AI have been:
Existential AI risk: “AI will become super-powerful and decide to kill us all.” There has been a lot of hype around this particular AI Doomer narrative. This hypothetical AI risk is not based on current and near-term realistic AI technology. Today’s AI lacks agency and sentience and is highly dependent on a tech stack we manage and control. AI in the next decade won’t be any different on that score. There is an ‘off switch’ for every computer system.
This hypothetical risk can be a discussion topic for futurists, but it is more science fiction fear than practical reality, and this should not influence decisions about regulating AI today.
Misinformation and AI Alignment: “We failed to regulate social media and bad things happened. Therefore we should regulate AI.” The concern is that AI can be a channel for spreading misinformation, bias, or other harmful statements.
AI alignment is about getting AI to align with our intentions and desires as users and also getting AI to align with lawful, higher human values. The latter is a subjective value judgment; even what constitutes ‘misinformation’ is debatable, as nobody has a monopoly on truth. The larger risk is imposing censorship on AI in the name of stopping perceived AI harms, thereby limiting liberty and diversity of thought. As Marc Andreessen has pointed out, “In short, don’t let the thought police suppress AI.”
Malicious abuse of AI: High-quality generative AI has led to the rise of deepfakes, more convincing AI-generative fraud, and new approaches to identity theft. Unfortunately, there are people who will want to use AI for offensive, immoral and illegal purposes.
As a powerful tool, AI may help people engage in illegal activity even if AI’s use is otherwise innocuous. AI’s improved email writing skill has been misused to make spam phishing attacks more sophisticated. There should be ways to prevent AI from being helpful in illegal endeavors, such as using guardrails or limits to prevent toxic and illegal output from AI models. However, it may be hard to shut it off completely when malicious users want to subvert those very limits on AI. In the end, illegal behavior remains illegal no matter what tools are used to engage in it.
AI Saves Lives
Another driving concern for regulating AI is AI Safety: When AI is used in critical systems, such as medicine or operating vehicles, safety can mean life and death. There have been instances where AI has killed people. Not by sci-fi robots, but self-driving AI car systems that are imperfect and make mistakes. Even though AI is getting better and is already much safer than human drivers, it can still result in many road deaths.
It’s important to reiterate that AI applications will likely save many real lives. Self-driving AI is safer than human drivers. AI in medicine will improve medical outcomes. AI inspection systems can improve food safety and industrial safety. Robots can replace humans in highly unsafe ‘dirty jobs’. The real story about AI Safety is that AI will help us live longer, healthier, less risky lives.
Because of this, we should both encourage ongoing AI development while ensuring these systems are as safe as possible. In the case of self-driving cars, this would start with getting all automakers to have the real-time reporting of car accidents and events to the NHTSA. As with aviation safety, full reporting and analysis of events could lead to a cycle of analysis and improvement in the safety profile of these systems.
It also makes other concerns about AI Safety and AI harms seem inconsequential. An AI model writing out something toxic may be a problem, but is important enough to be worthy of Government concern?
Transparency and Intellectual Property
AI is testing the limits of current copyright law and challenging notions of intellectual property rights. AI models, in particular LLMs, are trained on a vast array of human-generated text in academic papers, books, articles, internet message board discussions and more, much of it copyrighted.
Image generation AI models like stable diffusion is likewise trained on the images made by human artists; it can only imitate an artist’s style if the artist’s work was part of its dataset. Who owns the artwork generated by such an AI? It seems natural that the original author and artist has some claim on such a derivative work of art.
Things become more complicated when an AI is trained on millions of images or texts. How much claim does one artist have when their work is just one small piece of a large body of human works used to feed a large AI model?
To navigate this, we should have transparency both in knowing what is ‘in the box’ in AI models as well as disclosure when AI is used. Every large AI model should be open and publish summaries of copyrighted data used for training. If a model is trained on copyrighted information, it makes sense to clarify the legal ownership and obligation around using such data in an AI model dataset.
It also makes sense to disclose when content is AI-generated. To help this, generative AI models should have watermarking features to make it possible to trace where and how images were created.
There doesn’t need to be government dictation on this, but each industry needs to adopt its own rules regarding attribution and uses of AI in critical output. For example, the Grammy Awards will not accept songs without human authorship, while Nature magazine forbids use of AI-generated images. Academic journals and courts will require disclosures of uses of AI to prepare documents and filings. AI is not good enough to ‘stand on its own’ and humans will be generating things with the help of AI.
The Risks of AI Regulation
While AI technology poses many challenges and some risks, on balance it offers far more benefits than risks. It also is a fast-moving and fast-evolving technology, and that leads to many uncertainties and thus ‘risks’ are magnified by the uncertainties.
One thing the promoters of AI regulation forget: AI technology poses risks, but AI regulation poses risks as well.
One risk of AI regulations is that it is used to impose a censorship regime. That risk is playing out in China, as China censors AI models to avoid criticism of their regime.
The number one risk of AI regulation is that it stifles or stalls AI progress and in so doing, slows down the development of highly beneficial AI technology. There are many parts to this:
AI technology is changing too rapidly to assess risks from AI. As AI evolves, it can evolve how it handles its own limitations and problems. For example, many of the risks of AI that have been presented recently, such as the “hallucination” issues with chatGPT, are resolvable with either better AI models or techniques such as retrieval-augmented generation, discussed in prior articles.
This is why AI regulations should be driven by real harms, not by hypothetical harms or subjective value judgments. Shutting off AI or imposing regulatory censorship on the basis of such fears will not only limit possible useful AI applications, but it will also stunt AI innovation and progress in AI towards even better outcomes overall. It will do so to stop a risk that AI technology might have evolved to head off anyway.
Even if we can assess AI risks and understand them, we cannot predict or mandate good solutions that require further AI technology innovation. This is true of most of the real AI concerns we have discussed, where a technology problem has a technology solution, such as:
AI can battle deep fakes with watermarking technology, AI recognition technology can identify uses of AI that might be malicious or inappropriate.
Improve AI safety and AI alignment by training AI models to be better aligned; use guardrails around AI model inputs and outputs to prevent prompt injection attacks, clean up toxic or dangerous outputs, etc.
AI regulations will stifle AI progress
AI regulations - in particular bans and licensing of AI - will have a chilling effect on AI innovation. A natural consequence of limiting AI applications is limiting AI technology development. In particular, the EU registration of models that are ‘high risk’ is also a list of ‘high impact’ industries where AI could be very beneficial.
For example, education is on the EU’s “high risk” list. The creator of an AI tutoring application will need to register models into a database and might have further requirements in licensing that could materially slow the adoption of such an application. If this is done to avoid a dangerous AI tutor, it doesn’t come without a cost. It would slow development of better personalized customized AI tutors to help students learn better.
AI is Evolving, so Stay Flexible
The narrative around AI in recent months has been about how important AI is and that therefore we should do it “right.” Yet, history has shown us that the main avenue to human progress is through human freedom, which allows for human creativity and innovation. This has led to technology advances which have almost universally been good for humanity.
History also shows we never get things ‘right’, but rather make many mistakes. We often ‘fight the last war’ and we misunderstand real risks and opportunities. We over-correct on some risks and then get blind-sided by ‘black swans.’ In the end, many ‘risks’ of AI are simply the reality that AI is imperfect and will sometimes fail. Failure is neither unique to AI nor is it a reason to abandon the technology.
AI is a hugely important information and beneficial technology, as important as the internet or perhaps even the printing press. If AI is mostly a beneficial technology that we want to see flourish, then we would want to regulate and treat it like we have the internet, with a ‘light touch’ approach. Our approach should be to support the freedom to build and use AI technologies.
Heavy-handed AI regulations now, at a time when AI is relatively new and evolving rapidly, is likely be the wrong approach at the wrong time, addressing the wrong problems with the wrong solutions (legal hammers instead of technical screwdrivers). It will slow progress and do more harm than good.
The EU’s AI Act, while tame now, is unfortunately taking steps down that path. If the EU over-regulates AI, they may face a ‘brain drain’ if entrepreneurs flee Europe to come to USA where innovation is less stifled. Gilles Babinet, a French entrepreneur has stated, “I have spoken to a few teams. If the law is strong, they would leave the EU and go to the US.”
How can be minimize regrets or avoid mistakes? Stay flexible, be minimalist in regulating AI for now as it develops, while encouraging positive progress:
Encourage AI progress and development, in particular open source AI model development and AI research that is openly shared and advances understanding.
Track AI progress. Monitor it’s benefits and pitfalls and encourage transparency and openness by all AI model creators, so people know what they are getting.
Clarify copyright and intellectual property rights and responsibilities in law.
Do not have Government licensing of AI models. Treat AI like software.
Focus AI Safety concerns on actual safety risks and harms, such as in transportation and medical uses, and not on hypothetical risks or subjective ‘misinformation’ concerns.
AI is a profoundly beneficial technology advance, so we should want it to advance with minimal impediment. The greatest AI risks therefore come not from AI itself but from AI regulation that chills AI progress.
Let’s not throw that AI baby out with the bathwater. Let’s default to freedom in AI innovation and development, only intervene with limited regulation in cases of real AI harms, and encourage AI development progress that leads to better technology and human flourishing long-term.