By Emil Bjerg, journalist and editor
Everyone from computer scientists to politicians to AI CEOs seems to agree that generative AI needs to be regulated, but we’re only just starting to see the contours of what that regulation might look like. This article delves into the key arguments for regulating generative AI and explores who leads the race to regulate.
Earlier in 2023, Sam Altman, CEO of OpenAI, along with the CEOs of four other AI companies, had a private meeting with US Vice President Kamala Harris. The conversation centered on how the American government can regulate AI.
“Ultimately, who do you think were the most powerful people in that room – the people from the government side or the people heading the tech companies?” a journalist from The New Yorker subsequently asked Altman about the meeting.
“I think the government certainly is more powerful here in the medium term, but the government does take a little bit longer to get things done, so I think it’s important that the companies independently do the right thing in the very short term,” Sam Altman replied.
Consensus to regulate with little action
A few months earlier, in May, Sam Altman had won over Congress with his pro-regulation approach at his AI hearing. “I sense there is a willingness to participate here that is genuine and authentic,” Richard Blumenthal, Democratic Senator from Connecticut, told Altman.
Yet despite that willingness to regulate from Altman – who, more than anyone, personifies the wave of generative AI – very little regulation is actually happening on the American side. Before we look into who leads AI regulation globally, let’s have a look at some of the arguments for regulating generative AI.
Maintaining ethical standards
“I think if this technology goes wrong, it can go quite wrong,” Altman told Congress. AI systems are capable of independent decision-making to reach a set goal, but they lack moral and ethical judgment. Without proper regulation, these systems could be used in ways that breach ethical standards and even human rights. It seems evident that regulation needs to take place as part of a broader, democratic conversation rather than as self-regulation inside a few powerful tech companies.
Safeguarding democracy and human rights
Both individuals and societies can be hurt by generative AI. Deepfake technology can ‘undress’ celebrities and ordinary people alike, just as it can produce images for fake news. While the American presidential election in 2016 was marred by social media misinformation, the 2024 election is likely to be one of the first in which deepfakes and fake news made by generative AI influence votes.
An evident partial solution is watermarking AI-generated material so that it can be reliably identified as such.
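To make that concrete, here is a minimal sketch in Python of one well-known idea for statistically watermarking generated text: a secret key splits the vocabulary into ‘green’ and ‘red’ lists for each context, the generator quietly favors green tokens while writing, and anyone holding the key can later test whether the green fraction is suspiciously high. The key, the splitting rule, and the detection threshold below are all illustrative assumptions, not any vendor’s actual scheme.

```python
import hashlib
import hmac

SECRET_KEY = b"demo-key"  # hypothetical key held only by the AI provider


def is_green(prev_token: str, token: str) -> bool:
    # Keyed pseudo-random 50/50 split of the vocabulary, conditioned on
    # the previous token, so the split is unpredictable without the key.
    digest = hmac.new(
        SECRET_KEY, f"{prev_token}|{token}".encode(), hashlib.sha256
    ).digest()
    return digest[0] % 2 == 0


def green_fraction(tokens: list[str]) -> float:
    # Ordinary human text should land near 0.5; a generator that biased
    # its sampling toward green tokens will score well above that.
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)


def looks_watermarked(text: str, threshold: float = 0.7) -> bool:
    return green_fraction(text.split()) >= threshold


# Short samples are noisy; real detectors test hundreds of tokens.
sample = "the quick brown fox jumps over the lazy dog"
print(green_fraction(sample.split()), looks_watermarked(sample))
```

Image and audio watermarks work differently, typically embedding imperceptible signals in pixels or samples, but the principle is the same: the provider plants a detectable trace at generation time.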
Avoiding monopolization
Generative AI is quickly becoming an everyday technology for individuals and companies. In the very near future, generative AI could easily become a must-have in a competitive world, concentrating enormous power and wealth in the hands of a few gatekeepers. Without regulation, larger entities could monopolize AI technology, stifling competition and innovation. Regulation can ensure a level playing field, allowing smaller companies and startups to compete and contribute to the AI landscape.
One way to ensure a fairer distribution is to make sure that the creators of the data that generative AIs are trained on – data without which generative AI couldn’t produce anything – are fairly compensated.
Protecting creators and artists
Generative AI currently poses a double threat to creators and artists: musicians, painters, writers, graphic designers, and more. On the one hand, they risk having their work used to train AIs without notice or compensation; on the other, they risk being made redundant by AI that may have been trained on that very work.
We’re in for a long copyright battle between creators and AI companies. The EU is currently working on laws that would force companies deploying generative AI tools to disclose any copyrighted material used to train them.
Ensuring transparent communication
Google famously faced a backlash over its freakishly human-sounding AI, Duplex, which could trick people into thinking they were having a phone conversation with a human. AI systems have been developed to generate fake quotes from real people and publish them online. News articles and entire news sites are created by AIs with little to no human editing. We’re only starting to see the deceptive effects of AI. It’s essential for people to know whether they’re communicating with humans or AIs.
An obvious approach to regulation is to create laws that require explicit disclosure whenever a person is communicating with an AI or interacting with content generated by one.
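As a minimal sketch of what such a disclosure duty could look like at the API level, the hypothetical wrapper below attaches a machine-readable label to every model response. The field names, the model identifier, and the generate_reply stand-in are all assumptions for illustration, not any real provider’s interface.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class DisclosedOutput:
    """Hypothetical response wrapper that makes the AI origin explicit."""

    content: str
    ai_generated: bool = True            # the disclosure itself
    model: str = "example-model-v1"      # assumed model identifier
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def generate_reply(prompt: str) -> DisclosedOutput:
    # Stand-in for a real model call; downstream clients receive the
    # label together with the content and are expected to surface it.
    draft = f"[model output for: {prompt!r}]"
    return DisclosedOutput(content=draft)


print(generate_reply("What are your opening hours?"))
```

The design choice worth noting is that the label travels with the content itself rather than living in documentation, so a chat widget, a news CMS, or an audit tool can all check the same field.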
With some of the main arguments for regulation of AI established, let’s have a look at regulatory efforts outside of the US.
EU and China lead AI regulation
In mid-June, EU lawmakers agreed on a draft of the EU AI Act, which regulates the diverse use cases of AI, ranging from chatbots to surgical procedures and fraud protection at banks. The AI Act is the first law in the world to set rules for how companies can use artificial intelligence. The new legislation groups AI use cases into three categories: unacceptable risk (such as cognitive behavioral manipulation of people or specific vulnerable groups, social scoring, and real-time biometric identification systems), high risk, and limited risk.
The Act also specifically addresses generative AI. If the new AI Act is approved, generative AI services will have to comply with the following transparency requirements:
- “Disclosing that the content was generated by AI”
- “Designing the model to prevent it from generating illegal content”
- “Publishing summaries of copyrighted data used for training”
In a classic EU-versus-Big-Tech standoff, the otherwise pro-regulation Sam Altman has sounded the alarm over the EU’s planned intervention. Under the current draft, large language models such as ChatGPT and GPT-4 might be designated as “high risk”, which would force a company like OpenAI to “comply with additional safety requirements”. “Either we’ll be able to solve those requirements or not,” Altman recently said of the EU’s regulatory plans. “If we can comply, we will, and if we can’t, we’ll cease operating… We will try. But there are technical limits to what’s possible,” Altman said.
Shortly after the publication of the EU’s draft AI Act, China entered the race to regulate generative AI with a new set of rules. Those rules put China in the lead in AI regulation – even ahead of the EU, which doesn’t expect to approve the AI Act until the end of 2023.
The Cyberspace Administration of China has led the regulatory process, and the rules take effect on August 15. The Chinese rules pay close attention to the fact that generative AI can create content that conflicts with the views and ideology of the Chinese state: the Cyberspace Administration has announced that generative AI services must conform to the “core values of socialism” and are obliged to take measures to avoid “illegal” content. To enforce the regulations, generative AI services must obtain a license from the Chinese state in order to operate.
Beyond the regulation versus innovation dichotomy
While censorship-based regulation is evidently a hindrance to innovation, could regulation also foster innovation? At least the EU seems determined to let regulation and innovation go hand in hand. A new paper from the European Parliament’s Scientific Foresight Unit asks the question, “What if AI regulation promoted innovation?”. The paper promotes the perspective that well-crafted regulation is not just compatible with AI innovation but an essential precondition for it. It argues that regulation can help level the playing field, ensuring a more dynamic ecosystem, that regulation can promote synergies, and that short-term restrictions on certain developments can stimulate long-term innovation.
Adding to the list of arguments, the shortcomings of Big Tech in the past decade make it clear that a new approach is needed for this new wave of revolutionary tech. Social media platforms, once seen as powerful tools to unite people around the world, have in recent years proven more effective at creating societal division. Not until semi-monopolies had formed and democratic elections had been interfered with did Big Tech find itself under the regulatory lens. This time, with generative AI, there are good reasons to be proactive.