ChatGPT, Other AI Models Said to Have Complicated EU’s Efforts to Form Landmark Rules


Rapid technological advances such as the ChatGPT generative artificial intelligence (AI) app are complicating efforts by European Union lawmakers to agree on landmark AI laws, sources with direct knowledge of the matter have told Reuters.

The European Commission proposed the draft rules nearly two years ago in a bid to protect citizens from the dangers of the emerging technology, which has experienced a boom in investment and consumer popularity in recent months.

The draft must be thrashed out between EU countries and EU lawmakers in a process known as a trilogue before the rules can become law.

Several lawmakers had expected to reach a consensus on the 108-page bill last month at a meeting in Strasbourg, France, and proceed to a trilogue in the next few months.

But a five-hour meeting on February 13 ended without a resolution, and lawmakers remain at loggerheads over various facets of the Act, according to three sources familiar with the discussions.

While the industry expects an agreement by the end of the year, there are concerns that the bill’s complexity and the lack of progress could delay the legislation until next year, when European elections could see MEPs with an entirely different set of priorities take office.

“The pace at which new systems are being released makes regulation a real challenge,” said Daniel Leufer, a senior policy analyst at rights group Access Now. “It’s a fast-moving target, but there are measures that remain relevant despite the speed of development: transparency, quality control, and measures to assert their fundamental rights.”

Brisk developments

Lawmakers are working through the more than 3,000 tabled amendments, covering everything from the creation of a new AI office to the scope of the Act’s rules.

“Negotiations are quite complex because there are many different committees involved,” said Brando Benifei, an Italian MEP and one of the two lawmakers leading negotiations on the bloc’s much-anticipated AI Act. “The discussions can be quite long. You have to talk to some 20 MEPs every time.”

Legislators have sought to strike a balance between encouraging innovation and protecting citizens’ fundamental rights.

This led to different AI tools being classified according to their perceived risk level: from minimal through to limited, high, and unacceptable. High-risk tools won’t be banned, but will require companies to be highly transparent in their operations.

But these debates have left little room for addressing aggressively expanding generative AI technologies like ChatGPT and Stable Diffusion that have swept across the globe, courting both user fascination and controversy.

By February, ChatGPT, made by Microsoft-backed OpenAI, had set a record for the fastest-growing user base of any consumer application in history.

Almost all of the big tech players have stakes in the sector, including Microsoft, Alphabet and Meta.

Big tech, big problems

The EU discussions have raised concerns for companies — from small startups to Big Tech — on how regulations might affect their business and whether they would be at a competitive disadvantage against rivals from other continents.

Behind the scenes, Big Tech companies, which have invested billions of dollars in the new technology, have lobbied hard to keep their innovations outside the ambit of the high-risk classification that would mean more compliance, more costs and more accountability around their products, sources said.

A recent survey by the industry body appliedAI showed that 51 percent of respondents expect a slowdown of AI development activities as a result of the AI Act.

To address tools like ChatGPT, which have seemingly endless applications, lawmakers introduced yet another category, “General Purpose AI Systems” (GPAIS), to describe tools that can be adapted to perform a number of functions. It remains unclear if all GPAIS will be deemed high-risk.

Representatives from tech companies have pushed back against such moves, insisting their own in-house guidelines are robust enough to ensure the technology is deployed safely, and even suggesting the Act should have an opt-in clause, under which firms can decide for themselves whether the regulations apply.

Double-edged sword?

Google-owned AI firm DeepMind, which is currently testing its own AI chatbot Sparrow, told Reuters the regulation of multi-purpose systems was complex.

“We believe the creation of a governance framework around GPAIS needs to be an inclusive process, which means all affected communities and civil society should be involved,” said Alexandra Belias, the firm’s head of international public policy.

She added: “The question here is: how do we make sure the risk-management framework we create today will still be adequate tomorrow?”

Daniel Ek, chief executive of audio streaming platform Spotify – which recently launched its own “AI DJ”, capable of curating personalised playlists – told Reuters the technology was “a double-edged sword”.

“There’s lots of things that we have to take into account,” he said. “Our team is working very actively with regulators, trying to make sure that this technology benefits as many as possible and is as safe as possible.”

MEPs say the Act will be subject to regular reviews, allowing for updates as and when new issues with AI emerge.

But, with European elections on the horizon in 2024, they are under pressure to deliver something substantial the first time around.

“Discussions must not be rushed, and compromises must not be made just so the file can be closed before the end of the year,” said Leufer. “People’s rights are at stake.” 

© Thomson Reuters 2023

