
Why the UK didn't sign up to global AI agreement

Wednesday, 12 February 2025 18:01

By Mickey Carroll, science and technology reporter

World leaders and tech bros descended on Paris this week, with some determined to show a united stance on artificial intelligence. 

But at the end of the two-day summit, the UK and the US walked away empty-handed, having refused to sign a global declaration on AI.

Earlier on Tuesday, US vice president JD Vance told his audience in Paris that too much regulation could "kill a transformative industry just as it's taking off". Donald Trump has already signed an executive order removing AI rules imposed by Joe Biden.

But for the UK, the declaration didn't go far enough.

"The declaration didn't provide enough practical clarity on global governance and [didn't] sufficiently address harder questions around national security," said a UK government spokesperson.

So what does the UK government think everyone else is missing?

Aside from taking jobs and stealing data, there are other existential threats to worry about, according to Carsten Jung, the head of AI at the Institute for Public Policy Research (IPPR).

He listed the ways AI can be dangerous, from enabling hackers to break into computer systems to losing control of AI bots that "run wild" on the internet to even helping terrorists to create bioweapons.

"This isn't science fiction," he said.

One scientist in Paris warned the people most at risk of unregulated AI are those with the least to do with it.

"For a lot of us, we're on our phones all the time and we want that to be less," said Dr Jen Schradie, an associate professor at Sciences Po University who sits on the International Panel on the Information Environment.

"But for a lot of people who don't have regular, consistent [internet] access or have the skills and even the time to post content, those voices are left out of everything."

They are left out of the data sets fed into AI, as well as the solutions it proposes for workforces, healthcare and more, according to Dr Schradie.


Without making these risks a priority, some of the attendees in Paris worry governments will chase after bigger and better AI, without ever addressing the consequences.

"The only thing they say about how they're going to achieve safety is 'we're going to have an open and inclusive process', which is completely meaningless," said Professor Stuart Russell, a scientist from the University of California at Berkeley who was in Paris.

"A lot of us who are concerned about the safety of AI systems were pretty disappointed."

One expert compared unregulated AI to unregulated food and medicine.

"When we think about food, about medicines and [...] aircraft, there is an international consensus that countries specify what they think their people need," said Michael Birtwistle from the Ada Lovelace Institute.

"Instead of a sense of an approach that slowly rolls these things out, tries to understand the risks first and then scales, we're seeing these [AI] products released directly to market."

And when these AI products are released, they're extremely popular.

Just two months after it launched, ChatGPT was estimated to have reached 100 million monthly active users, making it the fastest-growing app in history. A global phenomenon needs a global solution, according to Mr Jung.

"If we all race ahead and try to come first as fast as possible and are not jointly managing the risks, bad things can happen," he said.


(c) Sky News 2025: Why the UK didn't sign up to global AI agreement
