Will anyone trust the UK to regulate AI?

The AI race has been nothing short of electrifying over the past year. Now there’s another race on – to be the first country to set the standard for AI regulation.

With even AI advocates admitting that the technology poses a risk of human extinction, politicians and regulators around the world are scrambling to react.

UK Prime Minister Rishi Sunak is keen to put his country in pole position. Last week he met with US President Joe Biden, reportedly pressing the UK’s case to set the “guardrails” for the AI industry, according to the FT.

The Prime Minister was quick to declare success of sorts, tweeting that “we’re stepping up international efforts to ensure the safe and responsible development of AI, starting with a UK-hosted summit on AI safety later this year – backed by the US”.

Quite how much weight the US is throwing behind this AI summit is unclear. Biden made no mention of it on his own official Twitter account, merely stating that the two countries have a “common vision and shared values”, without diving into specifics.

Will the UK take the lead in AI regulation?

If the US is somewhat reluctant to back the UK’s ambition to become the AI sheriff, it’s hardly surprising. The UK is not exactly being lauded for its forward-thinking tech regulation at the moment. Quite the opposite, in fact.

The Online Safety Bill, currently inching its way through Parliament, has drawn strong criticism from leading tech firms, chiefly over the threat it poses to end-to-end encryption. The bill would require messaging providers to give the security services a way into encrypted messaging systems, something the companies warn is impossible without compromising end-to-end encryption itself.

In an open letter published in April, executives from WhatsApp, Signal, Threema and other messaging apps warned that the bill posed an “unprecedented threat to the privacy, safety and security of every UK citizen and the people with whom they communicate around the world”. Meta-owned WhatsApp has previously threatened to abandon the UK market if the bill goes through.

With the UK having only recently left the EU, it also seems highly unlikely that European regulators would want to follow its lead. The EU has its own draft AI Act, which could become law before the end of the year.

Global agreement

Getting the world to agree on anything is a monumental challenge at the best of times. But with the ongoing war in Europe, and a growing antagonism between the US and China on technology, now seems an unlikely moment to find worldwide accord on how to regulate a technology with huge potential to be used as a weapon.

However, one of the AI industry’s most prominent executives believes there is scope for global agreement. OpenAI’s CEO, Sam Altman, was among the signatories of the recent statement warning that AI poses a risk of human extinction. He has been on a worldwide tour, talking to international regulators about how the technology could be tamed.

“I came to the trip… sceptical that it was going to be possible in the short term to get global cooperation to reduce existential risk but I am now wrapping up the trip feeling quite optimistic we can get it done,” he said, according to a report from Reuters.

If he manages to get China, Russia and the US on the same page when it comes to AI regulation, it will be more remarkable than the rapid advancement of AI itself.


Barry Collins

Barry has 20 years of experience working on national newspapers, websites and magazines. He was editor of PC Pro and is co-editor and co-owner of BigTechQuestion.com. He has published a number of articles on TechFinitive covering data, innovation and cybersecurity.
