
Governments vow to safety test AI. Now for the tricky bit: how?
Governments from around the world have pledged to work together to test new AI models before they’re unleashed on the public. How will they do that? That’s far from clear.
Following this week's AI Safety Summit at Bletchley Park, the UK government issued a statement saying that "governments" had reached "a shared ambition to invest in public sector capacity for testing and other safety research". The statement didn't specify which governments had signed up to this accord, though it distinguished the testing agreement from the broader Bletchley Declaration, which was agreed by all 27 attending governments and the EU.
The FT reports that the UK, US and Singapore will be among the countries involved in testing AI, and that firms including ChatGPT creator OpenAI, Google DeepMind, Amazon and Microsoft will submit their products for testing.
“Until now the only people testing the safety of new AI models have been the very companies developing it,” said UK prime minister Rishi Sunak in a statement. “We shouldn’t rely on them to mark their own homework, as many of them agree.
“Today we’ve reached a historic agreement, with governments and AI companies working together to test the safety of their models before and after they are released.”
Testing AI models
It’s not yet clear how the UK government or its international counterparts will find or fund the capacity required to test AI models, particularly if models must be validated before they are released.
Take OpenAI’s GPT models, for example. The company has released five major versions in five years, and it’s only one of the many companies now ploughing billions into AI development. Keeping up with that pace of innovation would demand enormously well-resourced testing teams. And there are countless AI projects under development at companies that aren’t party to the agreement.
And there’s another problem: the agreement isn’t legally binding. That means AI firms won’t face any punishment if they push products out without government approval.
Sunak himself has admitted that developing legislation to force companies to submit products for testing could take many years.
In the meantime, the US and UK are pressing ahead with plans for their own AI Safety Institutes, while the EU progresses with its own regulatory plans. Will they be able to put the safety brakes on the rapid evolution of AI, or is this mere posturing? We’ll only find out in a few years’ time.
NEXT UP

Nathalie Parent, Chief People Officer at Shift Technology: “HR is the conscience of an organisation”
For more than 30 years, Nathalie Parent has led global HR teams, working primarily with software companies. Today she’s Chief People Officer at Shift Technology

AWS makes it cheaper to store little-used data with EFS Archive
Amazon introduces new storage class that makes it cheaper to store rarely used files

Why should we care about robot carers?
Robot carers are real, but caregiving has bigger problems, writes Richard Trenholm in this FlashForward edition