NCSC AI development guidelines fall short, security experts say

Yesterday we reported on the Guidelines for Secure AI System Development, a document created by the National Cyber Security Centre (NCSC). Despite, or perhaps because of, collaboration with 21 international agencies as well as Google, Microsoft and OpenAI, some security professionals fear it doesn’t go far enough to protect developers or users.

The guidelines garnered support from 18 countries, so they are not to be dismissed. Covering everything from secure design principles to deployment, the document represents a mammoth undertaking.

Lindy Cameron, NCSC’s CEO, said that the guidelines represent a “significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI”. As such, they will ensure security is a “core requirement” in AI development.

The document also sets out to define the scope of what can be considered AI, including machine learning. “This is important in the guideline, as the scope of AI is very broad, and the guideline’s scope definition is clear and transparent,” says Joseph Carson, Chief Security Scientist at Delinea.

Not everybody is convinced, though.

Security experts’ view on NCSC AI development guidelines

Take Chris Hughes, Chief Security Advisor at Endor Labs — and Cyber Innovation Fellow with the US Cybersecurity & Infrastructure Security Agency. While happy to accept that the NCSC document is a helpful source of best practices for those developing and using AI systems, Hughes takes issue with the fact that it remains guidance only.

“The largest challenge in terms of following the guidance, or more importantly, having the guidance followed, is the fact that it’s all still largely voluntary,” he says. This is distinct, he adds, from guidance published by the EU, which talks in terms of must rather than should.

That’s in stark contrast to the NCSC advice, says Hughes. “[It] is not mandatory or binding, and suppliers and vendors can choose to follow the guidance — or not.”

Of course, this could change as the AI regulatory landscape is far from mature.

“I hope to see more governments around the world join in with endorsing and applying these guidelines,” Carson says, “which might eventually lead to some form of regulation to ensure that accountability will be enforced as well.”

However, this lack of regulatory maturity has left the NCSC open to claims that the guidance on offer is vague.

“Overall, I love the focus on AI and security,” says Joseph Thacker, a security researcher with AppOmni. “It appears this guidance is trying to be more specific, but it’s still pretty vague in applications of practical actions.”

Then there’s the question of who will actually follow the guidance. While it may be useful to organisations that have yet to embark on an AI journey and don’t know where to start, Hughes argues that it “isn’t very helpful for enterprises and other organisations that already have decent security”.

Further reading: download the full Guidelines for Secure AI System Development document from the NCSC

Davey Winder

With four decades of experience, Davey is one of the UK's most respected cybersecurity writers and a contributing editor to PC Pro magazine. He is also a senior contributor at Forbes. You can find him at TechFinitive covering all things cybersecurity.
