Governments publish AI security guidelines – and look who’s helped write them
A new set of AI security guidelines has been published by a consortium of international governments, urging AI companies to follow a “secure by default” approach. The guidelines include contributions from OpenAI, the company that last week temporarily ousted its CEO amid alleged safety concerns.
The guidelines (PDF) have been drafted by the UK’s National Cyber Security Centre, the US Cybersecurity and Infrastructure Security Agency (CISA) and a range of other international partners, including representatives from Australia, France, Germany, Israel and Japan. The document has also received input from more than a dozen leading AI firms, including OpenAI, Microsoft, Google and Amazon.
Although the guidelines don’t place any new mandatory requirements on the developers of AI systems, they set out a broad range of principles that companies should follow.
The guidelines insist that security should be considered at every stage of an AI system’s lifecycle: design, development, deployment, and ongoing operation and maintenance.
When it comes to design, for example, the guidelines recommend that “system owners and senior leaders understand threats to secure AI and their mitigations”.
The guidelines also warn against choosing complex models that are harder to secure. “There may be benefits to using simpler, more transparent models over large and complex ones which are more difficult to interpret,” the document states.
OpenAI’s security scandal
The release of the document comes only a week after the enormous controversy at OpenAI, where CEO Sam Altman was fired for not being “consistently candid in his communications with the board”, only to be reinstated days later following the resignation of the majority of board members.
Although the reason for the dispute has not been made public, Reuters reports that it was triggered by staff writing to the board to warn that a new AI system being developed within the company could pose a threat to humanity. OpenAI declined to comment on the Reuters report.
Microsoft, another of the companies that contributed to the guidelines, had also been preparing to hire Altman and other OpenAI staff if the situation at the firm couldn’t be resolved internally. All of Microsoft’s key AI products are based on OpenAI’s GPT-4 system.
AI security guidelines: what next?
The guidelines are just that: guidelines. They carry no obligation for AI companies to adhere to them and they are not codified in any domestic or international laws.
Although many countries – including the US, the UK and EU member states – are currently exploring regulation of AI firms, there is presently nothing to prevent AI companies from ignoring the guidelines.
The guidelines themselves instead urge companies to consider the potential damage to the business that insecure AI models might cause. “Where system compromise could lead to tangible or widespread physical or reputational damage, significant loss of business operations, leakage of sensitive or confidential information and/or legal implications, AI cyber security risks should be treated as critical,” the guidelines state.