Can Google’s AI Cyber Defense Initiative come to the rescue of global cybersecurity defenders?

Google has announced a new AI Cyber Defense Initiative that builds on last year’s Secure AI Framework (SAIF). Its mission: to reverse the defender’s dilemma and boost cybersecurity protection globally. But does Betteridge’s law apply in this case?

Betteridge’s law of headlines, coined by British technology journalist Ian Betteridge, states that any headline ending with a question mark can be answered with one word: no. Ordinarily, I’d accept that to be the case without argument, but I’ll make an exception for my headline today as it can be answered with two words: it’s complicated.

Using AI tools, Google is attempting to take the fight to cyberattackers — be they criminal or state-sponsored — with both reactive and pre-emptive responses. I would argue that Google’s AI Cyber Defense Initiative, launched during the Munich Security Conference, is as far-reaching as it is optimistic.

“We’re announcing new commitments to invest in AI-ready infrastructure, release new tools for defenders, and launch new research and AI security training,” said Phil Venables and Royal Hansen in a joint statement. Venables is Chief Information Security Officer for Google Cloud; Hansen is Vice President of Engineering for Privacy, Safety and Security.

They added: “These commitments are designed to help AI secure, empower and advance our collective digital future.”


Related: UK law enforcement agency issues warning about AI-aided ransomware


Secure AI Framework versus AI Cyber Defense Initiative

But hold on, you might be thinking. Wasn’t this the purpose of Google’s SAIF, what with it containing elements such as bringing AI detection and response “into an organisation’s threat universe” and automating defences to “keep pace with existing and new threats”?

The difference, says Google, is that the SAIF was — is — a conceptual framework to guide organisations in deploying AI responsibly. In short, to help them secure their AI systems from the ground up.

In which case, what is the AI Cyber Defense Initiative, and how does it differ from SAIF?

Well, whereas SAIF was conceptual, the new initiative is much more practical in nature. “AI is at a definitive crossroads,” said Venables and Hansen. “One where policymakers, security professionals and civil society have the chance to finally tilt the cybersecurity balance from attackers to cyber defenders.”

This is where the defender’s dilemma comes in, defined as the fact that defenders have no margin for error and must deploy the best defences to keep attackers at bay. Attackers, meanwhile, only need to find one error, one hole, one vulnerability to break through.

Google describes this as dealing with yesterday’s threats (patching vulnerabilities, deploying preventive measures, running public awareness campaigns and so on) rather than those that are waiting in the wings.

“AI allows security professionals and defenders to scale their work in threat detection, malware analysis, vulnerability detection, vulnerability fixing and incident response,” Venables and Hansen insist.

Practical steps in AI Cyber Defense Initiative

Practically speaking, Google is already using a multilingual neural-based text-processing model called RETVec to improve spam detection in Gmail, as well as an AI tool called Magika that improves file type detection accuracy by 30%.

As part of the new AI Cyber Defense Initiative, however, Google is open-sourcing Magika so that other defenders can use it. Google’s open-source security team has also been using Gemini AI to detect vulnerabilities in code and to fix them.

And it appears to work. Google’s detection and response team has cut the time it takes to produce incident summaries by more than 50% since adopting generative AI for the task.

Google also announced that 17 startups would be part of its Growth Academy: AI for Cybersecurity program, with the aim of strengthening the transatlantic cybersecurity ecosystem. The next generation of cybersecurity experts is being supported by a $15 million cybersecurity seminars program and research grants.

“While people rightly applaud the promise of new medicines and scientific breakthroughs,” Venables and Hansen said, “we’re also excited about AI’s potential to solve generational security challenges while bringing us close to the safe, secure and trusted digital world we deserve.”

Davey’s view

The problem, as I see it, is not that this initiative is unworthy of applause, but rather that it potentially overlooks one huge hurdle to success: the attackers are also using AI to advance their own agendas.

Betteridge’s law doesn’t apply to my headline as it’s not a simple no in response. Instead, as I said, it’s complicated. Complicated by the fact that AI hasn’t bypassed state-sponsored and criminal threat actors who aim to stay at least one step ahead of the defenders using exactly the same technologies.

What Google has on its side is the sheer size of the investment it can make, but even then, we shouldn’t discount the depth of attackers’ pockets.

Frameworks, initiatives and investments are all important steps in the right direction, but one misstep is still all that’s required for the attackers to succeed. And so it will remain for the foreseeable future.

I’m not knocking Google’s AI Cyber Defense Initiative, but I don’t think this will solve the defender’s dilemma just yet.


Davey Winder

With four decades of experience, Davey is one of the UK's most respected cybersecurity writers and a contributing editor to PC Pro magazine. He is also a senior contributor at Forbes. You can find him at TechFinitive covering all things cybersecurity.
