The UK National Cyber Security Centre published its seventh annual review today, covering key threat developments between September 2022 and August 2023. Perhaps unsurprisingly, artificial intelligence featured rather heavily in the NCSC Annual Report 2023.
The foreword by Oliver Dowden, Deputy Prime Minister, warns that AI makes “the cyber world a more dangerous place than ever before, and cyber security is rising up our risk register”.
GCHQ Director Anne Keast-Butler, meanwhile, affirms the positive angle: “AI has the potential to improve cyber security by dramatically increasing the timeliness and accuracy of threat detection and response,” before adding that “all sectors need to be clear-eyed about the related cyber security risks”.
NCSC Annual Report 2023 raises deepfake concerns
The NCSC review isn’t all about AI, of course. It highlights several other technologies that will be increasingly crucial to the security landscape.
These include the likes of semiconductors, cryptography, telecom security, risks from radio frequency transmissions, and the potential for election interference using deepfakes.
And experts agree. “With the rise of deepfakes, the concerns around election interference are rightfully being raised by the NCSC,” says Eduardo Azanza, CEO at Veridas. “We’ve seen already that it can be extremely difficult to spot deepfakes, and humans are starting to fall for them.”
Azanza adds that “technologies with AI techniques such as biometrics, can also be used to detect deepfakes and protect the integrity of elections”.
Generative AI and cyber security
Since the launch of ChatGPT, and amid the media coverage it continues to spawn, AI has developed at a breathtaking pace.
“The need to protect vital assets has been brought to the fore since the start of the war in Ukraine,” says Dominic Trott, Director of Strategy and Alliances at Orange Cyberdefense. “This challenge has been heightened by the ‘theatre’ of this conflict increasingly pivoting online as threats target organisations’ web presence, as well as the rapid uptake of AI, and especially generative AI (GenAI) tools.”
“Our primary objective is to ensure that cyber security does not become a secondary consideration but is recognised as an essential precondition for the safety, reliability, predictability, and ethics of AI systems,” the NCSC Annual Report 2023 states.
In the short term, the NCSC sees AI as being more likely to “amplify existing cyber threats than create wholly new ones”. Still, it also admits that AI will “almost certainly sharply increase the speed and scale of some attacks”.
AI and machine learning can boost security
From the defensive security perspective, the NCSC is using machine learning to spot complex activity patterns across multiple datasets, identifying hidden malicious behaviour.
“Soon, we plan to use AI to more effectively spot mutated forms of malware to enable the identification and release of indicators of compromise (IOCs) more quickly than traditional software reverse engineering or code matching allows,” says the NCSC Annual Report 2023.
Jake Moore, the Global Cybersecurity Advisor at ESET, agrees. Kind of.
“While AI can be incredibly helpful and efficient in almost every industry, in equal measures it can also be the most dangerous and challenging new technology we have ever seen,” he says.
While admitting that AI will inevitably be used illicitly in future, potentially forming part of large-scale nation-state attacks, Moore concludes that “these dangers are not imminent, but we need to learn from mistakes made about our relatively slow reaction to cybercrime and prepare for the misuse of this prevailing computing power”.
But we will give the final word to David Critchley, Regional Manager UKI at Armis.
“The warnings issued by the NCSC come as no surprise,” he says.
“AI-powered cyberwarfare is unfortunately in its heyday. This drastic change in boldness has brought cyberwarfare out from the shadows into the open – arguably flaunted by threat actors and nation-states – with seriously ill-intent.”