UK cybersecurity agency issues warning about AI-aided ransomware

A new report from the UK’s National Cyber Security Centre (NCSC), which forms part of GCHQ, delivers a stark warning about the use of AI in cybercrime. The NCSC warns not only that malicious threat actors are already using AI to prepare and launch attacks, but that the near-term impact is likely to grow, with ransomware a particular focus.

The report, ‘The near-term impact of AI on the cyber threat’, draws two main conclusions. First, that the primary capability AI currently offers threat actors is the automation of social engineering and more convincing interaction with victims. Second, that AI’s role will extend well beyond this over the coming year or two.

Of particular note are ransomware actors already using AI to help with reconnaissance, phishing and coding. Those that already possess AI skills are likely to use them to assist with vulnerability research and lateral movement as well.

And there’s worse news still.

When the NCSC says that “AI lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access and information gathering operations. This enhanced access will likely contribute to the global ransomware threat over the next two years,” businesses really should be listening. Hard.

“AI has simply increased the power enabling cybercriminals to act quicker and at scale,” says Jake Moore, Global Cybersecurity Advisor with ESET and a former police digital crimes expert. “The more past and present phishing emails are fed into the algorithms and analysed by the technology, the better the outcomes naturally become.”

Moore doesn’t see a near-term future in which such attacks don’t increase. “Until we find a robust and secure solution to this evolving problem, we need to act to help teach people and businesses how to protect themselves with what is available.”

The argument against the AI-aided ransomware threat

Not everyone is as convinced by the NCSC findings.

“The impact of generative AI on cybercrime growth seems to be overestimated, to put it mildly,” says Dr Ilia Kolochenko, CEO at ImmuniWeb and Adjunct Professor of Cybersecurity and Cyber Law at Capital Technology University.

Most cybercrime groups have been successfully using various forms of AI for years, Kolochenko says, including pre-LLM forms of generative AI, so the introduction of LLMs is unlikely to revolutionise their operations.

“While LLMs can help with a variety of simple tasks, such as writing attractive phishing emails or even generating primitive malware,” he continues, “they cannot do all the foundational tasks, such as deploying abuse-resistant infrastructure to host C&C [command & control] servers or laundering the money received from the victims.”

The ransomware business already works well enough, and experienced players will likely carry on much as before. The idea that AI will enable new entrants to do better, especially given that the Ransomware-as-a-Service model has existed for years, doesn’t wash with Kolochenko.

“Those who try to set up their own ransomware empire by relying on generative AI, without having the necessary technical skills or connections to launder the profits, will likely fail in many respects,” he says, “and will end up being arrested and prosecuted after being detected by law enforcement agencies that are gradually improving their cybercrime investigation methodologies.”

Davey Winder

With four decades of experience, Davey is one of the UK's most respected cybersecurity writers and a contributing editor to PC Pro magazine. He is also a senior contributor at Forbes. You can find him at TechFinitive covering all things cybersecurity.
