AI chatbots a security risk due to LLMs, says UK’s National Cyber Security Centre

The UK National Cyber Security Centre (NCSC) has warned organisations about the inherent cybersecurity risks when integrating generative AI within products and services. It specifically called out the danger of Large Language Models (LLMs).

NCSC’s Tech Director for Platforms Research, David C, wrote that “LLMs occupy an interesting blind spot in our understanding” and that our understanding of such things as ChatGPT is “still in beta”. In case you’re wondering, the NCSC doesn’t publish surnames.

While it’s understandable that organisations are excited by generative AI, the “global tech community still doesn‘t yet fully understand LLM’s capabilities, weaknesses, and (crucially) vulnerabilities,” David C added.

LLMs struggle to distinguish between instruction and data

Research highlights one particular problem. An LLM cannot inherently tell the difference between an instruction and the data that has been provided to help in the completion of that instruction.

Most commonly, this kind of prompt injection has been used to manipulate the output from generative AI chatbots. This can bring reputational risk to your organisation, but the NCSC warns it could also be used with more dangerous intent.

David C gave the example of a bank using an AI chatbot to provide customer help. “An attacker might be able to send a user a transaction request, with the transaction reference hiding a prompt injection attack on the LLM,” he wrote.

Related reading: Journalist breaks bank security with AI voice

Then, when the chatbot is asked if the customer is spending more than normal this month, the AI comes across the malicious transaction; this “reprograms” it into sending money from the victim’s account to the attacker’s one.

The NCSC warning concludes that one of the most important implementation strategies is to ensure that when architecting the system integration and data flow, you are “happy with the ‘worst case scenario’ of whatever the LLM-powered application is permitted to do.”

However, David C also warns that while there is ongoing research into prompt injection attacks, “as yet, there are no surefire mitigations”.

Mitigating the AI chatbots security risk

“When developing applications with security in mind and understanding the methods attackers use to take advantage of the weaknesses in machine learning algorithms, it’s possible to reduce the impact of cyberattacks stemming from AI and machine learning,” said Jake Moore, Global Cyber Security Advisor at ESET.

“Unfortunately, speed to launch or cost savings can typically overwrite standard and future-proofing security programming, leaving people and their data at risk of unknown attacks,” he added.

“It is vital that people are aware that what they input into chatbots is not always protected.”

The truth of the matter remains that generative AI and LLM technology is a fast-evolving sector. While this leads to great new features and use cases, it also means that there will be vulnerabilities we have not even thought of yet, still to come. 

In the words of Sergeant Esterhaus from the classic 1980s cop show Hill Street Blues: “Let’s be careful out there.”

Avatar photo
Davey Winder

With four decades of experience, Davey is one of the UK's most respected cybersecurity writers and a contributing editor to PC Pro magazine. He is also a senior contributor at Forbes. You can find him at TechFinitive covering all things cybersecurity.

NEXT UP