ChatGPT is a year old today but not all cybersecurity experts are celebrating

Happy birthday ChatGPT. It’s hard to process that the generative AI tool is only a year old today, considering how much impact it has had on organisations across every sector. But what about its impact on cybersecurity? What are the future implications for both defensive and offensive security teams?

To find out, TechFinitive has gauged opinions from cybersecurity experts. Not all of them feel like getting the birthday cake out.

More AI regulations to come in 2024

As we reported earlier this week, regulatory guidelines on the use of AI are still in short supply. “I think the [UK] government has been smart to balance precaution with room for innovation,” says Will Poole, Head of Incident Response at CYFOR Secure.

“With that said, next year will surely see governments and regulators continue to question aspects of AI so that businesses can benefit from the technology’s potential without unchecked risk.”

Poole also expects that “we will see legislation targeting the large, foundational AI models created by companies such as OpenAI rather than smaller, less powerful or open-source models”. That would still leave the smaller, readily available open-source generative AI models unregulated, and these will continue to pose a risk to organisations and act as a magnet for malicious actors.

AI-enhanced phishing

According to the latest Hacker-Powered Security Report by HackerOne, just over half of the hackers questioned (53%) have started to use generative AI in some form. A notable 61% of those hackers do so specifically to find vulnerabilities in applications and services.

But there is good news. Chris Dickens, Senior Solutions Engineer at HackerOne, says that generative AI “has also become a powerful tool for [ethical hackers] to seek out vulnerabilities and protect organisations at even more speed and scale”.
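In practice, that can be as simple as asking a large language model to review a piece of code. The snippet below is a minimal sketch of the idea, assuming the official OpenAI Python client (version 1.x) and an API key in the environment; the model name, prompt and deliberately vulnerable example code are illustrative rather than anything HackerOne or its hackers actually use.

# A minimal sketch: asking a chat model to flag likely vulnerabilities in a
# code snippet. Assumes the official OpenAI Python client (openai >= 1.0) and
# an OPENAI_API_KEY environment variable; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = '''
def login(username, password):
    # Classic SQL injection: user input concatenated straight into the query
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return db.execute(query)
'''

response = client.chat.completions.create(
    model="gpt-4",  # assumption: any capable chat model would do
    messages=[
        {"role": "system", "content": "You are a code security reviewer."},
        {"role": "user", "content": "List likely vulnerabilities in this code:\n" + SNIPPET},
    ],
)

print(response.choices[0].message.content)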

Human risk management platform CybSafe has also been doing research of its own. It found that 89% of workers admitted to sharing sensitive information with generative AI tools such as ChatGPT.

Given that the same research suggests a mere 21% of people can tell the difference between human and AI-generated text, that’s a very real security issue.

The difficulty of managing those risks is underlined by another finding: 69% of respondents still thought AI tools were a good thing, despite the obvious security concerns.

“We’re seeing cybercrime barriers crumble, as AI crafts ever more convincing phishing lures,” says Dr Jason Nurse, CybSafe’s Director of Science and Research. “The line between real and fake is blurring, and without immediate action, companies will face unprecedented cybersecurity risks.”

How big is the risk? “33% of employees are entering sensitive data into AI on a weekly basis,” says Nurse, adding that this can lead to data leaks. “Our behaviour at work is shifting, and we are increasingly relying on generative AI tools. Understanding and managing this change is now crucial.”
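Managing that shift in behaviour does not have to be complicated. The sketch below is a hypothetical, minimal pre-submission check that flags obviously sensitive strings before a prompt ever reaches an external AI tool; the patterns and the block-or-allow policy are assumptions for illustration, not a production data loss prevention product.

# A hypothetical pre-submission filter for prompts bound for an external AI
# tool. The patterns are illustrative only and will miss plenty of real-world
# sensitive data; a real deployment would sit behind a proxy or browser plugin.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "long API-key-like token": re.compile(r"\b[A-Za-z0-9_-]{32,}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarise this complaint from jane.doe@example.com about her order"
findings = check_prompt(prompt)
if findings:
    print("Blocked, prompt appears to contain:", ", ".join(findings))
else:
    print("Prompt allowed through to the AI tool")

Even a crude check like this at least makes the scale of the problem visible to a security team, which is the first step towards managing it.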

Davey Winder

With four decades of experience, Davey is one of the UK's most respected cybersecurity writers and a contributing editor to PC Pro magazine. He is also a senior contributor at Forbes. You can find him at TechFinitive covering all things cybersecurity.
