ChatGPT data leaks: one in three employees putting companies at risk
It has been quite the week for AI and OpenAI, the company behind ChatGPT. First, we saw Co-Founder and CEO Sam Altman sacked and his fellow Co-Founder Greg Brockman resign. Microsoft then hired Altman, before he got his old job back after a threatened rebellion by OpenAI employees. If you thought that was enough drama in the world of generative AI, wait until you hear what cybersecurity experts have to say.
Research published by Add People reveals that a third of UK workers who use AI tools such as ChatGPT do so without their employer’s knowledge or permission. “This survey shows that a third of workers who use AI are putting their employers at risk if a data breach occurs,” says Add People’s Chief Marketing Officer, Peter Marshall.
“The best way to avoid insecure AI use is to raise awareness of the risks with your staff and make recommendations on when and how to use tools like Bard and ChatGPT.”
Ian Reynolds, a Senior Security Consultant at SecureTeam, agrees. “If your industry is engaging with AI, but your organisation has yet to officially implement any tools or strategies, now might be the time to establish some basic frameworks for your staff to follow.”
And don’t expect the security implications of generative AI to fade as we hurtle towards 2024; if anything, the issues will become even more apparent.
“Traditionally, identifying and exploiting complex, one-off API vulnerabilities required human intervention,” says Shay Levi, CTO and Co-Founder of Noname Security. “AI is now changing this landscape, automating the process, enabling cost-effective, large-scale attacks. In 2024, I predict a notable increase in the sophistication and scalability of attacks.”
IT decision-makers’ deepfake concerns
Research by Integrity360 has found that more than two-thirds of the IT decision-makers surveyed are “worried about cybercriminals’ use of deepfakes in targeting organisations”.
But it’s not all bad news, as James Hinton, Director of CST Services at Integrity360, points out. “While AI will pave the way for novel threats, it will also form the bedrock of a variety of enhanced security solutions,” he says.
“In 2024, we’ll see the proliferation of AI and generative AI platforms being integrated into security tools, allowing huge amounts of data to be processed much more quickly, which will speed up operations such as incident response.”
But we couldn’t conclude such a dramatic week in AI without a soap-opera cliffhanger, could we?
Usman Choudhary, Chief Product and Technology Officer at VIPRE Security Group, warns: “As much as AI is a tool that will help to make strides in strengthening cybersecurity defences, it is also a technique that is being widely deployed by threat actors to breach those safeguards with success.”
Tune in next week for another exciting episode of what AI holds for the future of cybersecurity…

NEXT UP

Can Australia go it alone on combating deepfake porn?
Why Rotterdam is a tech haven: a love letter from a startup
We reached out to Kees Wolters to ask for a comment on Rotterdam as one of the best cities in Europe for tech workers – he sent us what amounted to a love letter to the city, which we decided to publish in full (with his consent), below.
Verizon and Skylo launch direct-to-device messaging using satellites
Verizon and Skylo partnered to launch a direct-to-device messaging service for customers and Internet of Things (IoT) enthusiasts.
IBM pushes for EU to make AI open and collaborative
If the EU wants to remain a global digital leader, it needs to make AI open and trusted. So says IBM in its new digital policy agenda for Europe.