It has been quite the week for AI and ChatGPT’s parent company OpenAI. First, we saw co-founder and CEO Sam Altman sacked and his fellow co-founder Greg Brockman resign. Microsoft then hired Altman, before he got his old job back after a threatened rebellion by OpenAI employees. If you thought that was enough drama in the world of generative AI, wait until you hear what cybersecurity experts have to say.
Research published by Add People reveals that a third of workers in the UK are using tools such as ChatGPT without corporate knowledge or permission. “This survey shows that a third of workers who use AI are putting their employers at risk if a data breach occurs,” Add People’s Chief Marketing Officer, Peter Marshall, says.
“The best way to avoid insecure AI use is to raise awareness of the risks with your staff and make recommendations on when and how to use tools like Bard and ChatGPT.”
Related reading: What is Google Bard: news and updates
Ian Reynolds, a Senior Security Consultant at SecureTeam, agrees. “If your industry is engaging with AI, but your organisation has yet to officially implement any tools or strategies, now might be the time to establish some basic frameworks for your staff to follow.”
And don’t expect the security implications of generative AI to fade as we hurtle towards 2024; if anything, the issues will become even more apparent.
“Traditionally, identifying and exploiting complex, one-off API vulnerabilities required human intervention,” says Shay Levi, CTO and Co-Founder of Noname Security. “AI is now changing this landscape, automating the process, enabling cost-effective, large-scale attacks. In 2024, I predict a notable increase in the sophistication and scalability of attacks.”
IT decision makers’ deepfake concerns
Research by Integrity360 has found that more than two-thirds of IT decision-makers asked were “worried about cybercriminals’ use of deepfakes in targeting organisations”.
But it’s not all bad news, as James Hinton, Director of CST Services at Integrity360, points out. “While AI will pave the way for novel threats, it will also form the bedrock of a variety of enhanced security solutions,” he says.
“In 2024, we’ll see the proliferation of AI and generative AI platforms being integrated into security tools, allowing huge amounts of data to be processed much more quickly, which will speed up operations such as incident response.”
But we couldn’t conclude what has been such a dramatic week in AI without a soap opera cliffhanger ending, could we?
Usman Choudhary, Chief Product and Technology Officer at VIPRE Security Group, warns: “As much as AI is a tool that will help to make strides in strengthening cybersecurity defences, it is also a technique that is being widely deployed by threat actors to breach those safeguards with success.”
Tune in next week for another exciting episode of what AI holds for the future of cybersecurity…