Simon Horswell, Senior Fraud Specialist at Onfido: “Artificial intelligence is a key weapon in the battle against deepfakes”
Does Simon Horswell, Senior Fraud Specialist at Onfido, have the best job in the world? He certainly loves it – unless, ironically, he’s faking – which isn’t a surprise when he talks about his fascination with the “cat-and-mouse” game that is modern-day fraud protection.
Today’s cat, otherwise known as a cyberattack group, has one crucial weapon: AI. For example, deepfakes have “seen a 3,000% increase” within a year. “A lone fraudster assuming someone’s identity with deepfakes can potentially cost a company millions in fraud losses, or ruin someone’s credit rating and cripple them with huge debts.” It’s something a British firm learned at a cost of $25 million earlier this year.
Fortunately, we mice can use AI too. AI-powered tools, such as those developed by Onfido, are trained on the latest data to spot telltale signs. But, as Simon explains, you really need a layered approach to fight fraud effectively.
“If someone is a genuine user, then they should pass through all the layers with next to no friction,” he explains. “The doorways should all align so that they barely notice. Meanwhile, a bad actor should find that even if they pass through one layer, they get caught in the next, facing closed gates or brick walls.”
Think of it as a cat trap.
Our thanks to Simon for his time and we hope you enjoy reading the full interview below.
Could you please introduce yourself to our audience and share how you ended up working in cybersecurity?
I’ve always been fascinated by the cybersecurity space and the cat-and-mouse game that is modern fraud prevention. I’ve been at Onfido for over five years and in my role as Senior Fraud Specialist, I’m at the very heart of understanding fraud trends, patterns and behaviours, and how we can keep our clients safe.
I’m based in our own R&D Fraud Lab, which is a dedicated space for us to train and test our AI and biometric models to keep our platform robust and resilient to the latest identity fraud risks.
My passion for fighting fraud started during my time as an Immigration Officer in UK Border Control. There, I was exposed to a range of fraud attacks and their methodologies. I became fascinated with the detection side of things and worked my way up to Expert Document Examiner, responsible for offering expert opinions on the authenticity of secure identity documents, identifying new types of forgery, and training government agencies.
It was my time as a Document Application Specialist at Foster + Freeman Forensic Science Innovation that piqued my interest in cybersecurity. We were already looking at ways to audit image files for signs of digital tampering, and the forensic investigation skills were still essentially the same at their core, just applied in a different environment.
Even back then, a lot of the industry was trying to find ways to automate the document and facial examination process, which got me thinking more about how people might try to beat the machine. When the opportunity came up at Onfido, I saw it as a chance to challenge my detection skills in a new arena.
What are some cases of deepfakes being used that particularly concern you?
From an outside perspective, it might seem that deepfakes have appeared almost overnight. But we’ve been tracking them for some time. The difference over the last 18 months has been that AI has made them vastly more accessible to the masses.
Deepfakes used to require significant computing power and technical expertise, and so were not worth the time of fraudsters seeking to scale attacks fast for ‘quick wins’. But now, the advent of generative AI and various face-swap apps means that almost anyone can create them.
Fraudsters now use them to impersonate friends, family, influencers and political figures. That’s why we’ve seen a 3,000% increase in attacks in 12 months.
Out of all targets, politicians have been the focal point of the most high-profile attacks, as we’ve seen with Keir Starmer and Sadiq Khan. With a General Election on the horizon in the UK, deepfakes have the potential to significantly disrupt the democratic process: they could unduly sway public opinion, determine electoral results and ultimately spread disinformation. That’s why we’re at the forefront of developing solutions to keep them at bay.
We also have to consider the significant threat that this new twist on fraud represents to businesses and individuals, both financially and emotionally. A lone fraudster assuming someone’s identity with deepfakes can potentially cost a company millions in fraud losses, or ruin someone’s credit rating and cripple them with huge debts.
We all spend so much time online, yet we’re rapidly approaching a point where we can’t trust what we see and hear unless we find solid ways to confirm authenticity.
Worth a read: Todd Wade, Interim Cyber Lead for Link Fund Solutions: “The deepfake cybersecurity challenges are only going to get worse”
What do you think are the best approaches to combating deepfakes?
Artificial intelligence is a key weapon in the battle against deepfakes and so we often describe it as an “AI vs AI” showdown. That’s why it’s critical that businesses invest in the right AI platforms – those that have been comprehensively trained to detect even the most subtle differences between authentic and synthetic images or video, which are often imperceptible to the human eye.
During the onboarding process, we’ve found the use of AI-powered biometrics particularly effective in combating deepfakes. In biometric capture and analysis, AI can automate the verification process and run comprehensive checks based on an individual’s unique physical characteristics, such as facial features, voice or fingerprints.
AI powers liveness checks, whereby the algorithm checks for facial movements, skin textures and micro-movements, and seeks to identify abnormalities such as unnatural blinking patterns or lip movements that are found in deepfakes. It can also detect when the background media is inconsistent with the individual in the photo or video – a sign that the submission has been tampered with.
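To make one such liveness signal concrete, here is a minimal sketch of blink detection via the eye aspect ratio (EAR). This is an illustrative toy rather than Onfido’s implementation, and it assumes per-frame eye landmarks in the six-point ordering produced by common 68-point facial landmark models.

```python
# Toy liveness signal: blink detection via the eye aspect ratio (EAR).
# EAR drops sharply when the eye closes (Soukupova & Cech, 2016).
# Assumes six (x, y) landmarks per eye, ordered p1..p6 as in dlib-style
# 68-point models. Illustrative only, not a production check.

import numpy as np

EAR_THRESHOLD = 0.21     # below this, treat the eye as closed (tunable)
MIN_CLOSED_FRAMES = 2    # a real blink spans at least a couple of frames

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks ordered p1..p6 around one eye."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(frames: list[np.ndarray]) -> int:
    """frames: one (6, 2) eye-landmark array per video frame."""
    blinks, closed_run = 0, 0
    for eye in frames:
        if eye_aspect_ratio(eye) < EAR_THRESHOLD:
            closed_run += 1
        else:
            if closed_run >= MIN_CLOSED_FRAMES:
                blinks += 1
            closed_run = 0
    return blinks

def blink_rate_plausible(blinks: int, duration_s: float) -> bool:
    # People blink roughly 10-20 times a minute. No blinks at all in a
    # long capture, or an implausibly high rate, is a spoof signal.
    per_minute = blinks / (duration_s / 60.0)
    return 2.0 <= per_minute <= 40.0
```

A production system would fuse many such signals (texture, micro-movement, background consistency) in a trained model; a single hand-tuned threshold like this would be easy for a sophisticated deepfake to satisfy on its own.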
Defensive AI, trained on the latest attack vectors at scale, is essential to match the evolving threat landscape and safeguard against deepfakes. AI’s ability to continuously learn and adapt is one of its biggest strengths, and that’s why it must be trained on large datasets of both real and fake media to provide sophisticated protection without impacting the user experience.
In remote onboarding and registration, combating deepfakes successfully means keeping bad actors out without creating user friction. It must be fast and seamless, otherwise it comes at the expense of the customer journey.
What are the biggest cybersecurity challenges those in leadership roles are facing?
The global fraud landscape has shifted substantially in recent years. Generative AI and automation tools are now widely accessible and have opened up a network of new fraud opportunities. Pre-pandemic, fraudsters tended to follow the typical working-week pattern: a clock-in, clock-out, nine-to-five shift, with weekends as downtime. Now, with the support of these technologies, many can scale their attacks around the clock to hit as many targets as possible – without needing extensive resources or technical skills.
During a cost-of-living crisis, this can pose a serious challenge to business leaders. More than ever, cybersecurity needs investment and resources at a time when businesses have a number of competing priorities.
But make no mistake, it is critical that businesses go on the offensive against fraudsters if they want to stay ahead of evolving cyber threats. They simply cannot afford the repercussions of widespread fraud, like reputational damage, customer loss and revenue decline.
By working with the right experts, they can catch deepfakes and other attacks before they infiltrate their service. Organisations that invest in sophisticated AI and biometric solutions will find they can avoid the deepfake iceberg and set themselves up for sustainable long-term growth.
Worth a read: Dan Middleton, Vice President UK & Ireland at Veeam: “Cybercrime is now an industry. They have ERG policies”
What are some prevention strategies you believe every business should adopt?
To successfully mitigate identity fraud, online businesses must invest in the first line of defence – the remote onboarding process. With the right identity verification platform, one that includes AI- and biometric-powered checks alongside sophisticated ID document scans, businesses can protect against a variety of cyber risks, whether that’s deepfakes, 2D/3D masks, replay attacks or social engineering scams.
At Onfido, we recommend adopting a layered approach to fraud detection. If someone is a genuine user, then they should pass through all the layers with next to no friction. The doorways should all align so that they barely notice. Meanwhile, a bad actor should find that even if they pass through one layer, they get caught in the next, facing closed gates or brick walls.
While there are many layers that could be applied to identity verification to keep bad actors out, businesses should always ensure they are checking users against a range of trusted data sources. This means that in addition to reviewing official ID documents, like passports and licences, and selfies, they should use platforms that can check users against data sources appropriate to their geography, such as voter registers and consumer, credit and utility databases, to protect against synthetic identities and organised fraud rings.
Watch lists, sanctions lists, and politically exposed persons (PEP) screening are other important layers that can help businesses stay compliant with regulations.
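The layered model is easy to picture in code. The sketch below is purely schematic: the layer names and pass/fail signals are illustrative placeholders, not Onfido’s product or API. A genuine user satisfies every predicate and never notices the doorways; a fraudster who forges a document but cannot beat the liveness check is stopped at that gate.

```python
# Schematic of layered identity verification: each layer either waves
# the applicant through quietly or stops the attempt. Layer names and
# checks are illustrative placeholders, not a real product's API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Applicant:
    document_ok: bool
    selfie_matches_document: bool
    liveness_passed: bool
    found_in_trusted_records: bool
    on_watchlist: bool

Layer = tuple[str, Callable[[Applicant], bool]]

LAYERS: list[Layer] = [
    ("document authenticity", lambda a: a.document_ok),
    ("biometric face match",  lambda a: a.selfie_matches_document),
    ("liveness / deepfake",   lambda a: a.liveness_passed),
    ("trusted data sources",  lambda a: a.found_in_trusted_records),
    ("watchlist / PEP",       lambda a: not a.on_watchlist),
]

def onboard(applicant: Applicant) -> tuple[bool, str]:
    for name, check in LAYERS:
        if not check(applicant):
            return False, f"stopped at layer: {name}"
    return True, "all layers passed with no friction"

# A forged document that somehow passes layer one still has to survive
# the face match, the liveness check and the data-source checks.
print(onboard(Applicant(True, True, False, True, False)))
# -> (False, 'stopped at layer: liveness / deepfake')
```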
What is it about generative AI that makes it so prone to exploitation by threat actors? Conversely, how can it be used for good?
I think it’s because generative AI is so good at creating convincing false visual material with very little human intervention or input. Things that would’ve taken a team of very technically skilled people a considerable length of time to produce can now be done in minutes or seconds from a simple prompt by virtually anybody.
While there’s no doubt that generative AI chatbots such as ChatGPT have made huge strides in the past few years, and have captured the attention of millions, these tools are prone to being used by fraudsters to perpetrate crime. For example, where previously a lot of phishing attacks were easy to spot by their poor grammar or spelling, a chatbot can make the messages far more convincing, which increases the likelihood of someone being duped.
With deepfake technology, a fraudster can wear someone else’s face from just a single image. We can generate hundreds of fictitious faces in a matter of minutes, and then use a platform to put them on thousands of document templates in convincing, fully 3D-rendered environments.
At the same time, we have been increasingly using generative AI to strengthen our platform [at Onfido]. For example, generative AI is at the heart of our document verification checks and has been used to incorporate optical character recognition. In the Fraud Lab, we use generative AI to create thousands of images of identity documents to augment the training datasets for our document verification models.
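To give a flavour of the kind of deterministic check that OCR output feeds into (a long-standing industry standard rather than anything specific to Onfido’s pipeline), here is the ICAO 9303 check-digit rule used in a passport’s machine-readable zone (MRZ). An OCR’d document whose printed check digits don’t recompute correctly has either been misread or tampered with.

```python
# ICAO 9303 check-digit validation for a passport's machine-readable
# zone (MRZ). Digits keep their face value, A-Z map to 10-35, and the
# filler '<' counts as 0; characters are weighted 7, 3, 1 cyclically
# and summed modulo 10.

def mrz_char_value(c: str) -> int:
    if c.isdigit():
        return int(c)
    if c.isalpha():
        return ord(c.upper()) - ord("A") + 10
    return 0  # '<' filler

def mrz_check_digit(field: str) -> int:
    weights = (7, 3, 1)
    return sum(mrz_char_value(c) * weights[i % 3]
               for i, c in enumerate(field)) % 10

# The specimen document number from ICAO Doc 9303, "L898902C3",
# carries check digit 6.
assert mrz_check_digit("L898902C3") == 6
```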
On the biometric side, we mirror the tactics employed by fraudsters in our approach, using the same open-source methods to realistically swap one person’s face onto another. This provides us with a robust data set which we then use to train our AI anti-spoofing models.
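A sketch of how that mirroring might look in practice is below. The face_swap function is a hypothetical placeholder for whichever open-source face-swap model is plugged in; the point is the labelling scheme, where each genuine selfie also yields several swapped spoofs for an anti-spoofing model to learn from.

```python
# Sketch: building an anti-spoofing training set by pairing genuine
# selfies with face-swapped forgeries of them. `face_swap` is a
# hypothetical placeholder for an open-source face-swap model.

import random

def face_swap(source_img, target_img):
    """Placeholder: render source_img's face onto target_img."""
    raise NotImplementedError("plug in an open-source face-swap model")

def build_spoof_dataset(genuine_images: list, swaps_per_image: int = 3):
    dataset = []  # (image, label) pairs: 0 = genuine, 1 = spoof
    for img in genuine_images:
        dataset.append((img, 0))
        others = [t for t in genuine_images if t is not img]
        for target in random.sample(others, k=min(swaps_per_image, len(others))):
            dataset.append((face_swap(img, target), 1))
    random.shuffle(dataset)
    return dataset
```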
However, we are conscious that generative AI must be used with care when creating training data. If the data is too narrow or biased, the model will be limited, performing well on specific types of fraud but failing to detect new or evolving techniques. Handled well, though, the approach allows us to train our models preemptively against attacks that we’ve anticipated but are yet to encounter.