Mike Britton, CISO at Abnormal Security: “The job for security leaders is only getting harder”
If you’re the type of person who always turns to the final page of a book to check the ending, allow us to save you the trouble here. The final words that Mike Britton, CISO of Abnormal Security, has to say are: “Companies should shift their security defences to prioritise… threat vectors that target humans as the vulnerability.”
It’s just one of the excellent pieces of advice tucked away in this in-depth interview, and if you’re a cybersecurity professional – or simply want to keep your own data safe – then we think you’ll find it a superb read. You might even want to scribble a few notes as you go along, as it’s packed with action points too.
So, why listen to Mike? First of all, because as CISO of Abnormal Security he leads the company’s information security and privacy programs. He also works closely with the Abnormal engineering teams to ensure platform security, essentially serving as the voice of the customer for feature development.
Before joining Abnormal, Mike spent six years as the CSO and Chief Privacy Officer for Alliance Data and previously worked for IBM and VF Corporation. In total, he brings more than 25 years of information security experience from multiple Fortune 500 global companies.
We hope you find his advice as informative as we did.
Could you please introduce yourself to our audience and share how you ended up working in cybersecurity?
I’m Mike Britton, CISO at Abnormal Security. Prior to Abnormal, I was the CISO at Alliance Data (now Bread Financial). I ended up in cybersecurity long before it was called cybersecurity. Back in the mid-90s I was lucky enough to get a job with MCI in mainframe security and gained valuable experience there, with stops at VF Corporation and IBM before landing at Alliance Data. I’ve been in cybersecurity for almost 28 years.
What are the biggest cybersecurity challenges those in leadership roles are facing?
The job for security leaders is only getting harder, and that’s due to a confluence of factors:
For one, work is becoming increasingly distributed, especially post-COVID. More organisations are shifting to cloud email as a result, but this introduces new attack vectors as attackers can now directly infiltrate email accounts by defeating authentication and exploiting misconfigurations.
The interconnected nature of cloud email also creates a broader attack surface. If an attacker successfully gains access to one email account, they could then get unconstrained access to all other connected cloud accounts and the data within them. It’s not uncommon for today’s enterprises to have hundreds of cloud applications within their IT ecosystem – the responsibility to secure each of these apps creates huge pressure for security leaders.
Lastly, cybercriminals are becoming increasingly advanced. The proliferation of generative AI has played a big role here because threat actors now have an accessible tool that they can use to scale their attacks in both volume and sophistication. Now, even petty criminals can weaponise tools like ChatGPT to write highly targeted phishing emails to dupe their victims into making financial transactions or divulging sensitive information.
Worth a read: Whit Jackson, VP Global M&E at Wasabi Technologies: “AI is having a massive impact on sports that cannot be overlooked”
What is your take on ethical hackers and their role in cybersecurity?
The term hacker always gets a bad rap and often carries a negative connotation. Good security professionals have a natural curiosity about how to make things behave beyond their intended use. Ethical hackers are critical to a strong security program. Having employees who think like an attacker and don’t accept the status quo is a great way to find issues before they become a problem. By proactively seeking out vulnerabilities in the organisation’s systems, ethical hackers can help organisations get a better understanding of where their gaps are and where to prioritise defences.
One important area that ethical hackers should test is social engineering. This is a growing threat that’s only getting worse with generative AI, and ethical hacking can help test employees’ susceptibility to socially engineered emails. They could look beyond email as well – for example, calling the help desk to ask for a password reset and probing its verification process.
Which cybersecurity best practices are being adopted with the most success by companies?
Some security best practices that every organisation should adopt include:
- Strengthening MFA with passwordless and adaptive methods. Traditional MFA is commonly used, but it’s not immune to compromise. MFA bypass attacks are growing, with some threat groups now offering MFA bypass-as-a-service kits for sale on the dark web. Passwordless authentication factors, including biometrics (like fingerprints or facial recognition) and hardware tokens (like YubiKeys), can help reduce the credential theft that undermines MFA. Adaptive authentication, which uses contextual information about the user – like their location, device type and the time of day – can also be used to determine which authentication factors should be applied in a given situation (see the sketch after this list).
- Using AI to improve the detection of social engineering attacks. Security awareness training, while important, has largely focused on helping employees spot the hallmarks of a phishing attack, like poor spelling and grammar. But with generative AI, threat actors can eliminate these characteristics, making email attacks near-impossible to detect. Security awareness training should be paired with advanced technology to catch any attacks that might slip past human detection. Security solutions built natively with AI technology can put organisations in a better position to understand what normal behaviour looks like in their email environment and detect deviations that may indicate a potential attack, even when there aren’t any overt signs of malicious activity.
- Customer-centric security. Security teams are often seen as an impediment to the business – the team whose answer is always “no”. Security teams should build strong relationships with their internal stakeholders, especially the engineering and technology teams, to ensure collaboration and trust.
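To make the adaptive authentication idea concrete, here is a minimal sketch of a risk-based step-up policy. The signals, weights, thresholds and factor names are illustrative assumptions, not a production policy; a real deployment would live inside an identity provider rather than standalone code.

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    """Contextual signals captured at sign-in time (illustrative set)."""
    known_device: bool       # device previously enrolled by this user
    usual_country: bool      # request comes from the user's typical country
    off_hours: bool          # sign-in outside the user's normal working hours
    impossible_travel: bool  # geo-velocity inconsistent with the last sign-in

def required_factors(ctx: LoginContext) -> list[str]:
    """Map a simple additive risk score to step-up authentication factors.

    The weights and cut-offs are assumptions for the sketch; a real policy
    engine would be tuned against the organisation's own sign-in telemetry.
    """
    risk = 0
    risk += 0 if ctx.known_device else 2
    risk += 0 if ctx.usual_country else 2
    risk += 1 if ctx.off_hours else 0
    risk += 4 if ctx.impossible_travel else 0

    if risk >= 4:
        # High risk: require a phishing-resistant factor (e.g. hardware key).
        return ["password", "hardware_token"]
    if risk >= 2:
        # Medium risk: step up with a second factor.
        return ["password", "push_approval"]
    # Low risk: a single factor (or a passkey) is acceptable.
    return ["password"]

if __name__ == "__main__":
    trusted = LoginContext(known_device=True, usual_country=True,
                           off_hours=False, impossible_travel=False)
    risky = LoginContext(known_device=False, usual_country=False,
                         off_hours=True, impossible_travel=False)
    print(required_factors(trusted))  # ['password']
    print(required_factors(risky))    # ['password', 'hardware_token']
```

The design point is simply that the strength of the required factor scales with the riskiness of the sign-in context: low-risk logins stay frictionless, while anomalous ones are forced onto phishing-resistant factors.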
Worth a read: Camellia Chan, CEO of Flexxon: “Generative AI is a goldmine for cybercriminals”
What is it about generative AI that makes it so prone to exploitation by threat actors? Conversely, how can it be used for good (in cybersecurity)?
Generative AI is dangerous in the hands of malicious actors because it enables them to launch sophisticated (and ultimately, successful) phishing emails. For example, it can help them eliminate the typos and grammatical errors that tend to characterise email attacks, and can even write convincing emails in other languages. And if those threat actors are able to acquire snippets of their victim’s email history, they can incorporate this data within their generative AI prompts, bringing highly personalised context, tone and language into their email attacks and making them even more deceptive.
On the other hand, AI can be used for good in cybersecurity – by giving security teams a tool to help strengthen their threat detection capabilities while improving their efficiency.
Maintaining SOC [Security Operations Centre] productivity is getting harder due to the rising sophistication of attack tactics, budget cuts and the widening skills gap in the cybersecurity field. These are all incentives to begin leaning into AI/GenAI and automation, and there are a few different areas where CISOs can begin to see quick productivity gains through the use of AI in their SOC:
- Detecting social engineering threats. Using behavioural AI to learn typical user behaviours across email, collaboration and SaaS apps, security teams can baseline known behaviours and then detect deviations indicative of a potential attack. This helps overcome the limitations of many traditional security solutions that rely on detecting known indicators of compromise – something many attackers have learned to omit through social engineering techniques (see the first sketch after this list).
- Automating the triage and remediation of user-reported phishing emails. Manually sifting through user-reported phishing emails can consume hours of skilled analyst time. Using AI and automation to inspect, evaluate, and automatically remove (if necessary) user-reported emails can help free up valuable SOC analyst time (see the second sketch after this list).
They could even incorporate generative AI to promote security awareness throughout this process. For example, Abnormal recently released its AI Security Mailbox product, which provides a personalised response explaining whether the email was deemed malicious, safe or spam and how that determination was made. Users can then converse directly with the AI security analyst, getting real-time feedback as it teaches them better security practices.
- Identifying risky misconfigurations and drift. By creating profiles of each vendor, application, employee and email tenant in the organisation’s cloud environment, AI can help security teams identify and take action on configuration gaps and drift, including privilege escalations and new third-party app integrations.
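As a toy illustration of the behavioural baselining described in the first point, the sketch below learns which counterparties each user normally corresponds with, then flags mail that is both from a never-before-seen sender and uses urgent payment language – a common business email compromise pattern. The class and field names are hypothetical, and a real system would model far richer signals (authentication results, tone, timing, device).

```python
from collections import defaultdict

class EmailBaseline:
    """Toy behavioural baseline: learn which counterparties each user
    normally corresponds with, then flag deviations from that norm."""

    def __init__(self) -> None:
        self.seen = defaultdict(set)  # user -> set of known counterparts

    def observe(self, user: str, counterpart: str) -> None:
        """Record one message from historical (assumed-benign) traffic."""
        self.seen[user].add(counterpart.lower())

    def is_anomalous(self, user: str, sender: str, subject: str) -> bool:
        """Flag mail from a never-before-seen sender that also carries
        urgent payment language (illustrative keyword list)."""
        new_sender = sender.lower() not in self.seen[user]
        urgent = any(word in subject.lower()
                     for word in ("urgent", "wire", "invoice", "payment"))
        return new_sender and urgent

baseline = EmailBaseline()
baseline.observe("alice@example.com", "bob@partner.com")
print(baseline.is_anomalous("alice@example.com",
                            "ceo@lookalike-partner.com",
                            "URGENT wire transfer"))  # True
```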
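And as a sketch of the second point, a user-reported phishing triage pipeline can be reduced to a verdict function. Everything here – the dictionary shape, the rules, the suspicious TLD list – is a simplified assumption; real pipelines parse MIME, detonate attachments and weigh authentication results before auto-remediating.

```python
import re

SUSPICIOUS_TLDS = {".zip", ".top", ".xyz"}   # illustrative only
URL_RE = re.compile(r"https?://[^\s\"'>]+")

def triage(reported_email: dict) -> str:
    """Return a coarse verdict for a user-reported message.

    `reported_email` is a hypothetical dict with 'headers' and 'body'
    keys; a production system would act on the mail platform's API.
    """
    body = reported_email.get("body", "")
    headers = reported_email.get("headers", {})

    # Rule 1: sender fails DMARC -> likely spoofed.
    if "dmarc=fail" in headers.get("Authentication-Results", ""):
        return "malicious"

    # Rule 2: links pointing at suspicious top-level domains.
    for url in URL_RE.findall(body):
        if any(url.rstrip("/").endswith(tld) for tld in SUSPICIOUS_TLDS):
            return "malicious"

    # Rule 3: bulk-mail markers but no threat indicators -> spam.
    if "List-Unsubscribe" in headers:
        return "spam"

    return "safe"  # in practice, ambiguous cases go to an analyst queue

msg = {"headers": {"Authentication-Results": "spf=pass; dmarc=fail"},
       "body": "Please verify your account at http://login.example.xyz"}
print(triage(msg))  # malicious
```

In production, a "malicious" verdict would trigger clawing back every copy of the message from affected mailboxes via the mail platform’s API, which is where the analyst time savings actually come from.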
What’s something that has drastically changed about cybersecurity since you first got started in the field?
Prior to the cloud, SaaS and the hybrid workforce, security was heavily focused on securing the perimeter and preventing threat actors from breaking into specific technology infrastructure: networks, data centres, email servers, and employees’ PCs and smartphones.
Today, the technology landscape and the way people work have shifted dramatically, which means many legacy controls and technologies are largely ineffective at stopping today’s threat actors. This has also pushed attackers toward credential theft and social engineering. Because of this, companies should shift their security defences to prioritise these new threat vectors that target humans as the vulnerability.