How will AI shape cyber risk both now and in the future?

Last updated: Oct 22, 2024

Many commentators have described the applications of AI in the cyber domain as a ‘double-edged sword’: it has the potential to significantly enhance organisations’ defensive capabilities through more effective detection of, and response to, cyber threats. Simultaneously, though, the same AI technologies are starting to be leveraged by cybercriminals and threat actors to launch more sophisticated and targeted attacks.

With the FBI and the UK’s National Cyber Security Centre (NCSC) recently warning that AI will almost certainly increase the volume and impact of cyber-attacks in the next few years, how will the cyber risk community look to manage the promise and perils of AI-enabled cyber-attack and defence?

AI empowering cyber threat actors

As AI has evolved, so too have concerns around security, data privacy and risk management for businesses, as threat actors seek to use AI for their own ends. The risk is not limited to any specific group: cybercriminals and state-level cyber threat actors alike have already leveraged AI with varying degrees of success. Uses to date include targeted, AI-driven social engineering and phishing attacks, in which AI models identify and engage targets with personalised, realistic messaging (including deepfakes). Speech synthesis attacks have likewise benefitted from recent advances in deep learning, replicating a victim’s voice to bypass voice authentication or deceive human interlocutors.

Elsewhere, attackers have deployed AI to augment ransomware and password-based attacks, improving their performance and evasion techniques. Meanwhile, the growing risk of adversarial AI, in which an attacker aims to disrupt or manipulate AI or machine learning models and their training data, poses an enormous security and privacy risk to businesses that increasingly outsource to companies using AI-enabled products and tools. Attempts to ‘poison’ and manipulate training data can be used not only to evade the growing number of AI-enabled detection systems used by businesses and security teams (see below), but also to identify and exploit vulnerabilities in AI systems and compromise the integrity of existing AI models. An attacker can thereby influence the learning process and bias the model towards incorrect predictions or decisions, exploiting these for their benefit or combining them with other tactics to amplify an attack’s effectiveness.
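
To make the poisoning mechanism concrete, below is a minimal, illustrative sketch of a label-flipping attack against a toy detection model. The synthetic dataset and logistic-regression classifier are stand-ins chosen purely for illustration, not any real vendor’s product or technique.

```python
# Minimal, illustrative sketch of a label-flipping poisoning attack against a
# toy detection model. The synthetic data and simple classifier are
# assumptions for illustration only, not any real product or dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "benign vs. malicious" feature vectors.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

# Clean baseline.
print(f"clean accuracy:    {train_and_score(y_train):.3f}")

# Poisoned run: the attacker flips the labels of 30% of training samples,
# biasing the model towards misclassifying malicious activity as benign.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
print(f"poisoned accuracy: {train_and_score(poisoned):.3f}")
```

Even this crude flip measurably degrades test accuracy; a real attacker would poison far more selectively to evade detection while biasing specific decisions.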

Against the backdrop of an ever-evolving AI-enabled cyber threat landscape, we are naturally seeing an uplift in the capabilities and techniques of already-capable threat actors, such as well-established cybercriminal gangs and state or state-sponsored APT groups. Of greater concern, however, is AI’s potential to provide a significant uplift to, and lower the barrier of entry for, less-skilled hackers such as opportunistic cybercriminals and hacktivists. Businesses could, and should, expect a notable increase in AI-driven cyber activity in the coming years. We may also see macro shifts in the cybercrime and ransomware ecosystem, as less-skilled threat actors become empowered to work more independently and ‘as-a-Service’ models potentially lose their potency as a result.

The AI ‘facelift’ for cyber defence

Responding to the potential of AI-enabled cyber threat actors is becoming a serious issue for enterprise cyber risk leaders, and cyber security vendors have answered with a range of AI-enabled and automated solutions. One of the greatest challenges for customers is cutting through the noise to understand how AI could relieve the burden on cyber security teams and improve the business’s cyber resilience.

One of the first solutions to emerge on the market is automated threat detection and response, which prioritises analysis of malicious files and behaviour through automated triage, isolation, containment and response. This is usually marketed by vendors providing Managed Detection and Response (MDR) tools, and can also be offered by external Security Operations Centres (SOCs). These tools can improve accuracy and efficiency, provide proactive threat-hunting capabilities and make information more usable by centralising reporting. However, there are drawbacks: false positives can introduce inefficiency into the operating model, and the tools can create an over-reliance on a single piece of technology. That over-reliance can translate into greater insecurity in the event of service disruption, as the recent CrowdStrike-induced Windows outage demonstrated.
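
As a rough sketch of what such a triage-isolate-contain loop might look like, consider the hypothetical Python below. The alert scores, thresholds and the `isolate_host`/`queue_for_analyst` actions are invented for illustration and do not correspond to any MDR vendor’s API; note how the mid-confidence band keeps a human in the loop to limit the cost of false positives.

```python
# Hypothetical sketch of an automated triage loop of the kind an MDR tool
# might run. All names, scores and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    description: str
    score: float  # model-assigned maliciousness score, 0.0-1.0

ISOLATE_THRESHOLD = 0.9  # high confidence: contain automatically
REVIEW_THRESHOLD = 0.5   # mid confidence: route to a human analyst

def isolate_host(host: str) -> None:
    print(f"[contain] isolating {host} from the network")

def queue_for_analyst(alert: Alert) -> None:
    print(f"[review] {alert.host}: {alert.description} (score={alert.score})")

def triage(alerts: list[Alert]) -> None:
    for alert in alerts:
        if alert.score >= ISOLATE_THRESHOLD:
            isolate_host(alert.host)   # automated response
        elif alert.score >= REVIEW_THRESHOLD:
            queue_for_analyst(alert)   # human-in-the-loop for false positives
        # below REVIEW_THRESHOLD: log only

triage([
    Alert("ws-014", "ransomware-like file encryption burst", 0.97),
    Alert("srv-02", "unusual outbound beaconing", 0.62),
    Alert("ws-100", "rare-but-benign admin tool", 0.31),
])
```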

Vendors also offer tools to automate vulnerability discovery and intrusion testing, which could significantly speed up vulnerability disclosure and patching. These could also enhance an organisation’s vulnerability assessment strategy, supplementing or working in tandem with red teaming or penetration testing that may only occur once a year. This can be characterised as part of an adaptive defence model, in which tools continuously learn to adapt to incoming threats and reinforce cyber controls where needed. The drawbacks, however, are similar to those outlined above, and these tools are costly: only the healthiest cyber risk budgets can accommodate them.
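
In caricature, the shift is from an annual exercise to a continuous loop that scans, scores and prioritises findings on a rolling basis. The sketch below assumes an invented `scan_assets` feed with CVSS-style scores; real scanners and schedulers differ widely.

```python
# Rough sketch of a continuous vulnerability-triage loop. The scanner feed,
# scores and asset names are invented placeholders for illustration.

def scan_assets() -> list[tuple[float, str, str]]:
    """Placeholder scanner feed returning (CVSS score, asset, finding)."""
    return [
        (9.8, "web-01", "outdated TLS library"),
        (5.3, "db-02", "verbose error pages leak stack traces"),
        (7.5, "vpn-gw", "unpatched VPN appliance firmware"),
    ]

def triage_cycle() -> None:
    # Dispatch the highest-severity findings for patching first; in an
    # adaptive defence model this runs on a continuous schedule rather
    # than as a once-a-year exercise.
    for score, asset, finding in sorted(scan_assets(), reverse=True):
        print(f"patch queue: {asset} (CVSS {score}): {finding}")

triage_cycle()
```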

After an organisation has fallen victim to an attack, AI could also help in Digital Forensics and Incident Response (DFIR), for example by assisting with reverse engineering or by finding and classifying leaked credentials. One of the most significant benefits of automated DFIR is speed and efficiency coupled with accurate preservation of evidence, a task that can be cumbersome and time-consuming for human analysts; this in turn helps with reporting and documenting the investigation, process and findings. But, once again, there are drawbacks to consider. A lack of human judgement may be a disadvantage, as the techniques used by attackers may require ‘out of the box’ thinking from humans to solve problems effectively. Generative tools are potentially more useful here, as the AI can use the output of each investigation to refine itself and become more efficient over time.
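
As a small illustration of the ‘finding and classifying leaked credentials’ step, the sketch below scans a text artefact with simplified regular expressions. The patterns are deliberately basic assumptions; production DFIR tooling uses far richer detection, entropy analysis and context.

```python
# Minimal sketch of scanning artefacts for leaked credentials during DFIR.
# Patterns are deliberately simplified illustrations, not production rules.
import re

# Each classification maps to a simplified, illustrative pattern.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password_pair":  re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def classify_leaks(text: str) -> list[tuple[str, str]]:
    """Return (classification, matched snippet) pairs found in the artefact."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((label, match.group(0)))
    return hits

sample_log = (
    "export AWS_KEY=AKIAABCDEFGHIJKLMNOP\n"
    "password = hunter2\n"
)
for label, snippet in classify_leaks(sample_log):
    print(f"{label}: {snippet}")
```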

Whilst these tools are set to change the defensive game for organisations, they are not a panacea. Human intervention is still heavily needed, and AI doesn’t change the fact that the first line of defence is users themselves. Even the use of GenAI for DFIR raises questions about the management and disclosure of evidence if a breach reached criminal court: disclosure and discovery are usually completed and overseen by human lawyers, and it’s unclear how courts would react to evidential packages produced entirely by artificial intelligence. As with any technology, AI has the potential to reduce cyber risks, but it cannot eliminate them.

The beginning of an ‘AI arms race’?

Whilst we may be at the beginning of what some experts believe is an ‘AI arms race’, the risks and opportunities arising from AI-driven cyber products and tooling could also fundamentally shift the way cyber risk management functions. AI is simultaneously lowering the barrier to entry for, and enhancing the efficacy of, offensive tools and techniques, driving a race in which defensive tools and cyber practitioners are struggling to keep up.

There are wider impacts of this cyber-AI race. First and foremost, AI’s impact on the operators of the internet is still nascent: there is a real possibility that adversarial AI-driven attacks will change the way Internet Service Providers operate and manage traffic, prompting them to automate the handling of upstream threats. For the time being, though, there is a strong chance that law enforcement will be overwhelmed by cases of financially motivated attacks on citizens and businesses as this suite of offensive tooling grows. Similarly, the regulatory and insurance landscape will be required to play catch-up, as the next few years likely witness an explosion in AI-related litigation and claims, and as the threat from adversarial AI creates new and complex problems for tech lawyers and underwriters.

Ultimately, businesses must prepare for the cyber-AI race by investing in automated tools and educating employees on both the risks and opportunities of this kind of technology. Building this capability will be needed to blunt the impact of an evolving threat environment and to aid cyber teams in what is already a huge task.

This is an extract from 'The essentials of AI risk'. You can read the rest here.

CONTRIBUTORS
Nick Robinson
Consultant, Crisis & Security Strategy
Sneha Nichols-Dawda
Consultant, Crisis & Security Strategy