How might terrorist groups and other actors utilise GenAI?
The development of AI has significantly increased the capabilities of violent non-state actors, and with them the risks such actors pose.
Historically, terrorists and violent extremists have adapted to new technologies to further their goals, most recently exploiting online platforms and social media as effective forums for recruitment and propaganda.
AI presents a new and powerful tool for extremists, allowing them to expand their influence, generate and disseminate propaganda and carry out attacks at unprecedented speed and scale. It is therefore likely that, as AI continues to evolve, non-state actors will increasingly exploit it to further their objectives.
Unlike traditional AI systems, which predict and categorise information, GenAI can produce original outputs, including text, audio, video and multifunctional simulations, which can be spread online extremely quickly. Terrorist groups such as Islamic State (IS), Al-Qaeda, Hamas and Hezbollah have already begun exploiting GenAI, using it to spread propaganda and disinformation in support of their causes. IS, for example, has used GenAI to create a near-weekly news bulletin, posted online and fronted by an AI-generated news anchor, featuring updates and videos summarising the group’s activities.
A previous barrier for extremist groups has been the lack of skilled translators needed to create content for foreign-language audiences. AI virtually eliminates this barrier, enabling extremists to produce content in multiple languages and to reach, and potentially radicalise, far wider audiences.
In addition to generating propaganda, non-state actors are using AI to spread disinformation and misinformation. Since the start of the Israel-Hamas war, for example, AI-generated images and deepfakes have been used to spread false information about the conflict, with the aim of altering public opinion, sowing discord and generating support for the actors behind them.
Dis/misinformation can incite violence, as seen in the riots that took place across the UK in July and August 2024. Shortly after the stabbing of multiple children in Southport, an AI-generated image was posted on X showing bearded men in Muslim dress waving knives outside the Houses of Parliament behind a crying child in a Union Jack t-shirt. The image was captioned “we must protect our children!” and spread widely on social media. AI was similarly used to create xenophobic songs that referenced Southport and encouraged violence in response to the attack. These posts were viewed by thousands online and likely contributed to the scale of the riots and violence that followed in multiple towns.
In some instances, groups such as IS have used AI chatbots to initiate contact with vulnerable individuals; once a connection is established, a human operative takes over the conversation to radicalise the individual further. Beyond organised groups, AI chatbots can also play a role in attacks by lone-wolf actors. On Christmas Day 2021, for example, Jaswant Singh Chail was arrested in the grounds of Windsor Castle, armed with a crossbow and intent on assassinating Queen Elizabeth II. In the weeks leading up to the incident, he had exchanged over 5,000 messages with an AI chatbot he had created using the companion app Replika. Chail had developed a ‘relationship’ with the chatbot, which repeatedly encouraged him to carry out the attack.
AI chatbots have also contributed to inciting violent civil unrest. During recent anti-government protests in Kenya, AI tools were used to mobilise support. One such tool, Corrupt Politicians GPT, was created to expose corruption cases involving Kenyan politicians. Another, Finance Bill GPT, explained the controversial finance bill that initially sparked the protests and its likely effect on prices. These examples demonstrate how AI chatbots, coupled with social media, can contribute to mass protests and influence the broader political landscape.
Beyond propaganda and recruitment, non-state actors have weaponised AI in more direct ways too. AI-powered weapons such as drones are becoming increasingly affordable and easy to operate. These can be used to carry out targeted attacks and can even be armed with chemical, biological, radiological or nuclear (CBRN) agents.
A NATO expert has warned that groups such as IS are exploring the possibility of weaponising self-driving cars, which could be programmed to drive into specific targets and detonate bombs. Such a development would markedly increase the risk of vehicle-based attacks, particularly in crowded public spaces such as town centres and arenas, and illustrates how AI can expand the warfare capabilities of extremists.
Lastly, AI systems can provide violent non-state actors with knowledge of new weapons and how to use them. A report published by the UK government has warned that by 2025, violent non-state actors could use AI to acquire knowledge of chemical, biological and radiological weapons, increasing the threat of deadly and sophisticated attacks. The availability of this information lowers the barriers to entry, enabling non-state actors to bypass the knowledge gaps that have historically limited their capabilities. IS has already reportedly released a guide on how to use AI securely, highlighting the growing interest in new technologies within extremist groups.
The World Economic Forum’s Global Risks Report 2024 identified misinformation and disinformation as the greatest short-term global risk, driven largely by “the potential of AI, in the hands of bad actors, to flood global information systems with false narratives”.
There is a critical need for governments and organisations to develop robust strategies to counter the misuse of AI and its potential to fuel extremism. As AI continues to evolve, these strategies will be essential in mitigating the escalating threats it poses to global security.
This is an extract from 'The essentials of AI risk'. You can read the rest here.