The use of advanced AI by state actors

Last updated: Oct 7, 2024

What's inside?

AI might fundamentally change the way states pursue their foreign policy objectives, whether through military force, diplomacy, or other available options. It will also reshape the external environment that states must respond to; the two effects are two sides of the same coin. In the longer term, there are several areas where AI could have a severe impact on how states interact with each other.

The first significant area is the pursuit of autonomous vehicles, particularly autonomous weapon systems. This development is likely to materialise in earnest in the next 15 to 20 years, and when it does, it will be transformative. A prime example is the US Air Force’s Loyal Wingman program, which aims to field uncrewed combat aircraft capable of operating autonomously alongside crewed fighters and is making good progress; similar autonomous systems are also being developed for ground and underwater applications.

The implications of autonomous weapon systems are profound. They could allow states to carry out conventional warfare against other states while incurring only financial costs, not political ones, as there would be no loss of their own personnel. For instance, if China decided to move against Taiwan, the primary deterrent of massive casualties could be mitigated if it relied solely on autonomous vehicles. This scenario increases the likelihood of conflict because it removes one of the main factors holding states back from military action: the human cost, which translates into political cost.

This situation differs from that of nuclear weapons, which carry the deterrent of mutually assured destruction. Conventional autonomous vehicles could be used without significant backlash or risk of uncontrolled escalation, making them a potentially attractive option for states.

The next significant area is the use of AI in decision-making, whether on the battlefield, in diplomatic strategy, or in intelligence agency operations. Russian President Vladimir Putin is already particularly focused on this area. The use of AI to dictate, drive or support military decision-making creates several pressing ethical dilemmas. Consider a scenario in which a sophisticated AI system advises a general to sacrifice a company, battalion or brigade of soldiers to secure an objective. If the AI predicts with 100% certainty that the entire unit will be lost but that the attack will create the conditions for a more important objective to be achieved elsewhere, it creates moral quandaries for which there is currently no established framework. Whether a general should act on AI recommendations that will lead to a certain loss of life is a question that requires much more consideration.

The importance of a ‘human in the loop’

Drone warfare in Ukraine provides an instructive case study of how rapidly this technology is evolving. Currently, electronic warfare defences protect key assets by disrupting the link between drones and their control stations once they enter a certain perimeter. The next step being explored, however, is the deployment of AI on the drone itself, allowing it to make its own targeting decisions without any connection to a control station. That would render current electronic warfare assets ineffective against such drones. It demonstrates how quickly the space is moving, and how AI is making technology that was previously available only to top-tier state actors far more widely accessible. How consistently there will be a “human in the loop” when lethal decisions are made goes to the heart of the nature of warfare, and is a test case for how much power we are willing to give autonomous artificial intelligence systems in general.

In the realm of “grey zone activities,” AI presents new challenges and opportunities for state actors. There have been repeated allegations in recent years around state-led foreign intervention in social issues in Western countries, including election interference. AI could significantly enhance these capabilities by deploying self-learning psychological operations against other nation-states. This could involve flooding the internet and social media with disinformation or inflammatory content at a scale too large for humans to counter effectively. AI could even instigate physical protests by creating the illusion of grassroots movements using multiple AI-controlled social media accounts. The barriers to entry for such operations are relatively low, making it likely that we will see an increase in AI-enhanced psychological operations at the nation-state level.

A recent report from Israel highlighted ethical concerns about the use of AI in warfare. It alleged that the Israel Defense Forces (IDF) sometimes bypassed the “human-in-the-loop” principle when targeting Hamas commanders because of the overwhelming number of potential targets. The result was AI making autonomous targeting decisions, potentially contributing to increased civilian casualties. For example, the AI might determine that the best time to strike a commander is when they are stationary for six to eight hours, without recognising that this is likely when they are sleeping next to their family in a residential area. This underscores the importance of maintaining human oversight in AI-driven military operations.

The concerns extend to the private sector as well. If the EU or the US implements regulations on AI development, which is arguably the right approach, it may be putting itself at an economic, or even political or military, disadvantage compared with less ethically constrained actors. This could lead to faster advances in AI capability in less regulated environments, potentially creating the very doomsday scenarios the regulations aim to prevent.

Additionally, there are second-order effects to consider. The explosion of AI development is likely to increase competition for resources essential to developing and maintaining this technology, such as semiconductors, high-end chips, and the rare earth metals used in their production. This could intensify competition in regions like Central Africa and South America, where large deposits of these materials are found. This competition could accelerate progress towards quantum computing as nations strive to leverage AI to its full potential, though this too comes with its own set of risks and opportunities.

The future of AI in state interactions will largely depend on how much risk different states are willing to take, not least with respect to the “human-in-the-loop” principle. Western nations that maintain this principle may be deliberately limiting their own capabilities in order to follow a more ethically and morally sound path. If other states such as China or Russia choose to develop more autonomous and potentially deadlier systems, it could trigger an arms race similar to nuclear proliferation, potentially forcing more risk-averse states to follow suit.

Will AI be capable of influencing geopolitics?

AI is likely to have significant impacts on the global geopolitical environment over the 7-to-20-year timeframe. A good example is the relationship between politics and conflict. Clausewitz famously argued that war is the continuation of politics by other means: even though warfare is violent and costly, it remains an expression of the politics and relationships within a country, and between countries.

An implication of this principle is that conflict cannot be waged without due consideration of its impact on domestic and international politics. The recent war in Afghanistan is a good example. Even though the participants sought to minimise casualties among their own forces fighting an insurgent Taliban, the conflict was widely unpopular and had significant political repercussions at home, despite the hostilities taking place half a world away.

The domestic political impact of conflict has been one of the overriding guardrails against widespread conflagration in the 20th and 21st centuries. Societies will no longer accept the wholesale mobilisation and utilisation of citizens in wars that are deemed to be unjust. AI, and particularly autonomous systems, could change this dynamic in a highly significant way.

The second geopolitical consideration is whether AI-driven changes in the wider global economy will create more opportunities for tension between geopolitical actors, or more incentives for competing actors to work together. The deployment of AI, one could argue, actually builds interdependency, both strategic and economic. Large-scale data centres and AI processors will need to be built and housed across the globe as demand increases, requiring raw materials, land, workforce, money and energy, none of which any one country fully controls. These two questions - around the use and deployment of autonomous forces, and around AI's incentives to push geopolitical actors apart or pull them together - will likely dictate the course of the rest of the 21st century.

This is an extract from 'The essentials of AI risk'. You can read the rest here.
