The use of artificial intelligence in military operations is profoundly changing warfare. Computers help identify targets on the battlefield and strategise the most effective way to attack them. But what if AI gets it wrong? ‘I think we have already accepted AI’s flaws without realising it’, says Klaudia Klonowska. She wrote her dissertation on experimentation with AI in the defence sector and its implications for international law.

How exactly is AI used in warfare?

‘There are many ways, and they continue to evolve. A major trend of the last decade has been the development of computer vision systems that can identify objects. Algorithms are used to identify military compounds, tanks and warships, but also schools, hospitals and so on. Another, more problematic use is AI that predicts the behaviour and affiliations of individuals based on information from, for example, social media and intercepted conversations. With the advent of language models, AI is now also used to provide strategies for attacking targets, such as operational plans. Even though this may all sound impressive, these systems are not advanced; the aspirations are.’

Are we overestimating AI’s capabilities?

‘I think we are. Some militaries place great importance on speed and therefore overlook AI’s shortcomings. This is fuelled by hype from private companies that overpromise what the technology can do. The military currently relies on AI to outmanoeuvre its adversaries, and that focus on speed overshadows the importance of accuracy and the need to minimise civilian harm. Many AI applications are used before being properly tested or verified, and are applied to tasks they were not specifically designed for.’

These systems are not advanced, the aspirations are

What are the consequences of the way militaries use AI?

‘I don’t want to make sweeping statements about AI being good or bad, because it depends on how it’s designed and used. Militaries are relying on predictive systems to conduct targeting operations. We are increasingly accepting systems that are highly uncertain in high-stakes situations. Civilians are at greater risk when machines misidentify their targets. Humans, of course, also make mistakes. But AI can do so on a much bigger scale, because it can produce targets at a speed we have never seen before. Another problem is that, especially in warfare, it takes time to find out whether AI-based targeting systems are making mistakes.’

Can we not limit the use of AI to situations where we know it really works?

‘One of the biggest challenges we have now is understanding when AI fails. If AI doesn’t properly identify targets in rainy conditions, for example, we need to know that. We want people to override the system when it fails, but this has proven very difficult. Operators are strongly influenced by AI, and calibrating their trust in a way that allows them to intervene exactly when needed is challenging, especially because it depends on many contextual and individual factors.’

Are there meaningful restrictions that the law can provide?

‘A lot of regulations talk about taking precautions whenever feasible. But if you are operating at high speed, you don’t have the time to take additional precautions in practice: AI creates a mismatch between the speed at which it operates and the time that precautions require. Another characteristic of AI is its constant evolution. I call this “tinkering”: the interface, data and models are constantly updated, with engineers working alongside commanders to ensure the system meets the military’s needs. “Tinkering” is necessary and, at the same time, problematic, because it creates more uncertainty. Updated AI systems usually cannot be adequately tested, and every small change to a piece of code can have big consequences. This constant change in AI performance, and the uncertainty it introduces, is something the law does not have easy answers for. Since any legal assessment is contextual, it is important to consider what kind of context or conditions the AI generates.’

CV

Klaudia Klonowska will defend her thesis, titled “Techno-Legal Tinkering in War: AI Decision-Support Systems and International Humanitarian Law”, on 18 March. She conducted her research at the Asser Institute and the Amsterdam Law School, and now continues her research as a postdoctoral researcher at Sciences Po Paris.

What responsibilities do tech companies have?

‘International humanitarian law applies to states, not to companies. So companies are not directly required to implement or consider international humanitarian law principles. At the same time, their design choices for AI technologies have significant implications for the ability of the individuals using these systems to comply with the law. It is very important to know for what purpose a technology was designed, and which company values underlie it. There is currently a debate between the tech company Anthropic and the Pentagon about the use of its Claude Code tool in targeting operations. This dispute highlights the influence these companies can exercise over military practices through technology design. It also reveals that AI companies lacked clarity on how their systems would be used and what guardrails the government was imposing. But states have not yet been very clear about how they can ensure that the technology they use in military operations aligns with legal norms.’

What do you hope happens with your research?

‘I want my research to address both academics and the military officials involved in integrating AI systems into the military. There is a lot of optimism, but I want to show that we need to be aware of the uncertainty surrounding AI. I want people to understand the complexity. There are no easy answers or solutions to how AI systems should be designed or used; each system will require an individual assessment. But, as a general note, militaries must deploy these systems with as much care as possible, before the uncertainties associated with their predictions turn into deadly consequences. My research traces some of these complexities.’