Amsterdam Law School
16 April 2026
Woodcock wasn’t surprised to find that existing targeting practices are subject to the often messy realities of warfare. ‘I knew that, of course. But when I studied these practices, I could really see that these mistakes already happen. While technologies are often presented as a solution to existing problems, I realised that AI not only causes mistakes but also reproduces them systematically and in a way that is hard to contest. In that sense, AI can amplify existing human biases and mistakes in military decision-making. AI appears to create a logical, more neutral process of decision-making, while in part it is only reinforcing human biases on a large scale.’ She looked at the effects on International Humanitarian Law (IHL) and acknowledged that while this legal framework ‘hopefully protects from harm, it also tolerates a lot of harm’.
‘Commanders need to distinguish between civilians and lawful targets, like combatants. International Humanitarian Law requires that incidental harm is not “excessive,” so civilians are not harmed excessively to accomplish a military goal, and feasible precautions should also be taken to avoid or minimise harm. These requirements are still the same. But when AI is used to inform these high-stakes decisions, something does change. Machine learning models are used to make predictions based on patterns. Contemporary military organisations use drone footage, open-source information and human intelligence. There is an overload of information that people cannot process without AI. That is the key use of AI in militaries: it can process huge amounts of data at high speed and scale.’
Legal categories like “civilian” and “soldier” can be interpreted in new ways
‘There are risks involved that need to be fully understood. AI is generally seen as better, faster and more objective than humans. When it comes to AI systems, we often look to accuracy rates. However, such a rate is only an indication of how the system performs in a test environment, not in a messy war situation. Commanders also discuss their decisions with legal advisors, but when AI is used to inform these decisions, that can lead to distorted decision-making. Legal categories like “civilian” and “soldier” can be interpreted in new ways when AI is continuously used to put people in one box or another to identify targets.’
‘AI simply works differently from the human brain. The best example is that AI models analyse the pixels of a picture when asked to identify objects in it. That’s very different from the way people look at a picture. An AI classification system was tasked with distinguishing between photos of dogs and wolves and was very successful. But it turned out that the system only looked at the snow in the background to determine if the animal in the picture was a wolf. It was based on something totally irrelevant. In this case, it became clear how AI was operating, but in most cases, it remains a black box. That makes it very hard for people to intervene in the system. On top of that, AI is often viewed as objective, leading people to uncritically rely on these systems. Imagine AI identifying targets in war, in life-or-death situations where there is still a lot of uncertainty.’
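The wolf-and-snow failure Woodcock describes is a well-documented phenomenon in machine learning research, often called spurious correlation or shortcut learning. As a purely illustrative sketch (not taken from the thesis, using synthetic data and hypothetical numbers), the toy Python example below shows how a classifier can score near-perfectly in testing by relying on an irrelevant background feature, then degrade sharply once that correlation breaks. This is the same gap between test-environment accuracy and messy real-world conditions that she flags.

```python
# Toy sketch of "shortcut learning" (all data synthetic and hypothetical):
# a classifier that separates dogs from wolves by keying on a snow-like
# background feature rather than the animal itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, snow_correlates_with_wolf):
    # Each "image" is reduced to two numeric features:
    #   animal_shape: a weak, noisy signal that genuinely separates the classes
    #   background:   "snow", irrelevant to the animal, but correlated with it
    label = rng.integers(0, 2, n)                   # 0 = dog, 1 = wolf
    animal_shape = label + rng.normal(0, 2.0, n)    # weak true signal
    if snow_correlates_with_wolf:
        background = label + rng.normal(0, 0.1, n)  # snow almost always behind wolves
    else:
        background = rng.normal(0.5, 0.5, n)        # correlation broken in the field
    return np.column_stack([animal_shape, background]), label

# Training and test sets share the spurious correlation ...
X_train, y_train = make_data(2000, snow_correlates_with_wolf=True)
X_test, y_test = make_data(500, snow_correlates_with_wolf=True)
# ... but "deployment" data does not (wolves photographed without snow).
X_field, y_field = make_data(500, snow_correlates_with_wolf=False)

model = LogisticRegression().fit(X_train, y_train)
print("test accuracy:  ", model.score(X_test, y_test))    # near-perfect
print("field accuracy: ", model.score(X_field, y_field))  # falls toward chance
print("weights (shape, snow):", model.coef_[0])           # 'snow' weight dominates
```

Run as written, the test accuracy looks excellent while the “deployment” accuracy collapses, because the learned weights lean almost entirely on the snow-like feature.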
Taylor Kate Woodcock defended her thesis on 25 March 2026. She conducted her research on AI, warfare and international law at the Asser Institute and the Amsterdam Law School in the context of the NWO-funded DILEMA project. Woodcock's research contributes to debates about how AI in warfare is reshaping international law and military decision-making.
‘I view international law as a practice, something people engage with in their daily life. Law is shaped by these practices and formed through legal interpretation. In international law, there are a lot of open, vague standards where commanders must exercise discretion, like “feasible precautions” and “excessive” civilian harm. You need a concept of reasonableness to interpret these standards. This is also an open concept that gains meaning through interpretation. AI can shape this discretion and alter reasonableness. Typically, mistaking a target is not necessarily against the law, provided the decision was reasonable and made use of the information reasonably available. But AI is making predictions about targets that are not 100 percent accurate, with margins of error being a feature of how these systems work. Those errors can become reasonable because the commander cannot foresee system errors in specific instances due to AI’s lack of transparency and predictability. This raises concerns about accountability and transparency in AI-assisted warfare. There is a real risk of causing systematic harm. It highlights the need for checks and balances, so commanders are aware of the risks of the system they are working with.’
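A back-of-the-envelope calculation (with hypothetical numbers, not drawn from the thesis) helps show why margins of error become systematic at scale: when genuine targets are rare relative to civilians, even a classifier that is right 99 percent of the time on each class can produce as many false identifications as true ones.

```python
# Illustrative base-rate arithmetic (hypothetical numbers, not from the thesis):
# even a highly "accurate" system produces systematic errors at scale.
population = 100_000    # people assessed by the system
true_targets = 1_000    # actual combatants, rare relative to civilians
sensitivity = 0.99      # P(flagged | combatant)
specificity = 0.99      # P(not flagged | civilian)

civilians = population - true_targets
true_positives = sensitivity * true_targets       # 990 combatants correctly flagged
false_positives = (1 - specificity) * civilians   # 990 civilians wrongly flagged

precision = true_positives / (true_positives + false_positives)
print(f"civilians wrongly flagged: {false_positives:.0f}")
print(f"share of flags that are real targets: {precision:.0%}")  # about 50%
```

On these assumed numbers, half of all positive identifications are civilians, even though the system performs well on each class in isolation. That is one way per-decision error margins can translate into the systematic harm Woodcock warns about.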
‘It is important to preserve space to hesitate in the face of difficult decisions. Decisions about what is reasonable or not are more than just a mathematical formula. Commanders must weigh the harm to civilians against the military advantages. These legal assessments are not straightforward; they are highly complex. They require space for the commander to make decisions about people’s lives, and room for knowledge, experience, intuition, and values. It is important to be able to hesitate with all these factors in mind. Unfortunately, the use and speed of AI can close the space for hesitation.’
‘I hope that the research can shed light on AI for military organisations. Interestingly, we also see industries engaging with these issues more. The first step is understanding where the risks lie, and those are not yet fully understood.’