Amsterdam Law School
8 January 2026
‘In many ways. If you have a strange mark on your skin, you can use an app that scans it and tells you what’s going on. Or you can use technology that analyses the pattern of your coughing and tells you whether you have a specific lung disease. We now also have generative AI, such as ChatGPT, which lets people get health consultations without a doctor’s supervision. All of this is affecting the relationship between patients and healthcare professionals, as some responsibilities are being taken over by digital applications. I have noticed that many healthcare workers are already using these tools, which raises legal and ethical concerns.’
Depending on your background, you will be impacted differently by AI
‘These innovations should benefit society as a whole, equally, and not increase inequality. We know that not everyone currently benefits equally. What really triggered my interest in this subject is the question of fairness. Some people don’t have access to technology because they live in a country without adequate digital infrastructure. You also need a certain level of digital literacy to use generative AI; elderly people, for that reason, may have less access to digital healthcare technologies. We also know that these tools are often trained on databases that don’t represent all ethnicities. Studies show that many tools in genetics and genomics testing, for example, have been trained on datasets in which about 80 per cent of the people have European ancestry. Depending on your background, you will be impacted differently by AI.’
‘If we are not attentive to this problem and over-rely on AI, minorities will end up not receiving proper healthcare. My main concern is that once people realise these tools are not working for everyone, minorities will start to lose trust in the healthcare system. It can backfire: people might start to avoid the healthcare system altogether. We’ve had examples of this before.’
My main concern is that minorities start to lose trust in the healthcare system
‘Often, the most important thing for patients is being able to trust the system. You enter the hospital and decide to trust the system when you need help with your body or your mental health. AI should strengthen that trust, not weaken it. A patient can only trust tools that were designed for them. There is a lot of different research going on in this area: some of it focuses on transparency, some on privacy. But for me, these are all different puzzle pieces of trustworthiness. We should get to the point where we can signal trustworthiness to patients, so that they can rely on the healthcare system when they need help.’
‘We have many opportunities to mitigate this problem. Whenever you design a new tool, you need to obtain approval from the medical regulatory authorities. At the moment, they are not attentive enough to the lack of inclusivity. I argue that they need to look more closely: does this technology discriminate against minorities? Are there problems with the data? A tool trained only on data from men, for example, will not work well for women. We should have a mandatory “fairness checkpoint” implemented by regulatory bodies. Part of my project is to define fairness more precisely. There is a fairness problem with AI in healthcare, and I want to research how to address it from the legal and regulatory side.’

‘Ultimately, I hope to develop a framework that both regulators and developers can use to build more inclusive technology. I also want to raise awareness of existing problems and encourage open discussion about them.’

Mahsa Shabani is Associate Professor of Law for Health and Life at the Amsterdam Law School. She received a Vidi grant for her research project “Fairness in Data-Driven Medical Technologies: Solidifying Regulatory Oversight Based on a Multidimensional Conception of Fairness”.