The misuse of a fraud prevention algorithm by Dutch authorities led to widespread discrimination. The story triggered a national scandal and serves as a reminder of the importance of tools to monitor the use of AI.

In the Netherlands, part of the cost of daycare can be reimbursed through child benefits. To prevent abuse, the Tax and Customs Administration has, since 2012, used fraud detection tools that rely on an algorithm.

The penalties enforced by the tax authorities between 2012 and 2019 proved to be severe: 26,000 households were accused of unduly receiving child benefits and ordered to repay them. Moreover, these measures were taken for trivial reasons, such as incomplete or improperly signed application forms, or imprecisely presented income. People who had to repay an amount greater than €3,000 were also registered as perpetrators of attempted fraud or serious financial negligence, which prevented them from accessing debt rescheduling services.

Warned in 2017 of suspected abuse, the Autoriteit Persoonsgegevens (AP), the Dutch data protection authority, conducted an investigation into the Dutch Ministry of Finance. Its findings were the subject of a parliamentary report in December 2020, and eventually led to the resignation of Prime Minister Mark Rutte’s government in January 2021.

Scathing conclusions

The investigation into what the Dutch now call the toeslagenaffaire (the childcare benefits case) revealed that the authorities had covertly built a database compiling the administrative and social profiles of several hundred thousand people. Those concerned were unaware of the existence of, and the reasons for, these profiles, and therefore had no way to request their deletion. The database was built using a fraud detection system called Systeem Risico Indicatie (SyRI), which collected data from various files held by the Dutch administration. Its use was ruled illegal by a national court in 2020 (in a case separate from this scandal) because it breached the right to privacy as defined by the European Convention on Human Rights.

In this database, civil servants distinguished between Dutch citizens and dual nationals, even though a 2014 law had abolished this distinction in administrative procedures. The database was then analyzed by a deep learning algorithm to identify the profiles most likely to commit fraud. This algorithm proved to be biased, as it selected people based on criteria that had little to do with their tax situation: nationality, but also first and last names and place of residence, were treated as risk factors. As a result, families of Moroccan, Turkish or Surinamese descent made up the majority of those targeted by the tax authorities.

At the end of this investigation, the AP imposed two fines on the Dutch tax authorities for infringing the GDPR. The first, in December 2021, amounted to 2.75 million euros for storing information on the dual nationality of some taxpayers; the second, in May 2022, amounted to 3.7 million euros for keeping a list of past fraud convictions, which was used to justify excluding people from certain services or triggering tax investigations. Finally, the Dutch government awarded every victim €30,000 in damages.

The challenges of monitoring algorithms

The child benefits scandal did not make an impact solely because of the scale of the harm caused. It was a reminder of the importance of protecting the interests and rights of individuals in automated decision making. The existence of algorithmic bias is not new, and the lack of oversight of the use of AI not only perpetuates but, more importantly, justifies discrimination, and on a large scale. While that discrimination may have originated in unprecedented negligence by the Dutch administration (illegal profiling of citizens according to their ethnicity, refusal to explain repayment demands), the algorithms themselves played an important part by singling out individuals from personal data that had been collected and retained without their knowledge.

Moreover, one consequence of this scandal was to muddy the very notion of accountability by making the reasons for administrative decisions opaque. Had it continued, the situation could have become Kafkaesque: a revenue officer using this algorithm in good faith to consult the database, without being aware of all the records and information it contained, would unintentionally have been complicit in the biased operation of the administration caused by the algorithm. They would have been investigating citizens without knowing that those citizens had been selected on the basis of their ethnicity. While justice is blind, injustice could become blind as well. This is why providing as much information as possible about the conditions in which AI is used is imperative. How do we reach this objective?

Registers in which algorithms are publicly described

In a report titled “Xenophobic Machines”, published in October 2021, the Dutch section of Amnesty International recommended, in response to this case, the creation of an independent body to monitor the use of AI. The NGO explained that its role would be to prevent violations of fundamental human rights. At this stage, no such body has seen the light of day, and the government has answered this request with an assessment framework called “Impact Assessment Mensenrechten en Algoritmes”, which can be translated as “human rights and algorithms impact assessment”. It is a structured questionnaire used to describe the use of an algorithm and the risks it entails.

Similar initiatives are springing up around the world to make AI accountable and explainable. They focus on future legal obligations to comply with, on guidelines for using algorithms carefully and with respect for people, and on the development of tools to explain how those algorithms work.

Researchers, manufacturers and members of the European Parliament thus created a study group called AI4People in 2018 to define the conditions under which artificial intelligence can be used without causing harm. This work underpins the EU’s Ethics Guidelines for Trustworthy AI, published in April 2019. Another organization, Algorithm Audit, was founded in the Netherlands in 2021 to help organizations that use algorithms account publicly for that use. To do so, it seeks to develop the practice of algorithm auditing and to provide European authorities with the tools and know-how needed to apply the future European “AI Act” regulation.
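To give a concrete idea of what such an audit can involve, here is a minimal sketch in Python of one common check: comparing how often a risk-scoring system flags different groups of people. The records, group labels and numbers are invented for the example and are not drawn from the Dutch case or from Algorithm Audit’s methodology.

```python
# A minimal sketch of one step in an algorithm audit: checking whether a
# risk-scoring system flags some groups far more often than others.
# The records and group labels below are invented for illustration.
from collections import defaultdict

def selection_rates(records, group_key="group", flag_key="flagged"):
    """Share of records flagged by the system, per group."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        flagged[record[group_key]] += int(record[flag_key])
    return {group: flagged[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical output of a fraud-risk model, one record per household.
    records = [
        {"group": "A", "flagged": False}, {"group": "A", "flagged": False},
        {"group": "A", "flagged": False}, {"group": "A", "flagged": True},
        {"group": "B", "flagged": True},  {"group": "B", "flagged": True},
        {"group": "B", "flagged": True},  {"group": "B", "flagged": False},
    ]
    rates = selection_rates(records)
    print(rates)                          # {'A': 0.25, 'B': 0.75}
    print(disparate_impact_ratio(rates))  # ~0.33, far from parity
```

A ratio far below 1.0 does not prove discrimination on its own, but it is exactly the kind of signal an auditor would flag for further investigation.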

Some organizations that use AI solutions are making an effort. The GDPR provides for a right to explanation insofar as an algorithm uses personal data collected by an administration: a citizen can ask any public agency that has made an administrative decision to explain it. To be able to answer such requests, some of them, such as the Helsinki, Nantes and Amsterdam city councils, keep a public register listing the data and algorithms used by their services. These registers also provide information about the risks involved in their use and the oversight arrangements in place. In the cases of Amsterdam and Helsinki, this approach originated in a decision by the service provider that designed these solutions, the Finnish company Saidot, to demonstrate transparency.
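The exact structure of these registers varies from one city to another. As a purely illustrative sketch (the field names and the example system are hypothetical, not taken from the Helsinki, Nantes or Amsterdam registers), an entry might record information along these lines:

```python
# Purely illustrative shape of one entry in a public algorithm register.
# Field names and the example system are hypothetical; real registers
# define their own schemas.
register_entry = {
    "name": "Parking permit application triage",
    "operator": "City department issuing the permits",
    "purpose": "Prioritise applications for manual review",
    "data_used": ["application form fields", "permit history"],
    "algorithm_type": "Rule-based scoring",
    "identified_risks": ["Longer processing times for flagged applicants"],
    "human_oversight": "Every flagged application is reviewed by a case officer",
    "contact": "Email address of the team responsible for the system",
}
```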

Measures to make AI explainable are also being taken elsewhere in the world. The most advanced of them is the Algorithmic Transparency Standard adopted in the UK for the algorithms used by the administration. The standard requires detailed descriptions of how AI-based tools work, and in particular of the operations performed by the algorithms, the risks involved, and the contact details of those responsible for their use within the British administration. Its implementation will be entrusted to a pilot organization that will conduct inquiries within public agencies and will be able to make corrections so as to give the public greater access to information.

The monitoring of algorithms enshrined in law

In the United States, the Algorithmic Accountability Act introduced in 2022 would make audits of automated decision-making systems mandatory. It would require companies operating in the United States (with over 50 million dollars in annual revenue) to monitor the algorithms they use for bias. As for the European Union, it is preparing a regulation, the AI Act, that will set requirements for algorithmic transparency. In particular, the law will provide for a ban on assessing natural persons “based on their social behavior or their personal characteristics”. Finally, certain uses of AI for law enforcement purposes, such as real-time remote biometric identification in public spaces, are set to be prohibited or tightly restricted.

But how do we ensure these laws are implemented? The research community is also exploring tools to explain decisions made with the help of AI. It is working, in particular, on so-called “counterfactual” algorithms. These work by re-running the operations carried out by another algorithm, but on modified input data. A person whose loan application has been refused by a bank could, for example, submit their profile (income, assets, state of health) to such an algorithm. The counterfactual algorithm would then re-run the bank’s calculations while varying income or assets, in order to identify the conditions under which the desired outcome, i.e. the granting of the loan, could have been reached.
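As a rough illustration of the idea, the sketch below re-runs a toy decision model on progressively modified inputs until the outcome flips. The scoring rule, thresholds and step size are invented for the example and do not correspond to any real lender’s model.

```python
# A minimal sketch of a counterfactual explanation: re-run a decision model on
# progressively modified inputs until the outcome flips, then report the change.
# The model, thresholds and step size are invented for illustration only.

def loan_model(profile):
    """Toy stand-in for a bank's decision model: approve if the score reaches 1.0."""
    score = profile["income"] / 40_000 + profile["assets"] / 100_000
    return score >= 1.0

def counterfactual(profile, feature, step, max_steps=100):
    """Increase one feature step by step until the decision flips, if it ever does."""
    candidate = dict(profile)
    for _ in range(max_steps):
        if loan_model(candidate):
            return candidate  # first tested change that yields approval
        candidate[feature] += step
    return None  # no counterfactual found within the search budget

if __name__ == "__main__":
    applicant = {"income": 25_000, "assets": 20_000}
    print(loan_model(applicant))                       # False: loan refused
    print(counterfactual(applicant, "income", 1_000))  # {'income': 32000, 'assets': 20000}
```

Real counterfactual methods search over several features at once and look for the smallest plausible change, but the principle is the same: explain a decision by showing what would have had to be different for it to go the other way.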

The work of monitoring how algorithms operate is ongoing. No one can say when it will be finished; the only certainty is that it requires the participation of many players (researchers, political leaders, civil society representatives). Their vigilance is needed so that every administrative decision can be accounted for to those it affects. Among the reasons for the outrage of the Dutch public during the toeslagenaffaire, the lack of explanation given to families that were forced to split up or make major financial sacrifices is not the least important. Preventing algorithmic bias confirms that while trust does not preclude monitoring, the second is a prerequisite for the first.
