Denmark: New Report - Mass surveillance and discrimination in automated welfare state
‘The way the Danish automated welfare system operates is eroding individual privacy and undermining human dignity.’ - Hellen Mukiri-Smith
People are unwillingly and unknowingly forfeiting their right to privacy
Individuals under investigation feel like they are ‘sitting at the end of a gun’
Amnesty suggests social scoring system used is prohibited under EU Artificial Intelligence law
The Danish welfare authority, Udbetaling Danmark (UDK), risks discriminating against people with disabilities, low-income individuals, migrants, refugees, and marginalised racial groups through its use of artificial intelligence (AI) tools to flag individuals for social benefits fraud investigations, Amnesty International said today in a new report.
The report, Coded Injustice: Surveillance and Discrimination in Denmark’s Automated Welfare State, details how the sweeping use of fraud detection algorithms, paired with mass surveillance practices, has led people to unwillingly, or even unknowingly, forfeit their right to privacy, and created an atmosphere of fear.
UDK has mandated a company, Arbejdsmarkedets Tillægspension (ATP), to administer social benefits and carry out fraud control. In turn, ATP has partnered with private multinational corporations, including NNIT, to develop fraud control algorithms tailored to ATP’s specifications.
Hellen Mukiri-Smith, Amnesty International’s Researcher on Artificial Intelligence and Human Rights, said:
“This mass surveillance has created a social benefits system that risks targeting, rather than supporting, the very people it was meant to protect.
“The way the Danish automated welfare system operates is eroding individual privacy and undermining human dignity. By deploying fraud control algorithms and traditional surveillance methods to identify social benefits fraud, the authorities are enabling and expanding digitised mass surveillance.
“This expansive surveillance machine is used to document and build a panoramic view of a person’s life that is often disconnected from reality. It tracks and monitors where a social benefit claimant lives, works, their travel history, health records, and even their ties to foreign countries.
“People in non-traditional living arrangements — such as those with disabilities who are married but who live apart due to their disabilities; older people in relationships who live apart; or those living in a multi-generational household, a common arrangement in migrant communities — are all at risk of being targeted by the Really Single algorithm for further investigation into social benefits fraud.
“The Danish authorities must urgently implement a clear and legally binding ban on the use of data related to ‘foreign affiliation’ or proxy data in risk-scoring for fraud control purposes. They must also ensure robust transparency and adequate oversight in the development and deployment of fraud control algorithms.”
Dangerous algorithms and psychological toll
UDK and ATP use a system of up to 60 algorithmic models purportedly designed to detect social benefits fraud and flag individuals for further investigations by Danish authorities. During its research, Amnesty obtained partial access to four of these algorithms.
To power these fraud-detection algorithms, Danish authorities have enacted laws that enable extensive collection and merging of personal data from public databases of millions of Danish residents.
The data includes information on residency status and movements, citizenship, place of birth, and family relationships — sensitive data points that can also serve as proxies for a person’s race, ethnicity, or sexual orientation.
UDK argues that the vast collection and merging of personal data to detect social benefits fraud is ‘legally grounded’. However, Amnesty’s findings show that the enormous amount of data collected and processed is neither necessary nor proportionate.
Individuals interviewed by Amnesty described the harmful psychological toll of being subjected to surveillance by fraud investigators and case workers. The Chairperson of the Social and Labor Market Policy Committee at Dansk Handicap Foundation, Gitte Nielsen, said that people with disabilities who are constantly interrogated by case workers often feel depressed, and report that the constant scrutiny is “eating” away at them.
Describing the terror of being investigated for benefits fraud, an interviewee told Amnesty International: “[It is like] sitting at the end of the gun. We are always afraid. [It is as] if the gun is [always] pointing at us.”
Discriminatory targeting
The discrimination perpetuated by UDK and ATP’s algorithms is happening in an environment of pre-existing inequalities — laws, rules, institutions, norms, and values — within Danish society. These discriminatory structures are embedded in the design of ATP’s algorithmic models, facilitating the categorisation of individuals and communities based on difference or “othering”.
To identify social benefits fraud in pension and childcare schemes, the Danish authorities deploy the Really Single algorithm to predict a person’s family or relationship status. One of the parameters employed by the Really Single fraud control algorithm is “unusual” or “atypical” living patterns or family arrangements. Yet there is no clarity on what constitutes such situations, leaving the door open for arbitrary decision-making.
UDK and ATP also use inputs related to “foreign affiliation” in their algorithmic models. The Model Abroad algorithm identifies groups of beneficiaries deemed to have “medium and high-strength ties” to non-EEA countries and prioritises these groups for further fraud investigations. The research finds that this approach discriminates against people based on factors such as national origin and migration status.
In its response to Amnesty, UDK stated that the use of “citizenship” as a parameter in its algorithms does not constitute processing of sensitive personal information. Amnesty disagrees with this argument from UDK and ATP, since citizenship as a parameter can reveal a person’s race, ethnicity, and migration status, and its use directly discriminates on the basis of national origin.
Use of prohibited systems
UDK and ATP provided Amnesty with redacted documentation on the design of certain algorithmic systems, and consistently rejected Amnesty’s requests for a collaborative audit, refusing to provide full access to the code and data used in their fraud detection algorithms.
The information that Amnesty has collected and analysed suggests that the system used by UDK and ATP functions as a social scoring system under the new EU Artificial Intelligence law (AI Act) — and should therefore be banned.
UDK has rejected the assessment that its fraud detection system is likely to fall under the social scoring ban of the AI Act, but without offering a sufficient explanation for its reasoning.
Amnesty is urging the European Commission to clarify, in its AI Act guidance, which AI practices count as social scoring, addressing concerns raised by civil society.
The organisation also calls on the Danish authorities to ensure that the fraud detection system used by UDK is not a social scoring system; until they can demonstrate this, use of the system must be halted.
Under various international human rights treaties, European Union laws, and national law, Denmark has a legal obligation to safeguard human rights like privacy, data protection, freedom of expression, equality, and non-discrimination.
Right to reply
Ahead of the report’s publication, Amnesty reached out to NNIT, but the company did not provide further information about its contractual arrangements with UDK and ATP, citing confidentiality obligations. NNIT also did not disclose information about any human rights due diligence it conducted before entering its agreement with UDK and ATP.
The research on Denmark also builds on previous Amnesty research on public sector automation and digitalisation in the Netherlands, India and Serbia, and the resulting human rights risks and impacts of algorithmic decision-making in these jurisdictions.