
France: Government must stop using dangerous AI-powered surveillance to tackle benefit fraud

Authorities must immediately stop using the discriminatory algorithm deployed by the social security agency

The risk-scoring system treats marginalised individuals with suspicion

‘This system operates in direct opposition to human rights standards, violating the right to equality and non-discrimination and the right to privacy’ - Agnès Callamard

The French authorities must immediately stop using the discriminatory risk-scoring algorithm deployed by the French Social Security Agency’s National Family Allowance Fund (CNAF) to detect overpayments and errors in benefit payments, Amnesty International said today.

On 15 October, Amnesty and 14 other coalition partners led by La Quadrature du Net (LQDN) submitted a complaint to the Council of State, the highest administrative court in France, demanding the risk-scoring algorithmic system used by CNAF be stopped.

In 2023, LQDN gained access to versions of the algorithm’s source code – the set of instructions written by programmers to create the software – exposing the discriminatory nature of the system.

Since 2011, CNAF has used a risk-scoring algorithm to identify people who are potentially committing benefits fraud by receiving overpayments. The algorithm assigns a risk score between zero and one to all recipients of family and housing benefits. The closer the score is to one, the higher the probability of being flagged for investigation.

Agnès Callamard, Secretary General at Amnesty International, said:

“From the outset, the risk-scoring system used by CNAF treats individuals who experience marginalisation – those with disabilities, lone parents who are mostly women, and those living in poverty – with suspicion. This system operates in direct opposition to human rights standards, violating the right to equality and non-discrimination and the right to privacy.

“While authorities herald the rollout of algorithmic technologies in social protection systems to increase efficiency and detect fraud and errors, in practice, these systems flatten the realities of people's lives. They work as extensive data-mining tools that stigmatise marginalised groups and invade their privacy.

“France is relying on a risk-scoring algorithmic system for social benefits that highlights, sustains and enshrines the bureaucracy’s prejudices and discrimination. Instead, France should ensure that it complies with its human rights obligations, first among them that of non-discrimination. The authorities must address existing AI-related harms amid the country’s quest to become a global AI hub.”

 

Putting people at risk

There are currently 32 million people in France receiving a benefit from CNAF. Their sensitive personal data, as well as that of their family, is processed periodically, and a risk score is assigned.

The criteria that increase one's risk score include parameters which discriminate against vulnerable households, including being on a low income, being unemployed, living in a disadvantaged neighbourhood, spending a significant portion of income on rent, and working while having a disability. The details of those flagged because of their high risk score are compiled into a list that is investigated further by a fraud investigator.
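To illustrate the mechanism described above, the short Python sketch below shows how a criteria-weighted score between zero and one could translate into a list of recipients flagged for investigation. The criteria names, weights and threshold here are assumptions made purely for explanation; they are not drawn from CNAF's actual source code.

# Hypothetical sketch only: the criteria, weights and threshold are invented
# for illustration and are NOT taken from CNAF's source code.
from dataclasses import dataclass

@dataclass
class Recipient:
    low_income: bool
    unemployed: bool
    disadvantaged_neighbourhood: bool
    high_rent_share: bool            # large share of income spent on rent
    working_with_disability: bool

# Assumed weights: each matching criterion pushes the score closer to one.
WEIGHTS = {
    "low_income": 0.25,
    "unemployed": 0.20,
    "disadvantaged_neighbourhood": 0.15,
    "high_rent_share": 0.20,
    "working_with_disability": 0.20,
}

def risk_score(recipient: Recipient) -> float:
    # Returns a score between zero and one; the closer to one,
    # the more likely the recipient is to be flagged.
    return sum(weight for name, weight in WEIGHTS.items()
               if getattr(recipient, name))

def flag_for_investigation(recipients: list[Recipient],
                           threshold: float = 0.6) -> list[Recipient]:
    # Compiles the list of recipients whose score exceeds the (assumed) threshold,
    # i.e. the list passed on to a fraud investigator in the process described above.
    return [r for r in recipients if risk_score(r) > threshold]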

Amnesty did not investigate specific cases of people flagged by the CNAF system. However, its investigations in the Netherlands and Serbia suggest that using AI-powered systems and automation in the public sector enables mass surveillance, and that the amount of data collected is disproportionate to the purported aim of the system.

Amnesty has also exposed how many of these systems are largely ineffective at what they purport to do, whether identifying fraud or errors in the benefits system.

It has also been argued that the scale of errors or fraud in the benefits system has been exaggerated to justify the development of such systems, often leading to discriminatory, racist or sexist targeting of particular groups, especially migrants and refugees.

Over the past year, France has been actively promoting itself internationally as the next hub for AI technologies, culminating in a summit scheduled for February 2025. At the same time, France has also legalised mass surveillance technologies and consistently undermined the EU’s AI Act negotiations.

Under the newly adopted European Artificial Intelligence Regulation (AI Act), AI systems used by authorities to determine access to essential public services and benefits are considered to pose a high risk to people’s rights, health and safety. They must therefore meet strict technical, transparency and governance rules, including an obligation on deployers to carry out an assessment of human rights risks and guarantee mitigation measures before deployment.

Meanwhile, certain systems, such as those used for social scoring, are considered to pose an unacceptable level of risk and must therefore be banned.

It is currently unclear whether the system used by CNAF qualifies as a social scoring system due to a lack of clarity in the AI Act on what constitutes such a system. 

Regardless of its classification, however, all evidence suggests that the system is discriminatory. It is essential that the authorities stop using it and scrutinise biased practices that are inherently harmful, especially to marginalised communities seeking social benefits.

Regulation and algorithms of discrimination

Before the AI Act’s prohibitions enter into force on 2 February 2025, the European Commission will issue guidance on how to interpret them, including what qualifies as a social scoring system.

In August 2024, the AI Act came into force. Amnesty, as part of a civil society coalition led by the European Digital Rights Network, has been calling for EU artificial intelligence regulation that protects and promotes human rights.

In March this year, an Amnesty briefing outlined how digital technologies including artificial intelligence, automation, and algorithmic decision-making are exacerbating inequalities in social protection systems across the world.

In 2021, Amnesty’s report Xenophobic Machines exposed how racial profiling was baked into the design of the algorithmic system used by the Dutch tax authorities to flag claims for childcare benefits as potentially fraudulent.
