
EU: Artificial Intelligence Regulation Threatens Social Safety Net, Warns HRW

The European Union’s plan to regulate artificial intelligence is ill-equipped to protect people from flawed algorithms that deprive them of lifesaving benefits and discriminate against vulnerable populations, Human Rights Watch said in a report on the regulation. The European Parliament should amend the regulation to better protect people’s rights to social security and an adequate standard of living.

The 28-page report in the form of a question-and-answer document, “How the EU’s Flawed Artificial Intelligence Regulation Endangers the Social Safety Net,” examines how governments are turning to algorithms to allocate social security support and prevent benefits fraud. Drawing on case studies in Ireland, France, the Netherlands, Austria, Poland, and the United Kingdom, Human Rights Watch found that this trend toward automation can discriminate against people who need social security support, compromise their privacy, and make it harder for them to qualify for government assistance. But the regulation will do little to prevent or rectify these harms.

“The EU’s proposal does not do enough to protect people from algorithms that unfairly strip them of the benefits they need to support themselves or find a job,” said Amos Toh, senior researcher on artificial intelligence and human rights at Human Rights Watch. “The proposal also fails to put an end to abusive surveillance and profiling of people living in poverty.”

In some countries, algorithms have become a potent tool for rationalizing austerity-motivated cuts to the welfare budget. In Austria, the algorithm the government uses to predict employment prospects cuts some people off from the support they need to find a job, jeopardizing their right to an adequate standard of living. The algorithm, which purports to save costs by prioritizing who can use job support programs, is discriminatory, underestimating the employability of women over 30, women with childcare obligations, migrants, and people with disabilities.

Surveillance technologies that intensify the monitoring and profiling of people living on low incomes are also spreading across Europe. In the Netherlands, the government rolled out a risk scoring program known as SyRI in predominantly low-income neighborhoods, in an effort to predict people’s likelihood of engaging in benefits or tax fraud. The government suspended the program in April 2020, after a court ruled that it violated people’s privacy. However, civil society groups have raised alarm about a bill in parliament that would authorize data sharing between public authorities and the private sector to detect fraud and potentially revive risk scoring.

In Ireland, the government requires people to register for a digital identity card to gain access to welfare benefits and other public services, such as applying for citizenship. Some applicants and civil society groups have said that the registration process – which uses facial recognition to screen for duplicate or fraudulent applications – is unnecessary and unduly intrusive.

The EU’s proposal is a weak defense against these dangers. The regulation would ban a narrow list of automated systems that pose “unacceptable risk” to rights, such as algorithms used by public authorities to measure people’s “trustworthiness” and single them out for “detrimental or unfavorable treatment” based on these scores. But it is unclear whether this vague language would ban abusive profiling methods such as SyRI or Austria’s employment profiling algorithm.

The regulation’s rules on automated systems that pose a “high risk” to human rights are also inadequate. It tries to prevent algorithmic discrimination by prescribing standards for better quality data and software design as the system is being developed. But this narrow focus on technical measures will compel neither public authorities nor the private sector to confront flawed policy choices and social inequities that contribute to algorithmic discrimination, such as austerity-motivated budget cuts or disparate access to educational and employment opportunities.

Major loopholes in transparency requirements for “high-risk” systems would undermine people’s ability to understand and challenge automated decisions that could deny them benefits or underestimate how much support they need. The regulation’s failure to guarantee resources, training, and labor protections for workers who oversee these systems can also create an ineffectual oversight regime that threatens the rights of people it is supposed to safeguard.

The European Parliament should amend the regulation to ban social scoring that unduly interferes with human rights, including the rights to social security, an adequate standard of living, privacy, and non-discrimination, Human Rights Watch said. Scoring tools that analyze records of people’s past behavior to predict their likelihood of committing benefits fraud, or that serve as a pretext for regressive social security cuts, should be banned. The regulation should also include a process to prohibit future artificial intelligence developments that pose “unacceptable risk” to rights.

“High-risk” automated systems require stringent safeguards, Human Rights Watch said. The European Parliament should ensure that the regulation requires providers of automated welfare systems and agencies that use them to conduct regular human rights impact assessments, especially before the systems are deployed and whenever they are significantly changed.

These assessments should, for example, involve welfare case workers and people receiving benefits in testing benefits calculation algorithms, and devise a plan to provide people in vulnerable situations with enhanced financial support and counseling as they transition from existing benefits programs to automated systems.

To ensure that impact assessments are not just a box-ticking exercise, the regulation should require member states to establish an independent oversight body that is empowered to conduct regulatory inspections and investigate complaints from the public.

“The automation of social security services should improve people’s lives, not cost them the support they need to pay rent, buy food, and make a living,” Toh said. “The EU should amend the regulation to ensure that it lives up to its obligations to protect economic and social rights.”


Source: https://www.eurasiareview.com/13112021-eu-artificial-intelligence-regulation-threatens-social-safety-net-warns-hrw/

Donovan Larsen

Donovan is a columnist and associate editor at the Dark News. He has written on everything from politics to diversity issues in the workplace.
