Fighting facial recognition inequity and access bias in unemployment insurance benefits


About $800 billion in unemployment benefits has been paid by the federal government since the start of the pandemic, a staggering figure that reflects the scale of COVID-19's impact on the American workforce.

The pandemic has put two inexorably linked forces on a collision course: the tens of millions of Americans who are legitimately claiming, and desperately need, unemployment benefits, and the criminals who have seized on the pandemic to commit Unemployment Insurance (UI) fraud estimated at between $89 billion and $400 billion.

The perfect storm swirling around UI was not lost on the White House and Congress, which is why, in August 2021, the United States Department of Labor (DOL) announced new funding under the American Rescue Plan Act, which provided $2 billion "to prevent and detect fraud, promote equitable access, ensure the timely payment of benefits and reduce backlogs."

With billions of dollars and benefits for millions of workers at stake, it is essential that government policymakers fully understand the key differences between the technologies and approaches used to ensure timely, accurate payment of unemployment benefits while fighting UI fraud. They also need to understand the implications of "equitable access" so that those who need help most receive it, free from barriers created by technological bias and inequity.

Fighting UI Fraud: The Current Tech Landscape
The data strongly suggests that the two main identity verification approaches used to tackle UI fraud and benefit delivery, facial recognition and data matching, create racial, social and economic inequities and access bias. A skewed approval process threatens to disenfranchise the citizens most dependent on government benefits, to say nothing of the privacy concerns raised by both approaches. This matters because 60% of the $2 billion in DOL funding has gone to facial recognition and data matching providers.

Congressional leaders, advocacy groups and others are increasingly raising red flags about the use of facial recognition for government benefit programs – bipartisan concerns aired during House hearings held over the summer by the Judiciary subcommittee and the Financial Services Committee. Attention centers on a set of fundamental issues related to facial recognition and data matching.

Equity issues
For UI, facial recognition uses biometric data and official documents to verify that the person applying for and claiming UI benefits is who they say they are. But facial recognition algorithms struggle to accurately identify individuals with darker skin tones, creating fairness issues in the distribution of benefits that can unfairly deny or delay payments based on race.

Concerns about disparities in facial recognition accuracy are not new. A 2019 National Institute of Standards and Technology (NIST) study found that the majority of facial recognition algorithms were far more likely to misidentify racial minorities than whites, with Asians, Blacks and Native Americans particularly at risk.

Data matching solutions, which have also raised concerns, take a different approach from facial recognition. They apply matching algorithms to disparate data stored in a data lake to validate a claimant's information. Some rely on questions based on credit history, such as the type of car owned, known past addresses, or the existence of a credit and banking history; all of these requirements can disadvantage communities of color, young people, the unbanked, immigrants and others, impeding the timely receipt of benefits.
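To make that concern concrete, here is a minimal, hypothetical sketch of a credit-history-based matching check. The record layout, field names and threshold are illustrative assumptions, not any vendor's actual implementation; the point is that a claimant with no credit or banking history has nothing to match against and fails verification even when their identity is genuine.

```python
# Hypothetical sketch of credit-history-based data matching.
# Records, field names and the match threshold are illustrative only.

CREDIT_BUREAU_RECORDS = {
    # Keyed by SSN; claimants with no credit/banking history have no record at all.
    "123-45-6789": {
        "past_addresses": {"12 Oak St, Springfield", "9 Elm Ave, Springfield"},
        "vehicle": "sedan",
        "has_bank_account": True,
    },
}

def verify_claimant(ssn: str, claimed_address: str, claimed_vehicle: str) -> bool:
    """Return True if enough attributes match the bureau record."""
    record = CREDIT_BUREAU_RECORDS.get(ssn)
    if record is None:
        # Thin-file claimant (young, unbanked, recent immigrant, ...):
        # no record means automatic failure, regardless of true identity.
        return False

    score = 0
    if claimed_address in record["past_addresses"]:
        score += 1
    if claimed_vehicle == record["vehicle"]:
        score += 1
    if record["has_bank_account"]:
        score += 1
    return score >= 2  # illustrative threshold

# A legitimate claimant with no credit history is rejected outright.
print(verify_claimant("987-65-4321", "44 Pine Rd, Springfield", "none"))  # False
```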

Access problems
Some facial recognition technologies require a person to upload a government-issued ID, which not everyone has: millions of people in the United States lack government ID. While that does not make them ineligible for unemployment in some states, it makes verifying their identity to access unemployment benefits much more difficult.

Even more problematic, facial recognition often requires the claimant to have a smartphone. Seventy-six percent of American adults earning less than $30,000 a year own a smartphone, which means roughly a quarter of those adults will face barriers at this stage of the benefits process.

Privacy concerns
According to a report from the Government Accountability Office, Americans are concerned about their ability to remain anonymous in public and about technology that collects and stores their images without their consent. Although some federal and state laws regulate the use of this data by businesses, there is no comprehensive federal privacy law governing the collection, use and sale of personal information by private sector companies.

Some facial recognition technologies require additional personally identifiable information (PII) from UI claimants in order to use their facial recognition capabilities (not to mention that this information may be sold to third parties). While some policies, such as the Illinois Biometric Information Privacy Act (BIPA), restrict the collection and use of a person's biometric data, there is simply not enough regulation of the technology at the federal level.

A more holistic approach to identity analysis
As of July 2021, unemployment agencies in 25 states are already using commercial facial recognition software to verify the identity of Americans claiming unemployment benefits. Federal agencies also use the software for identity verification. And for residents of many of those states, facial recognition is the only option for receiving unemployment benefits, despite the equity, access and privacy issues it raises and a general lack of regulation of the emerging technology.

Before federal, state and local agencies double down on facial recognition for citizen services, it is critical to assess the value of a more holistic approach that relies on identity analytics drawn from data sources that do not carry the same kind of inherent equity and access bias.

Using data to detect identity theft is smart, but the key is choosing the right data that minimizes bias. This means leveraging solutions that use sources with less inherent bias, such as digital devices, IP addresses, cell phone numbers and email addresses. Data-driven identity analytics is essential not only to identify and reduce fraud, but also to reduce friction for citizens claiming legitimate unemployment insurance benefits. Analysis happens instantly and behind the scenes, without getting in the user's way during the application process. Only when something suspicious is flagged does the system introduce hurdles, such as calling a phone number to verify additional information.
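As a rough illustration of this "invisible unless suspicious" pattern, the sketch below scores low-friction signals such as device reuse, IP geolocation, phone number age and email age, and only triggers a step-up check when the score crosses a threshold. The signal names, point values and threshold are hypothetical assumptions for illustration, not any vendor's scoring model.

```python
from dataclasses import dataclass

@dataclass
class ClaimSignals:
    """Low-friction signals collected passively during the online application."""
    device_seen_on_other_claims: int   # how many other claims used this device
    ip_location_matches_state: bool    # IP geolocation roughly matches the filing state
    phone_number_age_days: int         # how long the cell number has been active
    email_age_days: int                # how long the email address has existed

def risk_score(signals: ClaimSignals) -> int:
    """Sum simple, illustrative risk points; higher means more suspicious."""
    score = 0
    if signals.device_seen_on_other_claims > 3:
        score += 40                    # one device filing many claims is a common fraud pattern
    if not signals.ip_location_matches_state:
        score += 25
    if signals.phone_number_age_days < 30:
        score += 20                    # freshly created numbers are riskier
    if signals.email_age_days < 30:
        score += 15
    return score

def route_claim(signals: ClaimSignals) -> str:
    """Most claims pass silently; only suspicious ones get extra friction."""
    if risk_score(signals) >= 50:
        return "step_up_verification"  # e.g. ask the claimant to call and verify details
    return "identity_check_passed"

# Typical legitimate claimant: no friction at all.
print(route_claim(ClaimSignals(0, True, 900, 2000)))   # identity_check_passed
# Device reused across many claims from a mismatched location: flagged for follow-up.
print(route_claim(ClaimSignals(7, False, 10, 5)))      # step_up_verification
```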

This is a far cry from the documented experiences of Americans going through the unemployment insurance benefits process who have encountered problems verifying their identity through facial recognition. Government agencies should consider technologies that fight fraud and increase efficiency while avoiding the pitfalls of bias and inequity that penalize the communities most in need of unemployment insurance benefits.

Shaun Barry is a renowned fraud and integrity expert who is currently the Global Head of Government and Healthcare at SAS. For 25 years, Barry has worked with governments at the national, state/provincial and local levels, healthcare payers and other organizations around the world to drive innovation and efficiency. He is a frequent speaker at industry events and has testified before legislative bodies in many governments. Barry has also served on advisory boards that provide citizen oversight for the US Internal Revenue Service.
