The IRS/ID.me debacle: An instructive moment for tech

When the Internal Revenue Service (IRS) signed an $86 million deal with identity verification provider ID.me last year to provide biometric identity verification services, it was a monumental vote of confidence in the technology. Taxpayers could now verify their identities online using facial biometrics, a move designed to better secure how American taxpayers manage federal tax affairs.

However, after vocal opposition from privacy groups and bipartisan lawmakers, the IRS backed out in February and abandoned its plan. Critics objected to the requirement that taxpayers submit their biometric data, in the form of a selfie, as part of the new identity verification program. Since then, both the IRS and ID.me have provided additional options, giving taxpayers the choice of using ID.me’s service or authenticating their identity via a live virtual video interview with an agent. While this move may appease the parties who have expressed concerns — including Sen. Jeff Merkley (D-OR), who proposed the No Facial Recognition at the IRS Act (S. Bill 3668) at the height of the debate — the very public falling-out over the IRS’s deal with ID.me has clouded public opinion on biometric authentication technology and raised bigger questions for the entire cybersecurity industry.

Although the IRS has since agreed to continue offering ID.me’s biometric facial recognition technology as an identity verification method for taxpayers, with an opt-out available, confusion persists. The high-profile complaints against the IRS deal have unnecessarily weakened public confidence in biometric authentication technology, at least for now, and given fraudsters reason for relief. However, as the ID.me debacle fades in the rear-view mirror, there are lessons for both government agencies and technology providers to learn.

Don’t underestimate the political value of a controversy

This recent controversy underscores the need for better education about the nuances of biometric technology: the distinction between face recognition and face matching, the use cases and potential privacy issues arising from these technologies, and the regulations needed to better protect the rights and interests of consumers.

For example, there is a major difference between using biometrics, with the user’s explicit informed consent, for a single, one-off purpose that benefits the user — such as identity verification and authentication that protects the user from fraud — versus scraping biometric data without permission or using it for unauthorized purposes like surveillance or marketing. Most consumers do not realize that their facial images can be collected for biometric databases from social media or other websites without their express consent. When platforms like Facebook or Instagram do disclose such activities, the disclosures are usually buried in a privacy policy that is incomprehensible to the average user. As the ID.me case illustrates, companies deploying this technology should be required to educate users and obtain explicit informed consent for the use case they enable.

In other cases, different biometric technologies that appear to perform the same function may not be created equal. Benchmarks such as the NIST FRVT provide a rigorous assessment of biometric matching technologies and a standardized means of comparing their accuracy and their ability to avoid problematic demographic performance bias across attributes such as race, age and gender. Biometric technology companies should be held accountable not only for the ethical use of biometrics, but also for equitable performance across the entire population they serve.
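To make the equity point concrete, here is a minimal sketch (not NIST code, and using entirely hypothetical scores) of the kind of per-group metric such benchmarks report: the false match rate (FMR), computed separately for each demographic group. A fair matcher shows similar FMRs across groups at the same decision threshold.

```python
# Sketch: per-group false match rate (FMR) from labeled comparison scores.
# Data, group names and threshold are hypothetical illustrations.
from collections import defaultdict

def false_match_rate_by_group(comparisons, threshold):
    """comparisons: iterable of (group, score, is_same_person) tuples."""
    impostor_total = defaultdict(int)    # impostor pairs seen per group
    impostor_matched = defaultdict(int)  # impostor pairs wrongly accepted
    for group, score, is_same_person in comparisons:
        if not is_same_person:           # impostor pair: two different people
            impostor_total[group] += 1
            if score >= threshold:       # falsely accepted as a match
                impostor_matched[group] += 1
    return {g: impostor_matched[g] / impostor_total[g] for g in impostor_total}

# Hypothetical scores: group_a shows a higher FMR than group_b here,
# i.e., exactly the demographic differential a benchmark would flag.
data = [
    ("group_a", 0.91, False), ("group_a", 0.20, False), ("group_a", 0.95, True),
    ("group_b", 0.40, False), ("group_b", 0.85, False), ("group_b", 0.97, True),
]
print(false_match_rate_by_group(data, threshold=0.9))
```

Genuine pairs (`is_same_person=True`) are ignored here; a full evaluation would also compare false non-match rates per group.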

Politicians and privacy activists hold biometrics technology providers to a high standard, and they should: the stakes are high and privacy is important. As such, these companies must be transparent, clear and, perhaps most importantly, proactive when it comes to communicating the nuances of their technology to these audiences. A misinformed, fiery speech from a politician trying to win hearts during a campaign can undo otherwise consistent and focused consumer education. Senator Ron Wyden, a member of the Senate Finance Committee, stated, “No one should be forced to submit to facial recognition to gain access to critical government services.” In doing so, he mislabeled face matching as facial recognition, and the damage was done.

Perhaps Sen. Wyden wasn’t aware that millions of Americans undergo facial recognition every day when using critical services — at the airport, in government facilities and in many workplaces. But by not addressing this misconception from the start, ID.me and the IRS allowed the public to be openly misinformed and let the agency’s use of facial recognition technology be cast as unusual and shameful.

Honesty is a business imperative

Despite a barrage of third-party misinformation, ID.me’s response was late and confused, if not misleading. In January, CEO Blake Hall said in a statement that ID.me doesn’t use 1:many facial recognition technology, which compares one face against others stored in a central repository. Less than a week later, in the latest of a series of inconsistencies, Hall backtracked, stating that ID.me does use 1:many, but only once, during enrollment. An ID.me engineer pointed out this discrepancy in a Slack channel post:

“We could disable the 1:many face search, but then lose a valuable anti-fraud tool. Or we could change our public stance on using 1:many face search. But it seems we can’t do one thing and say another, as that inevitably gets us in hot water.”
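The 1:1-versus-1:many distinction at the heart of this dispute is easy to state in code. Below is a minimal sketch, with hypothetical embeddings and names standing in for a real face pipeline: 1:1 face matching verifies a probe against one claimed identity, while 1:many face recognition searches the probe against an entire gallery.

```python
# Sketch of 1:1 verification vs. 1:many identification over face
# embeddings. Vectors, names and the 0.8 threshold are hypothetical.
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify_1_to_1(probe, enrolled, threshold=0.8):
    """1:1 matching: compare the probe to a single enrolled template."""
    return cosine(probe, enrolled) >= threshold

def identify_1_to_many(probe, gallery, threshold=0.8):
    """1:many recognition: search the probe against a central gallery."""
    name, template = max(gallery.items(), key=lambda kv: cosine(probe, kv[1]))
    return name if cosine(probe, template) >= threshold else None

gallery = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.95, 0.1]}
probe = [0.88, 0.12, 0.18]
print(verify_1_to_1(probe, gallery["alice"]))  # 1:1: is this alice?
print(identify_1_to_many(probe, gallery))      # 1:many: who is this?
```

The privacy stakes differ sharply: 1:1 matching needs only the single template the user enrolled, while 1:many search requires retaining a central repository of faces — which is why ID.me’s shifting description of which mode it used drew such scrutiny.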

Communicating transparently and consistently with the public and key opinion leaders — through print, digital media and other creative channels — will help counteract misinformation and reassure consumers that facial biometric technology, when used with explicit informed consent, helps protect them and is safer than the traditional alternatives.

Get ready for regulation

Rampant cybercrime has prompted more aggressive federal and state legislation, placing policymakers at the center of the tension between privacy and security. Agency heads can claim that their legislative efforts are driven by a commitment to the safety and privacy of their constituents, but Congress and the White House must decide what sweeping regulations will protect all Americans from the current cyberthreat landscape.

There is no shortage of regulatory precedent to refer to. The California Consumer Privacy Act (CCPA) and its pioneering European cousin, the General Data Protection Regulation (GDPR), model how to ensure users understand what data companies collect from them, how it is used, how that data can be monitored and managed, and how to opt out of data collection. So far, officials in Washington have left privacy infrastructure to the states. The Biometric Information Privacy Act (BIPA) in Illinois and similar bills in Texas and Washington regulate the collection and use of biometric data. These rules require organizations to obtain an individual’s consent before collecting or disclosing their likeness or biometric information. They must also store biometric data securely and destroy it in a timely manner. BIPA imposes fines for violations of these rules.

If lawmakers were to develop and pass legislation combining the principles of the CCPA and GDPR with the biometrics-specific rules of BIPA, greater confidence in the security and convenience of biometric authentication technology could be established.

The future of biometrics

Biometric authentication providers and government agencies need to be good stewards of the technology they provide — and procure — most importantly when it comes to educating the public. Some hide behind the perceived fear of giving cybercriminals too much information about how the technology works. But the fortunes of these companies rest on public trust, not on the secrecy of any particular operation, and wherever there is a lack of communication and transparency, you will find opportunistic critics eager to publicly misrepresent biometric facial recognition technology to further their own ends.

While multiple lawmakers have cast facial recognition and biometrics companies as bad actors, they have missed an opportunity to weed out the real culprits: cybercriminals and identity fraudsters.

Tom Thimot is CEO of authID.ai.
