20-year Anniversary Report: WorldReach

WorldReach, an Entrust company: Biometrics for the people: In defence of the responsible use of emerging identity technologies

On the 20th anniversary of the Biometrics Institute, we at Entrust are proud to be active members, joining with industry partners to promote the responsible use of biometrics for the public good. We believe that, with proper legal safeguards, whether we’re accessing services at home or crossing international borders, biometrics will continue to improve all our lives as citizens.

A new paradigm

We are in the early stages of a new paradigm in the way citizen-consumers communicate with government and commercial service providers. Public expectations are shifting towards digital services and away from waiting rooms, paper forms, and valuable documents sent through the mail.

These changes are in part the result of rapid improvements in biometric technologies. For example, according to the Center for Strategic and International Studies, “Facial recognition has improved dramatically in only a few years. As of April 2020, the best face identification algorithm has an error rate of just 0.08% compared to 4.1% for the leading algorithm in 2014, according to tests by the National Institute of Standards and Technology (NIST)[1].”

Public understanding of biometrics

The view taken by many across the industry is that facial recognition technology (FRT) has recently reached or surpassed the accuracy of other biometric modes such as fingerprint and iris, and that all of these options are significantly better at identifying individuals than the naked eye[2].

Despite this, biometrics generally – and FRT in particular – have often taken a beating from the press, politicians and some academics. Notable examples are the 2020 New York Times piece on the social media scraping activities of Clearview AI[3], now facing several lawsuits, and the anti-biometric campaign waged by the Toronto Globe and Mail[4]. Moreover, some municipalities, including San Francisco, have banned the use of FRT for some purposes[5].

Much of this critique is legitimate and well intentioned. It is arguable that technology is moving faster than legislation, leading to concerns about overreach, particularly in law enforcement use cases. But the critique is often guilty of conflating separate issues and oversimplifying questions on which the public would benefit from a more nuanced understanding. Here are two examples.

First, much of the animus directed at FRT is focused on just one use case: police surveillance. Press articles often use ‘facial recognition’ and ‘surveillance’ interchangeably, rarely taking the trouble to distinguish between one-to-one, one-to-few and one-to-many use cases, or to explain the differences between face detection, face recognition and face verification. The evidence given by Chuck Romine of NIST to the US House Committee on Oversight and Reform in January 2020 is an object lesson in how to do so in a way that’s easily understood.[6]
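The distinction is easy to show concretely. In the rough sketch below (the function names, face templates and threshold are invented for illustration, not any vendor’s API), one-to-one verification compares a live capture against a single enrolled template, while one-to-many identification searches a whole gallery, which is the mode at issue in most surveillance debates:

```python
import math

# A face "template" here is a fixed-length feature vector such as a
# recognition model might produce. The values are illustrative only.
THRESHOLD = 0.6  # assumed decision threshold; real systems tune this


def distance(a, b):
    """Euclidean distance between two face templates."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def verify(probe, enrolled):
    """One-to-one verification: is the probe the same person as ONE
    enrolled template (e.g. the photo read from a passport chip)?"""
    return distance(probe, enrolled) < THRESHOLD


def identify(probe, gallery):
    """One-to-many identification: search a gallery of many templates
    for the closest match, or return None if nothing is close enough."""
    best_id, best_dist = min(
        ((pid, distance(probe, tmpl)) for pid, tmpl in gallery.items()),
        key=lambda item: item[1],
    )
    return best_id if best_dist < THRESHOLD else None
```

Verification answers a yes/no question about one claimed identity; identification asks who, if anyone, in a database matches, which is why the two carry very different privacy implications.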

Second, concerns about the sometimes-differing performance of FRT across demographic groups have led to misleading headlines about ‘bias’, giving the impression that the makers or users of such technologies have racist or sexist intentions, without giving the whole story about the issues and how the industry is tackling them. That story is told in its most complete form in the 2019 NIST report on demographic effects.[7]

One of NIST’s key conclusions was that the most accurate facial algorithms in fact show very little by way of differing accuracy across demographic groups. Another notable conclusion is that algorithms are only as good as the data sets on which they have been trained. Given the huge diversity of the real world, the most accurate algorithms are those which have been developed on the most diverse data sets.[8]

A good news story

Those of us in the identity industry are not naïve about the challenges ahead on the public acceptance of biometrics, but we often wish the good news about improvements to the lives of ordinary people was as prominent as the controversies about surveillance.

Here’s one example that’s close to the hearts of those of us in the Identity Verification team at Entrust. Following the UK’s 2016 decision to leave the EU, the Home Office was faced with a significant challenge: how to identify and register several million EU nationals living and working in the UK under freedom of movement provisions, in order to grant them a new settled status in UK law.

Since 2018, we have worked with the Home Office on the innovative EU Settlement Scheme, which allowed applicants for the new status to apply entirely remotely (if they chose to do so), augmenting the online application with an identity verification process performed on a smartphone in just a few minutes. This process uses market-leading facial matching and liveness capabilities (as well as an NFC document check) to confirm that the applicant is a real person and the owner of a genuine passport. This powerful data packet allows the Home Office to grant permanent settlement in the UK, in most cases without seeing applicants in person or receiving their passports in the mail.
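The three checks described above, document authenticity, liveness and a one-to-one face match, can be sketched roughly as follows. The field names and thresholds are assumptions made for illustration, not Entrust’s actual implementation:

```python
# Simplified sketch of a remote identity-verification decision.
# Thresholds are illustrative; production systems tune them carefully.
LIVENESS_THRESHOLD = 0.8
MATCH_THRESHOLD = 0.9


def verify_applicant(passport_chip, selfie):
    """Return (approved, reason) for a remote identity check.

    passport_chip: data read over NFC; in a real flow the chip's
    cryptographic signature is validated (passive authentication).
    selfie: scores from the liveness and facial-matching steps.
    """
    # 1. NFC document check: the chip must carry a valid signature,
    #    proving the passport data is genuine and unaltered.
    if not passport_chip["signature_valid"]:
        return False, "document check failed"
    # 2. Liveness: the selfie must come from a live person in front of
    #    the camera, not a printed photo or screen replay.
    if selfie["liveness_score"] < LIVENESS_THRESHOLD:
        return False, "liveness check failed"
    # 3. One-to-one face match against the photo read from the chip,
    #    linking the live person to the passport holder.
    if selfie["match_score_vs_chip_photo"] < MATCH_THRESHOLD:
        return False, "face match failed"
    return True, "identity verified"
```

Note that the only facial comparison here is one-to-one, against the applicant’s own passport photo; no gallery search is involved.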

By the time the scheme came to an end in June 2021, more than 6 million people had applied, the overwhelming majority choosing to identify themselves through the digital route. According to Home Office figures, at the time of writing almost 5.5 million of those applicants have been granted settled status in the UK.[9]        

Here is just one example of biometric technologies making life better. There is no surveillance in play here, and no agency overreach. Facial matching was used only on a one-to-one basis (to link the person securely to their own passport), the digital route was entirely voluntary, and the scheme adhered to both GDPR and UK government requirements for the handling of personal data, none of which is retained by Entrust.

The scheme was so successful that this approach to remote applicant identification is now being rolled out across several other UK immigration programs.

A way forward

Concerns about the responsible use of biometrics are well-founded. This is precisely why Biometrics Institute membership includes lawyers, privacy advocates and academics as well as technology providers.

But we need to elevate the public debate by accurately capturing different use cases and appropriate legal responses.

We at Entrust firmly believe that, when used responsibly, in ways that enhance individual privacy – whether we’re traveling seamlessly across borders or securely accessing digital services – biometric technologies will continue to improve lives.

[1] https://www.csis.org/blogs/technology-policy-blog/how-accurate-are-facial-recognition-systems-%E2%80%93-and-why-does-it-matter

[2] https://www.cbsnews.com/news/facial-recognition-60-minutes-2021-05-16/

[3] https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html

[4] https://www.theglobeandmail.com/opinion/article-what-happens-when-our-faces-become-data/

[5] https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html

[6] https://www.nist.gov/speech-testimony/facial-recognition-technology-frt-0

[7] https://www.nist.gov/publications/face-recognition-vendor-test-part-3-demographic-effects

[8] https://itif.org/publications/2020/01/27/critics-were-wrong-nist-data-shows-best-facial-recognition-algorithms

[9] https://www.gov.uk/government/collections/eu-settlement-scheme-statistics

Jon Payne, Director Strategic Alliances, Identity Verification
+1 703 883 7022

Applications and use cases | Privacy and policy | Research and development | Technology innovation

Lead the debate with us on the responsible use of biometrics