
Face Recognition Systems and Error Rates - is this a concern?

Biometrics Institute - Thu 24 May 2018

There have been several recent articles (see below) raising concerns over the accuracy of face recognition technology, as well as a report from Big Brother Watch questioning the reliability of facial recognition systems (https://bigbrotherwatch.org.uk/all-media/dangerous-and-inaccurate-police-facial-recognition-exposed-in-new-big-brother-watch-report/).

The Institute has discussed this issue with its Expert Groups, noting the importance of education and information on the responsible use of biometrics. Here is what they explained to us.

The relative accuracy of a face recognition watchlist system can be understood in terms of the false identification rate (the chance of identifying the wrong person) and the false non-identification rate (the chance of missing the correct person). Factors such as image quality and database size, amongst many others, all impact accuracy. The Big Brother Watch report does not contain sufficient data to assess the true accuracy of the system, and the figures showing inaccuracies of 98% and 91% are likely to be a misunderstanding of the statistics and are not verifiable from the data presented.
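The base-rate effect behind such headline figures can be sketched with a few lines of arithmetic. All numbers below are hypothetical illustrations, not measurements of any deployed system: when very few people in a scanned crowd are actually on the watchlist, even a low false identification rate can mean most alerts are wrong, without the algorithm itself being "inaccurate".

```python
# Illustrative sketch (all numbers hypothetical): how a low false
# identification rate can still produce a high share of wrong alerts
# when the watchlist population is a tiny fraction of the crowd.

def alert_breakdown(crowd_size, watchlist_hits, far, fnir):
    """Split a deployment's alerts into false and true matches.

    crowd_size     -- people scanned who are NOT on the watchlist
    watchlist_hits -- people scanned who ARE on the watchlist
    far            -- false identification rate (wrong-person alerts per non-listed face)
    fnir           -- false non-identification rate (chance of missing a listed face)
    """
    false_alerts = crowd_size * far
    true_alerts = watchlist_hits * (1 - fnir)
    total = false_alerts + true_alerts
    share_false = false_alerts / total if total else 0.0
    return false_alerts, true_alerts, share_false

# Hypothetical event: 100,000 faces scanned, only 10 on the watchlist;
# the system falsely alerts on 1 in 1,000 faces and misses 1 in 10 targets.
fa, ta, share = alert_breakdown(100_000, 10, far=0.001, fnir=0.1)
print(f"false alerts: {fa:.0f}, true alerts: {ta:.0f}")
print(f"share of alerts that are wrong: {share:.0%}")
```

With these illustrative numbers, the system is wrong on only 0.1% of the faces it scans, yet roughly 92% of the alerts it raises are false matches (100 of 109) — the same shape as the headline figures, driven by the rarity of watchlist subjects rather than by a poor algorithm.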

Most face recognition system (FRS) applications will generate some level of misses, and it is therefore the responsibility of the user and any governing or advisory body overseeing the introduction of such technology to conduct a proper evaluation and a privacy impact assessment prior to introducing the technology. This includes understanding and testing the accuracy of the application before and during operational use.

The system must also function in a way that is lawful, necessary and proportionate. This applies to the capturing and storing of images in the database as well as the actions taken when a potential match is made by the system. The verification of a match is a crucial process in any biometric system and this must be conducted rigorously to filter out false positives before further action is taken.
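The adjudication step described above can be sketched as a simple triage: candidate matches below an operating threshold are dropped automatically, and the rest are queued for a trained human examiner before any action is taken. All names, scores, and the threshold value here are hypothetical illustrations, not part of any real deployment.

```python
# Minimal sketch (names and numbers hypothetical) of the verification flow:
# no action is taken on a raw alert; strong candidates go to human review.

THRESHOLD = 0.85  # hypothetical operating point chosen during pre-deployment testing

def alerts_for_review(candidates, threshold=THRESHOLD):
    """Return only the candidate matches strong enough to warrant human review."""
    return [c for c in candidates if c["score"] >= threshold]

alerts = [
    {"name": "candidate A", "score": 0.91},
    {"name": "candidate B", "score": 0.62},  # weak match: filtered out automatically
    {"name": "candidate C", "score": 0.88},
]
for alert in alerts_for_review(alerts):
    print(f"queue for human verification: {alert['name']}")
```

The design point is that the threshold only filters; the final identification decision rests with the trained examiner, which is why the verification step must be conducted rigorously.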

The Biometrics Institute is proactively working on good practice for the use of biometrics and has developed several good-practice guides for its members. For more information please visit our website www.biometricsinstitute.org

 Some additional notes:

 * Refer to the NIST FIVE report for independent accuracy evaluations of live FR systems:

https://www.nist.gov/programs-projects/face-video-evaluation-five

 * FRS is a detection tool that supplements the essential police work of protecting the community.

 * It's a tool that should be operated by trained face comparers, who will minimize false matches regardless of the performance of the underlying system.

 

Articles on the issue:

Facial recognition technology is "dangerously inaccurate"

http://www.itpro.co.uk/data-protection/31117/facial-recognition-technology-is-dangerously-inaccurate

Welsh police facial recognition has 92% fail rate, showing dangers of early AI

https://www.techrepublic.com/article/welsh-police-facial-recognition-has-92-fail-rate-showing-dangers-of-early-ai/

And a related piece in Wired: http://www.wired.co.uk/article/face-recognition-police-uk-south-wales-met-notting-hill-carnival

 

And the UK ICO's blog post on this topic:

https://iconewsblog.org.uk/2018/05/14/facial-recognition-technology-and-law-enforcement/

What do you think?


Comments

The misinterpretation of statistical data by IT journalists who normally only report on products such as mobile phones and tablets is to be expected. The closest these journalists normally get to statistical analysis is a comment about a particular product's market share.

The Biometrics Institute article itself is a bit light considering it came from the Expert Groups. The NIST FIVE report referenced in the article was compiled in 2017 from software submitted in 2015; three years later it is no longer representative of some of today's face recognition solutions, which have used artificial intelligence to fine-tune algorithm performance well beyond the figures quoted in 2015.

There are many components to a face recognition solution using live CCTV feeds, and the environment itself plays a large part in determining how many faces are even captured, let alone identified. As others have stated, without that data these statistics are meaningless #fakenews.


I agree that these error rates are misinterpreted.

Some of the reasons for why this misinterpretation occurred are offered in this article by Carl Gohringer from Allevate: https://policinginsight.com/opinion/false-positives-what-the-facial-recognition-statistics-really-mean

Carl points out the major reason for the misinterpretation is that "Face Recognition does not identify criminals. Humans do."

This is an important insight, as it underlines that the accuracy of the FR system is dictated by the accuracy of the people who use it. In these deployments the FR system is merely directing the attention of a surveillance officer to potential matches.

Following this logic through, to improve the accuracy of these systems we need to know more about the human component of the system, and critically how to best combine human and machine processing.


