Facial-recognition algos vary wildly, US Congress told, as politicians try to come up with new laws on advanced tech

www.theregister.co.uk | 1/17/2020 | Staff

Vid A recent US government report investigating the accuracy of facial recognition systems across different demographic groups has sparked fresh questions on how the technology should be regulated.

The House Committee on Oversight and Reform held a hearing to discuss the dossier and surrounding issues on Wednesday. “Despite the private sector’s use of the technology, it’s just not ready for prime time,” said Rep Carolyn Maloney (D-NY), who chaired the meeting.


The report [PDF], published by America's National Institute of Standards and Technology (NIST) in December, reveals how accurate, or rather inaccurate, some of the latest state-of-the-art commercial facial recognition algorithms really are.

NIST tested 189 algorithms submitted by 99 developers across four datasets comprising 18.27 million images of 8.49 million people.


“Contemporary face recognition algorithms exhibit demographic differentials of various magnitudes,” the report said. “Our main result is that false positive differentials are much larger than those related to false negatives and exist broadly, across many, but not all, algorithms tested. Across demographics, false positive rates often vary by factors of ten to beyond 100 times. False negatives tend to be more algorithm-specific.”

In other words, “different algorithms perform differently,” explained Charles Romine, director of the Information Technology Laboratory at NIST and a witness at the hearing. The rates of false positives and false negatives also depend on the application.
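
To make the error rates in question concrete, here is a minimal sketch of how false positive and false negative rates can be computed per demographic group. The groups, comparison results, and numbers below are entirely invented for illustration; they are not from the NIST report, though the made-up figures mirror its finding that false positive rates can differ between groups by a factor of ten.

```python
# Hypothetical illustration only: computing false positive rate (FPR) and
# false negative rate (FNR) per demographic group from labeled one-to-one
# face comparison results. All data below is made up for demonstration.

def error_rates(comparisons):
    """comparisons: list of (predicted_match, actual_match) booleans.
    Returns (false positive rate, false negative rate)."""
    fp = sum(1 for pred, actual in comparisons if pred and not actual)
    fn = sum(1 for pred, actual in comparisons if not pred and actual)
    negatives = sum(1 for _, actual in comparisons if not actual)
    positives = sum(1 for _, actual in comparisons if actual)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

# Invented results for two hypothetical demographic groups:
# 100 genuine pairs and 1,000 impostor pairs each.
results = {
    "group_a": [(True, True)] * 95 + [(False, True)] * 5      # 5% FNR
             + [(True, False)] * 1 + [(False, False)] * 999,  # 0.1% FPR
    "group_b": [(True, True)] * 93 + [(False, True)] * 7      # 7% FNR
             + [(True, False)] * 10 + [(False, False)] * 990, # 1% FPR
}

for group, comps in results.items():
    fpr, fnr = error_rates(comps)
    print(f"{group}: FPR={fpr:.3%} FNR={fnr:.3%}")
```

In this fabricated example the two groups have similar false negative rates, but group_b's false positive rate is ten times group_a's, the kind of differential the report flags as most consequential.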


The riskiest applications were those where false positives occurred in what Romine described as “one-to-many searches,” in which an image is run against a database of many images to look for a match. “False positives of one to many search is particularly important as the applications could include false accusations,” he said.
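
The kind of one-to-many search Romine describes can be sketched roughly as follows: a probe face embedding is scored against every embedding in a gallery, and the best match above a threshold is returned. The embeddings, names, and threshold here are all invented for illustration, not drawn from any real system.

```python
# Hypothetical sketch of a one-to-many face search. A probe embedding is
# compared against a gallery (e.g. a database of enrolled images) and the
# highest-scoring identity above a similarity threshold is returned.
# All vectors and names below are made up.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def one_to_many(probe, gallery, threshold=0.8):
    """Return (identity, score) of the best match above threshold, else None."""
    name, emb = max(gallery.items(), key=lambda item: cosine(probe, item[1]))
    score = cosine(probe, emb)
    return (name, score) if score >= threshold else None

gallery = {
    "person_a": [0.9, 0.1, 0.2],
    "person_b": [0.1, 0.8, 0.3],
}
probe = [0.85, 0.15, 0.25]  # deliberately close to person_a's embedding
print(one_to_many(probe, gallery))
```

A false positive in this setting means a probe of someone who is not in the gallery at all still scores above the threshold against some enrolled identity, which is why Romine warned such errors could translate into false accusations.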

For example, a high-risk one-to-many search would be matching people’s faces against a database of mugshots to look for suspected criminals. “This issue was not even on my radar until...
(Excerpt) Read more at: www.theregister.co.uk