Over the past half-decade, a number of banks have explored advances in biometrics, technology that uses people’s physical characteristics as credentials for accessing their accounts.
Although fingerprints, iris scans and facial recognition are touted for the added security they can offer account holders, a study released last week by the National Institute of Standards and Technology (NIST) indicated that most commercial facial-recognition systems exhibit demographic bias.
The agency tested 189 facial-recognition algorithms from 99 developers and found black and Asian people were up to 100 times more likely to be misidentified than white men in “one-to-many” searches, the type law enforcement might use to identify a suspect in a crime.
The technology also falsely identified older adults up to 10 times more often than middle-aged adults; women more often than men; and Native Americans more often than any other ethnic group.
The agency found disproportionately high error rates among some of those populations in “one-to-one” searches as well. One-to-one biometrics can be used in lieu of passwords or to unlock smartphones.
Lawmakers and civil liberties groups are raising alarms over the technology’s potential negative effects. Cities such as San Francisco, Oakland and Berkeley, Calif., banned local government use of the technology this year.
At stake is a share of the $59 billion that CB Insights, in a study this month, projected the biometrics industry to be…