The CEO of a facial recognition company says that facial recognition is not mature enough and is therefore dangerous to use. He's probably in a position to know.
(To anyone inclined to dispute his assertion that facial recognition has racial bias: this has been known for many years now, as the "HP computers are racist" video clearly demonstrates.)
During my undergraduate days I had the opportunity to take some courses in the social sciences. I can’t say classes like “Ethnicity in the Media” or “Gender in Society” were at the top of my interest list, but in retrospect, courses like those transformed my way of looking at pretty much everything in life. For that I am eternally grateful, because it keeps me from locking myself into a particular side of any issue with such ferocity that everything associated with the opposition becomes evil. It also lets me apply a bit of healthy skepticism to everything I read and see, and to try to find the underlying motives.
The TechCrunch opinion piece you cited, for example, was written by the CEO of the facial recognition developer Kairos. The very first paragraph of his article puts his rival Amazon’s Rekognition software in the same sentence as the oh-so-scary-evil-empire China and words like “darker” and “offensive.” I have to ask: what is his real motivation here?
According to the author, he is concerned with protecting human rights and preventing false arrests and murders at the hands of law enforcement, because the technology may target certain demographics unfairly. He is openly against the use of technology in law enforcement, but doesn't specify whether he means all technology (e.g., body cams and cyber tools that identify child predators) or just the tools he deems unfair. The Kairos website, however, states, “Facial recognition is no longer only an application for high-risk locations, such as airports, nuclear power plants and government buildings. A growing number of businesses realize that the ability to identify and recognize specific individuals can help to improve customer service and to serve as a proactive way of protecting their assets.” It goes on to list threat detection and known-individual detection as use cases, and adds that “the next phase of this technology is to provide demographic and emotion analysis on faces…” Now, I wonder what metrics they intend to have businesses feed into the software to protect those assets.
I completely agree that facial recognition technology isn’t perfect; no new technology is. We have known for years that biometrics is problematic. The technology clearly requires more development before it can serve as the sole basis for prosecuting crimes. We can’t say, however, that every flaw in a piece of software is a racial problem. If we did, should we also say that Uber’s self-driving car was sexist because it hit a woman last March?