“I don't think taking the technology away from the police is going to solve the problem.”

– iOmniscient CEO Rustom Kanga

The use of facial recognition is getting out of control, with the companies developing machine-learning (ML) software all too eager to use the billions of images available online to fine-tune their systems. Everyone in a position of power loves it. It does have some rather innocuous uses, such as Facebook spotting your friends in photos, or shops wanting to identify repeat customers and perhaps highlight known shoplifters.

But there is a darker side: authoritarian regimes use it as a weapon to subjugate their own people, and police forces around the world use facial recognition to justify stop-and-search procedures. Those of us being monitored in public never gave our permission; in fact, we don't even know it is happening to us. This is a technology that invades our privacy at a whole new level.

Apart from the privacy aspect, the false positives that plague these systems are a big concern. People of color are misidentified more often than white people, with Asians and African Americans up to 100 times more likely to be wrongly flagged. The issue here is the data set used to train the artificial intelligence (AI) behind facial recognition software: if it is trained on a data set comprised mainly of white males, it becomes really good at identifying white males. In Asia, false positives on Asian faces are far fewer, because that is what the systems were trained on.

https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software

http://gendershades.org/ Gender Shades: Intersectional phenotypic and demographic evaluation of face data sets and gender classifiers.
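To make the disparity concrete, here is a minimal, purely illustrative sketch (all numbers invented, not taken from the NIST or Gender Shades studies) of the kind of per-group audit those studies performed: computing a system's false positive rate separately for each demographic group, rather than as a single headline figure.

```python
# Hypothetical audit sketch: per-group false positive rates for a
# face-matching system. Every figure below is made up for illustration.

from collections import defaultdict

def false_positive_rate_by_group(results):
    """results: iterable of (group, predicted_match, actual_match).
    Returns {group: false positive rate among true non-matches}."""
    fp = defaultdict(int)  # flagged as a match, but actually a non-match
    tn = defaultdict(int)  # correctly rejected non-match
    for group, predicted, actual in results:
        if not actual:  # only true non-matches contribute to the FP rate
            if predicted:
                fp[group] += 1
            else:
                tn[group] += 1
    return {g: fp[g] / (fp[g] + tn[g]) for g in set(fp) | set(tn)}

# Toy data: group "A" is wrongly flagged 100 times as often as group "B".
results = (
    [("A", True, False)] * 10 + [("A", False, False)] * 90 +
    [("B", True, False)] * 1 + [("B", False, False)] * 999
)
rates = false_positive_rate_by_group(results)
print(rates)  # group A: 0.1, group B: 0.001 — a 100x disparity
```

A single aggregate accuracy number would hide exactly this kind of gap, which is why the studies cited above break results down by demographic group.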

In recognition of the failings of facial recognition systems, Microsoft, perhaps in an attempt to gain the moral high ground, has called for public regulation of the technology and corporate responsibility in its use. The company voiced concerns about fundamental human rights protections such as privacy and freedom of expression, and laid out its recommendations for government regulation.

Facial recognition technology: The need for public regulation and corporate responsibility - Microsoft on the Issues
All tools can be used for good or ill. Even a broom can be used to sweep the floor or hit someone over the head. The more powerful the tool, the greater the benefit or damage it can cause.

So, in light of the recent protests about discrimination and police violence, Microsoft has announced that it won't sell facial recognition to police departments until some form of federal regulation is in place. Microsoft president Brad Smith said:

“We will not sell facial recognition to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology.”

Smith further stated:

“I think it is important to see what IBM have done. I think it is important to recognise what Amazon has done. It is obviously similar to what we are doing. But if all the responsible companies in the country cede this market to those who are not prepared to take a stand, we won't necessarily serve the national interest or the lives of the black and African American people of this nation well. We need Congress to act, not just tech companies alone. This is the only way that we will guarantee that we will protect the lives of people.”

He added:

“We've been taking a principled stand in advocating not only for ourselves, but the tech sector, and, under the law, a principled stand for the country and the world.”

Fine words indeed.

Smith made no commitment as to whether the new policy would cover sales of facial recognition to other government agencies such as US Immigration and Customs Enforcement (ICE) or the Drug Enforcement Administration (DEA).

Microsoft says it won’t sell facial recognition software to police until there’s a national law ‘grounded in human rights’
Brad Smith, Microsoft’s president, said on Thursday that the company will not sell its facial recognition software for police use.

However, on top of its dealings with police departments, it has come to light that Microsoft repeatedly marketed the technology to the DEA, according to emails obtained by the ACLU. Nathan Freed Wessler, senior staff attorney at the ACLU, said:

“Even after belatedly promising not to sell surveillance tech to police last week, Microsoft has refused to say whether it would sell the technology to federal agencies like the DEA. It is troubling enough to learn that Microsoft tried to sell a dangerous tech to the US Drug Enforcement Administration given that agency's record spearheading the racist drug war, and even more disturbing now that Attorney General Bill Barr has reportedly expanded this very agency's surveillance authorities, which could be used to spy on people protesting police brutality.”
Emails Show Microsoft Tried to Sell Face Recognition System to Federal Law Enforcement
Today the American Civil Liberties Union released emails revealing how Microsoft aggressively marketed its face recognition product to the federal Drug Enforcement Administration (DEA).

https://data.aclum.org/wp-content/uploads/2020/06/20-00004-L-OCR.pdf

I mentioned in a previous article the partnership between Microsoft and Veritone. This company's cloud-based software, IDentify, relies on the Microsoft cloud to help law enforcement agencies flag the faces of potential suspects. So despite Microsoft's rhetoric that it no longer sells facial recognition to police departments, it is blatantly obvious that it is working hand-in-glove with Veritone. If Microsoft were true to its word, it would stop providing cloud services to the likes of Veritone, especially in light of those altruistic proposals from Brad Smith.

Veritone IDentify
Veritone IDentify | For Law Enforcement | Intelligent, Rapid Suspect Identification

This corporate posturing by Microsoft, IBM and Amazon is all well and good, but it will not halt the use and rampant abuse of facial recognition by police departments and oppressive regimes worldwide. Whilst these big tech companies may be innovators, they are not the top suppliers of this technology. Other players have a large presence in this segment, namely Japan's NEC and Ayonix, Germany's Cognitec and Australia's iOmniscient.

None of these companies has stated that it will follow the lead of Microsoft, Amazon and IBM and stop selling facial recognition to police departments in the US. In fact, they continue to expand their sales globally and, in the case of iOmniscient, it is of concern that the Hong Kong police have had access to its technology for over three years. Engineers from the company have trained dozens of officers in its use, but iOmniscient has declined to comment on whether or not the police in Hong Kong are using its facial recognition.

Hong Kong police have facial recognition tech – but are they using it?
Hong Kong law enforcement authorities have access to artificial intelligence software that can match faces from any video footage to police databases, but it is unclear if it is being used to quell months-long pro-democracy protests, according to people familiar with the matter.

NEC appears happy to continue supplying facial recognition to law enforcement, as evidenced by its statement affirming support for the Black Lives Matter movement:

“We are committed to continue to partner with you and the communities you serve to cooperatively ensure that our efforts to make society safer, equally make society more just and inclusive.”

https://nectoday.com/a-message-from-the-office-of-mark-ikeno-president-and-ceo-of-nec-corporation-of-america/

Keen not to miss out, London's Metropolitan Police have also started to use NEC's technology. The move has not been well received by advocacy groups in the UK, which warn that it undermines democratic freedoms.

Silkie Carlo, the director of Big Brother Watch, stated:

“This decision represents an enormous expansion of the surveillance state and a serious threat to civil liberties in the UK.”

Big Brother Watch: Defending Civil Liberties, Protecting Privacy
Big Brother Watch exposes and challenges threats to our privacy, our freedoms and our civil liberties at a time of enormous technological change in the UK.

The advocacy group Liberty was equally damning in its assessment, with its director Clare Collier saying in a statement:

“Facial recognition technology gives the state unprecedented power to track and monitor any one of us, destroying our privacy and free expression.”

Liberty
Liberty challenges injustice, defends freedom and campaigns to make sure everyone in the UK is treated fairly. Join us. Stand up to power.

The Privacy Advocate