The news that IBM will no longer produce facial recognition technology may not sound so big at first. The company's commitment to oppose this kind of racially biased surveillance technology fits into a welcome trend of actions taken since anti-police-brutality protests engulfed the nation. Although some already warn that IBM's move will not end the age of facial recognition, others say it is an important step in the right direction.
IBM is taking a stand against the development of technology that could lead to human rights violations. Activists and researchers have raised the alarm for years about the myriad problems of facial recognition technology, including racial and gender bias and privacy risks. Some welcome IBM's announcement as a remarkable move, emphasizing that the big technology company's resources will now be directed elsewhere. IBM's decision to abandon facial recognition may also send a signal to other major sellers of this technology.
This was not a quiet announcement from IBM. In a letter to members of Congress, IBM CEO Arvind Krishna said that the company will no longer make general-purpose facial recognition and analysis software because it is concerned about the technology's use by law enforcement agencies. He clarified that IBM "firmly opposes" the use of facial recognition "for mass surveillance, racial profiling, violations of basic human rights and freedoms." The letter also outlined various efforts the company would make in response to the ongoing demonstrations against police brutality, such as endorsing a federal registry of police misconduct.
The news follows extensive efforts by organizers and researchers to highlight how facial recognition can encode racial and gender bias. Notably, in 2018, Joy Buolamwini and Timnit Gebru co-authored a highly influential paper highlighting bias in facial recognition systems. This groundbreaking research found racial and gender disparities in the accuracy of AI-powered face classification software, especially for darker-skinned women. The research specifically analyzed disparities in IBM's technology, among systems sold by other companies. Research published by the National Institute of Standards and Technology earlier this year provided even more evidence that the technology can be, and often is, biased, including on the basis of gender, age, and race.
IBM was not especially specific in its announcement. However, a person familiar with the matter said that IBM will limit itself to developing visual object detection and will no longer offer APIs that can be used to enable facial recognition to external or internal developers. The company declined to comment on its reasoning for stopping work on the technology. Deborah Raji, a technology fellow at the AI Now Institute, noted that the company had somewhat quietly removed the ability to perform facial recognition from its API last fall, while cautioning that IBM was far from the most dangerous provider of this technology.
"[Big companies] have a big impact on the public discussion about it, and they have a big influence on the policy discussions about it," Raji said of facial recognition technology. "If they can reject the technology, it can send a signal to policymakers and the public that this technology is not okay."
"IBM is definitely going in the right direction and will hopefully put some pressure on these other companies to take that step," she added, though she was careful not to declare a full victory until other companies, such as Amazon, stop pitching their technology to the police.
The Associated Press, meanwhile, reported that IBM's decision "is unlikely to impact profits," as the company focuses heavily on areas such as cloud computing and other companies have had more success in the facial recognition market. Still, Mutale Nkonde, a research fellow at Harvard and Stanford who leads the nonprofit AI for the People, told the AP that "the symbolic nature of this is important."
As the public has become increasingly aware of the use of facial recognition in recent years, several cities have moved to restrict the technology's use by their government agencies. But the ongoing demonstrations against police brutality have brought greater scrutiny to law enforcement's use of the technology. Last week, during a public comment session in Philadelphia on facial recognition, citizens expressed concern that police would receive funding for the technology. Then, on Tuesday, the Boston City Council held a hearing on an ordinance to ban government use of facial recognition.
Buolamwini, the founder of the Algorithmic Justice League, spoke at the Boston hearing, where she warned of the risks and problems associated with facial recognition. She also published a Medium post on Tuesday praising IBM's decision, calling it a "first step towards corporate responsibility to promote equitable and accountable AI."
"This is a welcome recognition that facial recognition technology, especially as deployed by police, has been used to undermine human rights, and to harm Black people specifically, as well as Indigenous people and other people of color (BIPOC)," Buolamwini wrote. She also noted that IBM had responded to her research demonstrating disparities in its technology by promptly issuing a statement, in contrast to the reactions of other companies that sell facial recognition.
Some of the other companies in this space have much larger businesses focused on facial recognition. Amazon's facial recognition technology, Rekognition, launched in 2016 and has since been sold to police in the United States. After an investigation by the American Civil Liberties Union (ACLU) into law enforcement's use of Rekognition, Amazon workers protested the company's practices, and while Amazon has called for regulation of facial recognition technology, the company continues to sell Rekognition.
Other developers and sellers of facial recognition are not household-name tech companies. Clearview AI, for example, is a company that has reportedly offered the technology to hundreds of police forces across the US. The company controversially scraped billions of images from the internet without people's permission, potentially allowing law enforcement to identify someone with just an image of their face. On June 8, Senator Edward Markey demanded to know whether Clearview was partnering with law enforcement agencies to use its technology on demonstrators.
So it remains unclear how IBM's announcement will ultimately impact what other companies do with facial recognition. Amazon did not respond to repeated requests for comment on whether it plans to reconsider selling its technology to law enforcement. Clearview AI did not respond to Recode's request for comment on the use of facial recognition on protesters.
Meanwhile, NEC, which reportedly sells facial recognition to government agencies in the United States, seemed willing to discuss the use of the technology. Prior to IBM's announcement, NEC America's president and CEO, Mark Ikeno, sent Recode a statement about the risks of facial recognition being used to target demonstrators.
"We would be completely opposed to any attempt to use facial recognition or other technologies to prosecute people for exercising their First Amendment rights," said Ikeno, "and we are not aware of any law enforcement customers who use it this way."
Still, organizers and researchers are clear about what they want next. In her Medium post, Buolamwini called on technology companies working in artificial intelligence to donate a minimum of $1 million to organizations focused on racial justice within the technology sector, such as Data for Black Lives and Black in AI, emphasizing that tech companies should fund groups that can hold them externally accountable. She also called on companies involved in facial recognition to make a commitment by signing the Safe Face Pledge.
Raji, of AI Now, believes the actions of major technology companies like IBM can help spur broader regulation. "That regulation should hopefully affect the NECs and Clearviews of the world," she said.
In an interview with NPR this week, Nate Freed Wessler, a lawyer with the ACLU's Speech, Privacy, and Technology Project, wondered whether other companies would follow IBM's example.
"It is good that IBM has taken this step, but it cannot be the only company," said Freed Wessler. "Amazon, Microsoft, and other companies are trying to make a lot of money by selling these dangerous, questionable tools to the police. That should stop now."
But ultimately, no matter what big tech companies do, the lack of strict government regulation will leave facial recognition technology open to abuse or misuse.
"There will always be a small, shady company willing to do the worst things you can do with technology, and sell it to anyone who will buy it," Evan Greer, deputy director of the digital rights group Fight for the Future, told Recode. "It will not be enough to simply call on companies to cancel their contracts or stop making this kind of technology. We need legislators to step in and do their jobs."
While several proposals have been made, there is currently no federal law regulating facial recognition. The technology faces increasing skepticism from national politicians, notably Senator Jeff Merkley, who has worked on relevant legislation. In May, the Algorithmic Justice League released a white paper calling for the creation of a federal office to oversee facial recognition technologies. Meanwhile, local governments have pressed ahead with their own regulations and proposals to restrict or ban the use of facial recognition technology.
Even if IBM's commitment is symbolic, the company is big and powerful enough to shift the perception of facial recognition. That alone could shape the future of the technology.
Open Sourced is powered by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.