Microsoft Is Scrapping Some Bad A.I. Facial Recognition Tools

CBx Vibe: "Mean Mug" Yung Bans

By CultureBanx Team

  • Microsoft is throwing in the towel on its artificial intelligence service for detecting, analyzing and recognizing faces
  • Some facial recognition systems would confuse light-skinned men only 0.8% of the time, yet have an error rate of 34.7% for dark-skinned women

As an outspoken proponent of properly regulating facial recognition technology, Microsoft (MSFT –1.32%) announced it would get rid of its A.I. tools in this space. A.I. remains one of the most disputed areas of technology, even as it becomes increasingly commonplace and companies look to incorporate it across their platforms. Now, Microsoft is finally ending its role in facial recognition technology's potential for abuse, which could lead to incidents of racial profiling.

Why This Matters: Following a two-year review and a 27-page policy document, the tech giant wants tighter controls over its artificial intelligence products. In the past, Microsoft has asked governments around the world to regulate the use of facial recognition technology. The software giant wants to ensure the technology, which has higher error rates for African Americans, does not invade personal privacy or become a tool for discrimination or surveillance.

There are some companies that heavily rely on Microsoft’s facial recognition technology. For example, Uber (UBER -4.70%) uses the software in its app to verify that a driver’s face matches the ID on file for that same driver’s account. This seems like a meaningful way of using facial recognition tools.

However, there is a lot of harm that can come from this type of tech. MIT research shows commercial artificial intelligence systems tend to have higher error rates for women and Black people. Some facial recognition systems would confuse light-skinned men only 0.8% of the time, yet have an error rate of 34.7% for dark-skinned women.

Back in 2019, Microsoft quietly deleted its MS Celeb database, which contained more than 10 million images. The images compiled included journalists, artists, musicians, activists, policy makers, writers and researchers. The deletion came after the tech company called on U.S. politicians to do a better job of regulating recognition systems.

Additionally, in Microsoft’s 2018 SEC annual report, it noted that “A.I. algorithms may be flawed. Datasets may be insufficient or contain biased information. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.”

What’s Next: Remember, artificial intelligence systems inherently learn what they are being “taught.” The use of facial recognition technology has a disparate impact on people of color, further disenfranchising a group that already faces inequality. It says a lot about the harmful nature built into A.I. that a company like Microsoft is throwing in the towel on the technology. The real question is, will the rest of the industry do the same?

CBx Vibe: "Mean Mug" Yung Bans

CONTRIBUTOR

CultureBanx Team

Welcome to CultureBanx, where we bring you fresh business news curated for hip hop culture!