Image Source: Microsoft

Microsoft Shuts Down Police Party At The A.I. Playground


By Claire Moraa

  • Microsoft is restricting police use of the facial recognition capabilities in its Azure AI services
  • AI facial recognition error rates run as high as 34% for dark-skinned women

We can all agree AI has revolutionized every industry, but in the same breath, it has also perpetuated inequalities and prejudices, especially in minority communities. One sector that has been hard hit by AI racial bias is law enforcement, with at least six people known to have been wrongfully arrested. And it’s not just about prejudice: the technology has the potential to monitor, track, and surveil individuals without their consent. Without clear guidelines and standards, there is an increased risk of misuse, abuse, and infringement of individuals’ rights.

Why This Matters: Police use of AI facial recognition can exacerbate existing biases, and these concerns aren’t coming out of nowhere. AI has been around for a while, and while it’s been marinating, one thing that stands out is its ability to make stuff up. If we’re talking numbers, people of color are more at risk. According to a study cited by Harvard, facial recognition algorithms have higher error rates for people with darker skin tones than for those with lighter skin tones; the error rates run as high as 34% for dark-skinned women. If these outcomes were used in a real-life setting to identify wrongdoers, it would create a cycle of unjust results, including false identifications and wrongful arrests.

And if we really get into the intricacies, AI tools don’t set out to perpetuate these biases. They’re models fed with data and trained to predict, which leaves plenty to ponder about whether these injustices amount to a deliberate targeting of marginalized communities. Microsoft has taken a big step, and it’s a start, but these restrictions must be far-reaching. Law enforcement agencies across the world are still at liberty to use AI for facial recognition, so while it’s a step in the right direction, it needs to reach every global corner.

Still, as a major player in the technology sector, Microsoft’s decision to set limits on the use of its AI for facial recognition can have a ripple effect across the industry. It may encourage other companies to assess and regulate the use of their AI technologies in similar ways, leading to industry-wide efforts to establish responsible and ethical guidelines for AI applications. Companies that demonstrate a proactive approach to addressing ethical concerns around AI and data privacy are likely to enhance their reputation and maintain the trust of their stakeholders.

Situational Awareness: By taking this action, Microsoft is demonstrating a commitment to the ethical use of technology and recognizing the risks that facial recognition poses in law enforcement. The decision aligns with growing concerns about AI being misused in ways that infringe on individuals’ rights and privacy. And while AI is here to stay, the future of technology and business will be shaped by a continued focus on ethical AI, regulatory developments, diversity and inclusion, and data security.

CBX Vibe: “Reserve The Right” Glen Washington

Welcome to CultureBanx, where we bring you fresh business news curated for hip hop culture!