
Google execs say we need a plan to stop A.I. algorithms from amplifying racism

 

Two Google executives said Friday that bias in artificial intelligence is hurting already marginalized communities in America, and that more needs to be done to ensure this does not happen. X. Eyeé, outreach lead for responsible innovation at Google, and Angela Williams, policy manager at Google, spoke at the (Not IRL) Pride Summit, an event organized by Lesbians Who Tech & Allies, the world’s largest technology-focused LGBTQ organization for women, non-binary, and trans people.


In separate talks, they addressed the ways in which machine learning technology can be used to harm the black community and other communities in America — and more widely around the world.


https://twitter.com/TechWithX/status/1276613096300146689

Williams discussed the use of A.I. for sweeping surveillance, its role in over-policing, and its implementation for biased sentencing. “[It’s] not that the technology is racist, but we can code in our own unconscious bias into the technology,” she said. Williams highlighted the case of Robert Julian-Borchak Williams, an African American man from Detroit who was recently wrongly arrested after a facial recognition system incorrectly matched his photo with security footage of a shoplifter. Previous studies have shown that facial recognition systems can struggle to distinguish between different black people. “This is where A.I. … surveillance can go terribly wrong in the real world,” Williams said.

X. Eyeé also discussed how A.I. can help “scale and reinforce unfair bias.” Beyond the attention-grabbing, quasi-dystopian uses of A.I., Eyeé focused on how bias can creep into seemingly mundane, everyday uses of technology — including Google’s own tools. “At Google, we’re no stranger to these challenges,” Eyeé said. “In recent years … we’ve been in the headlines multiple times for how our algorithms have negatively impacted people.” For instance, Google has developed a tool for classifying the toxicity of online comments. While this can be very helpful, it was also problematic: Phrases like “I am a black gay woman” were initially classified as more toxic than “I am a white man.” The cause was a gap in the training data, which contained more conversations about certain identities than others.
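To make the training-data gap concrete, here is a minimal toy sketch — not Google’s actual system, and all example comments and the scoring rule are invented for illustration. Because identity terms show up mostly in the (toy) toxic examples, a naive word-frequency scorer rates an innocuous self-description as more toxic than a comparable sentence without those terms:

```python
from collections import Counter

# Invented toy corpus: identity terms appear mostly in toxic (harassing)
# comments, mirroring the kind of data gap described above.
toxic = [
    "gay people are awful",
    "black women should shut up",
    "i hate gay black people",
]
benign = [
    "what a lovely day",
    "the white house is in washington",
    "that man gave a great talk",
]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

tox_counts = word_counts(toxic)
ben_counts = word_counts(benign)

def toxicity(comment):
    """Score each word by the fraction of its training occurrences that
    were in toxic examples (0.5 for unseen words), then average."""
    scores = []
    for word in comment.lower().split():
        t, b = tox_counts[word], ben_counts[word]
        scores.append(t / (t + b) if t + b else 0.5)
    return sum(scores) / len(scores)

# The identity terms inherit high scores from the skewed corpus,
# so the first sentence rates as more "toxic" than the second.
print(toxicity("i am a black gay woman"))
print(toxicity("i am a white man"))
```

The bug is not in the scoring arithmetic but in what the corpus happens to contain — which is why, as Eyeé noted, fixing it means fixing the data, not just the model.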

There are no overarching fixes to these problems, the two Google executives said. Wherever problems are found, Google works to iron out bias. But the scope of potential places where bias can enter systems — from the design of algorithms to their deployment to the societal context under which data is produced — means that there will always be problematic examples. The key is to be aware of this, to allow such tools to be scrutinized, and for diverse communities to be able to make their voices heard about the use of these technologies.

Luke Dormehl
Former Digital Trends Contributor
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…
Windows 11’s February 2025 update fixes annoying bugs

Microsoft's February 2025 cumulative update brings much-needed relief to Windows 11 users, fixing Auto HDR issues that caused game crashes, audio output disruptions, and USB webcam detection problems, as reported by Bleeping Computer. The patch, KB5051987 for Windows 11 24H2 and KB5051989 for 23H2, addresses these irritating bugs and is mandatory.

The update fixes the Auto HDR problem that distorted colors and caused game crashes, improving the gaming experience. It also fixes a bug that cut off audio output, which primarily affected users of digital-to-analog converters (DACs) but was not limited to them, and resolves a rare issue in which devices displayed a "This device cannot start" message.

Windows 10 KB5051974 update adds a new app without asking

Microsoft has released the KB5051974 cumulative update for versions 22H2 and 21H2, adding security fixes and patching a memory leak. However, as Bleeping Computer reports, the update also includes a surprise: the new Outlook for Windows app.

The update is mandatory because it includes the January 2025 Patch Tuesday security updates. Once you install it, you will notice the new app's icon next to the classic one in the Start Menu's apps section. The new app runs alongside the classic client, so it does not interfere with it.

Microsoft Supercharges AI to fix Windows software bugs

Microsoft is developing an AI system to make detecting and fixing software problems on your Windows 11 PC easier, MSPowerUser reports. The system analyzes error data to resolve issues efficiently, and Microsoft is also working on turning Copilot into a multi-user chat platform.

MSPowerUser recently came across a new 25-page patent document, published in February 2025, that describes how the system would work. According to the document, the AI system would detect issues and suggest or apply solutions, streamlining the troubleshooting process. Although the system is designed for developers, regular users would also benefit from automated fixes and smarter support. For more complex issues, the system can generate reports to help developers debug more efficiently.
