
Facebook increasingly using AI to scan for offensive content

“Content moderation” might sound like a commonplace academic or editorial task, but the reality is far darker. According to Wired, more than 100,000 people worldwide, many in the Philippines but also in the U.S., spend each workday scanning online content for obscene, hateful, threatening, abusive, or otherwise disturbing material. The toll this work takes on moderators has been likened to post-traumatic stress disorder, leaving emotional and psychological scars that can last a lifetime if untreated. That grim reality is seldom publicized amid the rush to keep everyone engaged online at all times.

Twitter, Facebook, Google, Netflix, YouTube, and other companies may not publicize content moderation issues, but that doesn’t mean they aren’t working to limit the harm done to the people who screen visual and written content. The objective isn’t only to control what appears on their sites, but also to build AI systems capable of taking over the most tawdry, harmful parts of the work. Facebook engineers recently stated that AI systems now flag more offensive content than humans do, according to TechCrunch.


Joaquin Candela, Facebook’s director of engineering for applied machine learning, spoke about the growing application of AI to many aspects of the social media giant’s business, including content moderation. “One thing that is interesting is that today we have more offensive photos being reported by AI algorithms than by people,” Candela said. “The higher we push that to 100 percent, the fewer offensive photos have actually been seen by a human.”
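The idea Candela describes, which is that pushing more decisions to the algorithm means fewer offensive images ever reach a human, can be sketched as a simple triage rule. This is a hypothetical illustration, not Facebook’s actual pipeline; the threshold values and function names are assumptions.

```python
# Hypothetical moderation triage: a classifier score decides whether an
# image is auto-flagged, sent to a human reviewer, or allowed. The more
# confidently the model flags content, the fewer offensive images a
# human moderator ever has to see.

AUTO_FLAG_THRESHOLD = 0.90  # assumed confidence cutoff for automatic removal
REVIEW_THRESHOLD = 0.50     # assumed cutoff below which content is allowed

def route(model_score: float) -> str:
    """Return the moderation decision for one image."""
    if model_score >= AUTO_FLAG_THRESHOLD:
        return "auto_flagged"   # removed without human exposure
    if model_score >= REVIEW_THRESHOLD:
        return "human_review"   # a person still has to look at it
    return "allowed"

# Raising the share of confident auto-flags shrinks the human_review queue.
queue = {"img_a": 0.97, "img_b": 0.72, "img_c": 0.10}
decisions = {img: route(score) for img, score in queue.items()}
```

In this sketch, improving the model so that more scores land above the auto-flag threshold is exactly what Candela means by pushing the AI-reported share toward 100 percent.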


The systems aren’t perfect, and mistakes happen. According to a 2015 Wired report, when Twitter tuned its AI system to catch pornography 99 percent of the time, 7 percent of the images it blocked were innocent, such as photos of half-naked babies or nursing mothers incorrectly screened as offensive. The systems will have to learn, and the answer will be deep learning: training the computers on massive amounts of data so they can refine their own analysis.
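The tradeoff Wired describes, where catching a higher share of genuinely offensive images also blocks more innocent ones, can be made concrete with a small calculation. The numbers below are invented for illustration; only the 99 percent / 7 percent figures come from the report.

```python
# Hypothetical illustration of the recall vs. false-positive tradeoff:
# lowering the blocking threshold catches more offensive images (recall)
# but also raises the share of blocked images that were innocent.

def blocked_stats(scores, threshold):
    """scores: list of (model_score, is_actually_offensive) pairs.
    Returns (recall, share of blocked images that were innocent)."""
    blocked = [(s, bad) for s, bad in scores if s >= threshold]
    if not blocked:
        return 0.0, 0.0
    total_bad = sum(1 for _, bad in scores if bad)
    caught = sum(1 for _, bad in blocked if bad)
    innocent = sum(1 for _, bad in blocked if not bad)
    return caught / total_bad, innocent / len(blocked)

# Made-up scores: True = actually offensive, False = innocent.
scores = [(0.95, True), (0.90, True), (0.85, False),
          (0.80, True), (0.60, False), (0.40, True)]

strict = blocked_stats(scores, 0.90)  # high threshold: fewer catches, fewer mistakes
loose = blocked_stats(scores, 0.70)   # low threshold: more catches, more mistakes
```

On this toy data, the strict threshold blocks no innocent images but misses half the offensive ones, while the loose threshold catches more at the cost of innocent blocks, which is the same pressure that produced Twitter’s 7 percent figure.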

Another Facebook engineer spoke of how Facebook and other companies are sharing what they’re learning with AI to cut offensive content. “We share our research openly,” Hussein Mehanna, Facebook’s director of core machine learning, told TechCrunch. “We don’t see AI as our secret weapon just to compete with other companies.”

Bruce Brown, Contributing Editor