
Social media platforms are finally acting like the mini-governments they are

For years, companies such as Facebook and Twitter have taken a somewhat laissez-faire approach to moderating what’s posted to their platforms. Even in the wake of the Cambridge Analytica scandal, which underlined the enormous influence these platforms can have on politics, economics, and online discourse, social networks have largely relied on piecemeal efforts to stop malicious groups from abusing their platforms.

In the absence of any overarching and regularly enforced moderation policy, and by actively avoiding the role of “arbiters of truth,” these platforms have been left to fester, with hate speech and misinformation running rampant.


But the tides are seemingly starting to turn. Recently, social networks have begun dropping the hammer and actively revising their policies on hate speech, fact-checking political leaders, and more. In other words, social media platforms are finally stepping up and governing themselves.

No longer turning a blind eye to political voices

Since May, Twitter, after years of letting President Donald Trump post unchecked, has flagged and restricted several of his tweets for reasons ranging from spreading misleading content to glorifying violence. Earlier this month, Facebook backtracked on its iron-willed refusal to take action on posts from political leaders and said the social network will now label such content no matter how newsworthy it is.

Reddit is turning up its community oversight efforts as well. As part of a major hate speech purge, the company took down the subreddit r/The_Donald, one of the platform’s largest and most controversial political communities, because it had devolved into a cesspool of harassment and hateful posts.

Similarly, Amazon’s streaming video platform, Twitch, temporarily suspended the Trump administration’s channel for violating its policies against hateful content. A few weeks ago, Snapchat announced it would no longer promote the president’s account. Most recently, YouTube banned a handful of prominent white supremacist channels over similar concerns.

Kicking off a new era of social media moderation

Considering this wave of self-moderation, it is safe to say that online platforms are, at long last, behaving as the mini-governments that they truly are.

One of the key factors behind this mass shift is Trump’s crackdown on Section 230, the provision of the Communications Decency Act that shields social media companies from liability for the content they host. The president’s executive order seeks to strip away those protections and bring more oversight to how giants like Twitter moderate their platforms.

That executive order, reportedly rushed out as retaliation against Twitter, has essentially backfired and proved that the president has little sway over social networks. Although the order changed nothing legally, the scrutiny it inspired has pushed many social media companies to scrub their platforms of hate speech and other objectionable content, and that is seemingly what has happened over the weeks since.

But that’s not all. The police killing of George Floyd has spurred a wave of protests across the nation and the activism hasn’t left tech companies unscathed. For the first time, Facebook employees publicly criticized the company, and a handful even quit. Reddit co-founder Alexis Ohanian stepped down from the company’s board and urged it to fill his seat with a Black candidate.

For Facebook, the tipping point was likely the advertiser exodus. In the last month, the social network has faced an organized boycott by some of its biggest advertisers, including Target, Microsoft, Starbucks, and Unilever, who cited the company’s tendency to let hate groups flourish. How deep a dent did they leave? After a two-day stock decline, Facebook lost nearly $60 billion in market value.

Then there’s the more political angle. With Trump’s position in the polls deteriorating, tech companies may feel safer taking action against him as his term nears its end. These are, lest we forget, the same platforms that refused to act on a tweet in which Trump threatened to nuke North Korea three years ago. By rolling out these long-overdue updates now, tech companies can also partially escape scrutiny should they land in a situation like the 2016 election later this year.

Will this politically motivated shift last?

While these efforts are steps in the right direction, they unfortunately highlight a worrying truth about online platforms: They’re still more reactive than proactive. The majority of these policy changes and updates only apply to areas that, at the moment, threaten online platforms’ positions.

For instance, a growing Reddit community of about 140,000 members documents all the ways the social network’s policies are being abused in areas outside the current spotlight. Facebook banned dozens of anti-government extremist pages, but a report by BuzzFeed News pointed out how the company has been making money from “boogaloo” accounts through ads.

Glaring issues such as harassment continue to plague social networks across the world, and on any given day, it’s not uncommon to see offensive hashtags trending. Twitter took down the initial tweets of the “Plandemic” video when it made headlines, but in the days that followed, the video continued to resurface, and the social network declined to take down the conspiracy accounts spreading it.

Online platforms are more intertwined with politics than ever. But while this latest round of updates is a welcome development, tech companies will have to bring in more systemic changes to stay ahead of the curve instead of simply catering to new controversies. Because in an increasingly online-first world, Big Tech can’t afford to lag behind malicious actors and trends.

Shubham Agarwal
Former Digital Trends Contributor
Shubham Agarwal is a freelance technology journalist from Ahmedabad, India. His work has previously appeared in Firstpost…