
Understanding the Human Element of Moderation

By identifying inaccuracies and flagging malicious or inappropriate posts for removal, moderators help users enjoy all the benefits of social media.

NOVEMBER 06, 2020

Social media is the new watering hole.

It’s where we go to socialize, connect, learn. It’s a feed of experiences, filling our devices and lives with new information. But with this digital reality comes tangible ramifications.

For example, the recent surge in misinformation on social media has real potential to directly impact important political events, like U.S. elections. Misinformation posts carry motives, intentions, and agendas, and they can be shared and forwarded by users who aren’t aware of the original intent of the piece. To minimize the negative effects of such content, every post needs a person to moderate it.

Content moderation is the connection between the digital and physical realms. As moderators manually review social media activities, product reviews, and other user-generated posts, they flag those that may have a negative impact. By identifying inaccuracies and flagging malicious or inappropriate posts for removal, these moderators help users enjoy all the benefits social media has to offer.

But content moderation isn’t a run-of-the-mill job. Manually reviewing pieces of content and reading through malicious and harmful posts is not only time-consuming but also psychologically draining.

With the profession defined by such unique challenges, online content moderation can’t be treated like a standard job. Moderators need advanced technology to support their efforts.

Let’s explore why.


Seeking: Social Media Saviors

The information age unloaded innovation onto society, upending the way we speak and interact with one another. In 2004 it gave us Facebook, with Twitter emerging only two years later. Suddenly, social media was in the spotlight.

Then came the posts, videos, GIFs – countless avenues for voices around the world to speak their piece. Twitter alone sees around 500 million new tweets each day, or roughly 6,000 tweets every second.

While this digital revolution led to unique storytelling, widespread knowledge-sharing, and strengthened connections, it also led to the spread of contentious and harmful messages, and even misinformation. In fact, the latter has grown exponentially in 2020 due to the coronavirus pandemic. A recent survey showed that 60% of 16- to 24-year-olds in the UK had recently used social media for information about the coronavirus, and 59% had come across fake news on the subject.

Today’s digital users must somehow distinguish between what is real and what is not, what is safe and what is not. It’s a challenge unique to the internet age, and yet an issue we’re all ill-equipped to deal with. It’s also far from the experience social media providers hope we have on their platforms.

This is precisely why content moderation is so vital.

A Modern-Day Public Service

As social media expanded, so did content moderation as a profession – with Facebook alone employing 15,000 content moderators to work on its platform. While the profession has become common, it’s far from traditional.

Tasked with protecting millions of users from malicious content and negative experiences every day, content moderators effectively perform a public service. Moderators are like firefighters or first responders, handed a set of responsibilities that come with their own fair share of threats.

In order to root out this negativity, moderators must look through a diverse spectrum of content. The good, the bad, and the ugly. The type of material – and sheer volume of it – is often overwhelming, which impacts moderator happiness and well-being.

Protecting Those Who Protect Us

Toxic content takes a toll. As moderators work to protect users from harm, organizations must look at ways to safeguard these employees’ well-being as well. That starts by looking beyond the traditional methods for content moderation.

First and foremost, bring in backup. Artificial intelligence (AI) is the helping hand content moderators need to address today’s complex digital landscape. AI-based platforms continuously analyze content and learn to recognize harmful patterns, making the moderation process faster and more accurate over time.

While AI is great for productivity and speed, that doesn’t mean content moderators aren’t integral to the process. The human touch is still required to stay ahead of today’s digital threats, as moderators bring empathy and situational judgment to the table.
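To make this balance concrete, here is a minimal, hypothetical sketch of how an AI classifier and human moderators might share the workload: the model automates the clear-cut cases and routes uncertain content to a human review queue. The function names, thresholds, and keyword heuristic below are illustrative assumptions, not a description of any specific platform.

```python
# Hypothetical human-in-the-loop moderation routing (illustrative sketch only).
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str


def classify_harm_probability(post: Post) -> float:
    """Stand-in for an AI model that scores content from 0.0 (benign) to 1.0 (harmful).

    A real system would use a trained classifier; this keyword check only
    keeps the example runnable.
    """
    flagged_terms = {"scam", "attack", "fake cure"}
    hits = sum(term in post.text.lower() for term in flagged_terms)
    return min(1.0, hits * 0.5)


# Illustrative thresholds: confident decisions are automated,
# ambiguous ones go to a human moderator.
AUTO_REMOVE_THRESHOLD = 0.95
AUTO_APPROVE_THRESHOLD = 0.05


def route_post(post: Post) -> str:
    score = classify_harm_probability(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"        # clearly harmful: removed without human exposure
    if score <= AUTO_APPROVE_THRESHOLD:
        return "auto_approve"       # clearly benign: published without review
    return "human_review_queue"     # uncertain: a moderator makes the final call


if __name__ == "__main__":
    # One flagged term gives a mid-range score, so the post goes to a human.
    print(route_post(Post("1", "This offer is definitely not a scam")))  # human_review_queue
```

The design choice this sketch illustrates is the one described above: automation absorbs volume and speed, while ambiguous, context-dependent judgments stay with people.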

This balance is at the crux of Sutherland’s Content Moderation solution, which blends the strengths of humans and AI to moderate content at scale, creating safe and trustworthy online environments for organizations and their communities.

Sutherland’s AI-enabled platform automates content screening, while its proprietary Happiness and Social Indices and dedicated psychologists protect moderators and monitor their well-being, empowering them to do their jobs safely and effectively. This approach shields moderators – or Guardians, as we call them – from danger as they work to do the same for society.

Our Wellness app also engages moderators through their preferred activities, supporting their well-being in virtual environments as well.

When content moderation is approached holistically, using the latest technology, moderator mental health remains a priority and the public digital experience remains intact. In this way, the social watering hole can create a positive experience for all.



Vikas Verma

VP & Global Head of Research

Vikas has years of experience in investment management, venture capital, private investing, special situations, and growth equities.

