The global crisis brought on by the COVID-19 pandemic, along with lockdown measures and social distancing norms, has intensified the need for social interaction and connection. Seeking that connection, the world has made an accelerated shift from offline to virtual social activities.
Online communities offer people a platform not just to connect and interact but also to seek information, knowledge and insights. The pandemic has elevated online communities into “must-have” powerhouses and go-to sources for intelligent conversation, deep insight and meaningful connection.
Among the must-haves for any successful online community, being engaging and relevant as well as safe and truthful are definitely at the top of the list.
Community engagement is strongly dependent on the credibility and quality of your Subject Matter Experts (SMEs) who pitch in with well-researched and high-quality information that takes the engagement a level up and inspires insightful and thought-provoking conversations.
Unfortunately, communities must contend with misinformed (and sometimes malicious) contributions too. Sadly, this behavior can take on a life of its own if not dealt with promptly. Worse still are small pockets of members who start mischief for mischief’s sake. Their behavior can be insidious, often starting out small but building momentum the longer it remains unchecked. This type of behavior spoils the experience for many members and discourages participation.
In today’s social environment, the healthy growth of online communities depends on effective content moderation (CoMo). And effective content moderation includes these key factors:
1. Weed out misbehavior. At its source. At once.
When misbehavior or misinformation pops up, there’s no time to lose. It needs to be dealt with immediately. Fortunately, there are proven technologies and techniques that can do the job. The Human + AI-powered CoMo model is a powerful combination that works particularly well.
The first step scans new content with artificial intelligence (AI), natural language processing (NLP) and natural language understanding (NLU). For more on these technologies, see Key 3.
The next step is to engage your SMEs to quash misinformation as soon as it crops up in their conversations. Your community trusts SME knowledge — and they’re your best defense.
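The two-step flow above can be sketched in code. This is a minimal illustration, not a real moderation API: the scoring function, thresholds and trigger terms are all placeholder assumptions standing in for a trained AI/NLP/NLU classifier.

```python
# Sketch of a two-step Human + AI CoMo flow: an automated scan scores
# new content, then routing either removes it, escalates it to the SME
# review queue, or publishes it. All names and thresholds are illustrative.
from dataclasses import dataclass, field

PROFANITY = {"badword1", "badword2"}  # placeholder trigger terms

@dataclass
class Post:
    author: str
    text: str

@dataclass
class ModerationQueues:
    auto_removed: list = field(default_factory=list)
    sme_review: list = field(default_factory=list)
    published: list = field(default_factory=list)

def risk_score(post: Post) -> float:
    """Step 1: automated scan. A real system would call an AI/NLP/NLU
    classifier; this toy version just matches words."""
    words = set(post.text.lower().split())
    if words & PROFANITY:
        return 1.0          # obvious violation
    if "cure" in words:
        return 0.4          # possible health misinformation (toy heuristic)
    return 0.0

def triage(post: Post, queues: ModerationQueues) -> None:
    """Step 2: route by score — remove at once, escalate to SMEs, or publish."""
    score = risk_score(post)
    if score >= 0.9:
        queues.auto_removed.append(post)   # clear violation: weed out at once
    elif score >= 0.3:
        queues.sme_review.append(post)     # possible misinformation: SMEs engage
    else:
        queues.published.append(post)
```

The key design point is the middle tier: only clear-cut violations are removed automatically, while anything ambiguous lands in a human (SME) queue rather than being silently blocked or silently published.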
2. Fight falsehoods with vetted, validated truth
When contributions are made that are either untrue or not entirely accurate, SMEs or content moderators armed with the right information need to engage immediately with vetted facts that are validated for the community. SMEs then need to stay constantly engaged to tactfully keep conversations and content truthful, useful and enjoyable — the best way to retain your members.
3. Deploy AI/NLP/NLU in concert with human moderators
When used in the pre-moderation stage, AI and NLP/NLU can be powerful tools to quickly identify hateful, false, abusive, profane and egregiously inappropriate content. These tools are trained to spot somewhat obvious noncompliant trigger words, phrases, sounds and images.
AI tech hits its red line when it must read text displayed over images, evaluate subtle contextual nuance, identify slang and vernacular speech, or uncover certain “dog whistle” signal phrases and images. This is when instinct, feel, nuance and finesse (and even knowledge of local vernacular phrases) are required.
To handle this type of content, human moderators shoulder the load. These experts are on guard around the clock to keep content safe, truthful, authentic and reliable for a voracious and vocal community audience.
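The AI-first, human-fallback routing described in this key can be illustrated with a small sketch. The patterns and field names below are made up for the example; a production filter would use trained models rather than two regular expressions.

```python
# Illustrative routing for pre-moderation: obvious trigger phrases are
# flagged automatically, while content the filter cannot parse (e.g.
# text baked into an image) or that only trips a slang heuristic is
# handed to human moderators. Patterns and labels are hypothetical.
import re

TRIGGER_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
    r"\bhate\s+speech\b",        # stand-in for an obvious noncompliant phrase
    r"\bgraphic\s+violence\b",
)]

def route(item: dict) -> str:
    """Return 'auto_flag', 'human_review', or 'pass' for a content item."""
    text = item.get("text")
    if text is None:
        # Image-only or text-over-image content: past the filter's
        # red line, so a human moderator shoulders the load.
        return "human_review"
    if any(p.search(text) for p in TRIGGER_PATTERNS):
        return "auto_flag"       # somewhat obvious trigger phrase
    if item.get("slang_suspected"):
        # Possible vernacular or "dog whistle" phrasing: needs a
        # moderator with local fluency and feel for nuance.
        return "human_review"
    return "pass"
```

Note the asymmetry: automation is trusted only for the obvious cases, and everything nuanced defaults to human review rather than to an automated guess.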
4. Speak in the voice of the local community
This may not be obvious at first, but it’s essential. SMEs who speak local languages and dialects whenever possible improve engagement and help avoid misunderstandings.
As an added defense, deploy content moderators who fluently speak local languages and dialects and understand regional idioms and vernacular. Their fluency helps to quickly identify locally posted undesirable content and comments that could slip by less-fluent moderators.
5. Take good care of your moderators
A typical moderator makes hundreds of difficult judgment calls and sifts through large amounts of harmful and egregious content every working shift, and no two calls are exactly alike. Working under pressure like this can cause some moderators to experience post-traumatic stress disorder, fatigue and burnout. That’s why maintaining moderators’ physical and mental well-being is a top priority for smart social brand communities.
“Social Media: Misinformation and Content Moderation Issues for Congress,” Congressional Research Service, January 27, 2021:
“For example, in the first quarter of 2020, AI technology flagged 99% of violent and graphic content and child nudity on Facebook for review before any user reported it. In contrast, Facebook’s AI technology identified only 16% of bullying and harassment content, suggesting content moderators are better able to identify this form of policy violation.”
“Misinformation online is bad in English. But it’s far worse in Spanish,” The Washington Post, October 28, 2021.
“Information Quality and Content Moderation,” Google/YouTube, 2020, pp. 16–18.