The internet has become a powerful conduit for spreading misinformation and disinformation, affecting critical areas like public health and elections. Its reach is vast, and no part of life is immune to its influence. The World Economic Forum predicts that this escalating threat could become the top global risk within two years.
As the United States approaches another critical election cycle, the issues of misinformation and disinformation are once again at the forefront of public discourse. These phenomena, which have been weaponized in past elections, pose significant threats to the integrity of democratic processes. Understanding their implications is crucial for voters, policymakers, and the media.
What Are Misinformation and Disinformation? Understanding the Difference
At first glance, misinformation and disinformation might seem like two sides of the same coin, but one distinction between them is crucial: intent.
- Misinformation: Inaccurate or misleading information that is spread unintentionally. Examples include sharing an unverified news story or a social media post based on faulty information, without any intent to deceive.
- Disinformation: Falsified or misleading information shared deliberately, with the intent to deceive its audience. Political propaganda and fabricated social media posts designed to stir outrage are prime examples.
What makes disinformation particularly dangerous is its ability to craft alternate realities. Unlike traditional propaganda, which relies on repetitive messaging, modern disinformation campaigns are sophisticated, using a blend of truth and lies to create narratives that feel believable. By exploiting existing social, political, or economic divisions, these campaigns tap into deep-seated emotions like fear, anger, or pride, making their false narratives more likely to take root and spread.
Disinformation – A Tool for Political Manipulation
Disinformation is frequently used as a calculated tactic by various actors—ranging from foreign governments to political groups and rogue organizations—to influence public opinion and destabilize democratic systems.
During election cycles, disinformation manifests in multiple ways:
- Voter suppression tactics: False claims about changes in voting procedures, such as polling locations or deadlines, aim to confuse voters, particularly in minority communities, resulting in lower turnout.
- Fabricated endorsements: Manipulated or completely fabricated endorsements from public figures or organizations are used to manipulate voter sentiment, leveraging people’s trust in familiar faces and names.
- Smear campaigns: Disinformation is also weaponized to damage candidates’ reputations through false allegations, aiming to shift public opinion and diminish an opponent’s standing.
These strategies collectively undermine the democratic process, creating an atmosphere of confusion and distrust.
The Impact of Misinformation and Disinformation on Previous US Elections
The consequences of misinformation and disinformation are far from hypothetical—they’re happening in real-time and affecting real lives.
- 2020 – False Claims About Mail-in Voting Procedures
In the 2020 election, misinformation about voting methods, especially mail-in ballots, spread fears that such ballots would be discarded or manipulated. These claims circulated widely on social media, leading many to believe in voter fraud despite evidence to the contrary.
Impact: This misinformation fueled mistrust in the election results, contributing to claims of a “stolen election” and culminating in the Capitol riot on January 6, 2021.
- 2020 – “SharpieGate” Conspiracy
In Arizona, disinformation spread alleging that poll workers were giving voters Sharpies to fill out their ballots, which would supposedly invalidate votes for Donald Trump due to the machines being unable to read them. This claim was thoroughly debunked as Arizona’s voting machines were able to process Sharpie-marked ballots without issue.
Impact: Despite being false, the “SharpieGate” conspiracy created confusion and distrust in the voting process, particularly in Maricopa County, a key battleground in Arizona.
- 2016 – Russian Interference via Social Media
Russian operatives used fake social media accounts to spread disinformation, promoting divisive content and conspiracy theories during the 2016 election. This effort was intended to amplify societal divisions and reduce confidence in the democratic process.
Impact: These disinformation campaigns reached millions of Americans and contributed to the polarized political environment.
- 2016 – “Pizzagate” Conspiracy
The “Pizzagate” conspiracy falsely claimed that a child trafficking ring involving political figures was operating out of a Washington, D.C., pizzeria. The baseless claim circulated on social media and led to real-world consequences.
Impact: A man, believing the disinformation, entered the pizzeria with a gun and fired shots. Though no one was harmed, the incident underscored how dangerous disinformation can be.
The Rise of Falsehoods
Misinformation and disinformation spread rapidly due to several key factors:
- Social Media Echo Chambers: Algorithms on social media reinforce users’ beliefs by showing similar content, creating echo chambers that increase susceptibility to misinformation.
- Emotional Triggers and Clickbait: Sensational headlines designed to provoke emotions like fear and anger are more likely to be shared, even when the information is false; strong emotional reactions short-circuit critical thinking.
- Reliance on Online Media: As trust in traditional media declines, unverified online sources gain more influence, increasing the spread of misinformation.
- Lack of Media Literacy: Without strong media literacy skills, people struggle to differentiate between fact and fiction, making it easier for misinformation to spread.
- Automation and GenAI: As if human-generated falsehoods weren’t enough, we now face the rise of AI-generated misinformation and disinformation. Automated bots and generative AI allow malicious actors to spread disinformation at scale, mimicking real users to make false narratives appear widely believed. Generative AI models, such as GPT-3, can produce convincingly human-like text in volume. In the wrong hands, this technology can flood social media with fake reviews, bogus news articles, and deceptive posts, all designed to manipulate public opinion or damage reputations.
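On the detection side, even simple behavioral signals can separate automated amplification from organic activity. The following is a toy sketch in Python, not a production detector: the thresholds, the function name, and the two heuristics (posting bursts and duplicated text) are illustrative assumptions, while real platforms rely on far richer signals and trained models.

```python
from collections import Counter

def looks_automated(posts, window_seconds=3600, max_posts=30, dup_ratio=0.5):
    """Toy heuristic: flag an account whose posts arrive in an unusually
    dense burst, or whose text is mostly duplicated.

    `posts` is a list of (timestamp_seconds, text) tuples. All thresholds
    are illustrative placeholders, not tuned production values.
    """
    if not posts:
        return False
    times = sorted(t for t, _ in posts)
    # Burst check: too many posts inside any sliding time window.
    start = 0
    for end in range(len(times)):
        while times[end] - times[start] > window_seconds:
            start += 1
        if end - start + 1 > max_posts:
            return True
    # Duplication check: the same text repeated across most posts.
    counts = Counter(text.strip().lower() for _, text in posts)
    most_repeated = counts.most_common(1)[0][1]
    return most_repeated / len(posts) >= dup_ratio
```

For example, an account posting unique messages every ten minutes passes, while one pushing the same slogan forty times in forty seconds is flagged.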
The Dilemma of Misinformation Management
Most companies are hesitant to assume the role of “arbiters of truth,” avoiding judgments on the accuracy of every piece of content. Instead, they focus on well-documented misinformation campaigns, such as those surrounding the Sandy Hook shooting, where false narratives are already established. This strategy allows them to address clear-cut cases of misinformation without wading into the murky waters of subjective truth, but it leaves significant gaps in their response capabilities.

Due to the sheer volume and rapid spread of content online, many emerging misinformation campaigns go unnoticed and unchecked. Without a comprehensive way to track all potential threats, these false narratives often only gain attention when they reach a critical mass and become high-profile issues. By then, the damage is done, making it difficult to contain their spread and mitigate their impact. This reactive approach underscores the need for more proactive solutions to combat misinformation before it spirals out of control.
Defending the Truth – with Precision
The fight against misinformation and disinformation isn’t just about fact-checking or debunking false claims; it’s about restoring trust in institutions, media, and each other. It’s about recognizing that in this silent war, the real casualty is our shared reality. At Sutherland, we partner with some of the most iconic brands in the world to help them master trust & safety. We understand the dangers of misinformation and disinformation, and we have leveraged our extensive expertise to develop solutions that combine the best of technology and human capabilities to safeguard online credibility.
Sutherland’s research-driven Trust & Safety practice, staffed by sector experts, is dedicated to mitigating the impact of misinformation and disinformation. Leveraging our proprietary AI-powered suite of tools, we accurately identify and flag suspicious content that diverges from verified facts.
Sutherland utilizes advanced bots that continuously crawl the internet to identify emerging trends and potential issues. These bots flag relevant information to agents working under our Intel Watch Desk, who then assess and report these findings to clients. This proactive monitoring allows both Sutherland and its clients to stay ahead of risks, ensuring that timely and informed actions are taken to address challenges effectively.
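In spirit, that flag-and-triage loop can be sketched in a few lines of Python. Everything below is illustrative: the watchlist phrases, field names, and report shape are invented for the example, and a production system would pair live crawlers with trained models rather than simple keyword matching.

```python
# Hypothetical watchlist of phrases tied to known false narratives.
WATCHLIST = {"ballots discarded", "machines can't read", "rigged"}

def flag_posts(posts, watchlist=WATCHLIST):
    """Return posts containing any watched phrase, for analyst triage.

    Each post is a dict with (assumed) "id" and "text" fields.
    """
    flagged = []
    for post in posts:
        text = post["text"].lower()
        hits = [term for term in watchlist if term in text]
        if hits:
            flagged.append({"id": post["id"], "matched": sorted(hits)})
    return flagged

def triage_report(flagged):
    """Summarize flagged items the way a watch-desk queue might."""
    return {"queued_for_review": len(flagged),
            "ids": [item["id"] for item in flagged]}
```

The design point is the hand-off: automation narrows a firehose of content down to a reviewable queue, and human analysts make the final call on each flagged item.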
This collaboration between humans and technology lets us deploy a multi-layered defense of digital environments, helping clients make strategic decisions and mitigate threats in real time.
This holistic strategy fortifies our clients’ digital ecosystems against the growing threat of false information. We take a comprehensive approach to information integrity, with initiatives ranging from digital literacy training for employees and end-users to rigorous fact-checking. Our solutions ensure that client content adheres to verified facts and standards, while identifying and addressing misinformation that misrepresents major events.
Keeping your corner of the internet clean and secure requires a robust strategy, deep trust & safety expertise, and state-of-the-art tools. Connect with Sutherland’s experts to learn how you can elevate and secure your digital spaces.