What is misinformation?

Misinformation is false or inaccurate information that is spread without the intent to deceive. The person sharing it believes the information to be true and does not intend to mislead others, often because they have not fact-checked the claim or considered the source.

The reasons for sharing misinformation range from a simple misunderstanding of a topic to confirmation bias, the tendency to favour information that confirms existing beliefs.

Examples of misinformation include sharing an incorrect fact on social media that the person believes to be true, or spreading a rumour based on an unreliable source without verifying its accuracy.

What is disinformation?

Disinformation is deliberately created and spread to deceive or mislead people. It is often used for malicious purposes like sowing discord, influencing public opinion or promoting a particular agenda. Disinformation can even be used for commercial purposes.

Examples include using fabricated sources, fake news articles and deepfakes to spread false information, as well as launching social media bot campaigns to promote a specific narrative.

Disinformation can be a serious problem because it can erode trust in institutions, spread hate speech and even influence elections.

Difference between misinformation and disinformation

The key difference between misinformation and disinformation revolves around intent. Misinformation is the spreading of false or inaccurate information without any intention to deceive. People who share misinformation believe the information to be true and are often unaware that it is incorrect. On the other hand, disinformation is deliberately crafted and disseminated with the intent to mislead or deceive.

For example, when someone deliberately creates a fake news story to discredit a political opponent, it is disinformation. However, if a person shares that story, not realising that it is false, it is misinformation.

If someone continues to share misinformation after learning that it is incorrect, that misinformation turns into disinformation, because the deception has become intentional.

Types of misinformation and disinformation

There are different types of misinformation and disinformation, with the underlying difference being whether the intent of the content is designed to deceive. Here are the main types:

  • Errors – Mistakes made by established news agencies in their reporting, typically unintentional and corrected when discovered, but can still contribute to the spread of misinformation
  • False connections – Headlines, captions or visuals that do not match the content they accompany, leading to misunderstandings
  • Misleading content – Information that is presented in a way that misleads the audience, such as presenting opinions or comments as factual information
  • False context – Factually accurate content that is presented with false contextual information, leading to misinterpretation
  • Imposter content – Content that impersonates genuine sources by using the branding or style of established agencies, with the aim of misleading
  • Fabricated content – Completely false information created with the intent to deceive, such as a fabricated news story about a celebrity scandal that never happened
  • Manipulated content – Genuine information or imagery that has been altered or distorted with the aim of misrepresentation, attracting attention or provoking emotional responses
  • Satire and parody – Humorous or exaggerated content that is not intended to be taken seriously but may fool some readers. Although the intent is not to deceive, satire and parody can still mislead those who are unaware of the context
  • Sponsored content – Advertising or public relations material that is created to look like a genuine news article or report but is actually promoting a product, service, or viewpoint
  • Propaganda – Content designed to influence attitudes, values and knowledge to benefit a specific agenda and manipulate public perception and opinion

What is a disinformation campaign?

A disinformation campaign is a deliberate and orchestrated effort to spread false or misleading information to a specific audience. The goal is to deceive, manipulate or influence the target group’s opinions, actions or decisions. This could involve sowing discord, undermining trust in institutions or promoting a particular political agenda.

Disinformation campaigns knowingly spread false or misleading information, such as fabricating entirely new information, twisting facts to create a false narrative or taking information out of context.

The techniques used in disinformation campaigns vary based on their specific contexts and goals. However, certain commonalities exist, such as emotional manipulation, where content is designed to trigger strong emotions like fear or anger to increase engagement and dissemination. Exploiting divisions is another prevalent strategy, in which discord is sown by intensifying existing societal or political divisions.

Common tactics used in disinformation campaigns include:

  • Fake news stories – Fabricated articles designed to mislead the public about events or situations
  • Deepfakes – Manipulated videos or audio recordings that make it appear as if someone said or did something they never did
  • Doctored images and videos – Images and videos altered to misrepresent reality
  • Social media bots and trolls – Automated accounts or individuals spreading false information and manipulating online conversations
  • Fabricated statistics and data – Falsified data or statistics used to support a particular narrative

Misinformation and social media

Misinformation – false or inaccurate information that is shared without the intent to deceive – thrives on social media for several reasons. The speed and reach of social media are unparalleled, allowing a single post to reach millions within hours. Social media algorithms exacerbate this by prioritising content that engages users, which often includes sensational or emotionally charged information.

Confirmation bias plays a significant role, as people are inclined to share and believe information that aligns with their pre-existing beliefs, further amplifying misinformation. Additionally, these platforms can create echo chambers where users are mostly exposed to information that reinforces their existing beliefs, reducing the likelihood of encountering fact-checks or opposing viewpoints. The lack of gatekeepers, such as editors and fact-checkers found in traditional media, means anyone can post anything, regardless of its accuracy.

Misinformation beyond social media

While social media is a significant vector for misinformation, it spreads through other channels as well. Messaging apps like WhatsApp and Telegram facilitate the rapid sharing of information within closed groups, where fact-checking is less likely.

Similarly, forwarded emails and text messages can quickly disseminate misinformation, especially among older generations less familiar with online verification.

Websites and blogs with hidden agendas or insufficient editorial oversight can also spread misinformation, often masquerading as legitimate news sources. Even traditional media outlets can sometimes propagate misinformation if they rely on unverified sources or sensationalise stories for ratings.

Why social media matters

Social media’s impact on the spread of misinformation is particularly significant due to its scale and reach. The potential audience for misinformation on these platforms is enormous, making it a powerful tool for influencing public opinion. Social media platforms encourage active participation, leading to the rapid amplification of misinformation through likes, shares and comments. Furthermore, the decentralised nature of social media makes it challenging to track and contain misinformation once it starts spreading.

How does disinformation spread and what can be done against it?

Disinformation is spread in a number of ways. These include:

  • Social engineering – Mischaracterising and manipulating events, incidents, issues and public discourse to sway public opinion towards a certain agenda
  • Inauthentic amplification – Using trolls, spam bots, fake accounts (sock puppets), paid accounts and sensational influencers to boost the visibility and reach of harmful content
  • Micro-targeting – Exploiting ad placement and engagement tools on social media to identify and engage specific audiences likely to share and further disseminate disinformation
  • Harassment and abuse – Using a mobilised audience, fake accounts and trolls to obscure, marginalise and drown out journalists, opposing views and transparent content

Together, these methods enable disinformation to spread widely and effectively.

Fighting disinformation requires a combination of technological solutions and human vigilance. Artificial intelligence and machine learning can be used to analyse vast amounts of information. By identifying patterns and flagging anomalies, these tools can help detect potential disinformation campaigns. Similarly, content verification can assess the authenticity of digital content using techniques like watermarking and attaching provenance information (tracking the origin and history of the content).
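As a loose illustration of the pattern-flagging idea described above (not a description of any specific product, model or vendor tooling), the following Python sketch flags near-identical messages posted by several distinct accounts within a short time window, one simple heuristic for spotting inauthentic amplification. The account IDs, messages, window and threshold are all hypothetical.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post records: (account_id, text, timestamp)
posts = [
    ("acct_101", "Breaking: officials admit the report was faked!", datetime(2024, 5, 1, 9, 0)),
    ("acct_102", "Breaking: officials admit the report was faked!", datetime(2024, 5, 1, 9, 2)),
    ("acct_103", "Breaking: officials admit the report was faked!", datetime(2024, 5, 1, 9, 3)),
    ("acct_104", "Lovely weather in the park today.", datetime(2024, 5, 1, 9, 5)),
]

def normalise(text: str) -> str:
    """Crude normalisation so near-identical posts group under the same key."""
    return " ".join(text.lower().split())

def flag_coordinated_posts(posts, window=timedelta(minutes=10), min_accounts=3):
    """Flag messages posted by many distinct accounts within a short window,
    a simple signal of possible inauthentic amplification."""
    buckets = defaultdict(list)
    for account, text, ts in posts:
        buckets[normalise(text)].append((account, ts))

    flagged = []
    for text, events in buckets.items():
        events.sort(key=lambda e: e[1])
        accounts = {account for account, _ in events}
        span = events[-1][1] - events[0][1]
        if len(accounts) >= min_accounts and span <= window:
            flagged.append((text, sorted(accounts)))
    return flagged

for text, accounts in flag_coordinated_posts(posts):
    print(f"Possible coordinated amplification: {text!r} by {accounts}")
```

In practice, detection systems combine many such signals (posting cadence, account age, network structure, content similarity) and feed them into machine learning models rather than relying on a single rule.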

Educating people about the characteristics of disinformation, how to identify false information and the importance of developing critical thinking skills is crucial. Encouraging users to verify information before sharing and promoting responsible online behaviour can significantly impact the spread of disinformation. A more informed and sceptical user base can create a more resilient online community less susceptible to manipulation.

Misinformation and disinformation in threat intelligence

In threat intelligence, both misinformation and disinformation pose significant challenges. Though unintentional, misinformation can still hinder threat intelligence analysts. Like disinformation, it creates uncertainty and makes it harder to distinguish genuine threats from false information. Both consume analyst time and resources on investigating false leads – time that could otherwise be spent identifying real threats.

Disinformation campaigns often use misinformation as a starting point, leveraging the confusion it causes to achieve their goals.

Why threat intelligence needs to monitor misinformation and disinformation

An abundance of conflicting information can make it difficult for analysts to produce the actionable threat intelligence needed to make informed decisions regarding potential threats. Threat intelligence analysts must be adept at identifying and filtering out misinformation and disinformation campaigns. This requires careful source verification, cross-referencing information with reliable sources, and analysing the intent behind the information.
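As a minimal, hypothetical sketch of the cross-referencing step, the snippet below weights each source's support or contradiction of a claim by an assigned reliability score and returns an overall corroboration value. The source names, reliability weights and scoring formula are illustrative assumptions, not an established methodology.

```python
# Hypothetical reliability weights for sources reporting on a claim.
SOURCE_RELIABILITY = {
    "official_advisory": 0.9,
    "established_news": 0.7,
    "anonymous_forum": 0.2,
}

def corroboration_score(reports: list[tuple[str, bool]]) -> float:
    """Weight each source's support (True) or contradiction (False) of a claim
    by its reliability, returning a score between -1 and 1."""
    total = 0.0
    weight_sum = 0.0
    for source, supports in reports:
        weight = SOURCE_RELIABILITY.get(source, 0.1)  # treat unknown sources as low reliability
        total += weight if supports else -weight
        weight_sum += weight
    return total / weight_sum if weight_sum else 0.0

reports = [
    ("official_advisory", True),
    ("established_news", True),
    ("anonymous_forum", False),
]
print(f"Corroboration score: {corroboration_score(reports):.2f}")  # positive = mostly corroborated
```

A low or negative score would prompt an analyst to treat the claim as unverified and to examine the intent behind the sources pushing it.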

Additionally, effective threat intelligence goes beyond identifying specific threats. It requires understanding the broader context of a conflict, including the potential use of disinformation as a weapon. This allows for a more informed response to the overall situation.

How Silobreaker can help identify misinformation campaigns

To counteract misinformation and disinformation, intelligence teams need to employ source diversification – collecting information that is as comprehensive and representative as possible. But manually sifting through and identifying misinformation and disinformation across a vast array of sources is a time-intensive and error-prone process.

Silobreaker streamlines and processes large volumes of data at scale, with a central search facility that links entities and connects relationships at speed. This improves the accuracy of intelligence and reduces false positives.

It transforms raw, unstructured data into timely, actionable insights by breaking down silos across cyber, physical security and geopolitical threats and risks – enabling a comprehensive view of information landscapes. Silobreaker automates the collection, processing, aggregation, analysis and dissemination of intelligence in a single unified platform, bringing together unstructured and structured data from millions of sources – including open, dark web and finished intelligence – based on priority intelligence requirements (PIRs), to deliver a relevant, single source of truth.

Learn more about how Silobreaker can help you tackle misinformation and disinformation in your threat intelligence here.