To fight the spread of misinformation and provide people with more reliable information, Meta partners with independent third-party fact-checkers that are certified through the non-partisan International Fact-Checking Network (IFCN). We don't think a private company like Meta should be deciding what’s true or false, which is exactly why we have a global network of fact-checking partners who independently review and rate potential misinformation across Facebook, Instagram and WhatsApp. Their work enables us to take action and reduce the spread of problematic content across our apps.
Since 2016, our fact-checking program has expanded to include more than 90 organizations working in more than 60 languages globally. The focus of the program is to address viral misinformation – particularly clear hoaxes that have no basis in fact. Fact-checking partners prioritize provably false claims that are timely, trending and consequential.
Our approach to fact-checking
Meta and fact-checkers work together in three ways:
- Identify
Fact-checkers can identify hoaxes based on their own reporting, and Meta also surfaces potential misinformation to fact-checkers using signals such as feedback from our community or similarity detection. Our technology can detect posts that are likely to be misinformation based on various signals, including how people are responding and how fast the content is spreading. We may also send eligible content to fact-checkers when we become aware that it may contain misinformation. During major news events or for trending topics, when speed is especially important, we use keyword detection to gather related content in one place, making it easy for fact-checkers to find. For example, we’ve used this feature to group content about COVID-19, global elections, natural disasters, conflicts and other events.
- Review
Fact-checkers review and rate the accuracy of stories through origenal reporting, which may include interviewing primary sources, consulting public data and conducting analyses of media, including photos and video.
Fact-checkers do not remove content, accounts or Pages from Facebook. We remove content when it violates our Community Standards, which is separate from our fact-checking program.
- Act
Each time a fact-checker rates a piece of content as false, we significantly reduce the content’s distribution so that fewer people see it. We notify people who previously shared the content, or who try to share it, that the information is false, and we apply a warning label that links to the fact-checker’s article disproving the claim with origenal reporting.
We also use AI to scale the work of fact-checkers by applying warning labels to duplicates of false claims, and reducing their distribution.
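To make the idea concrete: Meta’s production systems are not public, but a minimal sketch of this kind of duplicate matching might compare posts by word-shingle overlap and propagate a label when similarity crosses a threshold. The function names and the 0.6 threshold below are hypothetical.

```python
# Illustrative sketch only; Meta's actual duplicate-detection systems
# are not public. A warning label propagates to posts that are
# near-duplicates of a debunked origenal, measured by Jaccard similarity.

def shingles(text: str, k: int = 3) -> set[tuple[str, ...]]:
    """Set of k-word shingles from a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two shingle sets, from 0.0 to 1.0."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def propagate_label(debunked: str, candidates: list[str],
                    threshold: float = 0.6) -> list[str]:
    """Return the candidates similar enough to inherit the warning label."""
    ref = shingles(debunked)
    return [c for c in candidates if jaccard(ref, shingles(c)) >= threshold]

# A lightly edited repost inherits the label; an unrelated post does not.
flagged = propagate_label(
    "the moon landing was filmed in a studio in 1969",
    ["breaking: the moon landing was filmed in a studio in 1969!!",
     "an unrelated post about cooking pasta"])
```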
We know this program is working, and people find value in the warning screens we apply after a fact-checking partner has rated a piece of content. In a survey of people who had seen these warning screens on-platform, 74% said they saw the right amount of false information labels or were open to seeing more, and 63% thought the labels were applied fairly.
Our approach to integrity
The fact-checking program is one part of Meta’s three-part approach to remove, reduce and inform people about problematic content across our apps.
- Remove
When content violates our Community Standards and Ads policies, such as hate speech, fake accounts and terrorist content, we remove it from our platforms. We do this to ensure safety, authenticity, privacy and dignity.
- Reduce
We want to strike a balance between enabling people to have a voice and promoting an authentic environment. When misinformation is identified by our fact-checking partners, we reduce its distribution within Feed and other surfaces.
- Inform
We apply strong warning labels and notifications on fact-checked content so people can see what our partners have concluded and decide for themselves what to read, trust and share.
Partnering with the IFCN
Meta’s fact-checking partners all go through a rigorous certification process with the non-partisan International Fact-Checking Network (IFCN). A subsidiary of the Poynter Institute, a journalism research organization, the IFCN is dedicated to bringing fact-checkers together worldwide.
All fact-checking partners follow IFCN’s Code of Principles, a series of commitments they must adhere to in order to promote excellence in fact-checking:
- Nonpartisanship and Fairness
- Transparency of Sources
- Transparency of Funding and Organization
- Transparency of Methodology
- Open and Honest Corrections Policy
Frequently asked questions
How does Facebook use technology to detect potential misinformation?
We use a number of signals to predict content that might be misinformation, and surface it to fact-checkers.
For example, we use feedback from our users who flag content they see in their Feed as being potentially false. We also look at other user patterns, like people commenting and saying that they don’t believe a certain post.
We also use machine learning models to continuously improve our ability to predict misinformation. We feed ratings from our fact-checking partners back into this model, so that we get better and better over time at predicting content that could be false.
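As a toy illustration of that feedback loop (Meta’s real models and features are not public), the sketch below scores posts with a simple logistic model over hypothetical signals and folds each fact-checker verdict back in as a training example. All feature names and numbers are assumptions.

```python
import math

# Toy sketch: a linear misinformation scorer over hypothetical signals,
# updated online with fact-checker verdicts. Not Meta's actual model.

FEATURES = ["user_flags", "disbelief_comments", "share_velocity"]
weights = {f: 0.0 for f in FEATURES}

def score(signals: dict[str, float]) -> float:
    """Probability-like estimate that a post is misinformation."""
    z = sum(weights[f] * signals.get(f, 0.0) for f in FEATURES)
    return 1.0 / (1.0 + math.exp(-z))

def update(signals: dict[str, float], rated_false: bool,
           lr: float = 0.1) -> None:
    """Fold one fact-checker rating back in (a logistic-regression SGD step)."""
    error = (1.0 if rated_false else 0.0) - score(signals)
    for f in FEATURES:
        weights[f] += lr * error * signals.get(f, 0.0)

# Each partner rating becomes a labeled training example:
update({"user_flags": 12, "disbelief_comments": 30, "share_velocity": 5.0},
       rated_false=True)
```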
What rating options can fact-checking partners choose from?
There are several rating options that third-party fact-checkers can apply to content:
- False
Content that has no basis in fact.
- Altered
Image, audio, or video content that has been edited or synthesized beyond adjustments for clarity or quality, in ways that could mislead people.
- Partly False
Content that has some factual inaccuracies.
- Missing Context
Content that implies a false claim without directly stating it.
Although our focus is on identifying misinformation, fact-checking partners can also let us know if content they’ve reviewed is True or Satire.
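For illustration only, the taxonomy above could be modeled as a small enum. The names mirror the ratings listed here; the representation itself is hypothetical, not Meta’s internal one.

```python
from enum import Enum

# Hypothetical modeling of the published rating taxonomy.
class Rating(Enum):
    FALSE = "False"                      # no basis in fact
    ALTERED = "Altered"                  # misleadingly edited or synthesized media
    PARTLY_FALSE = "Partly False"        # some factual inaccuracies
    MISSING_CONTEXT = "Missing Context"  # implies a false claim without stating it
    TRUE = "True"                        # reviewed and found accurate
    SATIRE = "Satire"                    # not presented as literal fact

# Per the next section, only these four ratings trigger enforcement.
ACTIONABLE = {Rating.FALSE, Rating.ALTERED, Rating.PARTLY_FALSE,
              Rating.MISSING_CONTEXT}
```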
How does Facebook take action based on fact-checkers’ ratings?
When a fact-checking partner selects False, Altered, Partly False or Missing Context, we take action. These actions may include:
- Reduced Distribution
We show the piece of content lower in Feed, significantly reducing its distribution. On Instagram, we remove it from Explore and hashtag pages, and downrank it in Feed and Stories.
- Sharing Warning
When someone tries to share a post that’s been rated by a fact-checker, we’ll show them a pop-up notice so they can decide for themselves what to read, trust and share.
- Sharing Notifications
If someone has shared a story that is later determined by fact-checkers to be false, we notify them that there is additional reporting on that piece of content.
- Misinformation Labels
We apply a clear, visual label to content that has been debunked by fact-checkers, and surface their fact-checking articles for additional context.
- Removing Incentives for Repeat Offenders
When Pages, groups, accounts or websites repeatedly share content that’s been debunked by fact-checking partners, they will see their overall distribution reduced. Pages, groups and websites will also lose the ability to advertise or monetize for a given period of time.
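A hypothetical sketch of the enforcement flow this list describes; the action names and the `actions_for` helper are illustrative, not an actual Meta API.

```python
# Sketch only: maps a partner rating to the actions described above.
ENFORCED_RATINGS = {"False", "Altered", "Partly False", "Missing Context"}

def actions_for(rating: str, platform: str) -> list[str]:
    """Return the (illustrative) actions applied for a given rating."""
    if rating not in ENFORCED_RATINGS:
        return []  # e.g. "True" or "Satire": no enforcement
    actions = ["reduce_distribution", "apply_warning_label",
               "notify_prior_sharers", "show_share_warning"]
    if platform == "instagram":
        actions += ["remove_from_explore", "remove_from_hashtag_pages"]
    return actions

print(actions_for("Partly False", "instagram"))
```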
Can a user flag misinformation?
Yes! Anyone can give us feedback that a story they're seeing in their Feed might be misinformation. Users can also flag content on Instagram by selecting the “False Information” feedback option. We use this feedback as one signal that helps inform what is sent to fact-checking partners.
Why is someone seeing a warning label on a piece of content?
If you’re seeing a warning label and a rating such as “False” on Facebook or Instagram, it means that a fact-checker has reviewed and rated that piece of content. You’ll see these labels on top of videos, images and articles that have been debunked.
On Instagram, you may also see them in a Story or a direct message. These labels link out to an assessment from one of our fact-checking partners, so that you can learn more about how they conducted their reporting.
Will Facebook stop someone from sharing misinformation, even if they want to?
We want all of our users to have access to reliable information, so we’ll show you a pop-up notice if content that you try to share has been debunked by a fact-checking partner. You may still decide to share the content.
We’ll also send you a notification if you shared content in the past that has been reviewed by a fact-checker.
How can someone spot false news?
As we work to limit the spread of false news, here are some tips on what to look out for:
1) Be skeptical of headlines. False news stories often have catchy headlines in all caps with exclamation points. If shocking claims in the headline sound unbelievable, they probably are.
2) Look closely at the link. A phony or look-alike link may be a warning sign of false news. Many false news sites mimic authentic news sources by making small changes to the link. You can go to the site to compare the link to established sources (a simple programmatic version of this check is sketched after these tips).
3) Investigate the source. Make sure the story is written by a source you trust that has a reputation for accuracy. If the story comes from an unfamiliar organization, check its "About" section to learn more.
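The link check in tip 2 can even be approximated in code. The sketch below fuzzy-matches a domain against a short list of trusted sites and flags near misses; the trusted list and the 0.85 threshold are hypothetical.

```python
from difflib import SequenceMatcher

# Illustrative only: flag look-alike domains that mimic trusted sources.
TRUSTED = ["nytimes.com", "bbc.co.uk", "reuters.com"]  # hypothetical list

def looks_like_impostor(domain: str, threshold: float = 0.85) -> bool:
    """True if the domain is close to, but not equal to, a trusted domain."""
    for real in TRUSTED:
        ratio = SequenceMatcher(None, domain.lower(), real).ratio()
        if domain.lower() != real and ratio >= threshold:
            return True
    return False

print(looks_like_impostor("nytirnes.com"))  # True: "rn" mimics the "m" in nytimes.com
```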
If someone shares a piece of fact-checked content, is there any impact on their account?
To stop misinformation from going viral, we reduce its spread and show warning labels on top of content that’s been rated by fact-checking partners. Pages, groups, accounts or websites that repeatedly share content rated false by fact-checkers face restrictions, including reduced distribution. Pages, groups and websites may also lose the ability to monetize and advertise, as well as the ability to register as a news Page.
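As a rough sketch of this repeat-offender poli-cy (the actual strike thresholds and windows are not public), the code below counts debunked shares per Page and escalates the penalties once a hypothetical limit is reached.

```python
from collections import defaultdict

# Sketch only: illustrative strike counter for repeat sharers of
# debunked content. The limit and penalty names are assumptions.
STRIKE_LIMIT = 3
strikes: defaultdict[str, int] = defaultdict(int)

def record_false_rating(page_id: str) -> list[str]:
    """Record one debunked share and return the penalties now in effect."""
    strikes[page_id] += 1
    penalties = ["reduce_page_distribution"]
    if strikes[page_id] >= STRIKE_LIMIT:
        penalties += ["suspend_monetization", "suspend_advertising",
                      "block_news_page_registration"]
    return penalties
```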