On 7 January 2025, Mark Zuckerberg, CEO of Meta (Facebook, Instagram and WhatsApp), announced that Meta was ending its third-party fact-checking programme, which the company had introduced in 2016 to counter the spread of misinformation across its social media platforms. In his announcement, Zuckerberg said that concerns around ‘too much censorship’ had prompted the change and that Meta wanted to get ‘back to its roots around free expression’.
Following in the footsteps of social media platform X (formerly Twitter), Meta will now rely on Community Notes, a crowd-sourcing system in which individual users voluntarily submit notes on posts, including flagging them as false or misleading. Set up by Twitter in 2021, Community Notes was originally intended to complement, not replace, professional moderation.
So far, the removal of the fact-checking programme applies only to Meta in the US. However, an open letter to Meta from fact-checkers in over 100 countries (including Ireland’s FactCheck by the Journal.ie) stated that the decision ‘threatened to undo nearly a decade of progress in promoting accurate information online’, and noted that the programme ‘required all fact-checking partners to meet strict non-partisanship standards through verification by the International Fact-Checking Network. This meant no affiliations with political parties or candidates, no policy advocacy, and an unwavering commitment to objectivity and transparency’. The letter calls for further funding for public service journalism and states that ‘fact-checking is essential to maintaining shared realities and evidence-based discussion, both in the United States and globally’.
According to the World Economic Forum’s Global Risks Report 2024, misinformation and disinformation are the most severe short-term risks the world faces. The report brought together insights from 1,490 experts across academia, business, government, the international community and civil society, and found that over the next two years misinformation and disinformation will present one of the biggest challenges yet to the democratic process. This risk is magnified by the widespread adoption of generative AI to produce ‘synthetic content’, ranging from deepfake videos and voice cloning to counterfeit websites.
The ever-evolving nature of digital technology, combined with policy and legislative changes, underlines the importance of digital media literacy: individuals need the skills and knowledge to find, interrogate and evaluate information in order to make informed choices about the content they consume, create and circulate. For tips on how to STOP, THINK and CHECK that the information you read, see or hear is accurate and reliable, visit the Be Media Smart website. The website also features an Ask an Expert section, where common questions about the verification of information are answered by the Be Media Smart experts.