It’s almost three years since “fake news” became a major talking point, and the problem seems more intractable than ever. The major online platforms are awash with conspiracy theories, disinformation, and extremist propaganda. The alarming implications are evident across the world. Disinformation about vaccines is linked to a global rise in preventable diseases: Ireland, for example, saw a threefold increase in measles cases between 2017 and 2018 (Pollak 2019). Just as troublingly, there is a global spike in far-right extremists pushing disinformation about minority groups (Newton 2018). Of course, such content has always had a home online, but it was previously confined to fringe spaces where most people were unlikely to encounter it.
A major part of the problem is that social platforms amplify extreme content. This is an unintended consequence of algorithms that are designed to maintain our attention. Writing in The New York Times last year, Zeynep Tufekci noted that YouTube’s recommendation algorithm “seems to have concluded that people are drawn to content that is more extreme than what they started with”. So when someone seeks out a video about the flu vaccine, the recommendations will include anti-vaccination conspiracy theories.
The major social-media platforms have made some moves to address this issue. Facebook and Twitter both promised to reduce the prominence of anti-vaccine content. Political disinformation is more challenging and controversial. In the wake of terror attacks by white supremacists, New Zealand Prime Minister Jacinda Ardern convened a summit at which five major tech companies pledged to tackle extremist material. Unfortunately, we have seen such non-binding pledges before. Last September, the same companies committed to an EU code of practice for transparency in political advertising. However, the actions taken under this self-regulatory standard have so far been uninspiring (Mozilla 2019).
While there is a growing case for regulating the platforms, it must be acknowledged that content moderation is technically challenging (Sakuma 2019). Moreover, the motivations behind the promotion of extreme content extend far beyond technology to include issues of politics, social inequality, and education. This calls for multiple, overlapping countermeasures that are not confined to regulation.
The online practices of ordinary citizens are a crucial and somewhat overlooked part of the problem. After all, disinformation and extreme content only have an impact when ordinary citizens are willing to believe them and promote them among their peers. This year, the Digital News Report asked a series of questions about information quality and how people react in response to concerns about accuracy. Some 61 per cent of Irish online news consumers are concerned about the reliability of online information. Yet there is a gap between those who express concern and those who take action to address it.
To evaluate the reliability of a news story, 42 per cent of respondents compared multiple sources. This is a simple strategy, which is widely recommended by media literacy experts. On social media, it is especially important to check sources because, as previous Digital News Reports have shown, people often do not notice the source when a news item appears in their feed. Consequently, it is relatively easy for people to be duped by false articles that mimic the look of established news media or by the re-sharing of old articles.
Some people have begun to pay more attention to the reliability or reputation of sources: 22 per cent stopped using certain news sources due to concerns about accuracy and 32 per cent have become more reliant on sources that are considered reputable. Of course, what is considered reputable is somewhat subjective, but these actions indicate an awareness of basic media literacy principles.
Survey responses regarding the role of peers are especially interesting. Social platforms rely on information-sharing among peers, which creates a highly personal environment for evaluating information. That is, we are more likely to trust information we receive from peers and less likely to directly confront peers about the information they share. To stop the spread of disinformation, these social dynamics need to shift. It is promising to find that 26 per cent of Irish respondents declined to share a news story because they doubted its accuracy and 25 per cent ignored news when they were unsure about the trustworthiness of the person who shared it.
Only a quarter of respondents currently take such actions, but there is reason to hope for evidence of increased media literacy in the coming years. Last March, Media Literacy Ireland launched the Be Media Smart campaign to encourage people to check the reliability of information. Such campaigns are an important part of promoting the skills and competencies needed to make informed decisions about media content. The DCU FuJo Institute is leading a major research project (called Provenance) to address this issue through automated content verification and a ‘verification indicator’ for social media users. The goal of the project is to intervene in the attention cycle of social media by encouraging people to ask “is this legitimate?” or “where is this coming from?” before they hit like, share, or retweet.
Digital News Report (Ireland) 2019
On June 12, the DCU FuJo Institute and the Broadcasting Authority of Ireland launched the annual Digital News Report (Ireland) 2019. The full report is available on the DCU FuJo website.