[Proposal] Fighting Fake News with Reason

Introduction

The spread of fake and misleading information has increased in recent years (Di Domenico et al., 2021) and has had significant consequences for both individual decision making and public policy (Naeem et al., 2020).

Since politically driven motivated reasoning is common (Thaler, 2020) and influences how news is perceived (Tsang, 2020; Vegetti & Mancosu, 2020; Kudrnac, 2020), teaching cognitive skills to fight misinformation seems an important, long-run objective (Allchin, 2018). To date, such programs have had mixed success (Badrinathan, 2020), with studies documenting substantial heterogeneity in how people of opposing beliefs choose to consume and analyse information.

Proposal

In my 2017 field experiment, I studied whether teaching students techniques for building and refuting arguments, and for supporting their views with evidence, helps them detect fake news. While the treatment improved argumentative reasoning by 0.1 standard deviations on average, there was substantial heterogeneity (about 0.2σ) in students’ performance on a fake news test depending on their political ideology. That experiment was not powered to study the mechanisms underlying this heterogeneity, so I propose this online experiment to do so.

I aim to focus on two aspects of political news consumption:

  • Do people pre-emptively choose to read different articles when they know they will be asked to fact-check and critically analyse the news?
  • How does partisanship affect news scrutiny?

In line with past research, I hypothesize:

H1a: Subjects prefer to read articles that are aligned with their own political views.

H1b: Subjects prefer to fact-check articles that are opposed to their own political views.

H2: Subjects apply less critical scrutiny to articles aligned with their own political views.

I propose three mechanisms for H2:

  • selective (in)attention
  • a desire to protect one’s own views and avoid facing counter-arguments
  • insufficient cognitive effort in the face of ambiguous evidence

Experimental Design

The experiment will consist of three parts:

First, the subjects will complete a questionnaire about their demographics and political opinions. (The political-opinion questions will be taken from the European and World Values Surveys.) They will also be presented with two article headlines, one liberal-leaning and one conservative-leaning, and asked to choose which article they would prefer to answer questions about later. One group (“fact-checking”) will be told that they will be asked to analyse and fact-check the chosen article, whereas the other group (“reading”) will not be told what type of questions they will be asked. The subjects’ choice will be implemented with 70% probability. Taken together, this allows for a test of H1, namely whether subjects tend to “read” a different type of article than they choose to “fact-check”. I will also ask the subjects why they chose one article over the other.
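To make the 70-30 implementation concrete, below is a minimal sketch of the assignment logic in Python; the function and variable names are illustrative, not part of any existing experimental software.

    import random

    def assign_article(preferred: str, other: str, p_implement: float = 0.7) -> str:
        # With probability p_implement, the subject's stated preference is
        # implemented; otherwise they receive the article they did not choose.
        return preferred if random.random() < p_implement else other

    # Example: a subject who preferred the liberal-leaning headline.
    article = assign_article("liberal", "conservative")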

Second, the subjects will read a short lecture about how statistics can be presented in misleading ways. To verify that they have understood the materials and can apply these skills, they will fact-check and analyse a politically “neutral” article.

Finally, the subjects will be presented with either a liberal- or conservative-leaning article (depending on their choice and the randomization), and will again be asked to perform a fact-check and analysis.

In parts 2 and 3, as the subjects fact-check and analyse the articles, the experimental interface will track where and how many times they click to get more information about the article (e.g., whether they check its source), and how much time they spend on the analysis. Both tests will be incentivized such that correct answers yield monetary bonuses.

By comparing the subjects’ test results from parts 2 and 3 (within subjects) and from part 3 alone (between subjects), I will be able to test H2. Moreover, the subjects’ clicking behavior and answers to the test questions will provide information about the underlying mechanisms.
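As a concrete illustration, a minimal analysis sketch in Python follows; the file name and column names (subject, part, aligned, score) are hypothetical placeholders for the data the interface will record.

    import pandas as pd
    from scipy import stats

    df = pd.read_csv("scores.csv")  # columns: subject, part, aligned, score

    # Within subjects: part 2 (neutral article) vs part 3 (partisan article).
    wide = df.pivot(index="subject", columns="part", values="score")
    t_within = stats.ttest_rel(wide[2], wide[3])

    # Between subjects in part 3: politically aligned vs opposed articles.
    part3 = df[df["part"] == 3]
    t_between = stats.ttest_ind(part3.loc[part3["aligned"] == 1, "score"],
                                part3.loc[part3["aligned"] == 0, "score"])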

I will also incorporate an attention check and an experimenter demand check.

Sample Size & Cost

To have sufficient power to evaluate H1 and H2 (between subjects), a t-test with alpha = 0.05, power of 80%, and an effect size of 0.2 (in line with the heterogeneity in my 2017 field experiment) requires approximately 400 subjects per cell. The 70-30 randomization of article assignment guarantees that even if all subjects in both conditions (“reading” and “fact-checking”) prefer the same article, at least 30% of subjects will analyse an article they did not prefer. Since 400 subjects must therefore fit within the smallest possible cell (30% of the sample), the total required is approximately 400 / 0.3 ≈ 1340 subjects.
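These figures can be reproduced with a standard power calculation; the sketch below uses statsmodels, which is my choice of tool rather than one specified in the proposal.

    from math import ceil
    from statsmodels.stats.power import TTestIndPower

    # Two-sided two-sample t-test: d = 0.2, alpha = 0.05, power = 0.8.
    n_per_cell = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05,
                                             power=0.8)
    print(ceil(n_per_cell))    # 394, rounded up to 400 per cell

    # 400 subjects must fit in the smallest cell (30% of the sample).
    print(ceil(400 / 0.3))     # 1334, rounded to ~1340 in total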

I aim to pilot the experiment with 60 subjects in order to check that subjects understand the tasks and that the software correctly monitors their behavior. If no problems are found, I will merge these 60 subjects into the main analysis pool.

With 1400 subjects in total and a 5 GBP payment for a 45-minute experiment, the Prolific estimate comes to 9333 GBP. I will use the remainder of the grant after Prolific costs (approx. 700 GBP), plus the ASFEE 2019 conference prize (750 EUR) I won for the purpose of running a follow-up experiment to my field work, to pay the subjects’ performance bonuses. I am therefore asking for the full 10 000 GBP and will supplement it with this additional source of funding.
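As a sanity check on these figures, the arithmetic below is consistent with Prolific adding a platform fee of roughly one third on top of participant payments; the exact fee structure is my assumption, not stated in the Prolific estimate.

    n_subjects = 1400
    payment_gbp = 5
    participant_cost = n_subjects * payment_gbp        # 7000 GBP
    prolific_quote = 9333                              # Prolific's estimate
    implied_fee = prolific_quote / participant_cost - 1
    print(f"implied platform fee: {implied_fee:.0%}")  # ~33% (assumed)
    print(10_000 - prolific_quote)                     # 667 GBP remainder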

Open Science Commitment

The experimental design, hypotheses, and pilot are pre-registered at aspredicted.org. Link: https://aspredicted.org/pr2kc.pdf

Both the dataset and the analysis code will be made publicly available upon publication in a public data repository (Dataverse), on my personal website (lenkafiala.com), and (if possible) on the journal’s website.

I will target journals that allow open-access publication, or I will ensure that a pre-print version of the manuscript is freely available without any restrictions.

Ethics

I will obtain IRB approval of the experiment from my institution prior to conducting the study.