Are your respondents lying to you? A study of social desirability bias on Prolific.

1. Why conduct this study?

Social scientists are interested in understanding attitudes towards contentious issues. Measuring these attitudes is challenging, however, because of social desirability bias (SDB): respondents may give socially desirable or neutral answers out of concern for their self-image. This issue persists even in longitudinal panels where respondents have established trust with the data collector (Figure 1) [1], and even in online contexts where respondents are anonymous [2], [3]. Failing to account for SDB in research designs can lead to inaccurate predictions and biased estimates, as demonstrated in recent studies of support for US presidential candidates and for Brexit [4], [5].

Our study aims to measure the degree of SDB in online surveys and examine methods to reduce it. Our research questions are:

  • To what degree do online survey respondents exhibit social desirability bias (SDB) when answering attitudinal questions on controversial or sensitive topics?
  • To what extent does increasing confidentiality via anonymisation methods reduce SDB?
  • Which anonymisation methods are most effective in reducing SDB?

Anonymisation methods are an established, robust technique for obtaining valid estimates of population-level attitudes and behaviours, but they have rarely been applied to policy-relevant questions or to questions with non-binary responses [6], [7]. These methods mitigate self-image concerns by introducing random noise into responses or by aggregating them so that no individual answer can be attributed to a respondent [8]. By comparing the distribution of responses obtained under anonymisation with the distribution obtained under direct questioning, we can estimate the extent of SDB.

Our study directly benefits the Prolific community by validating methods of reducing SDB and obtaining unbiased measures of opinions on sensitive topics.

2. Methodology

The sample will be randomly divided into three groups: Direct Questioning (the control group), the Randomised Response Method (RRM) treatment, and the Item Count Technique (ICT) treatment. RRM and ICT are the two most common anonymisation methods in the literature [9].

All groups will be asked their opinions on the following issues:

  1. Suitability of women in politics (relative to men)

  2. Support for gay rights

  3. Support for immigration

  4. Support for redistributive policies

  5. Support for preferential hiring policies in favour of minority groups

We chose these topics so that answers can be compared to data from existing large-scale, nationally representative surveys (the General Social Survey and the European Social Survey). Each question will use a 3-point Likert scale: agree/in favour, neither agree nor disagree, and disagree/not in favour.

The question format will differ across groups (Figure 2); a short simulation sketch of the two anonymised formats follows this list:

  • (Control) The Direct Questioning group will be asked the questions directly, as in a standard survey.
  • (Treatment 1) The RRM treatment group will answer via the following randomised response method:
  1. For each question, roll a virtual six-sided die.

  2. If the outcome is a 1, answer ‘agree’ regardless of your true opinion.

  3. If the outcome is a 6, answer ‘disagree’ regardless of your true opinion.

  4. For any other outcome (2, 3, 4, or 5), answer the question truthfully.

  • (Treatment 2) The ICT treatment group will be further split into two equally sized subgroups.
    • For each attitudinal question, Subgroup 1 will receive a list of 3 non-controversial statements unrelated to the topic.
    • Subgroup 2 will receive the same list as Subgroup 1 but with the attitudinal statement added.
    • Both subgroups will indicate only how many statements they agree with, not which particular statements.
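
To make the two anonymised formats concrete, here is a minimal simulation sketch (in Python) of how a single response is generated under each treatment. The function names and example inputs are ours, chosen for illustration; they are not part of the survey instrument.

```python
import random

def rrm_response(truthful_answer):
    """Randomised Response Method: roll a virtual six-sided die.
    A 1 forces 'agree', a 6 forces 'disagree'; any other outcome
    passes the respondent's truthful answer through unchanged."""
    roll = random.randint(1, 6)
    if roll == 1:
        return "agree"
    if roll == 6:
        return "disagree"
    return truthful_answer

def ict_count(agrees_with_baseline, agrees_with_sensitive=None):
    """Item Count Technique: the respondent reports only HOW MANY
    statements they agree with. Subgroup 1 rates the 3 baseline
    statements (agrees_with_sensitive=None); Subgroup 2 rates the
    same list plus the sensitive attitudinal statement."""
    count = sum(agrees_with_baseline)
    if agrees_with_sensitive is not None:
        count += int(agrees_with_sensitive)
    return count

# A respondent who truthfully disagrees with the sensitive statement:
print(rrm_response("disagree"))               # 'agree' w.p. 1/6, else 'disagree'
print(ict_count([True, False, True]))         # Subgroup 1 reports 2
print(ict_count([True, False, True], False))  # Subgroup 2 also reports 2
```

Because a forced response is always possible under RRM, and an ICT count never reveals which statements were endorsed, no single answer exposes a respondent's true attitude.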

For each question, we will estimate the extent of SDB using the statistical methodology of Blair et al. (2015), comparing the proportion of respondents who agree with the attitudinal statements across the three formats (Direct Questioning, RRM, and ICT).
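
To illustrate the estimation step with hypothetical numbers of our own (these are the simple moment estimators that underlie the Blair et al. framework, not their full estimation machinery): under the RRM design above, the share of observed ‘agree’ answers satisfies P(agree_obs) = 1/6 + (4/6)·π, where π is the true proportion agreeing, so π can be recovered by inverting this identity; under ICT, π is estimated by the difference in mean counts between the two subgroups.

```python
def rrm_true_agree(observed_agree_share):
    """Invert P(agree_obs) = 1/6 + (4/6) * pi for the RRM design
    (1/6 forced 'agree', 4/6 truthful, 1/6 forced 'disagree')."""
    return (observed_agree_share - 1 / 6) / (4 / 6)

def ict_true_agree(mean_count_with_item, mean_count_without_item):
    """ICT difference in means: the sensitive statement is the only
    difference between the two lists, so the gap in mean reported
    counts estimates the proportion agreeing with it."""
    return mean_count_with_item - mean_count_without_item

# Hypothetical numbers: 40% observed agreement under RRM implies a
# true agreement rate of (0.40 - 1/6) / (4/6) = 0.35.
pi_rrm = rrm_true_agree(0.40)
# Hypothetical ICT means of 2.15 vs 1.80 agreed statements: pi = 0.35.
pi_ict = ict_true_agree(2.15, 1.80)
# If, say, only 20% agreed under direct questioning, the estimated SDB
# for this question would be roughly 15 percentage points.
print(round(pi_rrm, 2), round(pi_ict, 2), round(pi_rrm - 0.20, 2))
```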

We will then compare the degree of SDB (i) across questions, to identify which topics benefit most from anonymised methods, and (ii) across methods. Because the socially desirable answer may be ‘agree’ for some questions and ‘disagree’ for others, the more effective method is the one that produces the lowest proportion of agreement when the socially desirable answer is ‘agree’, and the highest when it is ‘disagree’.

3. Sample size and costs

We plan to run our study on a sample of 4500 individuals, which gives sufficient power to detect a small effect size, defined as 0.3 of a standard deviation (see Note 1). Individuals will be divided into four equally sized groups according to question format: Direct Questioning, Randomised Response Method, Item Count (without the contentious item), and Item Count (with the contentious item). Each group will be stratified to ensure gender balance and sufficient representation across age groups and countries of residence.

We request £4000 to run this study. This amount covers the payment of £0.63 per participant and includes the 33% service fee plus VAT (see Note 2). Participants will be paid according to Prolific’s recommended hourly rate of £7.50 (see Note 3).

We will have IRB approval from our institutions by the time the competition results are announced.

Notes:

1. The minimum sample size required for 80% power to detect a small effect size (0.3 SD) at the 5% significance level is 4224. This calculation assumes that both anonymisation methods inflate the response variance by a factor of 3 relative to direct questioning, as suggested by the existing literature [10].
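
As a minimal illustration of one standard formulation of this calculation (a two-sample comparison of means in which the anonymised arm's response variance is inflated relative to the direct arm), the sketch below uses inputs of our choosing; it is not the exact computation behind the 4224 figure, which follows from the full set of assumptions above.

```python
from scipy.stats import norm

def n_per_arm(delta_sd, var_direct=1.0, var_anon=3.0, alpha=0.05, power=0.80):
    """Sample size per arm needed to detect a difference of delta_sd
    (in units of the direct-questioning SD) between a direct arm and
    an anonymised arm whose response variance is inflated by var_anon."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided 5% level -> 1.96
    z_beta = norm.ppf(power)           # 80% power -> 0.84
    return (z_alpha + z_beta) ** 2 * (var_direct + var_anon) / delta_sd ** 2

# Roughly 349 respondents per arm under these illustrative inputs.
print(round(n_per_arm(0.3)))
```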

2. VAT applies because we are UK-based, so the per-participant cost is approximately 1.4 times the advertised payment (1 + 0.33 × 1.2 ≈ 1.4, with 20% VAT charged on the 33% service fee). The estimated cost is therefore £0.63 × 4500 × 1.4 = £3969.00. We will cover any excess costs.

3. The survey will take approximately 5 minutes to complete, based on the median completion time across all four groups in a pilot study we conducted (N=100).

4. Commitment to open science

Our study is pre-registered on AsPredicted (click on the link to view the PDF).

We will publish all outputs related to this project (questionnaire, data, and code) in relevant open-access repositories such as the Open Science Framework and provide guides on how to set up and run anonymisation studies on survey platforms.