No consent, prescreen validation, Prolific participant profiles

Hello everyone. I’m curious about other researchers’ experiences on this platform, and I’m hoping they can share any lessons learned on the issues I discuss below.

For context: I’m a graduate student using Prolific to collect data for my thesis, so efficient use of time is critical for maintaining progress. In line with Prolific’s policies, I’m running a recruitment survey with specific employment-sector prescreen filters selected to identify organizational decision makers, paying everyone who completes the survey, and inviting the participants who match the criteria we’re looking for into a follow-up main survey.
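As a rough illustration of that two-stage workflow, here’s a minimal Python sketch of selecting invitees from a screening-survey export; the filename, column names, and role labels are illustrative assumptions, not actual Prolific or Qualtrics export fields:

```python
# Hypothetical sketch of the two-stage recruitment: pay everyone who
# completes the screener, then invite only those who match the criteria.
# The filename and column names are assumptions for illustration.
import csv

ALLOWED_ROLES = {"Upper management", "Middle management"}

with open("screening_responses.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Keep the Prolific IDs of participants whose in-survey role answer
# matches the criteria; these go on the follow-up study's allowlist.
invite_ids = [row["PROLIFIC_PID"] for row in rows
              if row["work_role"] in ALLOWED_ROLES]

print(f"{len(invite_ids)} participants to invite to the main survey")
```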

The challenges I’m encountering at the moment:
I’m consistently getting a handful of participants who decline to give consent but also do not return their survey as instructed by Prolific, nor respond to messages.

Similarly, I have quite a few participants who fail my employment-sector prescreen validation and who likewise do not return their survey as instructed by Prolific, nor respond to messages.

These are clear-cut issues where Prolific has said that if it’s a genuine mistake, the respondent should return their submission. Is there an expedited way to have Prolific manually return these submissions instead of having to wait 7+ days? Has anyone else encountered these issues?

Finally, a matter of concern about Prolific participant profiles.
In the most extreme case, I received an underage participant who identified as an IT apprentice in my survey but had self-reported as upper/middle management on Prolific. In other cases, participants described themselves as business analysts or junior managers, yet passed my upper/middle-management prescreen filters.

Are other researchers also experiencing issues like these, where participant answers do not align directly with the prescreen validations? How exactly does Prolific verify participants’ self-reported employment sectors and positions?

Hi Michael, good questions. A couple of thoughts.

First, I’m pretty sure that the vast majority of prescreeners are based on self-report. I’d guess that participants are mostly honest in responding to these, but people’s circumstances do change over time, and some questions may be somewhat subjective, so it’s not surprising that you get some participants who don’t match what you were really after.

Second, you are entitled to validate prescreeners, but if you use these as a basis for screening people out of the survey, you have to ask the questions exactly as they are presented in the prescreener database. E.g., the Industry Role screener asks “Which of the following best describes your role at work?” and has 13 response options, including upper management and middle management, but not IT apprentice or business analyst. So if your prescreening was just based on the Industry Role screener, your knowledge that a participant is an IT apprentice or a business analyst must’ve come through a question other than this exact screening question, and can’t be a basis for screening them out or rejecting them (though obviously you’re welcome to use it as a basis for deciding who gets invited to the follow-up study).
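To make that distinction concrete, here’s a minimal Python sketch of a rule-compliant validation check. The option labels are illustrative (the failing example especially); in a real survey, the question text and options must be copied verbatim from Prolific’s prescreener database:

```python
# Minimal sketch of rule-compliant prescreener validation. Question text
# and option labels are illustrative; copy the real ones verbatim from
# Prolific's prescreener database.

SCREENER_QUESTION = "Which of the following best describes your role at work?"

# The response options you selected as prescreen filters for the study.
ALLOWED_ANSWERS = {"Upper management", "Middle management"}

def passes_validation(in_survey_answer: str) -> bool:
    """Screen out only on the answer to the *exact* screener question."""
    return in_survey_answer in ALLOWED_ANSWERS

print(passes_validation("Middle management"))  # True  -> continue survey
print(passes_validation("Junior management"))  # False -> direct out

# Free-text job titles from *other* questions ("IT apprentice",
# "business analyst") are not response options of this screener, so they
# are not a valid basis for screening out or rejecting.
```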

These are clear cut issues where Prolific has said that if it’s a genuine mistake, the respondent should return their submission. Is there an expedited way to have Prolific manually return these submissions instead of having to wait 7+ days? Has anyone else also encountered these issues?

If you’ve set up prescreener validation correctly, participants should encounter the prescreener questions, exactly as they’re presented in the database, at the beginning of your survey. If they give responses that vary from the prescreening requirements, they should then be automatically directed out of your survey with a message saying:

You are ineligible for this study as you have provided information which is inconsistent with your Prolific prescreening responses. Please return your submission on Prolific by selecting the ‘Stop without completing’ button.

The relevant rules are here, and a guide to how to do this in the popular Qualtrics platform here. If a participant sees that message and doesn’t return the survey, their submission will simply time out once the maximum survey time is exceeded (usually some small multiple of the estimated completion time), so it’s generally a non-issue. The same applies to participants not consenting at the beginning: either they click return, or they time out shortly after, but either way it shouldn’t present a problem on your side (see the sketch after the list below). Where this would go awry, though, is in two situations:

  1. The survey logic isn’t set up correctly, and participants are able to get as far as a completion code despite giving a response that’s inconsistent with their values on the relevant prescreeners (or despite not giving consent). If this happens, I’d suggest approving the submission.
  2. Participants are entering NOCODE submissions, possibly because they perceive you are breaking the rules around screening (researchers inappropriately screening within surveys is a major source of frustration amongst Prolific participants). If you reject any of these, be very careful to make sure it falls within the rules for rejections.
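For clarity, here’s a rough Python sketch of the statuses a correctly configured flow should produce. This is pseudologic to illustrate the branching, not something that runs against Prolific or Qualtrics, and the status labels only approximate what you’d see in the Prolific study view:

```python
# Rough sketch of expected submission outcomes under a correct setup.
# Status labels approximate the Prolific study view; they are not an API.

def expected_status(consented: bool, passed_screener: bool,
                    clicked_return: bool) -> str:
    if not consented or not passed_screener:
        # Correct setup: directed out early, no completion code issued.
        return "RETURNED" if clicked_return else "TIMED-OUT"
    # Genuine completion: the participant reaches the completion code.
    return "AWAITING REVIEW"

# A screened-out participant who ignores the return instruction:
print(expected_status(consented=True, passed_screener=False,
                      clicked_return=False))  # TIMED-OUT

# The two ways this goes awry, per the list above:
# 1. Broken survey logic: a screened-out participant still reaches a
#    completion code and lands in AWAITING REVIEW (approve these).
# 2. A NOCODE submission, which needs careful manual handling before
#    any rejection.
```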

Re. the underage participant, I think Prolific does do identity verification when participants sign up, thereby checking they’re 18+. So I’d guess it’s most likely the participant simply made a mistake when filling out the self-report item in your survey (perhaps due to rushing).

I hope that helps a bit!
Matt

Thanks for the reply, Matt!

Yes, I discovered how inaccurate some of the self-reports can be when I kept seeing participants describe job titles and roles that clearly did not align with the Prolific prescreen filters I used during my first few recruitment surveys. This is what motivated me to set up prescreen validation, exactly as instructed in Prolific’s help file, to monitor data quality, so participants see the exact same question they answered during their self-report with Prolific. This has yielded respondents who fail the prescreener validation; unfortunately, quite a few of them do not respond to messages or follow the instructions to return their survey.

I’ll break down my concerns and issues into point form below.

  • Prolific’s support team consistently asks us to wait seven days for participants to return submissions before they will step in and return them manually, even on clear-cut issues like participants not giving consent or failing prescreener validation.

  • This can become extremely time consuming. What happens if I’m trying to recruit 100 people and over 25% of my participants either don’t give consent or fail prescreener validation? I wait seven days, only for a handful more respondents to fall into the same situation, and then wait another seven days. I’m a graduate student with a strict timeline to adhere to; the time spent waiting for participants to return their surveys before we can reopen the survey and continue recruiting is excessive, especially given these are clear-cut cases where even Prolific’s own help file ultimately says genuine mistakes should result in participants returning their survey.

  • Have other researchers run into this situation too? Is it possible to work with Prolific’s support team by giving them a list of respondent IDs that fall into the scenarios above, so they can manually return the submissions and researchers can continue recruitment?

Hi Michael, thanks for the extra info. I still don’t quite understand how you’re ending up waiting 7 days. As I mentioned, if your study is set up correctly, then the only plausible outcomes from a participant failing a prescreener validation are that they return the survey or they automatically time out when they hit the maximum completion time (which for a short screening survey will probably be well under an hour). When you look at these participants in your Prolific view of the study, what status do they have? Submitted with a completion code?

I think you’ve narrowed down the problem, Matt!

Unfortunately, both the respondents who didn’t give consent and the ones who failed the screening validation are indeed ending up with a completion code and sitting in the “awaiting review” status.

I’ve been using Prolific’s completion codes: there’s a prebuilt one for respondents who don’t give consent, whose only action is to automatically ask respondents to return their survey via a message, and I built a similar custom one for respondents who fail the screening validation.

It’s clear to me now this is not the right way to build things.

What is the right way to build the study so that respondents who don’t give consent or who fail the screening validation either time out or return the survey? My initial idea, based on the new direction of thinking thanks to your guidance here, is to remove the URL redirects (back to Prolific) from the no-consent and failed-screening-validation ending blocks of my survey.

Gotcha! What platform is your survey running on? Qualtrics?

Yup, Qualtrics.

Cool. The trick, then, is to follow Prolific’s Qualtrics integration guide to the letter. The sections on getting consent and validating your prescreeners are especially relevant in your case. (Yes, you would need to take out the URL redirects for those scenarios.)
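In case a concrete picture helps, here’s a sketch of how the three endings should behave once the edits are made. The structure and labels are mine for illustration, not Qualtrics settings names, and the completion URL shape is an assumption; copy the real URL and code from your own study page on Prolific:

```python
# Illustrative map of the three end-of-survey behaviours. Configure the
# real thing via Qualtrics' end-of-survey elements per Prolific's guide.

# Assumed shape of the completion URL; copy the real one (with your
# study's completion code) from your Prolific study page.
PROLIFIC_COMPLETION_URL = "https://app.prolific.com/submissions/complete?cc=YOURCODE"

survey_endings = {
    # Only genuine completions redirect back to Prolific with a code.
    "completed": {"redirect_url": PROLIFIC_COMPLETION_URL, "message": None},
    # No consent: no redirect, no code; show the return instruction.
    "no_consent": {
        "redirect_url": None,
        "message": "Please return your submission on Prolific by "
                   "selecting the 'Stop without completing' button.",
    },
    # Failed prescreener validation: likewise no redirect and no code.
    "failed_prescreener_validation": {
        "redirect_url": None,
        "message": "You are ineligible for this study as you have "
                   "provided information which is inconsistent with your "
                   "Prolific prescreening responses. Please return your "
                   "submission on Prolific by selecting the 'Stop "
                   "without completing' button.",
    },
}
```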

When you’ve made those edits on the Qualtrics side, do also use the preview mode to check that the survey flow is working as intended (e.g., if you refuse consent or fail a prescreener, do you get the right customised end-of-survey message?).

Btw, there is one weird issue with Qualtrics where, if the language setting of the end-of-survey message differs from that of the main survey (e.g., US English vs UK English), the customised end-of-survey message will be overridden. Details here if this crops up for you.

Obviously this won’t immediately resolve things for the data you’ve collected already (where the best thing to do is probably just to pay the participants and open up more spaces), but hopefully moving forward you’ll find this helps make things work more smoothly 🙂

Thankfully, the current situation has been resolved, albeit after a painful experience.

I’ve made the changes per your guidance, both in Qualtrics and on Prolific. My critical error, in the end, was using the prebuilt custom completion codes that Prolific offered; how they behave did NOT align with what was indicated in the help files you provided.

Thank you so much for helping me on this issue, Matt!