🛠 #TipTuesdays - How to Optimise Your Use of Prolific

Every Tuesday, I’ll be posting a tip to help you optimise the way you use Prolific :slight_smile:

This is a ‘Wiki’ post, which means any community member can edit it. So, if you’ve got any tips or tricks the community would find useful, add them below under ‘Community Tips’!

Click ‘Edit’ in the bottom right corner to add! And comment below if you find this useful.


There might even be a prize for the best tip of the week :eyes:

:mantelpiece_clock: Best Times to Publish Your Study

This graph shows trends in participant activity on Prolific. Publish your study when more participants are active on the platform! For example, if you want the greatest exposure to all users worldwide, the best time to publish is around 3-4pm GMT.

Community Tips

Add yours here!

  • When running a pilot, consider running it on the same day of the week and at the same time as the main experiment. This can improve the reliability of your analysis if you plan to analyse both samples together, merging the observations from the two sessions into a single dataset: the two sessions are then more likely to yield homogeneous samples with respect to important characteristics (e.g. gender, age). Of course, this only works if the pilot goes well and does not lead to any substantial change in the design or any other key aspect of the subsequent experiment. (from @Veronica )

#TipTuesdays

5 Likes

:white_check_mark: Getting Trusted Participants

Did you know you can use our ‘Approval Rate’ filter to get participants with a very high submission approval rating? For example, if you want participants who have never had a submission rejected, set the minimum approval rate to 100.

Read more about our free prescreening filters here :slight_smile:

#TipTuesdays

4 Likes

:mag_right: Reverifying Participant Info

At Prolific, we do a lot to ensure that you get the best possible data quality. But, if you really want to be sure that your participants are who they say they are, you can run some really simple checks:

  • Re-ask your pre-screeners within your experiment. So, for example, if you’re targeting teachers aged 25, ask participants to reconfirm their age and their occupation. This lets you confirm that your participants’ prescreening answers are still current and valid, and may reveal people who have forgotten their original answers to prescreening questions. (One way to check this in your data is sketched below.)

  • Ask difficult-to-answer questions based on pre-screeners. So, let’s say you’re targeting people who use a particular medicine. You could ask what brand they use, their normal dosage, and at what times of day they’re supposed to take it. Then compare their answers against accurate information about that medicine.

These aren’t foolproof, but along with the extensive work Prolific already does, they can act as extra data quality control measures.
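For instance, here’s a minimal sketch (in Python with pandas) of how you might cross-check re-asked answers against your original prescreeners. The file and column names (`survey_export.csv`, `participant_id`, `reconfirmed_age`, `reconfirmed_occupation`) are hypothetical — adapt them to whatever your survey tool actually exports.

```python
# Minimal sketch: flag participants whose re-asked answers don't match
# the prescreeners you recruited on. Column names are hypothetical.
import pandas as pd

responses = pd.read_csv("survey_export.csv")

# The values you originally prescreened on.
expected_age = 25
expected_occupation = "teacher"

mismatch = (
    (responses["reconfirmed_age"] != expected_age)
    | (responses["reconfirmed_occupation"].str.strip().str.lower() != expected_occupation)
)

print("Re-asked answers that don't match the original prescreeners:")
print(responses.loc[mismatch, "participant_id"].tolist())
```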

You can read more about our free pre-screeners here

Are there any methods you use to verify participant info? Let us know below! :slight_smile:

If you found this post helpful, give it a like :heart:

@trust_level_0

#TipTuesdays

3 Likes


Hi Josh,
I ran a study that was somewhat sensitive to language and therefore made English as a first language an inclusion criterion. However, when I received messages from some participants, it was quite obvious that English was not their first language. I think I would need some sort of language test as a screener if I really wanted to be sure. Fortunately, this was just pilot data.

Yours

Joe

#TipTuesdays

4 Likes

Good tip Joe!

Sorry to hear that you got participants who weren’t being honest. We now have a reporting feature which will allow you to let us know about things like this in the future :slight_smile:

#TipTuesdays

2 Likes

:video_game: How do you deal with participants who might try to game the system?

We’re super proud of the quality of our participant pool, and the quality of the data it provides. But, no vetting system is perfect! So, here’s how you can filter out the very small minority who may be attempting to ‘game’ the system.

  1. Use speeded tests or questionnaires to prevent participants from having time to google answers.
  2. Ask participants a few questions clarifying the instructions of the task at the end of the experiment (to check they understood the task properly and didn’t cheat inadvertently)
  3. Develop precise data-screening criteria to classify unusual behaviour (see the sketch after this list) - these will be specific to your experiment but may include:
  • Variable cutoffs based on inter-quartile range
  • Fixed cutoffs based on ‘reasonable responses’ (consistent reaction times faster than 150ms, or test scores of 100%)
  • Non-convergence of an underlying response model
  • Simple as it seems, it’s been suggested you have a free-text question at the end of your study: “Did you cheat?”
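As a rough illustration of point 3, here is a sketch (Python/pandas) of a fixed 150ms reaction-time cutoff and a 1.5 × IQR cutoff on per-participant mean reaction times. The file name, column names and thresholds are made up for the example only.

```python
# Rough sketch of the data-screening criteria above.
# Assumed columns: participant_id, rt_ms — adapt to your own data.
import pandas as pd

data = pd.read_csv("trial_level_data.csv")

# Fixed cutoff: reaction times faster than 150 ms are implausible.
too_fast = data.loc[data["rt_ms"] < 150, "participant_id"].unique()

# Variable cutoff: participants whose mean RT falls outside 1.5 * IQR
# of the distribution of per-participant mean RTs.
mean_rts = data.groupby("participant_id")["rt_ms"].mean()
q1, q3 = mean_rts.quantile([0.25, 0.75])
iqr = q3 - q1
outliers = mean_rts[(mean_rts < q1 - 1.5 * iqr) | (mean_rts > q3 + 1.5 * iqr)].index

print("Below the fixed RT cutoff:", list(too_fast))
print("Outside 1.5 * IQR of mean RT:", list(outliers))
```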

If you’re interested in learning more, you can read our full blog post on improving your data quality here.

And if you find these tips helpful, drop a like :heart:

5 Likes

:mag_right: 7 Ways to Check Participant Attentiveness

At Prolific, we do a lot to ensure that you get the best possible data quality. Our pre-print even shows that our participants score higher on attentiveness measures than our competitors’ participants!

But, if you want to be extra sure that they’re paying attention, you can use the following methods:

  1. Use speeded tasks and questionnaires to prevent participants from having time to be distracted by the TV or the rest of the internet.

  2. Ask participants a few questions clarifying the instructions of the task at the end of the experiment (to check they read them properly).

  3. Collect timing and page-view data:

  • Record the time of page load and timestamp every question answered.
  • Record the number of times the page is hidden or minimised.

  4. Monitor the time spent reading instructions:

  • Look for unusual patterns of timing behaviour: Who took 3 seconds to read your instructions? Who took 35 minutes to answer your questionnaire, with a 3-minute gap between each question? (A quick screening sketch follows this list.)

  5. Implement attention checks (aka Instructional Manipulation Checks or IMCs). These are best kept super simple and fair. “Memory tests” are not a good attention check, nor is hiding one errant attention check among a list of otherwise identical instructions!

  6. Include open-ended questions that require more than a single-word answer. Check these for low-effort responses.

  7. Check your data using careless responding measures such as consistency indices or response pattern analysis; see Meade and Craig (2012) and Dupuis et al. (2018).
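To illustrate points 3 and 4, here’s a minimal screening sketch, assuming you’ve already logged per-participant timings to a CSV. The file name, column names and thresholds are all assumptions — use whatever your survey software actually records.

```python
# Minimal sketch: flag participants with suspicious timing patterns.
# Assumed columns: participant_id, instructions_secs, total_secs, max_gap_secs.
import pandas as pd

timings = pd.read_csv("timing_log.csv")

suspicious = timings[
    (timings["instructions_secs"] < 3)      # skimmed the instructions
    | (timings["total_secs"] > 35 * 60)     # e.g. far longer than the expected duration
    | (timings["max_gap_secs"] > 3 * 60)    # a long idle gap between questions
]

print(suspicious[["participant_id", "instructions_secs", "total_secs", "max_gap_secs"]])
```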

@trust_level_0

3 Likes

Thanks for this tip :raised_hands:. This is useful!

2 Likes

Glad it’s helpful! Is there any particular area that you’d like tips on?

:rocket: How do you get the best out of participants? - Part 1

While participants are ultimately responsible for the quality of data they provide, you as the researcher need to set them up to do their best.

  1. Pilot, pilot, pilot your study’s technology. Run test studies, double and triple check your study URL, ensure your study isn’t password-protected or inaccessible. If participants encounter errors they will, more often than not, plough on and try to do their best regardless. This may result in unusable or missing data. Don’t expect your participants to debug your study for you! :bug:

  2. Make sure you use the ‘device compatibility’ flags on the study page if your study requires (or excludes) a specific type of device. Note that our device flags do not currently block participants from entering your study on ineligible devices (detecting devices automatically is somewhat unreliable and may exclude eligible participants). If you need stricter device blocking, we recommend implementing it in your survey/experimental software.

  3. Keep your instructions as clear and simple as possible. If you have a lot to say, split it across multiple pages: use bullet points and diagrams to aid understanding. Make sure you explicitly state what a participant is required to do in order to be paid. This will increase the number of participants that actually do what you want them to! :memo:

That’s all for part 1! Next week I’ll give you 3 more tips on how to get the best out of your participants.

@trust_level_0

1 Like

:rocket: How do you get the best out of participants? - Part 2

While participants are ultimately responsible for the quality of data they provide, you, as the researcher, need to set them up to do their best.

  • If participants message you with questions, aim to respond quickly and concisely. Be polite and professional (it’s easy to forget when 500 participants are messaging you at once that each one is an individual!). Ultimately participants will respond much better when treated as valuable scientific co-creators. :slightly_smiling_face:
  • If you can, make your study interesting and approachable. Keep it easy on the eye and break long questionnaires down into smaller chunks.
  • If you can, explain the rationale of your study to your participants. There is evidence that participants are willing to put more effort into a task when its purpose is made clear, and that participants with higher intrinsic motivation towards a task provide higher quality data.

That’s all folks! Comment below if you’d like tips in a particular area :slightly_smiling_face:

@trust_level_0

2 Likes

Very interesting finding! I will surely take it into account for my next studies.
In the meantime, I added a suggestion in the “Community Tips” block that comes from my experience with piloting experiments.
Cheers!

3 Likes

I love the tip! Thanks for your contribution :grin:

1 Like

Boosting your data quality: Busting the bots

We at Prolific have banned our fair share of malicious accounts, so we’ve learned a thing or two along the way.

The list below is not exhaustive, but provides some practical advice that will boost your confidence in the responses you collect.

We’re constantly improving the quality of the pool, and ultimately you shouldn’t encounter many untrustworthy participants.

Busting the bots

  1. Include a captcha at the start of your survey to prevent bots from even submitting answers. Equally, if your study involves an unusual interactive task (such as a cognitive task or a reaction time task), then bots should be unable to complete it convincingly.

  2. Include open-ended questions in your study (e.g., “What did you think of this study?”). Check your data for low-effort and nonsensical answers to these questions. Typical bot answers are incoherent and you may see the same words being used in several submissions (see this blog post for more information and examples).

  3. Check your data for random answering patterns. There are several techniques for this, such as response coherence indices or long-string analysis (see Dupuis et al., 2018).

  4. If you’re looking for a simpler solution: try including a few duplicate questions at different points in the study. A human responder will provide coherent answers, whereas a bot answering randomly is unlikely to provide the same answer twice. (A sketch of both of these checks follows this list.)
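As a rough sketch of points 3 and 4, the snippet below computes a simple long-string index (the longest run of identical consecutive answers) and checks one duplicated question for consistency. The column names (`participant_id`, `q5`, `q5_repeat`) and the flagging thresholds are made up for illustration.

```python
# Sketch: long-string analysis plus a duplicate-question consistency check.
import pandas as pd

df = pd.read_csv("survey_export.csv")
likert_items = [c for c in df.columns if c.startswith("q")]  # your scale items

def longest_run(row):
    """Length of the longest run of identical consecutive answers."""
    values = row.tolist()
    best = run = 1
    for prev, cur in zip(values, values[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

df["long_string"] = df[likert_items].apply(longest_run, axis=1)

# Duplicate-question check: "q5" and "q5_repeat" are the same question asked
# twice (hypothetical names) and should receive the same answer.
df["duplicate_mismatch"] = df["q5"] != df["q5_repeat"]

flagged = df[(df["long_string"] >= 10) | df["duplicate_mismatch"]]
print(flagged["participant_id"].tolist())
```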

As we’ve already said, we have technological measures in place to prevent bots so you’re extremely unlikely to find any in your dataset.

If you’ve got a tip, post below!

2 Likes

:moneybag: How should you approach rewarding participants?

One of the most important factors in determining data quality is the study’s reward. On Prolific, it’s vital that trust goes both ways, and properly rewarding participants for their time is a large part of that. So, we enforce a minimum hourly reward of 5.00 GBP.

But depending on the effort required by your study, this may not be sufficient to foster high levels of engagement and provide good data quality. Consider:

  1. The participant reimbursement guidelines of your institution. Some universities have set a minimum and maximum hourly rate (to avoid undue coercion). You might also consider the national minimum wage as a guideline (in the UK, this is currently ÂŁ8.91 for adults). A quick arithmetic sketch follows this list.

  2. The amount of effort required to take part in your study: is it a simple online study, or do participants need to make a video recording or complete a particularly arduous task? If your study is effortful, consider paying more.

  3. How niche your population is: if you’re searching for particularly unusual participants (or participants in well-paid jobs), then you will find it easier to recruit these participants if you are paying well for their time.
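If it helps, here’s the reward arithmetic as a tiny sketch. The target rate and estimated duration below are placeholders — plug in your own institution’s guideline and your piloted completion time.

```python
# Sketch: work out a study reward from an estimated duration and hourly rate.
PROLIFIC_MINIMUM_PER_HOUR = 5.00   # GBP, the platform minimum mentioned above
target_rate_per_hour = 9.00        # placeholder: e.g. your institution's guideline

estimated_minutes = 20             # placeholder: your piloted median completion time
reward = round(estimated_minutes / 60 * target_rate_per_hour, 2)
effective_rate = reward / (estimated_minutes / 60)

assert effective_rate >= PROLIFIC_MINIMUM_PER_HOUR, "Reward falls below Prolific's minimum"
print(f"Set the study reward to ÂŁ{reward:.2f} (about ÂŁ{effective_rate:.2f}/hour)")
```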

Next week, we’re going to talk about why paying more isn’t always a good idea!

3 Likes

:moneybag: How should you approach rewarding participants? Part 2

This week we’re discussing when it might not be a good idea to pay participants more.

Consider that:

  • Studies with particularly high rewards may bias your sample, as participants may feel ‘forced’ to choose that study when they might otherwise have chosen others. This may particularly apply to participants with low socio-economic status.

  • Bonus payments contingent on performance may make participants nervous about being paid, and lead to cheating.

That’s all folks! If you’ve got any tips, post them below :slight_smile:

4 Likes

:chart_with_downwards_trend: Minimising dropout / attrition rate in longitudinal studies

Dropout rates depend on a lot of factors (as I’m sure you’re aware); to name a few:

  • How far apart are the different parts of the study?
  • How generous are the rewards?
  • How long do the different parts take?

What’s the average dropout rate on Prolific?

We’ve had studies with 0% dropout rate and as high as 50%. A typical study would be somewhere in between these extremes. An independent study by Kothe and Ling found attrition of <25% over 1 year. Shorter longitudinal studies following best practices can expect better retention than this.

How can I minimise dropout rates?

  • You should clearly communicate your study information (i.e. expectations of participants, reward structure, time gap between the phases of your study).

  • You could also screen for those with at least 10 previous submissions, to ensure you are obtaining active and committed participants. Inexperienced participants are more likely to drop out.

  • Paying a generous reward and offering bonus incentives to participants who complete all parts of your longitudinal study is also a good way to minimise attrition - you can read about how to do this here: Bonus Payments

Got any other tips related to this? Let me know below!

3 Likes

How to do Demographic Balancing on Prolific :balance_scale:

To balance your sample across demographic groups, create as many studies as you need, of the same size, that point to the same study URL.

50% Split by Sex Example

  • First, create your basic study and then “duplicate” it via the actions menu.
  • On one of these studies, add a male-only prescreener; on the other, a female-only one.
  • You can link these two Prolific studies to the same survey URL and use the same completion URL to return participants to Prolific. This means participants will be part of the same dataset (the sketch below shows one way to merge the two demographic exports).

You can do the above for any demographic where you want a split.
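If you’d like to analyse the two halves together, here is a small sketch (not official Prolific tooling) for merging the two studies’ demographic exports. The file names and the `Sex` column are assumptions — check the headers in your own export.

```python
# Sketch: combine the two balanced studies' demographic exports into one file.
import pandas as pd

# Hypothetical file names; download one demographic export per study.
male_half = pd.read_csv("prolific_export_male_study.csv")
female_half = pd.read_csv("prolific_export_female_study.csv")

combined = pd.concat([male_half, female_half], ignore_index=True)
print(combined["Sex"].value_counts())   # sanity-check the 50/50 split ("Sex" is an assumed header)
combined.to_csv("combined_demographics.csv", index=False)
```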

Got any other tips related to this? Let me know below!

3 Likes

Interesting tip… but doesn’t this strategy exclude some groups by default? What about third gender? How can we try to account for how many of these there will be?

1 Like