Every Tuesday, I'll be posting a tip to help you optimize the way you use Prolific.
This is a 'Wiki' post, which means any community member can edit it. So, if you've got any tips or tricks the community would find useful, add them below under 'Community Tips'!
Click 'Edit' in the bottom right corner to add yours, and comment below if you find this useful.
There might even be a prize for the best tip of the week
Best Times to Publish Your Study
This graph shows trends in participant activity on Prolific. Publish your study when you know that more active participants will be on the platform! For example, if you wanted the greatest exposure to all users worldwide, the best time to publish would be around 3-4pm GMT.
When running a pilot, consider using the same day of the week and time of day as the real experiment. This can improve the reliability of your analysis if you plan to analyse both samples together, merging the observations from the two sessions into a single dataset: the two sessions are then more likely to yield samples that are homogeneous on important characteristics (e.g. gender, age). Of course, this only works if the pilot goes well and does not lead to any substantial change in the design or any other key aspect of the subsequent experiment. (from @Veronica )
Did you know you can use our 'Approval Rate' filter to recruit participants with a very high submission approval rating? For example, if you want participants who have never had a submission rejected, set the filter to 100.
At Prolific, we do a lot to ensure that you get the best possible data quality. But, if you really want to be sure that your participants are who they say they are, you can run some really simple checks:
Re-ask your pre-screeners within your experiment. For example, if you're targeting teachers aged 25, ask participants to reconfirm their age and occupation. This lets you check that your participants' prescreening answers are still current and valid, and may reveal people who have forgotten their original answers to prescreening questions (a simple consistency check is sketched below).
Ask difficult-to-answer questions based on your pre-screeners. Say you're targeting people who use a particular medicine: you could ask what brand they use, their normal dosage, and at what times of day they're supposed to take it, then compare their answers against accurate information about that medicine.
These aren't foolproof, but along with the extensive work Prolific already does, they can act as extra data quality control measures.
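If you export your prescreening answers alongside the in-study reconfirmations, a short script can flag mismatches for you. Below is a minimal sketch, assuming both sets of answers live in plain objects keyed by participant ID; the field names (age, occupation) and the one-year tolerance are illustrative choices, not anything built into Prolific.

```typescript
// Illustrative sketch: flag participants whose in-study answers disagree
// with their prescreening answers. Field names and tolerances are assumptions.

interface Answers {
  age: number;
  occupation: string;
}

function flagMismatches(
  prescreening: Record<string, Answers>, // keyed by participant ID
  inStudy: Record<string, Answers>
): string[] {
  const flagged: string[] = [];
  for (const [id, screened] of Object.entries(prescreening)) {
    const restated = inStudy[id];
    if (!restated) continue; // participant never reached the reconfirmation page
    const ageDrifted = Math.abs(restated.age - screened.age) > 1; // allow a birthday
    const occupationChanged =
      restated.occupation.trim().toLowerCase() !==
      screened.occupation.trim().toLowerCase();
    if (ageDrifted || occupationChanged) flagged.push(id);
  }
  return flagged;
}

// Example: "p42" reconfirms a different occupation, so the ID is flagged for review.
console.log(
  flagMismatches(
    { p42: { age: 25, occupation: "Teacher" } },
    { p42: { age: 25, occupation: "Accountant" } }
  )
); // ["p42"]
```

A flag is only a reason to look more closely (or to message the participant), not grounds for automatic rejection.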
You can read more about our free pre-screeners here
Are there any methods you use to verify participant info? Let us know below!
Hi Josh,
I ran a study that was somewhat sensitive to language and, therefore, made English as a first language an inclusion criterion. However, when I received messages from some participants, it was quite obvious that English was not their first language. I think I would need some sort of language test as a screener if I really wanted to be sure. Fortunately, this was just pilot data.
Sorry to hear that you got participants who weren't being honest. We now have a reporting feature which will allow you to let us know about things like this in the future.
How do you deal with participants who might try to game the system?
We're super proud of the quality of our participant pool, and the quality of the data it provides. But, no vetting system is perfect! So, here's how you can filter out the very small minority who may be attempting to 'game' the system.
Use speeded tests or questionnaires to prevent participants from having time to google answers.
Ask participants a few questions clarifying the instructions of the task at the end of the experiment (to check they understood the task properly and didn't cheat inadvertently).
Develop precise data-screening criteria to classify unusual behaviour. These will be specific to your experiment, but may include some of the timing and consistency checks described in the tips below.
At Prolific, we do a lot to ensure that you get the best possible data quality. Our pre-print even shows that our participants score higher on attentiveness measures than those of our competitors!
But, if you want to be extra sure that they're paying attention, you can use the following methods:
Use speeded tasks and questionnaires to prevent participants from having time to be distracted by the TV or the rest of the internet.
Ask participants a few questions clarifying the instructions of the task at the end of the experiment (to check they read them properly)
Collect timing and page-view data (a browser-side sketch follows this list):
Record the time of page load and timestamp every question as it is answered.
Record the number of times the page is hidden or minimised.
Monitor the time spent reading instructions:
Look for unusual patterns of timing behaviour: Who took 3 seconds to read your instructions? Who took 35 minutes to answer your questionnaire, with a 3 minute gap between each question?
Implement attention checks (aka Instructional Manipulation Checks or IMCs). These are best kept super simple and fair. 'Memory tests' are not a good attention check, nor is hiding one errant attention check among a list of otherwise identical instructions!
Include open-ended questions that require more than a single word answer. Check these for low-effort responses.
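If you build your study in custom web code rather than a survey platform, the sketch below shows one way the timing and page-view data above might be captured in the browser. It is only a sketch: the `.question` selector, the hidden `timing_log` field, and the event labels are assumptions about a hypothetical study page, not part of any Prolific or survey-tool API.

```typescript
// Minimal sketch: record page-load time, per-question timestamps, and
// page-visibility changes. All selectors and field names are illustrative.

type TimingEvent = { label: string; ms: number };

const events: TimingEvent[] = [];
const pageLoadedAt = performance.now();

function logEvent(label: string): void {
  // Store elapsed milliseconds since this script ran at page load.
  events.push({ label, ms: Math.round(performance.now() - pageLoadedAt) });
}

logEvent("page_load");

// Count how often the participant hides or minimises the tab.
let hiddenCount = 0;
document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "hidden") {
    hiddenCount += 1;
    logEvent("page_hidden");
  } else {
    logEvent("page_visible_again");
  }
});

// Timestamp the first answer to each question
// (assumes every question input carries the class "question").
document.querySelectorAll<HTMLElement>(".question").forEach((el, index) => {
  el.addEventListener("change", () => logEvent(`question_${index + 1}_answered`), {
    once: true,
  });
});

// On submit, attach the timing log to the response so it can be
// screened alongside the answers later.
const form = document.querySelector("form");
if (form) {
  form.addEventListener("submit", () => {
    const hiddenField = document.createElement("input");
    hiddenField.type = "hidden";
    hiddenField.name = "timing_log";
    hiddenField.value = JSON.stringify({ events, hiddenCount });
    form.appendChild(hiddenField);
  });
}
```

A log like this lets you answer exactly the questions above: who spent 3 seconds on the instructions, and who left a 3-minute gap (or a hidden tab) between questions.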
How do you get the best out of participants? - Part 1
While participants are ultimately responsible for the quality of data they provide, you as the researcher need to set them up to do their best.
Pilot, pilot, pilot your study's technology. Run test studies, double and triple check your study URL, and ensure your study isn't password-protected or inaccessible. If participants encounter errors, they will more often than not plough on and try to do their best regardless. This may result in unusable or missing data. Don't expect your participants to debug your study for you!
Make sure you use the 'device compatibility' flags on the study page if your study requires (or excludes) a specific type of device. Note that currently our device flags do not block participants from entering your study on ineligible devices (detecting devices automatically is somewhat unreliable and may exclude eligible participants). If you need stricter device blocking, we recommend implementing it in your survey/experimental software (a minimal example follows this list).
Keep your instructions as clear and simple as possible. If you have a lot to say, split it across multiple pages: use bullet points and diagrams to aid understanding. Make sure you explicitly state what a participant is required to do in order to be paid. This will increase the number of participants that actually do what you want them to!
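As a hedged illustration of that last point about device blocking, here is one simple user-agent check you could drop into your own study code. It is a best-effort sketch only: user-agent detection is unreliable (as noted above), and the redirect page is a made-up placeholder.

```typescript
// Best-effort sketch: turn away mobile devices before the study starts.
// User-agent sniffing is imperfect, so show a clear explanation rather
// than failing silently; the redirect URL is a placeholder.

function looksLikeMobile(userAgent: string): boolean {
  return /Android|iPhone|iPad|iPod|Mobi/i.test(userAgent);
}

if (looksLikeMobile(navigator.userAgent)) {
  // Hypothetical page explaining that a desktop browser is required
  // and asking the participant to return their submission.
  window.location.href = "/desktop-required.html";
}
```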
That's all for part 1! Next week I'll give you 3 more tips on how to get the best out of your participants.
How do you get the best out of participants? - Part 2
While participants are ultimately responsible for the quality of data they provide, you, as the researcher, need to set them up to do their best.
If participants message you with questions, aim to respond quickly and concisely. Be polite and professional (it's easy to forget when 500 participants are messaging you at once that each one is an individual!). Ultimately participants will respond much better when treated as valuable scientific co-creators.
If you can, make your study interesting and approachable. Keep it easy on the eye and break long questionnaires down into smaller chunks.
If you can, explain the rationale of your study to your participants. There is evidence that participants are willing to put more effort into a task when its purpose is made clear, and that participants with higher intrinsic motivation towards a task provide higher quality data.
That's all folks! Comment below if you'd like tips in a particular area.
Very interesting finding! I will surely take it into account for my next studies.
In the meantime, I added a suggestion in the 'Community Tips' block that comes from my experience with piloting experiments.
Cheers!
We at Prolific have banned our fair share of malicious accounts, so we've learned a thing or two along the way.
The list below is not exhaustive, but provides some practical advice that will boost your confidence in the responses you collect.
We're constantly improving the quality of the pool, and ultimately you shouldn't encounter many untrustworthy participants.
Busting the bots
Include a captcha at the start of your survey to prevent bots from even submitting answers. Equally, if your study involves an unusual interactive task (such as a cognitive task or a reaction time task), bots should be unable to complete it convincingly.
Include open-ended questions in your study (e.g., 'What did you think of this study?'). Check your data for low-effort and nonsensical answers to these questions. Typical bot answers are incoherent, and you may see the same words being used in several submissions (see this blog post for more information and examples).
Check your data for random answering patterns. There are several techniques for this, such as response coherence indices or long-string analysis (see Dupuis et al., 2018).
If you're looking for a simpler solution, try including a few duplicate questions at different points in the study. A human responder will provide coherent answers, whereas a bot answering randomly is unlikely to provide the same answer twice. A rough sketch of both the long-string and duplicate-question checks follows this list.
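For the last two checks, here is a rough sketch of what long-string analysis and a duplicate-question comparison might look like once your responses are exported. The data layout, the run-length threshold of 8, and the one-point tolerance are assumptions to tune for your own study (see Dupuis et al., 2018 for the rationale behind these indices).

```typescript
// Rough sketch of two post-hoc checks on exported Likert-style responses.
// The threshold of 8 identical answers in a row and the one-point tolerance
// on duplicate items are assumptions to adapt to your own study.

// Long-string analysis: length of the longest run of identical answers.
function longestIdenticalRun(responses: number[]): number {
  let longest = 0;
  let current = 0;
  let previous: number | undefined;
  for (const value of responses) {
    current = value === previous ? current + 1 : 1;
    longest = Math.max(longest, current);
    previous = value;
  }
  return longest;
}

// Duplicate-question check: do two phrasings of the same item agree?
function duplicatesAgree(firstAsking: number, secondAsking: number): boolean {
  return Math.abs(firstAsking - secondAsking) <= 1; // allow a little noise
}

// Example participant: ten identical answers in a row, so the submission
// is worth a closer look before approving.
const answers = [4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5];
const suspicious =
  longestIdenticalRun(answers) >= 8 || !duplicatesAgree(answers[2], answers[11]);
console.log(suspicious); // true
```

As with the other checks, treat a positive flag as a prompt to inspect the submission, not as automatic proof of a bot.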
How should you approach rewarding participants? Part 1
One of the most important factors in determining data quality is the study's reward. On Prolific, it's vital that trust goes both ways, and properly rewarding participants for their time is a large part of that. So, we enforce a minimum hourly reward of 5.00 GBP.
But depending on the effort required by your study, this may not be sufficient to foster high levels of engagement and provide good data quality. Consider the following (the sketch after this list shows the basic arithmetic):
The participant reimbursement guidelines of your institution. Some universities have set a minimum and maximum hourly rate (to avoid undue coercion). You might also consider the national minimum wage as a guideline (in the UK, this is currently £8.91 for adults).
The amount of effort required to take part in your study: is it a simple online study, or do participants need to make a video recording or complete a particularly arduous task? If your study is effortful, consider paying more.
How niche your population is: if you're searching for particularly unusual participants (or participants in well-paid jobs), then you will find it easier to recruit these participants if you are paying well for their time.
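As a quick illustration of the arithmetic, this small sketch converts an hourly rate and an estimated completion time into a per-submission reward, never going below the £5.00/hour minimum mentioned above. The £9.00/hour rate and 20-minute estimate in the example are made-up numbers.

```typescript
// Simple arithmetic sketch: per-submission reward from an hourly rate.
// The example rate and completion time are illustrative numbers only.

const MINIMUM_HOURLY_REWARD_GBP = 5.0; // Prolific's minimum, per the tip above

function rewardPerSubmission(hourlyRateGbp: number, estimatedMinutes: number): number {
  const rate = Math.max(hourlyRateGbp, MINIMUM_HOURLY_REWARD_GBP);
  // Round to the nearest penny.
  return Math.round(((rate * estimatedMinutes) / 60) * 100) / 100;
}

// A 20-minute study paid at £9.00/hour works out at £3.00 per submission.
console.log(rewardPerSubmission(9.0, 20)); // 3
```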
Next week, we're going to talk about why paying more isn't always a good idea!
How should you approach rewarding participants? Part 2
This week we're discussing when it might not be a good idea to pay participants more.
Consider that:
Studies with particularly high rewards may bias your sample, as participants may feel 'forced' to choose that study when they might have gone to others. This may particularly apply to participants with a low socio-economic status.
Bonus payments contingent on performance may make participants nervous about being paid, and lead to cheating.
That's all folks! If you've got any tips, post them below.
Minimising dropout / attrition rate in longitudinal studies
Dropout rates depend on a lot of factors (as I'm sure you're aware); to name a few:
How far apart are the different parts of the study?
How generous are the rewards?
How long do the different parts take?
Whatâs the average dropout rate on Prolific?
We've seen studies with dropout rates as low as 0% and as high as 50%; a typical study falls somewhere between these extremes. An independent study by Kothe and Ling found attrition of <25% over 1 year. Shorter longitudinal studies following best practices can expect better retention than this.
How can I minimize dropout rates?
You should clearly communicate your study information (i.e. expectations of participants, reward structure, time gap between the phases of your study).
You could also screen for those with at least 10 previous submissions, to ensure you are obtaining active and committed participants. Inexperienced participants are more likely to drop out.
Paying a generous reward and offering bonus incentives to participants who complete all parts of your longitudinal study is also a good way to minimise attrition. You can read about how to do this here: Bonus Payments
Got any other tips related to this? Let me know below!
To balance your sample across demographic groups, create as many studies as you need, of the same size, that point to the same study URL.
50% Split by Sex Example
First, create your basic study and then 'duplicate' it via the actions menu.
On one of these studies, add a male-only prescreener; on the other, a female-only one.
You can link these two Prolific studies to the same survey URL and use the same completion URL to return participants to Prolific. This means participants will be part of the same dataset.
You can do the above for any demographic where you want a split.
Got any other tips related to this? Let me know below!
Interesting tip⌠but doesnât this strategy exclude some groups per default? What about third gender? How can we try to account for how many these will be?