Hi all, does anybody know whether it's possible to run multiple simultaneous studies on Prolific while preventing the same participant from joining multiple studies? For instance, I want to run 5 experiments at the same time while making sure a given participant can only join one of those 5 experiments, not the other 4.
I know that Prolific allows excluding participants from previous studies. But I guess that only works once participants have submitted their data. I'm not sure whether it would also exclude a participant with an active HIT who hasn't completed their submission yet.
Any help and ideas would be much appreciated!
The two ways to prevent previous participants from joining new studies (How can I prevent specific participants from accessing my study? – Prolific) do indeed work if the studies are not run simultaneously. The screener “Exclude participants from previous study” works with completed studies. With the “Custom Blocklist”, you manually copy the list of Prolific IDs, ideally once your study 1 has finished recruiting participants (the study doesn't need to be fully completed, i.e., your submissions' review process can still be ongoing), so that in study 2 you exclude the whole sample from study 1 and not just part of it.
When studies are run simultaneously, the only way I've found to prevent participants from joining both is to select mutually exclusive samples: for example, females in one and males in the other, or experienced participants (i.e., a high number of previous submissions on Prolific) in one and inexperienced participants (i.e., a low number) in the other. The latter is tricky because a participant can gain experience by doing one of your studies and thus become eligible for your other study; but if you set the lower bound of one screener very far from the upper bound of the other, you can probably avoid this problem. You can also play with nationalities if you plan to recruit multiple nationalities, etc.
Either way, you then have to run twice the number of versions of your study (say you have study 1 and study 2, for simplicity), so in this example, 4 studies, because I guess you want to avoid selection bias (e.g., study 1 taken only by females and study 2 only by males). So while you can run study 1 (females) together with study 2 (males), you'll need to launch study 1 (males) and study 2 (females) in a second “round”, using the “Exclude participants from previous study” filter to exclude participants from the first round.
Sorry for the long reply. I hope it’s not too confusing.
hi @Shima_RM (& @Veronica ),
would it be okay if I do a TLDR?
– “nope, not really”.
but there are other options. If you use Qualtrics, for example, you can randomly assign participants to different survey questions.
Tim also suggested something called Allocate Monster the other day (here's the link to the post), and if you use Pavlovia, Wakefield Morys-Carter has a tool for that here.
Haha! Yep, indeed in my reply I didn't investigate the purpose of such a choice, taking for granted that it had to be like that (i.e., that Shima needs different studies run simultaneously). But I think Paul's suggestion makes an excellent point! If the purpose is having different conditions/treatments/versions of the same study, it is definitely worth considering some sort of random allocation within the tool you use to build the survey. If I may add to what Paul suggested, oTree also has this feature (if you code it).
Good point @paul
Thanks to you both for your help @Veronica @paul
Yeah, it seems there’s not an official way in Prolific that would allow us to run simultaneous studies while allowing a participant to join only one of the studies. Below is the solution that I found would address the issue from a different approach:
First, I should mention the purpose of the simultaneous studies: we want to compare behavior across these studies, and we want to ensure that time (either the weekday or the time of day) won't be a confound in our results. So another approach that solves this is to combine all these studies into one (gigantic) study and then randomly assign each participant to a sub-study. This way, the data we collect is distributed over the same period, so it's kind of like running simultaneous studies. It's not ideal, but it's practical, I guess.
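If you end up coding the allocation yourself (e.g., in oTree or a custom server), the assignment step itself is small. Here's a minimal sketch in Python of block randomization, which keeps the sub-study group sizes balanced as participants arrive; the sub-study names and the `make_allocator` helper are illustrative, not part of any platform's API:

```python
import random

def make_allocator(conditions, seed=None):
    """Return a function that assigns each new participant to a
    condition using block randomization: conditions are dealt out
    in shuffled blocks, so group sizes stay balanced."""
    rng = random.Random(seed)
    block = []  # current shuffled block of conditions

    def assign():
        nonlocal block
        if not block:
            # refill with one copy of each condition, in random order
            block = list(conditions)
            rng.shuffle(block)
        return block.pop()

    return assign

# Example: 5 sub-studies, 10 participants
assign = make_allocator(["study_%d" % i for i in range(1, 6)], seed=42)
allocations = [assign() for _ in range(10)]
# After 10 participants, each of the 5 sub-studies has exactly 2.
```

Because each block contains every condition exactly once, the groups can never drift more than one participant apart, which matters if recruitment stops early.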
This would be easy if you’re using Gorilla as your experiment design software.
All Prolific participants would then go into one study, and Gorilla would randomize them to conditions. In each condition, you’d have one of your questionnaires. The questionnaires can either be created in Gorilla or in another service. If using another service, you’d use the redirect node.
For more about our experiment design tools check out: