I would like to assign participants randomly (or round-robin/alternating) to two different versions of the same basic Google Forms survey, making sure that no participant does both versions. Assignment to different experimental conditions seems like something a lot of people would want to do, but after some searching I’m unable to find any information on how to do this within Prolific.
I’m aware that one can do this within the survey itself if the survey is in Qualtrics, but I’m collaborating with someone using Google Forms and it would be a big extra step to change platforms.
Any suggestions would be much appreciated, thanks!
Hello @PSR, and welcome on board!
As you correctly said, randomisation can already be done via some external survey platforms like Qualtrics or oTree. However, Prolific doesn’t (yet) have survey functionality, so it cannot manage randomisation itself.
One possibility, however, is to create two different studies, one for each treatment condition you have. You have two options to exclude participants who were already engaged in the first condition from your second condition: either you use the ‘Custom Blocklist’ prescreener or the ‘Exclude participants from previous studies’ one (you can read more about them here).
Of course you should check with the appropriate statistical tests whether randomisation worked as expected (so that you have homogeneous groups by age, gender, etc.).
Hope this helps!
Hi PSR and Welcome
Veronica has covered all the bases, but one other thing: the Custom Blocklist is created automatically if you use the top-right menu and Copy Study, so those who took the first study will not be able to take its copy.
Then just paste the link to the second version of the study in the copy.
I use Google forms too and that is what I do.
Thank you very much for the reply! This fell onto the back burner for quite a while but I’m getting back to it and I really appreciate the response.
For what it’s worth, one potential issue I’ll flag with this solution is that if a survey has any time-dependent questions, waiting for the first study to complete before launching the second could create unintended differences between conditions. (E.g. if you asked about confidence in the economy, and some big piece of economic news happened to hit in between the two.) To be fair, that seems like a very small risk, especially given the likely fast turnaround time for a study, but I just thought I’d mention it in case others find their way to this question/response in the future and it’s relevant for them. Thanks again, and I do hope the randomization functionality will be added soon!
Thanks for that super-helpful additional information!
For others who find their way to this question, I’d just like to add an update with a different and very useful solution. The allocate.monster website is a randomizer that allows you to specify multiple links to different surveys, or survey versions; it will then give you a URL, and people going to that URL are redirected randomly to one of the surveys you specified.
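In case it helps to see the idea concretely, the core of such a redirect service is just a uniform random choice among the target URLs. Below is a minimal sketch in Python — the URLs are placeholders, and this is of course an illustration of the technique, not allocate.monster’s actual code:

```python
import random

# Placeholder survey-version URLs -- substitute your real Google Forms links.
SURVEY_URLS = [
    "https://forms.gle/exampleVersionA",
    "https://forms.gle/exampleVersionB",
]

def pick_survey(urls=SURVEY_URLS):
    """Choose one survey URL uniformly at random, as a redirect
    service would before sending the participant on their way."""
    return random.choice(urls)
```

A real service would then answer with an HTTP 302 redirect to the chosen URL; the uniform random choice itself is all that matters for the assignment.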
I’ve piloted random assignment to multiple survey versions, which is exactly the use case I asked about in my original query, and allocate.monster worked very nicely. I’m now (literally at this moment) collecting data that way using a nationally representative sample. @Veronica and @timtak, I recommend you check this out and consider adding it to one of the Prolific blogs/FAQs!
That is really cool!
Thank you PSR.
I will be using it in my own research and recommending it to others.
Waffling on… It really is a shame that this is not a Prolific feature. It would be nice to have an “alternate” as opposed to “random” option, since random assignment can end up with unequal numbers in each condition. I will post it to the feature request thread, where this has already been requested and heavily voted for.
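For what it’s worth, the “alternate” behaviour is easy to sketch: keep a rotation over the survey links so each new participant gets the next one in turn. A minimal Python illustration (placeholder URLs — this is not a Prolific or allocate.monster feature):

```python
import itertools

# Placeholder survey-version URLs.
SURVEY_URLS = ["https://forms.gle/exampleVersionA",
               "https://forms.gle/exampleVersionB"]

_rotation = itertools.cycle(SURVEY_URLS)

def next_survey():
    """Return the next URL in strict alternation (A, B, A, B, ...),
    so condition sizes never differ by more than one."""
    return next(_rotation)

assignments = [next_survey() for _ in range(10)]
```

Unlike pure random assignment, this guarantees balanced groups, though a real service would need server-side state so that concurrent participants each get a distinct position in the rotation.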
Very interesting website!
Thank you for the suggestion.
A follow-up now that my survey is almost complete (though it’s a representative sample and it’s taking forever to get the last 20-30 participants)… It’s worth noting that if you are randomizing, it’s important either to ensure that the different versions have the same expected time requirement, or to price your payments for the higher time expectation. This may not be an issue for some surveys, but for others it could definitely make a difference, so make sure to pilot both/all versions to get appropriate time expectations.
Good point. I will be sure to pilot all versions before putting studies on allocate.monster, and advise others to do the same.
If you do need a more powerful tool for randomisation, counterbalancing and branching, then check out the Gorilla Experiment Tree functionality. You can see some examples here:
While Gorilla also provides questionnaire, task-building, multiplayer and gamification tools, you can ALSO just use the tree to redirect participants to other external services. In the gif below, green nodes are questionnaires, blue nodes are reaction time tasks, and orange nodes are control nodes controlling the flow of participants.
Nice one! I’ll share this in our next round of highlights