Scheduling time for participants to join an online session

Hi, thanks for making this forum! I am planning a group experiment in which I will run sessions of 20-25 participants at the same time for 1 hour. I would like to collect the availability of eligible participants and then schedule a time for everyone to join the online session.

How would you recommend setting this up? I was thinking I could post the study (with the full payment amount), direct participants to a booking form, and then reach out once I find a time that works for enough people. Participants who can’t make the chosen time could be asked to return the study, with a smaller compensation for the time the booking form takes. My concern is that people will simply submit the study as complete after the booking form; would I be justified in rejecting those submissions? Or would it be better to set up a “screener” study and then a second main study using a whitelist?

Thanks for your help!


Hi Clint, great to have you aboard!

Here’s the key info you need:

  • If you’re planning to run interviews, use a scheduling tool. This works best for researchers running multiple interviews, as each participant can sign up for the session time and date that suits them.

  • If you’re running a single group experiment, it’s best to state a time and date for it and ask participants to confirm that they are free then. Ideally this should appear in the study description and again at the very beginning of the study itself. Participants who aren’t free can be redirected and asked to return the study, while participants who are free can continue.

To get a full breakdown, we have two Help Centre articles that go through the process for this here and here.

Let me know if I can help further! :slight_smile:


Great, thanks Josh!


Thanks, Josh, very helpful.

To follow up on Clint’s question, focusing on the fact that I want to recruit for and run multiple independent sessions of a group experiment: suppose I want to fill four sessions, each lasting 15 minutes and with 20 participants, starting at 9:00, 9:30, 10:00, and 10:30, respectively.

The help page on screening participants says that if I want to screen participants who are available for a particular session, I could use an initial screening study which contains the question: “Are you available at this time?” and then use those responses to create a custom allowlist for the main study.

Is there a sensible way for me to do this without running a separate recruiting study for each session? I don’t want to do that because I want to recruit for the sessions simultaneously and I want to exclude people from participating in more than one session. A few possibilities I’ve considered:

  1. Present the options in the recruiting survey and allow participants to simply select one session to attend. Pro: participants can determine exactly which session to attend. Con: I wouldn’t have control over session sizes, and I want to keep them at exactly 20.

  2. Present the options in a doodle poll and ask participants to indicate all sessions which they are willing and able to attend. Then I use this data to place participants in sessions and screen them using multiple custom allowlists. Pro: I can control the size of the sessions and make sure they are uniform. If there are more than enough participants I can simply not invite the extras to the main sessions. Con: I would have to re-contact participants to inform them to which session they have been assigned and direct them to the appropriate Prolific study corresponding to their session. This opens up room for confusion and miscommunication about which session the participant will attend.

  3. Use a doodle poll and place participants in sessions as in 2), but instead of directing participants to a new Prolific study, send them a link to my actual study instrument, i.e., the experiment session. Participants who don’t get placed in a session will still complete, and get paid for, the recruiting study, and I can use bonus payments for the participants who actually complete an experimental session. I’m not sure about the pros and cons of this approach relative to the other two, but being able to do everything with a single Prolific study seems like a pro. (I sketch the assignment step below.)
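
To make 2) and 3) concrete, here is a minimal Python sketch of the assignment step, assuming the poll results have been exported as a mapping from Prolific ID to the sessions each person can attend; the session labels and the greedy most-constrained-first heuristic are just illustrative, not the only way to do it:

```python
# Assign participants to sessions of exactly CAPACITY from their stated
# availabilities. Participants with the fewest available sessions are
# placed first; ties go to the emptiest session. All data is invented.
CAPACITY = 20
SESSIONS = ["9:00", "9:30", "10:00", "10:30"]

def assign_sessions(availability: dict[str, list[str]]) -> dict[str, list[str]]:
    """availability maps a Prolific ID to the sessions that person can attend."""
    rosters: dict[str, list[str]] = {s: [] for s in SESSIONS}
    for pid in sorted(availability, key=lambda p: len(availability[p])):
        # Try the person's available sessions, emptiest roster first.
        for session in sorted(availability[pid], key=lambda s: len(rosters[s])):
            if len(rosters[session]) < CAPACITY:
                rosters[session].append(pid)  # each ID is placed at most once
                break
        # Anyone left unplaced is not invited to a main session but still
        # completes, and is paid for, the recruiting study.
    return rosters
```

Each roster could then become a custom allowlist (option 2) or the list of people to send a session link to (option 3).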

Hey @zgrossman, welcome to the community! We’re glad you’re here :tada:

I think I may have found a solution for you. Calendly is scheduling software that allows you to schedule events with groups of people and set maximum group sizes for individual slots.

It should have everything you’re looking for! Let me know if it doesn’t, and I’ll help you find another solution :slight_smile:

I am not sure about Calendly (it looks cool), but the site that @zgrossman kindly mentions, Doodle, is cool too and allows one to schedule events with a maximum size. One of the Doodle free-plan options is:
“Limit the number of votes per option. First come, first served. Once the spots are filled, the option is no longer available.”

The Prolific help pages on Longitudinal Multi-Part Studies that @Josh linked to above suggest bonuses as one way of carrying out a multi-part study, so if it were me I would go with option 3 above.


Hi Lok

Welcome to the community.

In addition to, or as a way of achieving, Chloe’s implementation, Josh (a Prolific employee) recommends the use of scheduling software/sites such as Calendly.
Doodle is another that I can recommend, and it is free.

If you do decide to use scheduling, rather than hoping that enough participants happen to be free at the same time, the help page on Longitudinal Multi-Part Studies may also be of relevance. One of the approaches recommended there is to pay normally for the first part (e.g. the scheduling part) and then pay a bonus to those who show up for a subsequent (experiment) part.
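
If you go the bonus route, the payment step is easy to script. Here is a minimal sketch, assuming the bulk bonus box accepts one “participant ID, amount” pair per line (worth double-checking against the help page) and using invented IDs:

```python
# Build a bulk-bonus list for the participants who attended a session.
# Assumed format: one "PARTICIPANT_ID, amount" pair per line; verify the
# exact format on Prolific's help pages. The IDs below are invented.
attendees = ["PID_AAAA", "PID_BBBB", "PID_CCCC"]  # hypothetical Prolific IDs
BONUS = 8.00  # e.g. the payment for the experiment part

bulk_bonus = "\n".join(f"{pid}, {BONUS:.2f}" for pid in attendees)
print(bulk_bonus)  # paste the output into the bulk bonus payments box
```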

Tim

Hi Folks at and using Prolific

I would like to ask some questions this time.

  1. By setting prescreening questions on a demo survey and looking at the number of active participants in a certain demographic who match that prescreener, one can effectively perform surveys for free! E.g. one can compare students in the UK vs. students in the US on the amount of time they spend on the internet. How does Prolific feel about this use of the GUI?!

  2. Is there now, and has there always been, a vetting of new studies prior to their release into the wild?

(The study I have just created is getting no responses. Perhaps that is because it is now 04:34 am in the UK and my survey has yet to be approved. I did not notice this lag in the past. The study I mentioned has started now. Great!)

  3. The guide on how to use Google Forms suggests creating a demographics section which re-checks the prescreening questions. This is a very helpful idea, since it also provides a sort of instantaneous attention check before participants even enter the survey proper. The guide says to “validate your screening questionS by asking them again in your survey”, with “questionS” in the plural, but the way Google Forms works means, I think, that only one question in a section can branch; there is no OR logic, and the branching is done on the last question in the section. This means that if one wants to check 4 prescreening questions, one will need 4 sections. Is my understanding correct, and is it okay to have 4 brief sections (e.g. nationality, age, race, and student status)? (I sketch the flow I have in mind below.)
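
To make the branching concrete, here is a hypothetical Python sketch of the logic I think those four sections would implement, one check per section; the question keys and expected answers are invented placeholders, not anything Google Forms or Prolific provides:

```python
# Hypothetical sketch of the screening flow the four Google Forms sections
# would implement: one branching check per section, each routing a mismatch
# to an early-exit section. Keys and expected answers are invented.
EXPECTED = {
    "nationality": "United Kingdom",
    "age_group": "18-24",
    "race": "White",
    "student_status": "Yes",
}

def next_destination(answers: dict[str, str]) -> str:
    # Sections are visited in order; the first mismatched answer "branches"
    # to the exit section (no OR logic within a section). Only if every
    # check passes does the participant reach the survey proper.
    for question, expected in EXPECTED.items():
        if answers.get(question) != expected:
            return "exit section (please return the study)"
    return "survey proper"
```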

Tim

Hi @timtak, I hope other people also share their thoughts on these interesting questions.

  1. IMO I don’t think I would trust this method; there is too much error, if only for the reason that makes #3 important: those screener questions are not always accurate (though if you were studying Prolific itself, I could see how it might be interesting).
  2. No idea, but studies seem to be released very quickly.
  3. What would the branching be used for? If it would be used to screen out participants, I’d think that would be less than ideal, since we’re not supposed to screen out participants once they start (I assumed this holds even if we discover that they don’t meet the specified screening criteria). As for the sections in general, I’d say it’s absolutely okay to break up the demographic questions however you please.

I’m not sure if I understand what you mean. Could you provide another example? :slight_smile:

This is a pretty frivolous question, so please don’t give it much thought, but for simple nation-to-nation comparisons there is quite a lot of data in the prescreening questions. For example, setting prescreeners and looking at the number of participants gives:
|  | Uni-age students | Dog owners | Cat owners |
| --- | --- | --- | --- |
| US | 16078 | 8550 | 5110 |
| UK | 3621 | 1018 | 729 |

Which, expressed as percentages in a graph, is:

[chart: dog and cat ownership rates among uni-age students, US vs. UK]
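
For the record, the percentages behind the chart can be recomputed directly from the counts above:

```python
# Recompute the ownership rates from the prescreener counts in the table.
counts = {
    "US": {"students": 16078, "dog": 8550, "cat": 5110},
    "UK": {"students": 3621, "dog": 1018, "cat": 729},
}
for country, c in counts.items():
    print(f"{country}: dogs {100 * c['dog'] / c['students']:.1f}%, "
          f"cats {100 * c['cat'] / c['students']:.1f}%")
# US: dogs 53.2%, cats 31.8%
# UK: dogs 28.1%, cats 20.1%
```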

When there are more Japanese participants, this sort of simple country-to-country comparison would be interesting enough to use in lectures.


I see! This is an interesting use of our platform that we hadn’t considered before. It’s a by-product of us offering free prescreening. However, I would concur with @paul: I’m not sure that you could do anything beyond basic demographic/population screening. That said, I would be interested to see how complex one could get without actually running a study :thinking: