I’ve read that Prolific allows rejections for submissions that are 3 SD below the computed average completion time for a study. Are there any recommendations for timing constraints at a question level?
Our proposed study will have 3 primary “tasks”. If a participant completes any one of these tasks 3 SD below the computed average for that task, is that also grounds for rejection? Could this be treated like an attention check? With this metric we could also catch someone who completes the study quickly but waits a while before actually submitting.
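For concreteness, the per-task screening described above can be sketched as a simple threshold check. This is a hypothetical illustration, not Prolific's actual computation; the times and the `flag_fast_completions` helper are made up for the example. Note one practical caveat this makes visible: a very fast outlier also inflates the SD, so with small samples the mean − 3 SD threshold can end up flagging nothing.

```python
import statistics

def flag_fast_completions(times_sec, k=3.0):
    """Flag completion times more than k SDs below the mean.

    times_sec: per-task completion times in seconds (hypothetical data).
    Returns (threshold, list_of_flagged_times).
    """
    mean = statistics.mean(times_sec)
    sd = statistics.stdev(times_sec)  # sample standard deviation
    threshold = mean - k * sd
    flagged = [t for t in times_sec if t < threshold]
    return threshold, flagged

# Example: 20 participants near 300 s on a task, one at 100 s.
task_times = [300] * 20 + [100]
threshold, flagged = flag_fast_completions(task_times)
print(round(threshold, 1), flagged)  # the 100 s completion falls below the cutoff
```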
I’m wondering if other researchers have had experience with using a timer to set a minimum for when the submit button is available. Do participants get frustrated being forced to stay on a page? Does this actually help the potential issue or simply hide the problem?
I don’t know of any recommendations for timing at a question level, though as a participant I have seen a lot of surveys that don’t show the next button until a certain amount of time has elapsed. I did not myself feel frustrated, but I am pretty slow, or rather I like to type a lot.
I am pretty sure that you cannot use question level response times as
grounds for rejection or as an attention check.
Here are some participants’ thoughts from the participant Reddit, showing that they are aware that being too fast on a question is not grounds for rejection (and one participant wishing that researchers would use timers):
I have seen data here that showed that some participants can respond,
with high test retest consistency, extremely quickly.
Does this (forcing participants to stay on a page) actually help the potential issue or simply hide the problem?
I’d be interested to know.
Thank you again for your feedback, Tim!
It’s great to hear a participant’s perspective!