Survey Timing and Instruction Quality

I recently used Prolific for the first time to conduct a survey and have to say I’ve been very impressed! However, I also signed up as a participant to see the experience from the other end, and I have found a couple of issues that I would love to hear some thoughts on.

I’ve only taken 6 surveys so far as a participant, so this is by no means a representative sample. However, as a researcher I was struck by two things:

  1. Survey timing
    I was not able to complete any of the surveys within the time indicated by the researcher (and I was trying). Maybe I’m a very slow survey taker, or I put in too much effort, but across all 6 surveys it took me anywhere from 30–100% more time than indicated. Combined with fairly low rewards, that is not much motivation to keep putting in real effort.

  2. Less than ideal instructions
    This is connected to issue 1), since unclear instructions mean it takes more time as a participant to figure out what you should be doing. This was not an issue across all surveys, but about half of the surveys so far had at least some instructions that were unclear, overly verbose or repetitive. For example, one survey outright threatened, in big bold red letters above each question, to withhold the reward if I got the questions wrong. This was also a survey that contained multiple attention checks, so I had to read those big bold red letters every time just to ensure that there wasn’t another attention check hidden within that warning.

As a researcher, I know that it is never possible to write 100% perfect instructions, and that estimating how long someone will take to complete a survey is tricky. Nevertheless, I take great pains to err on the side of overestimating how long people will take on average, and to write the best instructions possible, including testing and iterating on them multiple times before launching a survey.

All this to say that I was surprised by my experience so far as a participant, and by the seeming lack of regard for participants’ time and experience that some surveys exhibit. A major concern arising out of these issues is that, with too many such surveys, we discourage exactly the people whose participation is required for a truly representative survey panel.

Would love to hear some other thoughts and experiences. Maybe I just got unlucky with the surveys I took?

Hi Ben, welcome to the Community! :boom:

Both points you’ve raised are very interesting. Let me give you my opinion on both!

On your first point, as a researcher I also tend to run tests before launching a study in order to get an idea of the estimated completion time, and I think it is good practice to do so (for example with colleagues or friends). As a participant, I have experienced both types of studies: those underestimating and those fairly estimating the completion time. In any case, it really depends on how reflective or impulsive a person is, and an estimated average time will never capture this variance! Also related to timing is, as you mentioned, the issue of “fair rewards”. The good news here is that the Trust Team is working to eliminate underpaying studies.

As regards the instructions, as a researcher I do my best to be as clear as possible. If what I get in response depends on each participant’s comprehension of what I write, how valuable could my findings ever be? Following this reasoning, being concise is also a plus in my view, and you’re right (you don’t want some people to read 100% and others only 20% of what you write!). My suggestion is to just keep doing this in your research: it will make your findings more reliable. Of course, if you as a participant experience anything unclear that you want to point out, you can always reach out to the researcher via the messaging system (and as a researcher, expect to get messages from participants if they don’t understand something!). The messaging system works really well now!

Hope it helps. :slight_smile: