Hi Tim and P.P
Thank you for your insights and questions. Let me elaborate: the survey was meant to be a fast pre-study, to scope out whether there are enough individuals with relevant experiences on Prolific to warrant a full study. We used a lot of pre-screeners, which limited our participant pool to about 3,000, and we designed the survey to be quick as well.
To answer your questions:
a) timtak, you did not take our survey; in fact, we have had no messages from participants at all!
b) We didn’t use attendance checks. With only 10 questions they would stick out like sore thumbs, and in my experience from previous studies no one falls for them, especially not the random clickers or straight-liners. They make sense in longer surveys, but in my opinion not for something that takes under 2 minutes.
c) What are nonsense attendance checks?
d) We did allow mobile devices. Qualtrics renders the matrix questions in a different style on mobile, which I think will actually slow participants down, but I haven’t done that analysis yet.
e) Here are my statistics for the fastest bunch:
total time for all 9 items under 25 seconds: 16 participants (7.1%)
at least two responses in under 1 second each: 10 (4.4%)
average response time under 2.5 seconds: 23 (10.2%)
median response time under 2 seconds: 16 (7.1%)
Of these, 1 person falls into only one of the four categories, 10 into two, 8 into three, and 5 into all four, for 24 flagged participants in total.
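For anyone curious how I derive these flags, here is a minimal sketch. It assumes you have already extracted per-item response times into a table with one row per participant and one column per item; the file and column names are made up for illustration, not our actual export.

```python
import pandas as pd

# Hypothetical input: one row per participant, one column per item, values are
# per-item response times in seconds. File and column names are placeholders.
rt = pd.read_csv("response_times.csv", index_col="participant_id")
items = [c for c in rt.columns if c.startswith("item_")]

flags = pd.DataFrame(index=rt.index)
flags["total_under_25s"] = rt[items].sum(axis=1) < 25            # all 9 items in under 25 s
flags["two_items_under_1s"] = (rt[items] < 1).sum(axis=1) >= 2   # >= 2 items answered in < 1 s
flags["mean_under_2_5s"] = rt[items].mean(axis=1) < 2.5          # average per-item time < 2.5 s
flags["median_under_2s"] = rt[items].median(axis=1) < 2          # median per-item time < 2 s

flags["n_criteria"] = flags.sum(axis=1)   # how many of the four criteria each participant meets
print(flags["n_criteria"].value_counts().sort_index())
print("flagged at least once:", (flags["n_criteria"] > 0).sum())
```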
1-3) The survey was on a single topic (experiences around data protection), but with fairly diverse points, asking participants to consider experiences from the last 4 years. From our pre-Prolific tests we found that participants had no problem comprehending the questions, but it took them some time to recall past experiences. Other questions were about benefits and drawbacks. In the follow-up study we would ask participants to write those down, but here we just wanted to find out whether Prolific participants have sufficient experience with data protection to give us useful answers.
-
There are some items where we would expect responses to land on opposite sides of the spectrum, but we are not using constructs. Those questions were distributed differently, but I haven’t analysed deviation from the mean answer by response time yet. That would be interesting: perhaps cluster the answers first and then see how response times correlate with distance from the clusters, on the assumption that participants who answer randomly don’t follow the trend of other participants.
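A rough sketch of that cluster idea, with placeholder data standing in for our responses; the choice of k-means and of three clusters is an arbitrary assumption for illustration, not something we have settled on.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.cluster import KMeans

# Placeholders: `answers` holds Likert responses (participants x items),
# `total_time` the total response time in seconds per participant.
rng = np.random.default_rng(0)
answers = rng.integers(1, 6, size=(225, 9)).astype(float)
total_time = rng.gamma(shape=3.0, scale=10.0, size=225)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(answers)
dist_to_cluster = km.transform(answers).min(axis=1)   # distance to the nearest cluster centre

# If random clickers sit far from every cluster and also answer quickly,
# distance should correlate negatively with response time.
rho, p = spearmanr(dist_to_cluster, total_time)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```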
-
Online yes; on mobile, no. We are using Qualtrics with a matrix question.
-
This is extremely unlikely. The questions are topic-specific, and that topic has not received any academic attention as far as we know (and we did a reasonable literature review beforehand).
-
Test-retest is what we are thinking. I don’t think Prolific is keen on rejecting participants who fail test-retest questions, but our primary worry is data quality; if we have to throw out 25% of responses, that would still be OK. How do we best do a test-retest, though? Another survey with specifically those questions? Should we be upfront with participants that this is a retest and that everyone will be paid (but that the next stage of the study requires consistent answers), or should we be more covert?
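For the scoring side of a retest, one option (assuming we re-administer the same items and can join the two waves by Prolific ID) is a per-participant mean absolute difference between waves. The file names, column names, and the one-scale-point threshold below are illustrative, not decided.

```python
import pandas as pd

# Placeholders: both waves indexed by Prolific ID, one column per item,
# answers on the same Likert scale in both files.
wave1 = pd.read_csv("wave1.csv", index_col="prolific_id")
wave2 = pd.read_csv("wave2.csv", index_col="prolific_id")
items = [c for c in wave1.columns if c.startswith("item_")]

both = wave1.index.intersection(wave2.index)    # participants who completed both waves
diff = (wave1.loc[both, items] - wave2.loc[both, items]).abs()

consistency = pd.DataFrame({
    "mean_abs_diff": diff.mean(axis=1),   # average shift per item, in scale points
    "max_abs_diff": diff.max(axis=1),     # largest shift on any single item
})
# Illustrative (not decided) exclusion rule: more than one scale point of drift on average.
inconsistent = consistency[consistency["mean_abs_diff"] > 1.0]
print(f"{len(inconsistent)} of {len(both)} participants exceed the threshold")
```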
-
Do you mean comparing it to a paper version or a controlled lab setting, potentially with eye-tracking? That would be an interesting study indeed, but no, I have no such data currently.
In principle I agree that speed is a poor measure of attendance. Prolific’s criterion of 3 standard deviations for outliers would require participants to answer in negative time, but clearly someone answering in under 1 second (not to mention those who answer some items in under 0.5 seconds) is unlikely to have had enough time to read the question and think about it.
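To illustrate the negative-time point with toy numbers (the gamma distribution here is just a stand-in for a typically right-skewed response-time distribution, not our data):

```python
import numpy as np

# Toy right-skewed stand-in for total response times in seconds (not our data).
rng = np.random.default_rng(1)
times = rng.gamma(shape=2.0, scale=30.0, size=225)

lower = times.mean() - 3 * times.std()
print(f"mean = {times.mean():.1f}s, sd = {times.std():.1f}s, mean - 3*sd = {lower:.1f}s")

# On a skewed distribution the 3-SD lower bound is typically negative, so it never
# flags anyone as too fast; an absolute floor (or a cutoff on log-transformed times)
# would catch the sub-second responders instead.
print("under an absolute 25 s floor:", (times < 25).sum())
```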
Perhaps a future survey should automatically retest participants later on in the survey if they responded very quickly early on? 
Based on the numbers in e) above we want to retest those 24 participants (10.7%). I will report back once I have had a chance to analyse the results, but please keep the feedback coming.