Attention checks in research - Your input needed!

Hey everyone :wave:

I’m currently looking to get fully up to date with the latest research on the use of attention checks, and to hear researchers’ own thoughts about best practices and potential issues.

I would really appreciate either (a) any suggestions for resources/papers etc. to check out, or (b) hearing your opinions, or (c) both!


:exclamation: Attention Checks :exclamation:

Have you got an opinion on attention checks in general?

Have any thoughts on Prolific’s attention check policies?

Read any literature on the subject that we might find useful?

:arrow_right: Andrew would love to hear from you :arrow_left:



I participate as a researcher and have also taken part as a participant. From a researcher’s perspective, I can see the value in attention checks (and indeed I have excluded participants in the past because they failed them). However, I have found other means, especially looking for outliers, to spot those who are not fully engaged with the study.

From the perspective of a participant, I find them insulting to my intelligence, and I always give feedback to the researchers saying so. From a psychological point of view, it’s important for the participant to be in a receptive and cooperative mood, so that they have the motivation to put maximum effort into the study; after all, they are giving up their time to help the researcher with their research. Whenever I am confronted by attention checks, especially where they are scattered liberally across a study, it puts me in a negative mood. I get cross that my honesty and integrity are being doubted.

As a researcher, there are statistical ways to spot someone who is not paying attention (looking for patterns, outliers, etc.), so I, for one, won’t be using attention checks any more. I would rather make my survey so interesting that participants find it a rewarding experience.
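For anyone curious what that kind of statistical screening can look like in practice, here is a minimal sketch in Python/NumPy. The data, the 0.5 variance cutoff, and the 90-second speed cutoff are all made up for illustration; in a real study you would pick thresholds from pilot data and pre-register them.

```python
import numpy as np

# Hypothetical Likert responses (rows = participants, columns = items)
# and per-participant completion times in seconds. All values invented.
responses = np.array([
    [4, 5, 3, 4, 5, 4, 3, 4],   # engaged participant
    [3, 3, 3, 3, 3, 3, 3, 3],   # straightliner: zero variance
    [5, 4, 4, 5, 3, 4, 5, 4],   # engaged participant
    [1, 5, 1, 5, 1, 5, 1, 5],   # patterned responding, but plausible variance
])
times = np.array([310.0, 45.0, 280.0, 60.0])

# Index 1: straightlining -- near-zero variance across items.
straightline = responses.std(axis=1) < 0.5

# Index 2: implausibly fast completion -- below an (arbitrary) cutoff.
too_fast = times < 90.0

# Combine indices and, in the spirit of giving the benefit of the doubt,
# flag only participants who trip more than one check.
flags = straightline.astype(int) + too_fast.astype(int)
suspect = np.where(flags >= 2)[0]
print(suspect)  # row indices of participants flagged on multiple criteria
```

Requiring agreement between multiple indices, rather than excluding on any single one, is one way to handle the borderline cases discussed below (the honest participant who was briefly interrupted will usually trip at most one flag).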


Hi @CoralMilburn, thanks for sharing your thoughts!

I would be interested to know whether you think there is a qualitative difference between the use of attention checks in online versus in-person research. My intuition is that statistical methods for analysing attention may be better suited to experiments in controlled settings (i.e., those conducted in the lab), whereas the disconnect between researcher and participant in online research drives the need for more ‘classic’ attention check methods.


I have used statistical methods in my research, but they are just as fraught with ambiguity as attention checks, in my opinion. How do you tell the difference between someone who was honestly trying but made a few mistakes, or got interrupted by a child, and a bad actor? There will always be borderline cases. I would recommend including the kinds of checks that make the most sense for your experiment, having multiple indices of performance/attention, and giving participants the benefit of the doubt until it is clear that they were not doing the experiment as intended.