But they do not provide acceptable tools to measure this. I’ve seen the recommendation to ask the same screener question again at the beginning of the survey, but the literature describes much better ways to detect deception. Does anyone have any tips or insights on how to detect ‘lying’ in ways that stay within Prolific’s regulations?
I don’t have much to offer, but rather than leave you with forum silence, please allow my two cents.
The only truth that we know about the participants is what they provide in their “About you” (Prescreener) information. To test conformance with this, Prolific stipulates that we re-ask the question using exactly the same wording, as you say, at the beginning of the survey.
I don’t know of other ways to detect lying in the literature. I am not sure why participants would want to lie, other than perhaps to misrepresent themselves as being multiple people, a deception that is dealt with by using IP address information. I wonder what kind of lying you have in mind.
Thank you again for your response!! Yes, I am primarily talking about the “About you” (Prescreener) info and any other demographic/behavior questions. When our research questions rest heavily on the accuracy of these questions, it feels like we should be doing more to ensure their validity. I frequently see this done with control/knowledge questions (which unfortunately are not allowed for screening on Qualtrics), and also with reverse-scored questions that confirm a previously answered behavior question. There are a number of reasons why people might misrepresent themselves.
To name a few:
to gain access to a study,
to present themselves in a socially desirable way.
While Prolific tends to be lenient on participants (from a researcher’s point of view; if you look at the participants’ Reddit you’ll see they don’t agree), you can use reverse-scored questions, knowledge questions, and control questions (not sure what the latter are) as a basis for not using data that you will still have to pay for.
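For what it’s worth, here is a minimal post-hoc sketch of the reverse-scored check, assuming a hypothetical CSV of responses with a 1–5 Likert item (`item_pos`) and its reverse-worded twin (`item_rev`); the file and column names are illustrative, not anything Prolific or Qualtrics provides.

```python
import pandas as pd

# Hypothetical file/column names: "item_pos" is a 1-5 Likert item and
# "item_rev" is its reverse-worded twin asked elsewhere in the survey.
df = pd.read_csv("responses.csv")

# Recode the reversed item back onto the original scale (1 <-> 5).
df["item_rev_recoded"] = 6 - df["item_rev"]

# Flag respondents whose two answers disagree by more than one scale point.
df["inconsistent"] = (df["item_pos"] - df["item_rev_recoded"]).abs() > 1

flagged = df[df["inconsistent"]]
print(f"{len(flagged)} of {len(df)} respondents look inconsistent")
```

A flag like this won’t prove lying, but it gives you a defensible, pre-registered rule for excluding data you still have to pay for.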
If you find, in a trial low-volume study, that there is a lot of lying / subterfuge, you can try reporting it here and to support to see if you can get a change in, or exception to, policy. I’d be interested to know. I’d also like to see what questions you include.
Other than that, I guess we just have to accept such wastage (assuming you find it exists) as a drawback of Prolific, and compare with Turk or a more expensive market survey service.
If you’re talking about pre-screener deception, there’s not much you can do except ask the questions again with the same wording. We have it set up so that if their answer doesn’t match the prescreener, the survey ends and tells them to return their link.
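If you’d rather catch mismatches after the fact than screen people out mid-survey, something like the sketch below also works. It assumes hypothetical file and column names, with Prolific’s demographic export joined to your survey data on the participant ID that Prolific appends to the study URL.

```python
import pandas as pd

# Hypothetical exports: Prolific's demographic download and your survey
# data, joined on the participant ID Prolific appends to the study URL.
prescreen = pd.read_csv("prolific_export.csv")  # assumed: "Participant id", "Sex"
survey = pd.read_csv("survey_responses.csv")    # assumed: "PROLIFIC_PID", "sex_reasked"

merged = survey.merge(
    prescreen, left_on="PROLIFIC_PID", right_on="Participant id", how="left"
)

# Flag anyone whose re-asked answer contradicts their prescreener answer.
mismatch = merged["sex_reasked"].str.lower() != merged["Sex"].str.lower()
print(merged.loc[mismatch, "PROLIFIC_PID"].tolist())
```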
If you’re talking about lying otherwise, that’s just any social study. Reverse coding can be useful for catching people who aren’t really paying attention, but it isn’t specifically about lying. If you’re talking about straight-up deception, there’s really nothing you can “do” in most cases unless you plan on supplementing the study with other methods that are harder to lie in, like qualitative interviewing.
If you mean just biased answers, it would depend on the nature of the bias. For example, social desirability bias can appear in subtle ways, especially when surveying sensitive topics, but some researchers attempt to account for it by adding measures like the Marlowe-Crowne scale and then adjusting for the score in their statistical analysis.
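As a rough illustration of that last point, here is a minimal sketch of entering a Marlowe-Crowne total as a covariate, assuming a hypothetical dataset with columns `outcome`, `condition`, and `mc_score` (none of these names come from the thread):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: "outcome" is the sensitive self-report measure,
# "condition" codes the manipulation, and "mc_score" is a participant's
# summed Marlowe-Crowne social desirability score.
df = pd.read_csv("analysis_data.csv")

# Enter social desirability as a covariate so the condition effect is
# estimated while holding desirable-responding tendency constant.
model = smf.ols("outcome ~ condition + mc_score", data=df).fit()
print(model.summary())
```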
Which is all to say that your method is going to depend on the nature of your study.