I am running a few pilot studies with small sample sizes and different participant filters for each study. My approach is that I always duplicate the previous study, change a filter, and run again with 20-50 participants.
Now the problem is that I keep getting participant IDs that are clearly fraudulent, as they (a) give exactly the same wrong answers to an open question (same spelling, etc.) as other participant IDs within the study, and (b) give exactly the same wrong answers to an open question (same spelling, etc.) as other participant IDs across studies, even though I have a filter that should exclude participants from earlier studies.
This is clearly the case for around 50% of my participants, and possibly more, but for the others I cannot be sure. Is there a way to prevent this from happening? If not, is there a way to return these participants without me having to email the support team or report the IDs and then wait a few days before I can continue? (I need a study to be completed before I can run a new one, so that I can filter out IDs that have already participated in previous studies.)
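In case it helps anyone doing the same check: this is roughly how the verbatim-duplicate detection can be scripted, as a minimal sketch. The `(participant_id, answer)` pair format and the function name are just my assumptions for illustration, not Prolific's actual export format.

```python
# Sketch: flag participant IDs whose open-ended answer is an exact
# duplicate (same spelling) of another ID's answer, within or across
# study exports. Input format is an assumption; adapt to your export.
from collections import defaultdict

def flag_duplicate_answers(rows):
    """rows: iterable of (participant_id, answer) pairs.
    Returns the set of IDs that share a verbatim answer with another ID."""
    by_answer = defaultdict(set)
    for pid, answer in rows:
        # Compare verbatim except for surrounding whitespace, since
        # identical spelling is the fraud signal described above.
        by_answer[answer.strip()].add(pid)
    flagged = set()
    for ids in by_answer.values():
        if len(ids) > 1:  # same text submitted by more than one ID
            flagged.update(ids)
    return flagged

# Example with made-up data pooled from two sessions:
rows = [
    ("ID1", "the same oddly worded wrong answer"),
    ("ID2", "the same oddly worded wrong answer"),
    ("ID3", "a genuinely individual response"),
]
print(sorted(flag_duplicate_answers(rows)))  # → ['ID1', 'ID2']
```

The flagged IDs can then go straight into the reports or a blocklist for the next session.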
From your title I read "multiple responses by same ID", while going through your post I read "give the exact same […] as other participant IDs". So, is it the same ID sending identical responses, or are different IDs sending them?
If it is the first case, there may be an issue with the filter you applied. When duplicating the study from the previous one, are you selecting the "Exclude participants from previous studies" filter? And are you making sure to remove any Custom Allowlist, if present? If both are true, then the anomalous behavior could be due to a bug.
If it's the second case, i.e. multiple IDs sending the same replies, then it's likely that fraudulent participants have signed up with multiple accounts.
In both cases (whether it's a bug or fraudulent subjects), I would consider sending a request via this form; I think it should be a priority for Prolific's team to solve the issue. You can also list the suspect IDs there so that the team can take action. Also, to make your study "Complete" and hence be able to proceed to the next one, I would consider rejecting those submissions (at least the "copies" of the first submissions). I think these cases could fall under the reason "The participant did not sufficiently engage in a task where the required level of engagement was clearly specified." But that's just my opinion! Also, I know it might take a while, but consider sending a Bulk Report.
Is there a way to prevent that from happening?
For your next experiment, you could consider using the Custom Blocklist filter and adding the full list of suspect IDs there, to make sure none of them can access it.
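As a rough sketch of how you could maintain that list of suspect IDs across sessions, so there is always one up-to-date file to paste into the Custom Blocklist filter. The file name, the one-ID-per-line format, and the helper function are my assumptions for illustration, not a Prolific feature:

```python
# Sketch: merge newly flagged IDs into a cumulative blocklist file
# (one ID per line), kept across study sessions. File name and format
# are assumptions; Prolific's Custom Blocklist just needs the ID list.
from pathlib import Path

def update_blocklist(path, new_ids):
    """Merge new suspect IDs into the blocklist file and return the
    full, sorted, de-duplicated list."""
    p = Path(path)
    existing = set(p.read_text().split()) if p.exists() else set()
    merged = sorted(existing | set(new_ids))
    p.write_text("\n".join(merged) + "\n")
    return merged
```

After each session you would call it with the freshly flagged IDs, then copy the file's contents into the new study's Custom Blocklist.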
Sorry, the title was misleading (I just updated it). It is the second case: multiple IDs sending the same replies. So far, after each session I have emailed the support team with the list of IDs, and they then return those submissions. However, this always takes a few days, during which I cannot run the next session (since I can only filter out participants from completed previous studies). The Custom Blocklist could help here, though. Thanks a lot!
Rejecting does not always work, since there is a limit on the maximum number of participants you can reject, and sometimes as many as 50% of my participants are clearly fraudulent.
As for rejection limits, it is possible to increase them for valid reasons. However, it's not something you can do yourself: Prolific's team has to handle these requests (and hence the usual form has to be sent). You can read more about the topic on this page.
@Josh is on holiday right now, otherwise I'm sure he'd make sure the team gives priority to your requests.
Hi Markus! Sheila from Prolific here, so sorry we've missed this post!
Can I check: have you sent through a Support Request on our website that I can look for? If you can DM me the email address the request came from, I'll be happy to escalate this with my team!