#TipTuesdays - Our Research Best Practice Guide

Hey Profs! :sunny:

From now on, our #TipTuesday posts are going to pivot slightly into giving you tips on best practice in research.

All the content that will be shared is readily available in our Best Practice Guide. But on the forum, we’ll be going through a different aspect each week. And we’d love to have a discussion about what you think about our advice!

If you like what you see, leave us a :heart:

So, let’s begin…

Prolific’s Best Practice Guide :memo:

Phase 1: Questions to ask before creating a study :thinking:

What research question do I want to answer? And why?

Asking yourself these two questions will not only influence the questions you ask and the answer formats you choose, but also the population(s) you target.

Let’s say you are trying to answer the question: How does leadership style influence burnout? You may be interested in this because burnout is a common phenomenon in many organizations, or because there are currently inconsistent findings on the relationship between these two concepts. In this example, your target group will most likely include employees of a company, and perhaps even employees in a certain type of organization (e.g., IT consultancy firms) in a certain country.

You may want to specify what type of relationship exists between leadership style and burnout (e.g., predictive, mediating, moderating) and you’ll have to decide on how to measure these variables (e.g., using a psychometrically validated questionnaire).

Ideally, your research question will be grounded in current scientific theory: It may address a knowledge gap, or aim to replicate an existing finding in a different population to examine how generalizable it is. If you’re curious about theory development, please check out our section on theory building in this guide, where we briefly explain why theory development is a critical cornerstone of scientific research, and research in general.

That’s all for this week folks! Next week we’ll be asking the questions:

  • Am I doing exploratory or confirmatory research?
  • Why does it matter?

Am I doing exploratory or confirmatory research? Why does it matter?

Although both types of research aim for findings that are reliable and valid, there are important differences between them.

Exploratory research (sometimes called hypothesis-generating research) aims to uncover possible relationships between variables. In this approach, the researcher does not begin with specific prior hypotheses about what they will find.

In confirmatory (also called hypothesis-testing) research, the researcher has a pretty specific idea about the relationship between the variables under investigation. In this approach, the researcher is trying to see if a theory, specified as hypotheses, is supported by data.

Depending on the type of research you are doing, your approach and research design will be different. Imagine you are doing a confirmatory study to test the hypothesis that playing table tennis at work increases employees’ creativity. In this case, you will have precise ideas about which measurements to use (e.g., measure the time spent playing table tennis and count the ideas generated in a subsequent brainstorming session, or the scores in a creativity test). You will (hopefully) also have a good theoretical foundation to justify why there should be a connection between playing games at work and creativity in the first place, and (hopefully) also a good rationale for measuring “playing games” and “creativity” in a certain way.

If your research was exploratory, however, you would not have any hypotheses in advance, but you would still be interested in finding out what may increase employees’ creativity. You might collect extensive qualitative data through interviews with employees. When analysing these interviews, you may notice that the topics of leisure activities and games come up quite frequently. Based on this insight, you could then develop the post-hoc hypothesis that playing games in the office (and, more specifically, table tennis) might have a positive impact on creativity.


Why does it matter?

It’s dangerous to confuse these two types of research. All too often researchers treat exploratory results as confirmatory, and this hindsight bias (also known as the ‘I-knew-it-all-along effect’) can make us feel as though we had a prediction all along, even though we didn’t. For example, you might unexpectedly discover that playing table tennis at work decreases creativity, and then find post-hoc reasons for why this is plausible after all. This is called HARKing (Hypothesizing After the Results are Known) and it’s a poor research practice, for several reasons:

  • HARKing can make exploratory findings more publishable by falsely giving the impression that an unanticipated result was expected. This may lead fellow researchers to believe that your finding has been empirically tested more times than it actually has (and that it is more robust than it actually is), thus creating unwarranted confidence in a result and ultimately reducing reproducibility.
  • Valuable information about your original hypothesis might be lost
  • HARKing may promote (conscious or inadvertent) fudging of statistical analyses
  • It presents a distorted, inaccurate model of science
  • It violates a fundamental ethical principle of science: to communicate one’s work honestly and completely
  • HARKing promotes narrow (i.e. context and paradigm bound) new theory (rather than powerful, general theory)

One way to tackle HARKing is to preregister your research. Preregistration helps to distinguish analyses and outcomes that result from prediction (i.e., confirmatory, hypothesis-testing research) from those that result from postdiction (i.e., exploratory, hypothesis-generating research). At the end of the day, we need both exploratory and confirmatory research to do good science. For more information on how to preregister, check out the preregistration article in this guide.

What is my target population?

When you know what research question you want to answer and why, it’s time to think about your target population.

Are they of a specific age? From a certain country? Only women? Do they hold a certain political attitude? When you’ve decided on the demographic details of your target group, you can select it by using Prolific’s pre-screening filters. There are more than 200 pre-screeners available, but if you can’t find the one you want, just contact our Support team and we will do our best to help you out!

Getting a representative sample

Generalizing your findings to a national population can be difficult, especially if your sample doesn’t match the population very well! We now offer the technology to support representative sampling on Prolific, and can provide you with representative UK/US samples for your study. With this feature, you will be able to easily collect a sample with strata that reflect the general population in the US or the UK.

So what exactly does this mean? Let’s say you want to generalize your findings to the UK population. We will then take your intended sample size and stratify it across three demographics: age, sex, and ethnicity, based on data from the UK Office for National Statistics. For example, your sample will have a similar proportion of 30–40 year old Asian women to the general population.
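
For the curious, here’s a minimal sketch of how proportional stratification works. The strata and proportions below are made-up placeholders for illustration, not actual ONS figures:

```python
# Sketch of proportional stratified allocation. The strata and proportions
# below are illustrative placeholders, NOT actual ONS population figures.
population_proportions = {
    ("18-29", "female"): 0.10,
    ("18-29", "male"): 0.10,
    ("30-44", "female"): 0.14,
    ("30-44", "male"): 0.13,
    ("45-59", "female"): 0.13,
    ("45-59", "male"): 0.12,
    ("60+", "female"): 0.15,
    ("60+", "male"): 0.13,
}

def allocate_sample(total_n: int, proportions: dict) -> dict:
    """Split a total sample size across strata in proportion to the population.

    Rounding can leave the grand total off by a participant or two,
    which you would adjust by hand.
    """
    return {stratum: round(total_n * p) for stratum, p in proportions.items()}

print(allocate_sample(800, population_proportions))
# e.g. the ("30-44", "female") stratum gets 0.14 * 800 = 112 participants
```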


What type of data do I need? Quantitative or Qualitative?

Catch up on the most important differences between these types of data to decide what approach fits best for your research.

When it comes to designing your study, you’ll need to make another important decision: Do you want quantitative or qualitative data, or a combination of both? It’s worth looking into the differences, advantages and disadvantages of each before you make your decision.

Quantitative data are numeric and are typically obtained en masse using tests or questionnaires delivered to a large number of people. Results are usually reported in the form of statistical analyses, which are then used to draw statistical inferences. The greatest advantage of quantitative data is that it allows the capture of a diversity of responses, at scale. Hopefully, this best practice guide will help shed light on some of the most important statistical concepts! If you’re really keen on improving your understanding of how to conduct empirical research and use statistics, we highly recommend the free online course by Daniël Lakens on Coursera called “Improving Your Statistical Inferences”.

Qualitative data are non-numeric. They are usually exploratory in nature and aim to explain a phenomenon in terms of ‘how’ and ‘why’. Methods can be anything from open-ended questions and interviews to diary studies. Qualitative data can be analyzed and interpreted in a variety of ways, for example, by letting independent raters assign numerical values to the collected documents, or by coding words and extracting themes until saturation.

Qualitative data can have some advantages over quantitative data. For example, qualitative research may allow researchers to gain unique insights into participants’ feelings, thoughts, and behaviours, thereby detecting areas and issues that may otherwise be missed in a purely quantitative approach. Qualitative data can also be complementary to quantitative data, especially when there are contradictory or ambiguous results. In conjunction, they can hint at potential (causal) relationships, and thus point to future research directions more accurately than a solely quantitative measure could.

In conclusion, when deciding which type of data to collect, you should refer back to your research question and identify the type of information needed to answer the question.

How large should my sample be?

More about statistical power, sample size, and how to determine it.

The method used to determine your sample size depends on whether you’re doing quantitative or qualitative research. With quantitative research, you want your sample to be big enough to draw valid conclusions about the population in question. Otherwise, the data you collect may not provide much evidential value (unless it’s just a pilot).

In quantitative research, your approach depends on whether your study is observational or experimental. If observational, then you need to decide on an acceptable margin of error for your primary variable of interest, in addition to the level of confidence you want in your finding. For a quick sample size determination, you can use a table like this one.
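
If you’re wondering what’s behind such tables, here’s a quick sketch of the standard formula for estimating a proportion within a given margin of error (assuming simple random sampling and the conservative p = 0.5):

```python
import math

def sample_size_for_proportion(margin_of_error: float,
                               z: float = 1.96,        # 95% confidence
                               expected_p: float = 0.5) -> int:
    """Required n to estimate a population proportion within +/- margin_of_error.

    Standard formula n = z^2 * p * (1 - p) / e^2; p = 0.5 is the most
    conservative (largest-n) assumption.
    """
    n = (z ** 2) * expected_p * (1 - expected_p) / margin_of_error ** 2
    return math.ceil(n)

# 95% confidence, 5% margin of error -> 385, the figure such tables
# typically show for large populations.
print(sample_size_for_proportion(0.05))
```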

If your research is experimental, on the other hand, it is important to understand the concept of statistical power in order to calculate the required sample size. In brief, higher statistical power means that you’re more likely to detect an effect, if that effect actually exists. In more formal terms, power is the probability of correctly rejecting the null hypothesis when a specific alternative hypothesis is true. Consequently, an experiment with more statistical power has a better chance of detecting a true effect.

The question is, how much power do you need to detect the effect you are investigating? And consequently, how many participants do you need to recruit for your study? First, you need to turn to the existing literature or run pilot experiments to decide which effect size you expect. You should then do a power calculation to determine the sample size required to detect that effect, given the p-value threshold you intend to use.
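
As an illustrative sketch, a power calculation like this can be done in a few lines of Python with statsmodels (tools such as G*Power offer the same via a GUI). The effect size, alpha and power values below are conventional placeholders, not recommendations for your specific study:

```python
# Sketch of an a priori power analysis for a two-group comparison using
# statsmodels. All parameter values are conventional placeholders.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,        # expected Cohen's d, from the literature or a pilot
    alpha=0.05,             # your p-value threshold
    power=0.80,             # desired probability of detecting a true effect
    alternative="two-sided",
)
print(round(n_per_group))   # ~64 participants per group
```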

Whilst a sample can certainly be too small (or “underpowered”), can your sample size also be too large (that is, “oversampled”)? The short answer is: no. The simple reasoning behind this is that increasing your number of participants also increases the likelihood of finding the true effect size. After all, the reason you investigate only a certain number of people is because you cannot investigate the whole population. But you still want to be confident about the conclusions you draw from your sample to the larger population. So the closer your sample size gets to the population size, the more confident you can be in the effect size you find.

Read more about the question of oversampling in our blog!


Preregistering your Research

If you are studying social sciences, you might be especially interested in reading about how to preregister your research.

What is a preregistration?

Preregistering a study involves writing a plan of how you intend to carry out your research, and then uploading that plan to a preregistration archive (e.g., OSF) before you begin data collection. Your plan should include as much detail as possible, such as:

  • Your hypotheses
  • Study design
  • Inclusion and exclusion criteria for participants
  • Intended sample size (and how you chose that number)
  • Study materials
  • Dependent variables (and how you measure/calculate them)
  • Procedure
  • Randomization approach
  • How you’ll clean up your dataset so it will be ready for analysis
  • How you’ll define outliers in your dataset
  • Analysis plan
  • Data management plan

This might seem like a lot! But not every aspect of the research project has to be detailed beforehand. Deviations from the initial plan are acceptable as long as you can justify them and transparently communicate them. For example, you may want to use non-parametric (rather than parametric) statistical tests because you might find out after data collection that your data are not normally distributed.
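
To make that example concrete, here’s a minimal sketch (with made-up data) of checking the normality assumption first and falling back to a non-parametric test if it fails:

```python
# Made-up data: check normality, then choose a parametric or
# non-parametric test accordingly.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=5.0, scale=1.0, size=40)  # placeholder scores
group_b = rng.normal(loc=5.5, scale=1.0, size=40)

# Shapiro-Wilk tests the null hypothesis that the data are normally distributed
normal = (stats.shapiro(group_a).pvalue > 0.05 and
          stats.shapiro(group_b).pvalue > 0.05)

if normal:
    result = stats.ttest_ind(group_a, group_b)       # parametric
else:
    result = stats.mannwhitneyu(group_a, group_b)    # non-parametric fallback
print(result)
```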

Once your plan is written, you are ready to begin your preregistration! On the OSF, you can create a project record for your upcoming study, invite collaborators and upload your protocol. When you are happy and everything is in place, you can create a preregistration - a timestamped, frozen version of your project. If you are worried about your ideas being stolen, you can embargo the registration for up to four years, but if not, you can make your project public right away.

Fundamentally, preregistration lends credibility to your scientific findings. When you make a prediction and find results to match it, your conclusions have more weight. Preregistration lets you prove that your hypotheses are ‘predictions’ and not ‘postdictions’. It lets you prove that you haven’t p-hacked: Since your analysis plan, sample size and exclusion criteria were specified in advance, you can’t fudge your analysis to achieve statistically significant results. Accordingly, preregistration helps fight publication bias because your research article is more likely to be accepted based on its methodology, and not because you found statistically significant results. On top of that, writing a thorough plan before you begin collecting data helps you to spot flaws in your methodology, and makes writing up the paper easier once the data is all in. Perhaps most importantly, it demonstrates openness, honesty, and transparency - three key characteristics of any good scientist.

Even if you are doing exploratory research, preregistration is still worth it. It indicates to others that you are interested in producing credible scientific results that you (and others) can have confidence in.

In psychology, some have called these steps towards improved science Psychology’s Renaissance. Exciting times, right? :slight_smile:

And what are Registered Reports?

There is a second approach to preregistration called the registered report (or reviewed preregistration). In registered reports, your research question and methodology are peer-reviewed prior to your data collection. Based on this review, journals agree to accept articles for publication if the authors follow the methodology specified in the preregistration (this is called ‘in principle acceptance’). Registered reports provide you with expert input on your methodology before you begin data collection. It might sting, but your study will be much better off for it! And, let’s be honest: It’s even more upsetting if you put all your effort in conducting a study and writing a paper that then gets rejected because of a small detail in the design that you overlooked, right?

Still have questions? You can find more answers to questions about preregistration here.



Good topic! A site people might find to be a very user-friendly home for pre-registration is aspredicted.org

It’s light-weight, has nice templates and guides, and supports grouping registrations into folders to gather the registrations for a paper or project together, making them public, outputting them as PDF, etc.

It’s also easy to use, and facilitates communication between team members (you get emailed when a registration is saved and can sign off on it or go for another round of edits before sealing it). Very goal-oriented.

A second comment is that thinking “How can I use pre-reg to make my project run more smoothly and get an even better paper?” is a nice mindset - not a chore but a chance to avoid hassles by thinking through choices and ensuring that the data collected support a great outcome.


Thanks for your thoughts @Tim_Bates!

Defo agree re aspredicted.org! It’s very good. We actually recommended it as the preregistration service of choice for our £10k Grant Competition.

Not a chore but a chance to avoid hassles by thinking through choices and ensuring that the data collected support a great outcome.

Agreed! I’d rather have the extra hassle of preregistering than potentially carrying out a dud of a study

Thanks! I’m definitely a fan of aspredicted.org and that’s what I tend to use.

I also wholeheartedly agree with the point about the indirect benefits that come from being forced to really think through what you’re doing before you collect data. Having to write out my main hypotheses and planned analyses early on has really saved me from some costly mistakes at least once or twice.


Phase 2: Things to keep in mind when designing a study

How to phrase questions and items

Some useful advice on how to formulate the questions you want to ask:

The phrasing of questions is very important when it comes to constructing your study. You should think carefully about how to phrase your questions and items in order to collect results that are as unbiased as possible. Here are some simple rules to follow:

  • Make sure that participants understand your question/item. A good way to ensure this is to pilot test your questions.

  • Avoid negative or double negative phrasing (e.g., use “I am often happy” instead of “I am not often sad”)

  • Avoid ambiguous questions. This means that you ask only one question per item. Also, make sure that your question is phrased in a neutral way. For example, the adjectives “fast” or “nice” have a subjective meaning to every individual. Give examples for clarification when necessary.

  • Avoid conditional sentences (e.g., “I feel good if I play the piano”)

  • Avoid overly general expressions (e.g., “All children are noisy”)

  • Make sure that your question/item matches the response format. For example, if you ask an open-ended question, you cannot choose a rating scale as the response format.

Sources:
https://opentextbc.ca/researchmethods/chapter/constructing-survey-questionnaires/
Peterson, R. A. (2000). Constructing effective questionnaires. Thousand Oaks, CA: Sage.


These tips are great, thanks!

For surveys, I think brevity is key. My own experience taking surveys is that it’s harder to maintain attention if you’re met with a wall of text, so I try to have at most 3-4 sentences per paragraph and 1-2 paragraphs per survey page.

Another thing is that the above tips seem mostly aimed at reducing noise in your data, i.e. random variation with a mean of zero. But you also want to make sure to phrase your questions in a way that avoids bias, i.e. variation that tends in a specific direction. I don’t have a very principled approach to this, but some things I consider are:

  • Avoiding leading questions
  • Avoiding phrases that may prime participants one way or the other
  • Randomising the order of questions whenever possible to avoid any order effects
  • Making sure that questions in different survey conditions are as similar as possible and only differ with respect to the particular thing you’re experimentally manipulating.

Curious to hear advice from others!


Special focus: Sensitive questions

Some advice to increase the likelihood that you will get honest answers to sensitive questions from your participants.

Getting honest answers to sensitive questions from your participants is a tricky issue. There are, however, a few steps you can take to ensure that your results are less biased by participants’ social desirability.

First, make sure to mention in the briefing that participants’ responses are fully anonymous. This is especially important for questions that ask about illegal behavior, such as drug abuse. The good news is: people already feel more anonymous when filling out an online questionnaire compared to face-to-face interviews.

Second, pay special attention to how you phrase your question. You can either emphasize that the phenomenon you are asking about is perfectly normal or widespread (e.g., by referring to a newspaper or research article about it), or point out at the beginning of the question that you are asking the participants for their personal opinion.

Lastly, it might be a good idea to pilot test these sensitive questions in order to see if people would actually answer them, or if you need to rephrase them.

Depending on the type of sensitive questions, it can be useful to provide contact details of certain helplines in your debriefing. For example, if your study investigated drug abuse, providing respective helpline details can make sense.

Source: https://dism.ssri.duke.edu/survey-help/tipsheets/tipsheet-sensitive-questions


The Order of Your Questions

Question order can have a big influence on your participants’ response behavior - here is what you should know before you start.

When constructing a study, you’ll need to decide on an order for your questions. You might think this doesn’t matter too much, but question order can make quite a difference to how your participants respond.

You should start with general questions that are easy to answer and not too sensitive or overly personal. There is a debate about whether sensitive questions should go near the start or nearer to the end of your questionnaire. On the one hand, asking sensitive questions early allows participants to drop out of the study quickly if they feel uncomfortable answering them. On the other hand, asking sensitive questions late means you can use the preceding questions to get the participant into the right mindset to answer more sensitive questions.

It is also preferable to keep questions about the same topic or area together. If you have different response formats, it might make sense to group your questions by both response format and topic area. Indicate the topic area by headings and separate the different groups of questions by page breaks.

Again, a pilot test can be very helpful to detect if the order that you have chosen is introducing any kind of confusion or bias to the data you obtain.

If you are trying to avoid biases introduced by the order of the questions, or if you simply want to test your questions, it may be a good idea to randomize the questions, or blocks of questions.

Sources:

https://surveymethods.com/blog/how-survey-question-randomization-can-be-used-for-question-testing/

https://www.qualtrics.com/support/survey-platform/survey-module/block-options/question-randomization/


Tips for the visual design of your survey

We present three basic design tips to make your survey look great.

Did you know that visual design may influence your participants’ responses? Regardless of the tool you use to create the study you launch on Prolific, there are some basic design tips that you should follow in order to keep your participants’ motivation high throughout the study.

First, make sure that the goal of your study matches the design. Bright colours may be suitable for a study about creativity, but probably not if you are asking about alcohol consumption.

Second, ensure that your questions and instructions are visually easy to read. For example, you could use a sans-serif font like Arial, which is easy to read. However, if your participants are supposed to read a longer text, you should choose a serif font. Regarding the choice of colour, make sure you always maximise the contrast between the text and your background to enable easy reading.

Third, we recommend you have a progress bar! This is especially important for longer or monotonous studies. The progress bar can show a percentage or the exact number of remaining questions. Don’t make the pages of your questionnaire too long, but at the same time, try to group questions about the same topic together.


Biases in Research

Research (and personal experience!) demonstrates that people can fall prey to an enormous number of biases. No one, including researchers, is immune to biases and cognitive fallacies!

The international science journal Nature provides a really helpful overview of these biases.

Below, we highlight five common biases in research and how to mitigate their impact on your data.

Generally, bias refers to an instance where you measure a certain variable in your sample, but the measurement systematically deviates from the true value of that variable in the population you want to study. Wikipedia provides a really useful overview of these biases. In this section, we are going to discuss a small selection of biases that are particularly important to be aware of when conducting good research. Note, however, that these biases are not limited to online research (for biases specific to online sampling, see limitations of online samples in this guide).

1. Confirmation bias (or hypothesis myopia)

This is one of the biases that concerns you as a researcher rather than your participants or your study. It means that, if a researcher has stated a hypothesis they believe is true, they may seize on responses that confirm the hypothesis and disregard evidence that would undermine it. This is especially dangerous when it comes to analyzing your results, where contradictory findings are easily overlooked.

To avoid this form of bias, it is important to critically check your hypotheses and data for alternative explanations. This is often best achieved by openly discussing your research with colleagues or collaborators. Also, preregistration, and especially registered reports, are effective tools that can help combat confirmation bias.


2. Question-order bias

This bias refers to the possibility that a prior question may influence how a participant answers subsequent questions, for example due to concepts, ideas and emotions that can be activated through a certain question.

One way to avoid the question-order bias is to start with general questions that are easy to answer and not too sensitive, then move on to the more specific and potentially sensitive questions. Furthermore, asking positive questions before the negative ones can also reduce this bias. Finally, counterbalancing can work wonders! Simply make sure you randomize the presentation of your survey items/questions, as sketched below. This will help you to rule out systematic effects of asking certain questions before others.
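
Most survey platforms offer randomization as a built-in option, but if you’re rolling your own study, a per-participant shuffle is all it takes. A minimal sketch with placeholder questions:

```python
import random

# Placeholder items; in practice these come from your survey tool.
questions = [
    "How satisfied are you with your job?",
    "How would you rate your workload?",
    "How supported do you feel by your manager?",
]

def randomized_order(items: list, participant_id: str) -> list:
    """Shuffle a copy of the items; seeding with the participant ID makes
    each participant's order reproducible."""
    rng = random.Random(participant_id)
    shuffled = items.copy()
    rng.shuffle(shuffled)
    return shuffled

print(randomized_order(questions, "participant_001"))
```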

Source:
http://methods.sagepub.com/reference/encyclopedia-of-survey-research-methods/n428.xml

3. Method Bias

Using only one method to measure several constructs can result in so-called method bias. This may be due to the phrasing of questions and instructions, or the answer format chosen. This issue is most concerning when you are relying on self-reported data only.

Let’s say that you want to understand whether safety climate (i.e., the employees’ perception of the working environment and practices regarding safety) can predict risk behavior of employees. You measure both the predictor and the outcome variable at the same time using two questionnaires.

The results of this can be twofold. First, construct reliability and validity cannot be estimated correctly, as the variance resulting from the method cannot be separated from the systematic variance caused by the trait of the construct. Second, the relationship between two constructs can be misinterpreted, as the method bias can suggest a stronger or weaker relationship between the constructs than there actually is.

Applied to our example, this means: you cannot really be sure that you are actually measuring safety climate and risk behavior, because based on your design, you cannot tell which part of the variance in risk behavior is actually explained by safety climate, and which part is due to the fact that you measured everything at one time point using the same participants. Furthermore, if you found a very strong relationship between safety climate and risk behavior, you can still not be entirely sure that this relationship is actually that strong, because part of it may still be due to the fact that you used questionnaires only.

Method bias can, however, be minimized in four simple steps:

  1. Separate the measurement of the independent and dependent variable. This separation can either be through time (delay), space (physically separate the items, e.g. on different pages and placing the respective items far apart within the study to weaken the association), or psychologically (i.e., having a cover story to not suggest a relevant relationship between the variables you measure).
  2. Change the response format of the questionnaires, for example by avoiding the same response scale for all the constructs that you measure. For example, when asking about employees’ risk behavior, you could use a frequency scale ranging from 1 (never) to 3 (always). To assess safety climate, you could use a 5-point Likert scale measuring the extent of employees’ agreement with certain statements about safety climate.
  3. Try to avoid ambiguity in your items by providing examples whenever necessary. When your participants know what exactly they are being asked, they will be more likely to answer the question correctly instead of just selecting the neutral answer option.
  4. Use other sources of data for the construct you are trying to measure. For example, if you want to investigate the relationship between safety climate and risk behavior, using official accident statistics or report rates of incidents can be useful sources for these variables.

For more information:

Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2012). Sources of method bias in social science research and recommendations on how to control it. Annual Review of Psychology, 63, 539-569.

4. Social Desirability Bias

This bias refers to the phenomenon whereby participants give the answers they feel they should give, because those answers are more socially accepted than their true behavior or attitudes.

For example, numerical answers to the question “How often do you exercise per week?” may be much higher than they truly are, because people go for a run far less often than they are ready to admit. The social desirability bias is especially a problem when it comes to sensitive questions or questions where there is an obvious ‘right’ choice favored by society.

Please note that there may be cultural differences in what constitutes a ‘right’ choice in a certain society, for example see research on individualism vs. collectivism or research on tight vs. loose norms across cultures. Also note that there is some evidence that social desirability bias might actually be weaker in anonymous online studies - so good news for online research!

Source:
http://methods.sagepub.com/reference/encyclopedia-of-survey-research-methods/n537.xml

5. Selection bias

When a participant volunteers to take part in a study, they are deliberately choosing which study they want to take. Consequently, it is possible that the people who participate in your study differ systematically from the wider population, for example because they are particularly interested in the topic of your survey.

For an overview of the methods that can be used to reduce the selection bias, please see this article.

Pro tip: Want to detect which participants aren’t paying attention? Our data analyst Jim Lumsden has provided some practical advice on how to improve your data quality.

Prolific IDs, data collection and security

Walking you through the nitty gritty of collecting data and keeping it secure.

Obviously, what data you collect and how you do so will vary hugely from study to study. But there are a few important things to note about the data collection process.

Firstly, you must record the Prolific IDs of your participants. Currently, Prolific cannot link to your data-collection software, so you need to record Prolific IDs on your end in order to know who has contributed to your study. Whether you do this via a QueryString (recommended) or by using a direct question in your survey is up to you. If you don’t record Prolific IDs, you won’t be able to reject participants for submitting poor quality data, as you won’t know what data belongs to whom. Relatedly: ensure that your completion URL is working correctly, and that it submits a matching completion code back to Prolific. If you don’t do this, it becomes even harder to know who has completed your study and who hasn’t!
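
As an illustration, here’s a rough sketch of extracting a Prolific ID from the query string of an incoming study URL. The URL below is made up, and the exact parameter names (e.g., PROLIFIC_PID) depend on how you’ve set up your study URL:

```python
# Extract the Prolific ID from the query string of the URL a participant
# arrives with. The URL is made up; parameter names depend on your setup.
from urllib.parse import urlparse, parse_qs

incoming_url = "https://example-survey.com/start?PROLIFIC_PID=5a9d64f3&STUDY_ID=abc123"

params = parse_qs(urlparse(incoming_url).query)
prolific_id = params.get("PROLIFIC_PID", [None])[0]
print(prolific_id)  # -> "5a9d64f3"
```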

Secondly, you must store these Prolific IDs carefully. If you are considering releasing your data on an open data-sharing platform such as the OSF or a university data repository, then you should remove the Prolific IDs.

Thirdly, it’s important that you only collect data sufficient to perform your stated purpose. Don’t log IP addresses or deliver cookies to your respondents just because you can. Consider carefully what data you need, be transparent that you’re collecting it, and don’t collect more than that.

Fourthly, keep your data secure. Your level of security should be appropriate to the risk, “taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons”. In other words, if your study is a short questionnaire on personality, it’s pretty low risk and your security system doesn’t need to be Fort Knox. But, if you’re collecting data on mental health or drug usage (for example), then you need to put considerable security in place: think encryption, pseudonymization, two-factor authentication, strong passwords, user access control and access logging.
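
As one hedged example of pseudonymization: before sharing a dataset, you could replace each Prolific ID with a keyed hash, so rows stay linkable to each other but not to the original IDs. The key and helper below are illustrative, not a Prolific feature:

```python
import hashlib
import hmac

# Placeholder key: keep the real one out of your repo and your shared data.
SECRET_KEY = b"store-this-key-somewhere-secure"

def pseudonymize(prolific_id: str) -> str:
    """Deterministic keyed hash: the same ID always maps to the same
    pseudonym, but the mapping can't be reversed without the key."""
    return hmac.new(SECRET_KEY, prolific_id.encode(), hashlib.sha256).hexdigest()[:16]

print(pseudonymize("5a9d64f3"))
```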

Regardless of whether your study is low or high risk, you need to document your security processes and have procedures in place for monitoring the success of those processes. If you have further questions, then your institution should have a Data Protection Officer who can provide more detailed advice.

Writing the Study Description and Debriefing

Writing a good study description can improve participant motivation and instructional clarity, and help you meet certain ethical requirements.

It’s easy to underestimate the importance of briefing and debriefing your participants. Most participants want to know what kind of study they are taking part in and why. Not only is providing this information standard good research practice, but seeing a relevant purpose in the study will motivate your participants to put effort into responding.

During the second stage of study creation (Description), you have a chance to provide information to participants so that they can make an informed decision to take part. Study descriptions can be complex to put together, so we’ve made this checklist to help you out. We think your study description should include:

  • The aim of the study
  • What the participant will be required to do
  • Any sensitive information participants will have to provide
  • Anything you think the participant might be uncomfortable doing
  • Anything unusual the participant might have to do, such as downloading software or requiring headphones
  • Anything the participant must do to avoid their submission being rejected.
  • An estimate of how long it will take to receive a reward after submission
  • If you plan to use bonus payments, or if it’s a longitudinal study with a payment schedule, then state this clearly.
  • Information on how a participant can opt out of the study (and what will happen if they do)
  • Information on whether a participant can remove their data from the dataset
  • Information on whether anonymized data will be made accessible to other researchers
  • Information on how the data will be used (publish a research study, guide government policy, etc).
  • Your contact details in case of questions.
  • If you have ethics approval, the contact details of the ethics board in question.

Debriefing:
If you are using deception or a cover story, make sure to resolve this in the debriefing. The debriefing should consist of a short thank-you message as well as information about any deception that was used in the study. Note that your debriefing should be ‘inside’ your survey, on your externally hosted website, as Prolific does not currently provide support for debriefing on our website.

Depending on the content of your study, it can be useful to provide contact details of certain helplines. For example, if your study investigated drug abuse, providing respective helpline details can make sense. For more information about ethical standards, please refer to: Ethical principles of psychologists and code of conduct

Before you collect data on your participants, it’s important to get their consent. But this process can be a little fiddly!

If you are a Researcher, then it is your responsibility to ensure that you have performed your legal obligations as a data controller in relation to any personal data you may receive, and, in particular, to ensure that you have provided all information required by law prior to the collection of any such personal data. You can read more about how Prolific protects privacy and complies with GDPR here.

You have certain responsibilities, one of which is gaining consent from your participants. Your consent form should explicitly and transparently state:

  • What data will be collected (if you are collecting sensitive data, i.e. racial or ethnic origin, religious or political beliefs, or health status, then this should be made explicit)
  • How the data will be used
  • How the data will be stored and for how long
  • How you will maintain the anonymity of responses
  • Whether anonymized data will be made available to other researchers online at some point
  • How the participant can withdraw their consent and their data
  • The legal framework under which their data will be held.

Finally, you must ask the participant whether they understand the above information, and whether they consent to taking part in your study. The participant must then click a button or check a box, and you should record that they’ve given their consent. This should be timestamped and stored alongside their Prolific ID.
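
For illustration, a stored consent record might look something like the sketch below; the field names are placeholders, not a required format:

```python
import json
from datetime import datetime, timezone

# Hypothetical consent record; field names are placeholders.
consent_record = {
    "prolific_id": "5a9d64f3",
    "consented": True,
    "consent_form_version": "v1.2",
    "timestamp": datetime.now(timezone.utc).isoformat(),
}
print(json.dumps(consent_record, indent=2))
```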


Wondering how much to pay your participants? Here are some useful considerations.

One of the most important elements of study setup is deciding on the study’s reward. A study of Mechanical Turk participants concluded that fair pay and realistic completion times had a large impact on the quality of data they were willing to provide. On Prolific, it’s vital that trust goes both ways, and properly rewarding participants for their time is a large part of that. We enforce a minimum hourly reward of 5.00 GBP/6.50 USD. But depending on the effort required by your study, this may not be sufficient to foster high levels of engagement and provide good data quality. Consider:

  1. The participant reimbursement guidelines of your institution. Some universities have set a minimum and maximum hourly rate (to avoid undue coercion). You might also consider the national minimum wage as a guideline.
  2. The amount of effort required to take part in your study: is it a simple online study, or do participants need to make a video recording or complete a particularly arduous task? If your study is effortful, consider paying more.
  3. How niche your population is: if you are searching for particularly unusual participants (or participants in well-paid jobs), then you will find it easier to recruit these participants if you are paying well for their time.

That said, paying more isn’t always a good idea! Consider that:

  1. Studies with particularly high rewards may bias your sample, as participants may feel ‘forced’ to choose that study when they might have gone to others. This may particularly apply to participants with a low socio-economic status.
  2. Participants sometimes share study information on external websites. If word gets out about a particularly well-paid study with niche inclusion criteria, you may attract liars.
  3. Bonus payments contingent on performance may make participants nervous about being paid, and lead to cheating.

Deciding on a reward is the last step before you can launch your study on Prolific.


The issue is when we SHARK (i.e., Secretly HARKing). In contrast, it’s perfectly fine to THARK (i.e., Transparently HARKing).
