Grant Competition

Based on previous research on the fairness and optimal allocation of funding (e.g., Fortin & Currie, 2013; Bromham, Dinnage, & Hua, 2016; Pier et al., 2018; Avin, 2019a; Witteman et al., 2019), the main issues with allocation arise at two levels:

  1. The researcher level – i.e., issues related to the researcher's age, gender, ethnicity, tenure, position, institutional affiliation, and so on.

  2. The research level – i.e., a lack of novel ideas, a lack of diversity of ideas and a tendency towards their homogenisation, and the selection of ideas that follow current scientific trends at the expense of those that do not.

My proposal targets the allocation issues at both levels and consists of three stages:

  1. Reviewer Selection
  2. Proposal Submission
  3. Proposal Selection

1. Reviewer Selection

Prolific will set a date on which the competition starts. A given period before this date (e.g., one month), Prolific will notify its community about the competition, offering members the option to participate as contestants and/or reviewers.

Individuals can choose to act as reviewers in exchange for Prolific credit, which will be awarded based on the number of proposals they are asked to score.

The chosen period should allow enough people to sign up as reviewers and give Prolific time to pre-screen them. For participants on Prolific, pre-screening can be based, for instance, on the number of previous submissions and the approval rate; for researchers, on a minimum number of completed studies.

This allows for a robust pre-screening that resists manipulation or gaming, while ensuring that the individuals selected are well suited to score the proposals, thereby increasing the likelihood that high-quality submissions are selected.
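
To make this concrete, here is a minimal Python sketch of such an eligibility check. The `eligible_reviewer` helper, the field names, and all thresholds are illustrative assumptions, not figures from this proposal:

```python
from dataclasses import dataclass

@dataclass
class User:
    role: str                   # "participant" or "researcher"
    num_submissions: int = 0    # prior submissions on Prolific (participants)
    approval_rate: float = 0.0  # share of submissions approved (participants)
    completed_studies: int = 0  # studies run on Prolific (researchers)

def eligible_reviewer(user: User) -> bool:
    """Pre-screen reviewer sign-ups; all thresholds are placeholders."""
    if user.role == "participant":
        return user.num_submissions >= 50 and user.approval_rate >= 0.95
    if user.role == "researcher":
        return user.completed_studies >= 5
    return False
```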

2. Proposal Submission

All proposals will be submitted on the same day and will be anonymised (i.e., reviewed blind), in order to avoid the aforementioned issues at the researcher level and to focus the evaluation solely on the idea the researcher proposes.

3. Proposal Selection

Once submitted, proposals will be randomly allocated to the reviewers, with each proposal scored by more than one reviewer. The number of proposals each reviewer has to score will depend on both the number of proposals and the number of reviewers, which Prolific can easily determine after the submission date.
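
As an illustration, the allocation could be sketched as follows in Python. The `allocate` function, the default of three reviews per proposal, and the load-balancing rule are assumptions made for the example; the proposal itself only requires that each submission is scored by more than one reviewer:

```python
import random
from collections import defaultdict

def allocate(proposals, reviewers, reviews_per_proposal=3):
    """Randomly assign each proposal to `reviews_per_proposal` distinct
    reviewers while keeping reviewer workloads roughly equal."""
    assert len(reviewers) >= reviews_per_proposal
    load = {r: 0 for r in reviewers}   # reviews assigned to each reviewer so far
    assignments = defaultdict(list)    # reviewer -> proposals to score
    for p in random.sample(proposals, len(proposals)):  # visit in random order
        # Prefer the least-loaded reviewers, breaking ties at random.
        ranked = sorted(reviewers, key=lambda r: (load[r], random.random()))
        for r in ranked[:reviews_per_proposal]:
            assignments[r].append(p)
            load[r] += 1
    return assignments
```

Picking the least-loaded reviewers each time keeps workloads within one review of each other while preserving the random pairing of proposals and reviewers.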

Over a designated period, reviewers will blind-mark their randomly allocated proposals and submit their scores to Prolific.

All proposals will follow the same structure, as consistency lowers the likelihood of the structure, rather than the content, affecting the evaluation. Proposals will be relatively short (up to a designated word count) to avoid over-burdening the reviewers.

Proposals will include, and be scored on, three quality criteria, which address the aforementioned research-level issues:

  1. Theory: Does the proposal contain a novel idea with the potential to advance science, while also being supported, at least to some extent, by the current scientific literature?

  2. Methodology: Does the proposal explain how the idea could be tested and provide a rationale in support of this?

  3. Contributions: Does the proposal argue how the idea could advance science and provide a rationale in support of this?

Proposals will be scored from 0 to 3: 0 – no criterion is met; 1 – only one criterion is met; 2 – two criteria are met; 3 – all criteria are met.
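
In other words, a single reviewer's score is simply the count of criteria met, as in this one-line sketch (the function name and boolean inputs are illustrative):

```python
def score_proposal(theory: bool, methodology: bool, contributions: bool) -> int:
    """A reviewer's score is the number of quality criteria met (0-3)."""
    return int(theory) + int(methodology) + int(contributions)
```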

Once all reviews have been gathered, the average score will be calculated for each proposal and the top 5 proposals will be shortlisted. The 2 winners will then be drawn at random from this shortlist. Random allocation is a method that has been used successfully and is increasingly incorporated into funding schemes, because it reduces or fully avoids the previously mentioned issues at both the research and the researcher level (Fang & Casadevall, 2016; Avin, 2019b).
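
Putting the two steps together, the selection stage might be sketched as below. The shortlist of 5 and the 2 winners follow the proposal; the data layout, the `select_winners` name, and the optional `seed` (handy for an auditable, reproducible draw) are assumptions:

```python
import random
from statistics import mean

def select_winners(scores, shortlist_size=5, n_winners=2, seed=None):
    """Shortlist the proposals with the highest mean reviewer score,
    then draw the winners from the shortlist by lottery.

    `scores` maps a proposal id to its list of reviewer scores (each 0-3).
    """
    rng = random.Random(seed)
    averages = {p: mean(s) for p, s in scores.items()}
    shortlist = sorted(averages, key=averages.get, reverse=True)[:shortlist_size]
    return rng.sample(shortlist, n_winners)

# Example: select_winners({"P1": [3, 2, 3], "P2": [1, 2, 2], "P3": [2, 3, 2],
#                          "P4": [0, 1, 1], "P5": [3, 3, 2], "P6": [2, 2, 1]})
```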

As an alternative to the random draw, the last step can be done in the same manner as the community review: the top 5 proposals will be randomly allocated to an internal Prolific panel, which will then choose the top 2.

Overall, the process consists of pre-screening proposals through community review, followed by a final allocation decided through randomisation.

All research must be pre-registered and shared, consistent with the principles of open science.

References

Avin, S. (2019a). Centralized funding and epistemic exploration. The British Journal for the Philosophy of Science, 70(3), 629-656.
Avin, S. (2019b). Mavericks and lotteries. Studies in History and Philosophy of Science Part A, 76, 13-23.
Bromham, L., Dinnage, R., & Hua, X. (2016). Interdisciplinary research has consistently lower funding success. Nature, 534(7609), 684-687.
Fang, F. C., & Casadevall, A. (2016). Research funding: The case for a modified lottery. mBio, 7(2), e00422-16.
Fortin, J. M., & Currie, D. J. (2013). Big science vs. little science: How scientific impact scales with funding. PLoS ONE, 8(6), e65263.
Pier, E. L., Brauer, M., Filut, A., Kaatz, A., Raclaw, J., Nathan, M. J., … & Carnes, M. (2018). Low agreement among reviewers evaluating the same NIH grant applications. Proceedings of the National Academy of Sciences, 115(12), 2952-2957.
Witteman, H. O., Hendricks, M., Straus, S., & Tannenbaum, C. (2019). Are gender gaps due to evaluations of the applicant or the science? A natural experiment at a national funding agency. The Lancet, 393(10171), 531-540.
