“How does competition affect moral behavior?” This question is crucial for deciding how we design markets; where we might prefer to have no, or only limited, market mechanisms and prices (e.g., care for the disabled, health care); and where people can or cannot be trusted to act for the common good.
It is one of those research questions where the answer is not at all clear (see, e.g., Sutter et al. 2020 for a discussion) and where the literature offers conflicting evidence, going back centuries. Adam Smith argued that markets (as one form of competition) would, in principle, have a civilizing effect on the behavior of market participants (Smith 1776), whereas Karl Marx and Thorstein Veblen expected markets to be destructive and to bring out the worst in human beings (Marx 1867; Veblen 1899).
With this project, labelled #ManyDesigns (see https://manydesigns.online/), we hope to learn how competition affects moral behavior, and also about the scientific process itself. We invite research teams (RTs) to contribute experimental research designs on the topic of “Competition and moral behavior”. The goal is to explore variation in research designs and outcomes on this topic. Contributing RTs program and host their designs, while we, the organizing team, run (and pay for) them on Prolific.
We solicit as many different design proposals as possible from the community and will then carry out 50 of these designs, selected at random. So far, 102 teams have registered to submit a proposal, and by June 25th, 2021 we will know how many actually submit one. With this novel crowd-sourced research approach we break new ground in experimental economics, and we are very happy that so many teams have signed up.
In addition to the research question “How does competition affect moral behavior?”, we will also study the variation in design choices and results, and we will write a meta-science paper on the topic in which all contributors whose design proposals were run will be included as co-authors. The progress and time plan of the project can be found on the #ManyDesigns website (https://manydesigns.online/).
#ManyDesigns proceeds in six stages: registration, submission of design proposals via a pre-registration, implementation of the experiments (via Prolific), peer assessment, data analysis, and writing of the paper. For details, see the #ManyDesigns website (https://manydesigns.online/).
Our target sample size is 50 studies x 400 participants = 20,000 participants. That is why we chose Prolific as a large and established partner; almost no other company could provide this number of participants. We will pay a fixed fee of 1.30 GBP and an average incentive/bonus of 1.70 GBP per participant, for a total cost of 3 GBP per participant. The total payments will thus be 60,000 GBP. Receiving one sixth of this, i.e., 10,000 GBP, as support from Prolific would help us enormously, as our budget is (over)stretched and we are scraping together every penny we can find to run as many studies as possible.
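The budget arithmetic above can be sketched in a few lines (a minimal illustration using only the figures stated above, with Decimal for exact currency arithmetic):

```python
from decimal import Decimal

n_studies = 50
n_per_study = 400
fixed = Decimal("1.30")   # fixed payment per participant (GBP)
bonus = Decimal("1.70")   # average incentive/bonus per participant (GBP)

participants = n_studies * n_per_study   # 20,000 participants in total
total = participants * (fixed + bonus)   # 3 GBP each -> 60,000 GBP
prolific_ask = total / 6                 # one sixth -> 10,000 GBP

print(participants, total, prolific_ask)
```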
We use power calculations to select our sample size. Accounting for sample design and clustering, we estimate the minimum detectable effect size for the main outcomes. The rationale for a sample size of at least 400 per study design is guided by both statistical and economic considerations. First, as we plan to have a sample of 50 studies, we need to make sure that the overall study is affordable in terms of resources (i.e., subject pool availability on Prolific; monetary incentives). With 50 study designs à 400 participants, we require a total sample of 20,000 participants, which we deem feasible along both dimensions. Second, a sample of N = 400 is sufficiently large to obtain reasonable statistical power to detect small to medium effect sizes in terms of Cohen’s d units for each study design in the sample. In particular, assuming an independent samples t-test, N1 = N2 = 200 gives us 90% power to detect an effect size of d = 0.411 at the two-tailed significance criterion of 0.5%; for the (two-tailed) 5% “suggestive evidence” threshold, there is 90% power to detect an effect size of d = 0.324 (see Benjamin et al., 2018 for the selection of the 0.5% and 5% significance levels). Note, however, that the main objectives of the project are to estimate the meta-analytic effect size after pooling the data from the different study designs and to estimate the heterogeneity in results across study designs.
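The stated minimum detectable effect sizes can be reproduced with the standard normal-approximation power formula, d = (z_{1-alpha/2} + z_{power}) * sqrt(2/n). A minimal sketch using only Python's standard library (the normal approximation yields roughly 0.409 where the exact t-test calculation gives 0.411):

```python
from statistics import NormalDist

def mde_two_sample(n_per_group: int, alpha: float, power: float) -> float:
    """Minimum detectable effect (Cohen's d) for a two-sided
    independent-samples test, via the normal approximation."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * (2 / n_per_group) ** 0.5

# N1 = N2 = 200 per arm, 90% power, as in the text:
print(round(mde_two_sample(200, 0.005, 0.90), 3))  # 0.409 (exact t-test: 0.411)
print(round(mde_two_sample(200, 0.05, 0.90), 3))   # 0.324
```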
The study has been pre-registered at the Open Science Framework (OSF) under https://osf.io/7cxy3. The document is currently kept confidential, but will be made fully available shortly.
As in our earlier studies on open, reproducible, and transparent science (e.g., #NARPS https://www.narps.info/, #fincap https://fincap.academy/, and several studies on replicability, e.g., Altmejd et al. 2019, Camerer et al. 2018), we are committed to publishing all data and code used.
Altmejd, A., Dreber, A., Forsell, E., Huber, J., Imai, T., Johannesson, M., Kirchler, M., Nave, G., Camerer, C. (2019). Predicting the replicability of social science lab experiments. PLoS ONE 14(12)
Camerer, C., Dreber, A., Holzmeister, F., Ho, T., Huber, J., Johannesson, M., Kirchler, M., Nave, G., Nosek, B., Pfeiffer, T., Altmejd, A., Buttrick, N., Chan, T., Chen, Y., Forsell, E., Gampa, A., Heikensten, E., Hummer, L., Imai, T., Isaksson, S., Manfredi, D., Rose, J., Wagenmakers, E., Wu, H. (2018). Evaluating the replicability of social science experiments. Nature Human Behaviour 2: 637-644
Marx, K. (1867). Capital: Vol. I. A critique of political economy. London: Penguin Books.
Smith, A. (1776, republished 1963). Lectures on Jurisprudence. In R. L. Meek, D. D. Raphael, & P. G. Stein (Eds.), Glasgow edition of the works and correspondence of Adam Smith (Vol. 5). Cambridge: Cambridge University Press.
Sutter, M., Huber, J., Kirchler, M., Stefan, M., Walzl, M. (2020). Where to look for the morals in markets? Experimental Economics 23(1): 30-52
Veblen, T. (1899). The theory of the leisure class. London: Penguin Books.