Introduction
“Thou shalt not bear false witness.”
“First, do no harm.”
“Don’t be evil.”
From the ancient teachings of the Bible, to foundational medical ethics, to an early slogan of Google, appeals for people to act ethically abound. Given the centrality of honesty to the effective functioning of social institutions, it is perhaps not surprising that a large research literature has emerged exploring when, why, and how often people are dishonest. What is perhaps more surprising is that the majority of this work focuses on morally negative behaviors (e.g., lying) rather than the corresponding morally positive behaviors (e.g., telling the truth). Indeed, the three imperatives above share this characteristic of imploring people not to do a bad act rather than to do a good act. Building on theories in psychology and linguistics, we suggest that merely discouraging lies may be less effective than promoting the truth for cultivating honesty, and at times may even be counterproductive.
Early Findings
In Study 1 (N=476), participants read two vignettes portraying a protagonist who faces a conflict between telling the truth and a private incentive to lie. After each vignette, we presented a series of five replies, ranging from patently true to patently false, that the protagonist was said to be considering. We then asked participants whether each statement followed or broke the rule “tell the truth” (in the Positive Formulation condition) or the rule “do not lie” (in the Negative Formulation condition). As predicted, participants in the Positive Formulation condition rated significantly fewer replies as honest than those in the Negative Formulation condition (log-odds: b = -0.06, t = -4.21, p < .001).
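To make the analysis behind this result concrete, a minimal sketch of the condition contrast is shown below. The variable names, the data file, and the clustering choice are illustrative assumptions on our part, not the exact model reported above (which may include additional random effects).

    # Illustrative sketch (assumed column names): logistic regression of
    # honesty judgments on framing condition, with standard errors
    # clustered by participant.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("study1_ratings.csv")  # one row per participant x reply
    model = smf.logit("judged_honest ~ C(condition, Treatment('Negative'))", data=df)
    fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["participant_id"]})
    print(fit.summary())  # a negative Positive-condition coefficient (log-odds)
                          # corresponds to fewer replies being rated as honest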
Our theory suggests that people interpret “not lying” as merely refraining from saying anything that is patently false. It follows that creating a misleading impression by leaving out critical information may not be interpreted as lying, even though it falls short of fully telling the truth. In this spirit, Study 2 (N=454) tested for a difference between positively and negatively formulated appeals to honesty in cases of lies by omission: statements that are technically true but omit known, pertinent information. As predicted, across four vignettes, participants in the Positive Formulation condition were less likely to classify a statement as honest than participants in the Negative Formulation condition (linear probability model: b = -0.09, t = -3.93, p < .001).
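For completeness, the linear probability model referenced here could take roughly the following form; again, the column names and the inclusion of vignette fixed effects are assumptions for illustration only.

    # Illustrative sketch: linear probability model (OLS on a binary
    # "classified as honest" indicator) with vignette fixed effects and
    # participant-clustered standard errors.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("study2_ratings.csv")  # one row per participant x vignette
    fit = smf.ols("classified_honest ~ C(condition, Treatment('Negative')) + C(vignette)",
                  data=df).fit(cov_type="cluster",
                               cov_kwds={"groups": df["participant_id"]})
    print(fit.params)  # the condition coefficient plays the role of the reported b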
Proposal
In our first proposed study, we will test the effect of framing on honest behavior by adapting a novel, naturalistic paradigm we recently developed for studying honesty, in which participants complete a series of tasks on Google, some of which we have surreptitiously programmed to control the outcomes of seemingly chance events (Choshen-Hillel, Shaw, & Caruso, 2020). Participants will complete various tasks (e.g., using Google to flip a coin) and self-report their performance without knowing that their true performance is being observed. Just before participants report their performance, we will implore them either to report honestly or not to report dishonestly. We will then compare their self-reported performance against their observed performance and test whether rates of dishonesty differ between experimental conditions (positively versus negatively framed appeals). Because this paradigm captures dishonesty at the participant level, it also allows us to measure a number of individual differences, such as trait-level honesty, that might moderate the effect of framing on people’s decision to be honest.
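To illustrate how the planned comparison could be computed once data are collected, a simple sketch is given below; the column names, the definition of over-reporting, and the two-proportion test are placeholders for the pre-registered analysis.

    # Illustrative sketch: flag over-reporting relative to observed performance,
    # then compare the rate of dishonesty across framing conditions.
    import pandas as pd
    from statsmodels.stats.proportion import proportions_ztest

    df = pd.read_csv("proposed_study1.csv")
    df["dishonest"] = df["reported_score"] > df["observed_score"]  # over-reporting

    counts = df.groupby("condition")["dishonest"].agg(["sum", "count"])
    stat, pval = proportions_ztest(count=counts["sum"].values,
                                   nobs=counts["count"].values)
    print(counts)
    print(stat, pval)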
In a second proposed study, we will embed this phenomenon in a social context. Here, we propose to evaluate whether people make different inferences about others’ trustworthiness when observing them violate the entreaty to “be honest” versus the entreaty “don’t be dishonest.” In Phase 1, a sample of participants (“P1Ps”) will self-report their performance on a series of tasks whose outcomes are surreptitiously observed, as before. Phase 2 participants (“P2Ps”) will then be recruited; their job will be to select a P1P partner for an upcoming group task. P2Ps will be shown the true and self-reported performance of one randomly selected P1P. The P2Ps will also be informed that the P1P they observed was told either to “be honest” or “don’t be dishonest,” and will then decide whether to accept that P1P as their partner or instead be randomly assigned one. We are interested in whether P2Ps are more tolerant of dishonesty in a potential partner if that person was told “don’t be dishonest” rather than “be honest.” If we find this, we will conclude that people’s inferences about others are sensitive to this subtle linguistic framing of ethical rules. If, however, we find no difference between experimental conditions, this would also be interesting: it would suggest that while people’s own behavior exploits the greater “moral wiggle room” of negatively framed ethical imperatives, they are unwilling to accept that exploitation from others.
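One plausible specification of the key Phase 2 test is sketched below: a logistic regression of P2Ps’ acceptance decisions on the size of the observed honesty gap, the framing the P1P received, and their interaction. The variable names and functional form are our assumptions, not part of the design itself.

    # Illustrative sketch: does the framing shown to the P1P moderate how the
    # observed honesty gap affects P2Ps' decision to accept them as a partner?
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("proposed_study2_phase2.csv")
    df["honesty_gap"] = df["p1_reported"] - df["p1_observed"]  # degree of over-reporting

    fit = smf.logit("accepted_partner ~ honesty_gap * C(framing)", data=df).fit()
    print(fit.summary())  # the interaction term tests whether tolerance of the
                          # gap differs between the two appeals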
Based on power calculations using standard errors from previous research, we expect that the first study will require a sample of approximately N=550 to detect our minimum effect size of interest (MEI). Across both phases of the second study, we will require a total of N=1300 to detect the MEI or to precisely estimate a null effect. In total, after increasing the sample size by 10% to account for inattention (per our pre-registration plans), we are requesting $7040 to conduct this research.
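For transparency about how figures of this kind are typically derived, a generic two-group power calculation is sketched below. The baseline and alternative dishonesty rates are placeholders, not the MEI estimated from our prior studies.

    # Illustrative power sketch for comparing dishonesty rates across two
    # conditions (placeholder rates; alpha = .05, power = .80).
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    effect = proportion_effectsize(0.30, 0.20)   # Cohen's h for placeholder rates
    n_per_group = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                               power=0.80, ratio=1.0)
    n_total = 2 * n_per_group * 1.10             # +10% for inattention exclusions
    print(round(n_per_group), round(n_total))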
As with all of our research, these studies will be pre-registered (aspredicted.org), and all materials, de-identified data, and code will be made publicly available (researchbox.org).
Summary
In general, we argue that the actions that count as telling the truth form a subset of the actions that merely refrain from lying, and thus telling the truth imposes a higher moral standard than not lying. As a consequence, ethically dubious behavior is judged to be more permissible when evaluated against the standard of not lying than against the standard of telling the truth. This, we believe, offers a promising insight for encouraging more ethical behavior in groups, organizations, and across society.