In a Hurry: How the Presentation of Web Search Results Shapes User Behavior Under Varying Time Constraints


We all face time constraints in our day-to-day lives. These constraints likely influence how we interact with the world around us. Stress arising from time constraints can affect our judgment and decision-making abilities. In today’s world, we often find ourselves sitting in front of a computer, looking for information by interacting with search engines. With an increasing percentage of the global population having access to the internet and thus a search engine, more people will experience being time-constrained during the search process.

The consequences of these time constraints have been examined at various stages of the search process. For example, in a series of decision-making tasks, time pressure was shown to shape the length and specificity of the recommendations made (Crescenzi et al., 2021). A more far-reaching conclusion was drawn in a clinical setting, where the use of a search system under increasing time pressure decreased the ability to correctly identify relevant material (accuracy) from 32% to 6% (Van der Vegt et al., 2020). These works show the evident negative effects that may arise from time constraints in the search process. Despite the importance of this issue, how the design of the search interface, or Search Engine Results Page (SERP), may assist or hinder time-constrained web searches constitutes a substantial and important knowledge gap that this work aims to address. As Crescenzi et al. (2016) hypothesize, given the consequences of time pressure for the search process and its outcome, directing research efforts into how SERP design affects searchers under time pressure is justified.

Therefore, we want to investigate how different time constraints influence task performance (RQ1) and search behavior (RQ2). Building on this, we aim to determine to what extent SERP interfaces are susceptible to the effects of time constraints (RQ3). We also want to know how different SERP interfaces impact user experience (RQ4). Moreover, we examine to what extent fondness for technology moderates the relationship between time constraints and task performance (RQ5). Given the vast number of people using search engines, we believe this research has the potential to benefit a wide global community.

Experimental Setup and Sample Size

We aim to address these questions through a crowdsourced 4 (UI designs) × 4 (time constraints) between-subjects factorial user study. As for the scenario, participants are asked to imagine they are journalists working for a newspaper who, at the last minute, replace a colleague reporting on an international discussion forum on DNA cloning. To familiarize themselves with the topic, participants are tasked with searching the web for a list of arguments supporting or opposing the topic using the custom-made search platform BBTFind. While developing the search task, we adhered to the desired characteristics of exploratory search tasks formulated by Kules and Capra (2008) (e.g., “suggest a knowledge acquisition, comparison, or discovery task”, “indicate uncertainty, ambiguity in information need and/or need for discovery”). Participants will be recruited using Prolific and rewarded at a rate of £7.50/h for successfully completing the task. The required sample size was calculated using a power analysis for an ANCOVA in G*Power (Faul et al., 2007) with effect size f = 0.25 (indicating a moderate effect), significance threshold α = 0.05 / 21 = 0.00238 (a Bonferroni correction for testing 21 hypotheses), and statistical power (1−β) = 0.8. The sample size required for each hypothesis was determined using the respective number of groups, degrees of freedom, and covariates, resulting in a required total of 431 participants. Hence, 480 participants will be recruited in total (30 per experimental condition, see below).
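The power analysis above can be approximated programmatically. The sketch below reproduces a G*Power-style ANOVA power computation via the noncentral F distribution; it uses the full 16-group design as an illustrative simplification, whereas the per-hypothesis calculations (with their own degrees of freedom and covariates) yield the 431 reported above, so the resulting number here is not expected to match exactly.

```python
# Illustrative sketch of a fixed-effects ANOVA power analysis, assuming
# the full 16-group design; the per-hypothesis ANCOVA details from the
# text are omitted, so this will not reproduce the reported 431 exactly.
from scipy.stats import f as f_dist, ncf

f_effect = 0.25            # Cohen's f, a moderate effect
alpha = 0.05 / 21          # Bonferroni correction for 21 hypotheses
k_groups = 16              # 4 UI designs x 4 time constraints
target_power = 0.8

def anova_power(n_total):
    """Power of the omnibus F test for a given total sample size."""
    df1 = k_groups - 1
    df2 = n_total - k_groups
    crit = f_dist.ppf(1 - alpha, df1, df2)      # critical F value
    nc = f_effect ** 2 * n_total                # noncentrality parameter
    return 1 - ncf.cdf(crit, df1, df2, nc)

# Smallest total sample size reaching the target power.
n = k_groups + 1
while anova_power(n) < target_power:
    n += 1
print(n)
```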

To examine to what extent user interfaces are susceptible to the effects of time constraints, four SERP interfaces are used:

  • List interface (see Figure 1 below). Traditional interface used ubiquitously by search engines.
  • Grid interface (see Figure 2 below). Interface design in which the search results are presented in a grid (cf. Kammerer and Gerjets (2010)).
  • Snippet absence interface (see Figure 3 below). Like the list interface, but without the snippets.
  • Interrupted linear scanning pattern interface (see Figure 4 below). Interface design in which the snippet is placed to the right of other data (cf. Cutrell and Guan (2007)).

To select appropriate time constraints, we drew on insights from earlier work featuring search tasks of comparable nature conducted without a time constraint. Consequently, we use time constraints of 2, 5, and 8 minutes, plus a condition with no time constraint. Hence, there will be 4 (SERP interfaces) × 4 (time constraints) = 16 experimental conditions.
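The fully crossed design above can be sketched by enumerating the conditions; the interface and time-constraint labels follow the text, while the variable names are ours.

```python
# Minimal sketch of the 4 x 4 between-subjects design: enumerating the
# experimental conditions to which participants are randomly assigned.
from itertools import product

interfaces = ["list", "grid", "snippet-absent", "interrupted-scanning"]
time_limits_min = [2, 5, 8, None]   # None = no time constraint

conditions = list(product(interfaces, time_limits_min))
participants_per_condition = 30

print(len(conditions))                              # 16 conditions
print(len(conditions) * participants_per_condition) # 480 participants
```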

To explore the effects of time constraints and user interfaces on the web search process, we use the following dependent variables:

  • Task performance metrics. Used to assess the quality of the arguments submitted by the participants. In this assessment, techniques developed by Wilson and Wilson (2013) to measure the depth of learning are applied. We will use or adapt D-Qual, D-Intrp, T-Depth, and F-Fact for our purposes.
  • Search behavior. Logged search behavior includes query rate, query length, search results, and SERP dwell time.
  • User Experience. To measure user experience, the User Engagement Scale - Short Form (UES-SF; O’Brien et al., 2018) is used.
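To illustrate how measures such as query rate and query length could be derived from the interaction logs, consider the sketch below. The event format is a hypothetical assumption for illustration, not the actual BBTFind log schema.

```python
# Hedged sketch: deriving search-behavior measures from raw interaction
# logs. The (timestamp, event_type, payload) tuples are an assumed
# format, not the real BBTFind logging schema.
from datetime import datetime

events = [
    ("2024-01-01T10:00:00", "query", "dna cloning arguments"),
    ("2024-01-01T10:00:40", "result_click", "doc-17"),
    ("2024-01-01T10:02:00", "query", "ethics of human cloning"),
]

def parse(ts):
    return datetime.fromisoformat(ts)

queries = [e for e in events if e[1] == "query"]
session_minutes = (parse(events[-1][0]) - parse(events[0][0])).total_seconds() / 60

query_rate = len(queries) / session_minutes                       # queries per minute
mean_query_length = sum(len(q[2].split()) for q in queries) / len(queries)

print(query_rate, mean_query_length)
```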

To capture fondness for technology as a moderator variable, we ask participants to fill out the Affinity for Technology Interaction scale (ATI; Franke et al., 2019), which measures the extent to which a person likes to actively engage with new technological systems. For descriptive and exploratory analyses, we also collect basic demographic variables, prior knowledge, topical interest, task definition, perception of time pressure, and browser window dimensions.

Study Costs

Given the current experimental setup and pay rate, running the experiment with one topic would cost approximately £1,030. With the currently available funding, we would only be able to run the experiment with this single topic, which would considerably limit this promising work. Ideally, we would like to run the experiment with five supplementary topics, at an additional cost of £5,150 (5 × £1,030). These supplementary topics would contribute considerably to the generalizability of the research.

Preregistration and Open Science

We have already published a comprehensive preregistration of this study on the Open Science Framework (OSF). Furthermore, we intend to make all related study material, from raw logs to code, publicly available through a project repository on OSF.

Figure 1. Screenshot of the list interface.

Figure 2. Screenshot of the grid interface.

Figure 3. Screenshot of the snippet absence interface.

Figure 4. Screenshot of the interrupted linear scanning pattern interface.


  1. Crescenzi, A., Capra, R., Choi, B., & Li, Y. (2021). Adaptation in Information Search and Decision-Making under Time Constraints. In Proceedings of the 2021 Conference on Human Information Interaction and Retrieval (pp. 95–105). ACM.

  2. Crescenzi, A., Kelly, D., & Azzopardi, L. (2016). Impacts of Time Constraints and System Delays on User Experience. In Proceedings of the 2016 ACM Conference on Human Information Interaction and Retrieval, CHIIR 2016, Carrboro, North Carolina, USA, March 13-17, 2016 (pp. 141–150). ACM.

  3. Cutrell, E., & Guan, Z. (2007). What are you looking for? An eye-tracking study of information usage in web search. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 407–416). ACM.

  4. Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175-191.

  5. Franke, T., Attig, C., & Wessel, D. (2019). A personal resource for technology interaction: development and validation of the affinity for technology interaction (ATI) scale. International Journal of Human–Computer Interaction, 35(6), 456-467.

  6. Kammerer, Y., & Gerjets, P. (2010). How the Interface Design Influences Users’ Spontaneous Trustworthiness Evaluations of Web Search Results: Comparing a List and a Grid Interface. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications (pp. 299–306). ACM.

  7. Kules, B., & Capra, R. (2008). Creating exploratory tasks for a faceted search interface. Proc. of HCIR 2008, 18–21.

  8. O’Brien, H.L., Cairns, P., & Hall, M. (2018). A practical approach to measuring user engagement with the refined user engagement scale (UES) and new UES short form. International Journal of Human-Computer Studies, 112, 28-39.

  9. Van der Vegt, A., Zuccon, G., Koopman, B., & Deacon, A. (2020). How searching under time pressure impacts clinical decision making. Journal of the Medical Library Association: JMLA, 108(4), 564–573.

  10. Wilson, M., & Wilson, M. (2013). A comparison of techniques for measuring sensemaking and learning within participant-generated summaries. Journal of the American Society for Information Science and Technology, 64(2), 291-306.