Attention check placement and formatting

Francesca Rex - April 28th, 2025

Self-administered online surveys offer respondents the flexibility to participate at their convenience and from any location. However, ensuring that this convenience does not compromise data quality is a key concern for researchers. Reliable data collection depends on respondents' authenticity and engagement. To maintain high-quality responses, online surveys often include attention checks designed to filter out inattentive or inauthentic participants. This experiment, part of our ongoing Methodology matters series, evaluates how the placement and formatting of attention checks can influence their pass rates.

Summary and takeaway

Attention checks help ensure high-quality survey responses and results by filtering out disengaged and inauthentic participants. This experiment tests how two factors, placement (early vs. late in the survey) and question formatting, affect pass rates. We find that later placement and the absence of a line break both lead to higher pass rates. Our findings demonstrate that researchers who use attention-check questions in their surveys should understand how design decisions shape what these questions measure. They also suggest that “attentiveness” is not a simple binary quality of respondents: attention waxes and wanes throughout a survey experience in ways that researchers cannot always anticipate.

Introduction and background

Attention checks serve a dual purpose: confirming that respondents are human and ensuring they are answering thoughtfully. Previous research has found that about 30% of online survey respondents are “inattentive and not suitable for the purposes of most academic research” (Thomas and Clifford, 2017; Chandler et al., 2019). Attention checks help mitigate these concerns by incorporating specific questions that assess respondent engagement.

Attention checks come in a variety of forms, and researchers regularly adapt them to improve their effectiveness. An attention check typically either poses a question with one objectively correct answer or requests an open-ended response that gives more substantive information for judging whether a respondent is authentic. For example, a survey may ask: “Which of these is a day of the week: Library, Frog, Wednesday, Mountain?” A more direct example is: “Choose the number 1 if you are paying attention.” Open-ended prompts include “What was this survey about?” or “What is your favorite movie?” Responses to these questions help researchers identify and remove low-quality responses.

Given the importance of attention checks, researchers benefit from understanding the factors that affect pass rates. This experiment examines two variations of a common attention check: where in the survey the question appears (early vs. late) and how the instructions are formatted (with or without a line break between the instructions and the question stem). Analyzing how these variations affect pass rates provides insight into the systematic influences that such design choices can exert.

Experimental setup

To assess the impact of attention check placement and formatting on pass rates, participants were randomly assigned to one of four groups:

                       Earlier in the survey   Later in the survey
With a line break      Group A                 Group B
Without a line break   Group C                 Group D

The attention check used in this study was: “We want to check if you're still reading closely. Please select 'very interested' and 'not at all interested' in response to the following question. How interested are you in politics?” Respondents were given four answer options: very interested, somewhat interested, not that interested, and not at all interested.

In the placement condition, half of the participants saw the attention check early in the survey, and the other half saw it at the end of the survey. The median survey completion time was approximately 8 minutes. For those in the early condition, the attention check appeared around the 30-second mark, whereas participants in the late condition encountered it at a median time of roughly 5.5 minutes into the survey. In the formatting condition, half of the participants saw the question text with a line break between the instructions and the substantive part of the stem (before the final question, “How interested are you in politics?”). The other half saw the instructions combined with the substantive stem.
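
To make the design concrete, here is a minimal Python sketch of how such an independent 2×2 random assignment could work. The function name, condition labels, and per-respondent seeding are illustrative assumptions, not the survey platform's actual implementation:

```python
import random

def assign_condition(respondent_id: str, seed: int = 2025) -> dict:
    """Independently randomize placement and formatting for one respondent."""
    # Seeding on the respondent ID makes the assignment reproducible.
    rng = random.Random(f"{seed}-{respondent_id}")
    placement = rng.choice(["early", "late"])
    line_break = rng.choice([True, False])
    group = {
        ("early", True): "A",
        ("late", True): "B",
        ("early", False): "C",
        ("late", False): "D",
    }[(placement, line_break)]
    return {"placement": placement, "line_break": line_break, "group": group}

print(assign_condition("resp-001"))
```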

Research hypotheses

Prior to fielding, we hypothesized:

  • Hypothesis 1: Attention checks placed earlier in the survey will have higher pass rates compared to those placed later in the survey.
  • Hypothesis 2: Separating the instructions from the main stem of the attention check with a line break will lead to lower pass rates compared to formatting that has no line break.

To test these hypotheses, we conducted a survey in which all respondents received the same attention check, varying its formatting and placement to assess their impact on pass rates.

Evaluating the hypotheses - Method

The dependent variable was a binary indicator of whether a respondent answered the attention check correctly. We then regressed this pass indicator on the experimental conditions to assess how placement and formatting influenced pass rates.
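
The write-up does not specify the model, but because the effects below are reported in percentage points, a linear probability model is one natural reading. A minimal sketch in Python using pandas and statsmodels, with a hypothetical file name and column coding:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per respondent, with condition assignments
# and a 0/1 indicator ("passed") for the attention check.
df = pd.read_csv("attention_check_responses.csv")
df["late_placement"] = (df["placement"] == "late").astype(int)
df["no_line_break"] = (df["formatting"] == "no_line_break").astype(int)

# Linear probability model: coefficients read directly as
# percentage-point changes in the pass rate.
model = smf.ols("passed ~ late_placement + no_line_break", data=df).fit()
print(model.summary())
```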

Results

Hypothesis 1: Attention checks placed earlier in the survey will have higher pass rates compared to those placed later in the survey.

Contrary to our original hypothesis (which was based on the assumption that attention may lag as respondents tire during the course of a survey), attention checks placed later in the survey had significantly higher pass rates than those asked earlier on. In fact, placing the attention check toward the end of the survey increased the probability of passing by approximately 7 percentage points compared to earlier placement (p < 0.01).

Hypothesis 2: Separating the instructions from the main stem of the attention check with a line break will lead to lower pass rates compared to formatting that has no line break.

The results supported our second hypothesis. The absence of a line break increased the probability of passing the attention check by about 7 percentage points compared to versions that included a line break (p < 0.05).

Additional exploratory analyses

In an additional analysis that was not preregistered, we considered the influence of screen size — specifically, whether respondents with smaller screens are more likely to fail an attention check with a line break due to the increased difficulty of viewing the entire question at once.

We are not able to randomize screen size, but a large share of respondents in this survey (61%) took it on a mobile device. There are significant differences between people who take surveys on their phones and those who take them on larger screens (younger, lower-income, and less educated people are more likely to use a cellphone than a computer), but we can take some measures to control for these potentially confounding factors.

In a regression where the dependent variable is whether a respondent passed the attention check and the explanatory variables are screen size, age, race/ethnicity, gender, and education, we find that removing the line break significantly improves pass rates among small-screen users. This is consistent with the idea that a smaller screen prevented respondents from seeing the instructions in their entirety. Formatting mattered less on larger screens, likely because the entire question and instructions were visible without scrolling.
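
A rough sketch of that exploratory model in Python, using a logistic regression with a formatting-by-screen-size interaction; the file name, column names, and the coding of mobile devices as small screens are assumptions for illustration, not details reported in the study:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("attention_check_responses.csv")  # hypothetical file
df["no_line_break"] = (df["formatting"] == "no_line_break").astype(int)
df["small_screen"] = (df["device"] == "mobile").astype(int)

# Logit of passing on formatting, screen size, their interaction,
# and demographic controls (all column names assumed).
model = smf.logit(
    "passed ~ no_line_break * small_screen"
    " + age + C(race_ethnicity) + C(gender) + C(education)",
    data=df,
).fit()
print(model.summary())

# A positive no_line_break:small_screen coefficient would indicate that
# removing the line break helps small-screen respondents in particular.
```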

General discussion

Our findings show that the formatting and placement of attention checks substantially impact pass rates. Respondents passed the attention check at the highest rate when it was placed later in the survey and formatted without a line break between the instructions and the question stem. These results highlight that attention is not a fixed trait that remains constant throughout a survey or across respondents; rather, it fluctuates over the course of the survey and varies from person to person. A high or low pass rate may not reflect a respondent’s overall attentiveness so much as factors like survey fatigue, question complexity, or engagement with prior content, making any single check a less reliable measure of true attentiveness.

These patterns also underscore the broader importance of user experience in survey design. Factors such as screen size, scrolling requirements, and visual layout can affect whether respondents are able to process instructions and answer questions as intended. For example, mobile users may be more vulnerable to missing important instructions due to limited screen space, particularly when formatting (like line breaks) obscures information. Instead of treating attention checks as definitive indicators of data quality, researchers should consider them in conjunction with other metrics, such as open-ended responses or completion time, to gain a more holistic understanding of respondent engagement.

About the author

Francesca Rex, Program Manager, Scientific Research

Rex is a Program Manager in YouGov’s Scientific Research Group, where she supports the design, execution, and analysis of academic surveys. Her research spans a variety of political and social issues, with a particular interest in international relations and the study of societies outside the United States. Rex earned her Bachelor's degree in Economics and Political Science from the University of Colorado Boulder.

About Methodology matters

Methodology matters is a series of research experiments focused on survey experience and survey measurement. The series aims to contribute to the academic and professional understanding of the online survey experience and promote best practices among researchers. Academic researchers are invited to submit their own research design and hypotheses.