
Reliability of retrospective vote choice questions
Summary and takeaway
The results of this study suggest that some caution is in order when asking retrospective vote questions during a presidential campaign. While people are typically consistent in their reports of voting, the stability of over-time responses begins to fall after about three years, coinciding with the ramp-up of the campaign. For researchers without access to panel data (where they can rely on earlier measurements of the vote choice question), there is likely to be a substantial amount of noise in retrospective vote choice measures in the months leading up to a presidential election.
Background
Pollsters have increasingly been using past presidential vote to adjust their survey estimates. An overview of 2024 election polling showed that, compared to methods used in 2016, pollsters have increasingly incorporated past vote into their weighting methods. At YouGov, past presidential vote is a key part of the weighting process (e.g., see the methodology statement on any recent poll YouGov has conducted for The Economist). In principle, this adjustment can help pollsters deal with issues of differential partisan response, which can be difficult to distinguish from real swings in the race (e.g., see this analysis of 2012 election polls and this analysis of the same phenomenon in 2016).
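To make the adjustment concrete, here is a minimal sketch of one-variable past-vote weighting in Python. This is not YouGov's actual procedure (which rakes across many variables at once); the sample counts are invented, and the benchmark shares are the rounded certified 2020 national results.

```python
# A minimal one-variable past-vote weight (illustrative only):
# each respondent's weight is the population share of their reported
# 2020 vote divided by that group's share of the sample.
import pandas as pd

# Hypothetical sample of recalled 2020 vote choices
sample = pd.Series(["Biden"] * 560 + ["Trump"] * 400 + ["Other"] * 40)

# Rounded certified 2020 national vote shares
benchmark = {"Biden": 0.513, "Trump": 0.468, "Other": 0.019}

sample_shares = sample.value_counts(normalize=True)
weights = sample.map(lambda v: benchmark[v] / sample_shares[v])

# Weighted vote shares now match the benchmark
print((weights.groupby(sample).sum() / weights.sum()).round(3))
```

The catch, and the motivation for this study, is that the weighting variable is itself a self-report: if recalled vote drifts over the course of a campaign, the adjustment inherits that noise.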
Research questions
How stable are retrospective reports of voting behavior?
What are the factors associated with consistent (or inconsistent) reports of voting behavior?
Research design
Each election year since 2006, YouGov has sponsored the largest academic study of election attitudes and behavior: the Cooperative Election Study (CES). The CES brings together research teams from around the world who are interested in understanding U.S. elections. Its enormous sample size (each survey includes interviews with 60,000 or more people) enables analyses that would otherwise be underpowered.
The survey is conducted in two phases – a preelection wave and a postelection wave. The postelection wave includes a measure of vote choice collected immediately after the election takes place.
Most members of our panel are not asked to recall their vote again after answering this question on the CES, but there are some circumstances in which people are asked one or more additional times. This study focuses on individuals who were asked their vote choice immediately after the 2020 election in the CES and then at least one other time.
We can then look at how often individuals who were asked to report their 2020 vote more than once give the same answer each time they are asked.
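As a schematic of that measure, the Python sketch below flags a panelist as consistent when every later recall matches their CES post-election report. The table layout and column names are illustrative; they are not the actual CES variable names.

```python
# Schematic consistency check: one row per (respondent, interview).
import pandas as pd

recalls = pd.DataFrame({
    "respondent_id": [1, 1, 2, 2, 2, 3, 3],
    "wave":          ["ces_post", "later", "ces_post", "later", "later",
                      "ces_post", "later"],
    "vote_2020":     ["Biden", "Biden", "Trump", "Trump", "Biden",
                      "Biden", "Biden"],
})

def is_consistent(group: pd.DataFrame) -> bool:
    """True if every later recall matches the CES post-election report."""
    baseline = group.loc[group["wave"] == "ces_post", "vote_2020"].iloc[0]
    later = group.loc[group["wave"] != "ces_post", "vote_2020"]
    return bool((later == baseline).all())

consistency = recalls.groupby("respondent_id").apply(is_consistent)
print(consistency)          # respondent 2 flips, so False
print(consistency.mean())   # share of consistent panelists
```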
Data
The 2020 CES interviewed 61,000 Americans in the run-up to the 2020 elections and then again immediately afterward.
This study examines the subset of 2020 CES respondents who completed subsequent survey interviews in which they were asked at least once more to recall their 2020 vote (about 17,000 panelists). In general, the panelists under consideration here match the overall demographics of the CES.
Most respondents were asked to recall their 2020 vote one or two additional times in the four years between the 2020 CES post-election interview and the 2024 election.
The average time between the CES post-election interview and the first subsequent ask to recall vote in a later survey was about 30 months (first quartile: ~18 months, third quartile: ~42 months).
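A timing summary of this kind is straightforward to compute when interview dates are on file. The sketch below uses made-up dates purely for illustration:

```python
# Months between the CES post-election interview and the first later
# recall, summarized with quartiles (dates are invented).
import pandas as pd

dates = pd.DataFrame({
    "ces_post":     pd.to_datetime(["2020-11-15"] * 4),
    "first_recall": pd.to_datetime(["2021-06-10", "2022-05-01",
                                    "2023-09-05", "2024-02-20"]),
})

months = (dates["first_recall"] - dates["ces_post"]).dt.days / 30.44
print(months.describe(percentiles=[0.25, 0.75]).round(1))
```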
Results
Overall stability by 2020 vote choice
For up to about three years after the 2020 election (i.e., through November 2023), individuals in this sample were extremely consistent in their vote reports. At two years out, 98% of Biden voters and 97% of Trump voters were consistent in their vote reports. By the weeks before the 2024 election, stability had declined to 77% for Biden voters and 74% for Trump voters on average.
The plot below shows the share of CES respondents who remained consistent in their vote choice. The blue shaded area shows the estimate and uncertainty for people who reported voting for Biden in 2020. The red shaded area shows the same for those who reported voting for Trump.

Which groups are most (in)stable in their retrospective vote reports?
People whose partisanship aligned with their 2020 vote choice (e.g., those who, in the 2020 survey, identified with or leaned toward the Democratic Party and reported voting for Biden) were more likely to remain consistent in their reports. People with no political affiliation and those who voted against their reported partisanship in 2020 were substantially less stable.
Younger voters – especially younger Trump voters – were less stable in their reports of their presidential vote than older voters. Compared to non-Hispanic White voters, voters of other races and ethnicities (and especially the relatively few Trump voters in these groups) were less likely to give stable vote reports.
When it comes to education, the least stable group was those with a high school degree or less. Among Biden voters, stability increases with educational attainment. Among Trump voters, stability generally increases with educational attainment as well, but it dips back down among those with a postgraduate degree.
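Subgroup breakdowns like these fall out directly once each panelist carries a consistency flag. The sketch below groups a hypothetical panel by 2020 vote and education; every value is invented for illustration.

```python
# Share of consistent vote reports by education within each 2020 vote
# group (all data illustrative).
import pandas as pd

panel = pd.DataFrame({
    "vote_2020":  ["Biden", "Biden", "Biden", "Trump", "Trump", "Trump"],
    "education":  ["HS or less", "College", "Postgrad",
                   "HS or less", "College", "Postgrad"],
    "consistent": [False, True, True, False, True, False],
})

stability = (panel.groupby(["vote_2020", "education"])["consistent"]
                  .mean()
                  .rename("share_consistent"))
print(stability)
```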
Conclusions
What does all this mean for pollsters who use vote reports as part of their weighting methods? First, it should be noted that this is one study focused on one election; it is very possible that there are peculiarities of the 2024 election that wouldn't generalize to other years. In these data (and this election), instability was fairly symmetrical between Democratic and Republican voters. Consequently, the data are consistent with the idea that weighting by past vote in 2024 likely didn't cause significant problems.
Ultimately, the results here show the value of longitudinal data collection. Many of the worries about differential partisan response or faulty recollection can be alleviated by collecting vote information outside of the charged context of a presidential campaign.
About the author
Brad Jones, Ph.D., Senior Research Director, Scientific Research
Jones is a survey methodologist for the Scientific Research Group. He assists clients with research and questionnaire development. Jones holds a Ph.D. in political science from the University of Wisconsin. Prior to joining YouGov, he worked for Meta generating insights about users' needs and expectations connected to transparency and control of the advertising delivery system. While at Meta, Jones worked with a wide range of product teams and researchers across Meta's various platforms. Before working for Meta, he spent the first part of his career at Pew Research Center working with the U.S. Politics and Public Policy team. During his time at Pew, Jones gained extensive experience with every stage of the survey research process, from questionnaire development to project management, analysis, and reporting. He earned his B.A. in Political Science from Brigham Young University.
About Methodology matters
Methodology matters is a series of research experiments focused on survey experience and survey measurement. The series aims to contribute to the academic and professional understanding of the online survey experience and promote best practices among researchers. Academic researchers are invited to submit their own research design and hypotheses.