
Using reports of neighborhood vote to measure election outcomes
Summary and takeaway
The results of this study offer mixed findings about the usefulness of asking people how their neighbors will vote rather than simply asking how they themselves plan to vote. People's reports about their neighborhoods generally line up with actual vote returns, but those reports contain systematic biases, and not enough is known about this measure to confidently apply it in the measurement of public opinion.
Background
In the run-up to the 2024 election, YouGov hosted a survey for an anonymous client who was investing heavily in the betting markets. This case received some media attention at the time, and our client ended up making tens of millions of dollars by correctly predicting the outcome of the election. For this client, we used a somewhat unorthodox approach. In addition to asking about people's plans to vote and how likely they were to turn out (all very standard in election polling), we included a set of items asking people to report how they thought their neighbors would vote, and we reported those results alongside the standard measures of candidate preference.
While the purpose of polling is not to predict election outcomes, prediction remains of interest to many people, and for better or worse (mostly worse, in my view) the absolute accuracy of election polling remains a key component of how polling is judged by the general public.
The accuracy of respondent reports of their context has some inherent interest beyond predicting elections. Survey researchers face increasing difficulty reaching certain parts of the population. If individual survey respondents can give accurate reports of their local context, perhaps we can use those reports to adjust our estimates of other phenomena we are interested in studying.
Research questions
How accurate are respondent reports of their neighborhood voting patterns?
Are there groups that are more or less accurate in their reporting of neighborhood voting?
Research design
In the weeks leading up to the 2024 election, we fielded a set of surveys in swing states that asked (in addition to standard measures of candidate preference): “What percentage of votes cast in your neighborhood do you think will go to Kamala Harris?” and “What percentage of votes cast in your neighborhood do you think will go to Donald Trump?”
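The analysis below works with two-party shares rather than the raw percentages. The writeup does not spell out the conversion, but presumably the two items are combined by dropping third parties, along these lines (a minimal Python sketch; the function name and inputs are illustrative):

```python
def perceived_two_party_harris(harris_pct: float, trump_pct: float) -> float:
    """Harris's perceived share of the two-party vote, ignoring third parties.

    Assumption: this mirrors how the two raw survey items are combined.
    """
    return harris_pct / (harris_pct + trump_pct)

# Example: a respondent who says 45% Harris and 50% Trump
print(perceived_two_party_harris(45, 50))  # ~0.474
```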
We used precinct-level results and GeoJSON files provided by the New York Times to estimate the partisanship of each respondent's ZIP code (technically, ZIP code tabulation area or ZCTA), apportioning each precinct's votes according to the share of the precinct that fell into each ZCTA. This serves as the objective measure against which we compare respondents' reports.
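As a rough illustration of this apportionment, here is a sketch using geopandas. The file names and column names are hypothetical, and the actual pipeline may differ; area-based apportionment is only one way to split precinct votes across ZCTAs.

```python
import geopandas as gpd

# Hypothetical file and column names; the actual data layout may differ.
precincts = gpd.read_file("precincts.geojson")  # harris_votes, trump_votes, geometry
zctas = gpd.read_file("zctas.geojson")          # zcta, geometry

# Use an equal-area projection so area shares are meaningful
precincts = precincts.to_crs(epsg=5070)
zctas = zctas.to_crs(epsg=5070)
precincts["precinct_area"] = precincts.geometry.area

# Intersect precincts with ZCTAs; each resulting piece keeps its precinct's votes
pieces = gpd.overlay(precincts, zctas, how="intersection")
share = pieces.geometry.area / pieces["precinct_area"]
pieces["harris"] = pieces["harris_votes"] * share
pieces["trump"] = pieces["trump_votes"] * share

# Sum the apportioned votes within each ZCTA and compute the two-party share
by_zcta = pieces.groupby("zcta")[["harris", "trump"]].sum()
by_zcta["harris_two_party"] = by_zcta["harris"] / (by_zcta["harris"] + by_zcta["trump"])
```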
What a person has in mind when they say "neighborhood" almost certainly differs from the boundaries of the ZCTA they live in. This is a significant limitation of our research design, but it was the best we could do to ground individual survey respondents in their political context. For the remainder of this writeup, I'll refer to ZIP code partisanship as the actual neighborhood vote and respondent reports as the perceived neighborhood vote.
Data
The data for this study come from four surveys of likely voters in Michigan, North Carolina, Pennsylvania, and Wisconsin. Each survey included interviews with 1,000 likely voters, with the exception of North Carolina, which had a somewhat smaller sample of 600. The surveys were conducted between October 24 and November 4, 2024, and the results for each state were weighted to be representative of that state's registered voter population.
Results
Overall, respondent perceptions of their neighborhood vote are highly correlated with the actual vote in their neighborhood. The plot below shows a smoothed fit of the actual neighborhood share of the two-party vote that went to Harris (horizontal axis) against the perceived share of the two-party vote (vertical axis).
The plot shows the results for the combined sample across the four states (the results in each state were very similar). The results for all respondents are shown as a thin black line; the thicker red and blue lines show the results for Trump supporters and Harris supporters, respectively. The closer a line falls to the dotted diagonal, the more accurate the perceptions.
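A plot along these lines could be constructed with something like the following sketch, assuming a respondent-level DataFrame df (hypothetical) with columns actual (ZCTA two-party Harris share), perceived (the respondent's reported share), and vote_intent:

```python
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

# df is assumed to exist; see the column descriptions above.
fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1], linestyle=":", color="gray")  # perfect-accuracy diagonal

subsets = {
    "All respondents": ("black", df),
    "Trump supporters": ("red", df[df["vote_intent"] == "Trump"]),
    "Harris supporters": ("blue", df[df["vote_intent"] == "Harris"]),
}
for label, (color, d) in subsets.items():
    fit = lowess(d["perceived"], d["actual"])  # returns sorted (x, fitted) pairs
    ax.plot(fit[:, 0], fit[:, 1], color=color, label=label)

ax.set_xlabel("Actual two-party Harris share (ZCTA)")
ax.set_ylabel("Perceived two-party Harris share")
ax.legend()
plt.show()
```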

The results make several things clear. First, people are generally accurate in their reports of their local context. In these data, voters living in Democratic areas seemed to be a little more pessimistic about Harris's chances in their areas than those living in places that were more divided or more Republican.
The plot also shows a large and persistent partisan gap in perceptions of local context. Apart from those living in the most Democratic areas, Harris supporters viewed their local areas as much more Democratic than the actual data bear out. The same holds for Trump voters: almost across the board, they perceived their neighborhoods as more pro-Trump than they actually were.
The data also show a partisan asymmetry in the degree to which people's reports of their local context correspond to the actual places they live. Trump supporters (especially those living in more Democratic areas) were substantially less accurate in their neighborhood reports than Harris supporters.
How different measures of presidential support compare
The 2024 election was very close, and it was especially close in these key states. It would take a very large sample to confidently estimate the two-party margin with any individual poll. For the person still interested in predicting the outcome of the election, we can compare the "standard" approach (asking people directly whom they plan to vote for and reporting the results) with the "neighborhood" approach (asking people about their perceptions of their neighborhood vote). The plot below shows how the two approaches compare. The actual election result is shown at the top of each panel. The standard approach estimate and its 95% credible interval are shown with an open circle and dashed line; the neighborhood approach estimate and its credible interval are shown with a square and dotted line.
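The writeup does not describe the underlying model, but for intuition, here is one way the two intervals could be formed: a flat-prior Beta posterior for the standard approach and a bootstrap over respondents for the neighborhood approach. Both are assumptions on my part, and survey weights are ignored for brevity.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def standard_interval(n_harris: int, n_two_party: int):
    """Flat-prior Beta posterior for Harris's two-party share among
    respondents reporting their own vote intention (an assumed model)."""
    posterior = stats.beta(n_harris + 1, n_two_party - n_harris + 1)
    return posterior.mean(), posterior.ppf([0.025, 0.975])

def neighborhood_interval(perceived, draws=10_000):
    """Bootstrap interval for the mean perceived two-party Harris share;
    `perceived` holds one share per respondent (an assumed model)."""
    perceived = np.asarray(perceived)
    boot = rng.choice(perceived, size=(draws, perceived.size))
    means = boot.mean(axis=1)
    return means.mean(), np.quantile(means, [0.025, 0.975])

# Example with made-up numbers: 480 of 970 two-party respondents back Harris
print(standard_interval(480, 970))
```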

As the results show, the neighborhood approach is directionally correct in every case, but it is far more pro-Trump than the actual results in each state warranted. In two of the four states (North Carolina and Wisconsin), the standard approach also gives a directionally correct prediction.
In every case, the standard approach's credible interval contains the actual result, and in every case the actual result falls outside the neighborhood approach's credible interval. In this instance, it seems the neighborhood approach just happened* to get the correct answer: Donald Trump ended up winning each swing state by a small margin, and the neighborhood approach happened to fall on the "right" side of the line.
Conclusions
The results of this study leave me somewhat skeptical about the utility of neighborhood vote as some previously unknown "hack" for accurately measuring election outcomes. As I mentioned above, the real value of polling lies in its ability to give some kind of voice to the public rather than in guessing at the outcome of an election. Other work has shown that even when polling is not entirely "accurate" in nailing the exact result of a particular election, it can still give us a valid read on the public's preferences.
Some of the findings reported here could certainly be attributed to the disconnect between our measure of "actual" neighborhood vote (the average vote of the ZCTA in which the respondent resides) and what respondents have in mind when answering these questions. Geographic polarization has increased in recent years, and to the extent that people's immediate surroundings are more aligned with their own partisanship than the broader ZCTA in which they live, there will be some slippage in our measure.
The state of the evidence here is not such that I would personally be willing to stake millions of dollars on the outcome, and more work should be done to understand what exactly we are measuring when we ask people to tell us about their neighbors.
* Of course, it is possible that people's general pessimism about Harris's chances reflected some real, intangible, "vibes"-based sense among survey respondents that she was going to lose. We would need to repeat this measure in future elections to get a better understanding of whether responses to this kind of question pick up something like that.
About the authors
Alexis Essa, Research Director II, Scientific Research
Essa is responsible for leading projects in public affairs and academic research. She has over a decade of experience managing survey projects focused on public affairs, political polling, academic research, and consumer packaged goods. She has extensive experience fielding complex surveys that include experiments, conjoint, and MaxDiff designs. Additionally, Essa excels in survey instrument design and collaborates closely with clients to ensure that survey questionnaires are tailored to meet their specific needs, optimizing data collection outcomes and insights. Essa earned a B.S. in Marketing and Statistics from the University of Connecticut School of Business.
Brad Jones, Ph.D., Senior Research Director, Scientific Research
Jones is a survey methodologist for the Scientific Research Group. He assists clients with research and questionnaire development. Jones holds a Ph.D. in political science from the University of Wisconsin. Prior to joining YouGov, he worked for Meta generating insights about users' needs and expectations connected to transparency and control of the advertising delivery system. While at Meta, Jones worked with a wide range of product teams and researchers across Meta's various platforms. Before working for Meta, he spent the first part of his career at Pew Research Center working with the U.S. Politics and Public Policy team. During his time at Pew, Jones gained extensive experience working through every stage of the survey research process, from questionnaire development to project management, analysis, and reporting. He earned his B.A. in Political Science from Brigham Young University.
About Methodology matters
Methodology matters is a series of research experiments focused on survey experience and survey measurement. The series aims to contribute to the academic and professional understanding of the online survey experience and promote best practices among researchers. Academic researchers are invited to submit their own research design and hypotheses.