Published: Nov. 1, 2024

Just days before the U.S. presidential election, most national polls show former President Donald Trump and Vice President Kamala Harris are virtually tied.

But are those polls to be believed—especially considering how badly pollsters missed the mark in 2016, with former Secretary of State Hillary Clinton’s upset loss to Trump in key swing states, despite Clinton’s lead in pre-election polling?

Jaroslav Tir isn’t so sure.

“If you look at the 2016 election, the polls said one thing, but the election said another. It’s been a longstanding puzzle in political science,” says Tir, a University of Colorado Boulder professor whose areas of focus include armed conflict, war, terrorism and the “rally around the flag” effect—the idea that people rally behind their country when it is perceived to be under threat.

He recently published an article in the British Journal of Political Science about how governments can use language as a weapon against their ethnic minorities.

With co-author Shane Singh of the University of Georgia, Tir also recently published a blog post with the London School of Economics, in which they argue that voter exposure to news stories about threats to U.S. security can undermine the accuracy of election-polling projections.

“We thought, maybe there is a way to shed some light on the puzzle of whether the issue of threat may come into play in terms of people essentially saying one thing in the polls but then doing a different thing when it comes to voting,” Tir explains. “Specifically, do they actually show up to vote or not? The technical term is voter participation, or turnout.”

Skewed polling results?

To highlight how polling results can become skewed, Tir points to a survey experiment he and Singh conducted in an April YouGov poll of about 2,000 eligible voters. Some survey respondents were provided information referencing threats to U.S. security (aka “the threat treatment”) from countries such as Russia, China, Iran and North Korea, while a control group was provided with innocuous information about U.S. Geological Survey map production. Both groups were asked whether they planned to vote, and for whom, in the upcoming presidential election.

In the same poll, YouGov earlier asked those same respondents to rank themselves on a five-point scale as to how often they vote in presidential elections, from “always” to “never.” Compared to the control group, those respondents who received reports about threats to U.S. security showed a disconnect between their stated intention to vote and their actual past voter participation, by about 4 to 7 percentage points. This, says Tir, suggests that some poll participants are overstating their intent to vote, particularly those who self-identify as not always voting in presidential elections.

“Especially among these types of individuals, when suddenly they are saying, ‘I plan to vote in November,’ we’re reporting in the London School of Economics piece that there is reason for suspicion here,” Tir says. “There is reason for doubt that when these people are saying they are going to vote that they will actually do so.

“It’s not particularly meaningful for millions of people to say, ‘I will vote for Candidate A’—and then not actually vote. That will not help Candidate A, and it will really throw off polling projections.”

All of this raises the question: Why would people lie about their voting intentions?

Tir says that for people in the survey who were exposed to news about threats to U.S. interests, that exposure may have activated a “social-desirability bias.”

“The social-desirability bias is essentially people providing answers (to pollsters) that they think are socially desirable,” he explains. “Our argument in the London School of Economics piece is that, with news that U.S. interests are threatened, there is a kind of self-pressure to provide socially desirable answers regarding voting intentions. Basically, people want to signal that they are good citizens when their country is under threat.”

Furthermore, in the same YouGov survey, Tir and Singh asked which presidential candidate the respondent prefers, Joe Biden or Donald Trump (the survey was conducted before Biden dropped out). The threat treatment had no significant effect on claimed intentions to vote for Biden. However, it did depress, by about 4 percentage points, the probability of claiming an intention to vote for Trump, which the authors say is statistically significant.

Tir says discerning why the stated intention to vote for Trump is lessened by the threat treatment is beyond the scope of the London School of Economics blog.

“That would require further studies, but it perhaps is because of a perceived social desirability bias to not support a political challenger,” Tir says. “In other words, this could mean that Trump actually had more support than April polling data revealed.”

Unfortunately, attempting to correct for the threat-activated bias is not straightforward, because pollsters cannot control whether respondents have been exposed to threatening events in the news prior to participating in a poll, Tir notes. This makes it difficult to ascertain which individuals may have been “treated” by real-world circumstances and which ones remain in a more tranquil state, he adds.

“And as we state in the London School of Economics piece, the answer is not simply to ask people, ‘Have you been watching the news about threats to the United States?’ Because that just tells them, ‘You should be thinking about this,’ so then you’re creating a problem within the poll,” he explains.

Meanwhile, another factor can potentially skew polling results, Tir and Singh say in their blog. While polling firms employ algorithms that make projections based on “likely voters,” the recruitment of reluctant voters—who were missed by pollsters in 2016—is sometimes credited with helping propel Trump to his unexpected victory in his race against Clinton, the authors note.

And in 2024, the Trump campaign is banking on being able to identify and turn out citizens who have not previously voted, they add.

Tir says accurately gauging the effectiveness of such efforts in pre-election polls is difficult, yet the accuracy of projections depends upon it.

To trust, or not trust, the polls

So, does Tir have any confidence in current polling data as an accurate estimation of voting intentions, given his own research?

“This is going to be a really tricky answer, I warn you, because if people are not subject to the threat-treatment-activated social-desirability bias, then the projections should be pretty accurate,” he says. “The problem is, we don’t know who, or how many of the people being sampled by pollsters, are exposed to the threat treatment by reading or watching those types of news stories recently.

“I would say there is a lot more uncertainty in whatever polling projections the public has been shown. The answers provided to the pollsters by these respondents are not necessarily indicative of what we call voting behavior, meaning, will they actually vote next Tuesday?”