
Polling Guru Nate Silver Accuses Pollsters of Cheating, Herding to Make Race Appear Tight


With the exception of the New York Times, Silver thinks the pollsters are “just f***ing punting on this election for the most part.”

Former President Donald Trump currently leads Vice President Kamala Harris by 0.3% in the RealClearPolitics average of national polls and by an average of 1.1% in the battleground states. Both results are well within the margins of error of the individual polls included in the averages. Although there’s no question that Trump’s poll numbers moved higher throughout the month of October, and he remains marginally better positioned than Harris, the race is still considered to be neck and neck. 

We are now four days out from the election and the lack of any clear direction from the polls seems unnatural. 

Polling guru Nate Silver has noticed this “sameness” in the polls. In the Thursday edition of the Risky Business podcast, which he co-hosts with Maria Konnikova, he blamed it on the pollsters.

In a discussion about the current state of the race, he told Konnikova, “It’s basically 55-45, Trump. Or 54-45 with a small chance of a tie. It’s been a little weird, I mean, look, it’s gradually drifted to Trump over actually a fairly long period now. Two out of every three days, Harris has lost ground in the forecast since roughly early October.”

Then he turned to the polls. “I kind of trust pollsters less because they all, every time a pollster [says] ‘Oh, every state is just plus-one, every single state’s a tie.’ No! You’re f***ing herding! You’re cheating! You’re cheating! Your numbers aren’t all going to come out at exactly one-point leads when you’re sampling 800 people over dozens of surveys.”

He continued, “You are lying. You’re putting your f***ing finger on the scale.”

[In a report from his days at the data website FiveThirtyEight, Silver defined herding as “the tendency for polls to produce very similar results to other polls, especially toward the end of a campaign. A methodologically inferior pollster may be posting superficially good results by manipulating its polls to match those of the stronger polling firms.”]
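[A back-of-the-envelope check on Silver’s point. This sketch is mine, not his: with about 800 respondents and a race near 50-50, the 95% margin of error from sampling alone is roughly ±3.5 points, so honest, independent polls of a tied race should scatter across a band several points wide rather than all landing on the same one-point lead.]

```python
import math

# Back-of-the-envelope illustration (author's sketch, not Silver's math):
# the 95% margin of error for a proportion near 50% in a single poll.
def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error in percentage points for a sample of n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(800), 1))   # about 3.5 points for n = 800
```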

“I will not name names,” he told Konnikova, “but some pollsters are really bad.” Silver laughed and then said, “Emerson College.”

Silver complained about “all these GOP-leaning firms” who don’t want to “go too far out on a limb.” They all show Trump up 1 in Pennsylvania.

[I would argue that most left-leaning pollsters show Harris up 1 in Pennsylvania and in many other battleground states as well, but I digress.]

“Every single f***ing time? No, that’s not how f***ing polling works!” he joked. “There’s a margin of f***ing error.”

With the exception of the New York Times, Silver thinks the pollsters are “just f***ing punting on this election for the most part.”

He pointed out, “Some of the other polls will actually publish numbers that surprise you once in a while. If a pollster never publishes any numbers that surprise you, then it has no value.”

“But look, all seven swing states are still polling within it looks like a point and a half here … [or] two points. It doesn’t take a genius to know that if every swing state is a tie, that the overall forecast is a tie.”

In a recent New York Times op-ed, Silver said he is frequently asked by readers what his gut is telling him about the outcome of the race. He wrote, “So OK, I’ll tell you. My gut says Donald Trump. And my guess is that it is true for many anxious Democrats.”

But, he is quick to tell readers he doesn’t think they “should put any value whatsoever on anyone’s gut — including mine.”

“Instead,” he notes, “you should resign yourself to the fact that a 50-50 forecast really does mean 50-50. And you should be open to the possibility that those forecasts are wrong, and that could be the case equally in the direction of Mr. Trump or Ms. Harris.”

In the op-ed, Silver dismisses the idea of the “shy Trump voter” because there’s “not much evidence” to back it up. He sees the problem as “nonresponse bias.” He adds, “It’s not that Trump voters are lying to pollsters; it’s that in 2016 and 2020, pollsters weren’t reaching enough of them.”

Although few of us are likely surprised by Silver’s remarks about pollsters, it was interesting to hear an insider’s perspective.

[Note: I have never received a call from a pollster. I would be curious to hear in the comment section if any readers have been contacted by a pollster, by call or text – and if you participated in the survey.]


Elizabeth writes commentary for The Washington Examiner. She is an academy fellow at The Heritage Foundation and a member of the Editorial Board at The Sixteenth Council, a London think tank. Please follow Elizabeth on X or LinkedIn.


Comments

destroycommunism | November 1, 2024 at 5:10 pm

lefty:

so we can cheat

so when we win we can say trump got complacent like Hillary did in 2016

b/c we can

lefty is a deceitful being

act accordingly

I live in Montana. Not a battleground state for the presidency, but is for the Senate.

I’ve been contacted by several pollsters. They usually open as if they aren’t pollsters, but you can figure it out pretty quickly. I just hang up on them. I don’t know how it would be possible to tell whether this is more common among Trump/Sheehy supporters.

For the first time in my entire life my household was polled this year.

So we did what any Republican would do.

We lied our ass off and said we supported Harris and lied about our priorities and picked the most insane batshit crazy policies as our most important

We aren’t shy Trump voters – we just hate giving the enemy a clue as to what is going on.

Anecdotal but I can’t be the only one who thinks like this.

I like the part of the story where Nate is quoted as saying, “So OK, I’ll tell you. My gut says Donald Trump. And my guess is that it is true for many anxious Democrats.”

I received a “We missed you” email from a poll… supposedly.

I actually found it in my spam folder.

On YouTube, Rasmussen has an in-depth report of all the questions asked, broken down by almost every category. In a few instances, the answers are contradictory, showing a voter saying one thing but voting for another. Except for men v. women, abortion, and black/Hispanic voters, Trump scored better on almost every question. When I read many of the questions and saw the answers, it became clear to me that Trump would win PA.

The Gentle Grizzly | November 1, 2024 at 5:36 pm

I don’t get polled. But even if I did, I would not give them any answers; I would just hang up.

First of all, I don’t know who’s at the other end of the line. It could be a prankster, it could be a polling company, or it could be the authorities looking for a reason to cause me problems. Yes, in this day and age, I am that paranoid.

In the last two election cycles, the spread (margin of victory/defeat) between the GOP & DNC candidates in Nevada and Michigan was very close, usually only a couple of points at most. IOW, while the electorates look a bit different demographically in NV & MI, they still vote very similarly. In 2020, Biden won NV by 2.5-points and won MI by almost the same, 2.5-points. So, the difference between NV & MI was nothing. In 2016, Clinton won NV also by 2.5-points and Trump won a buzzer-beater in MI by a half-point.

Today, Susquehanna published two polls, one for MI and the other for NV. They’re showing Harris +5 in MI and Trump +6 in NV…an 11-point difference between the two states. Either the electorates in the two states have changed DRASTICALLY in 4-years, or the pollsters have absolutely no idea what’s going to happen and they’re literally just making it all up.

    The big spread between the results in MI and NV is exactly what Silver meant when he said, “Every single f***ing time? No, that’s not how f***ing polling works! There’s a margin of f***ing error.” That is how statistics works: when you have small samples like 800, you should expect results all over the lot, not nearly identical numbers in every poll.

    Elizabeth Stauffer in reply to TargaGTS. | November 1, 2024 at 8:23 pm

    I saw the Susquehanna poll. There was another crazy result in Michigan a week ago. Quinnipiac showed Harris up 4. But two weeks earlier, they found Trump up 4. An 8 pt swing in two weeks from the same pollster.

If only the matter of a fix was off the table.

Why would I trust anything pollsters tell me? I don’t know a single conservative who’ll give a pollster the time of day, and I don’t know a single progressive who won’t run her mouth to them for hours.

This takes Silver out of context: He says the error could go in either direction, or even be hiding a Harris lead. From his substack: “…pollsters may be terrified of showing Harris leads after two cycles of missing low on Trump, and they probably won’t be criticized too much for a Harris +1 or even a Trump +1 if she wins in Michigan by, say, 3 or 4 points.”

“The exceptions, however, are from some of the best pollsters in the business, like NYT/Siena, which has given Harris some of her best numbers in Pennsylvania but some of her worst in Arizona and Georgia, consistent with a scrambled map. And from Ann Selzer, who has consistently published seeming “outlier” polls only later to be proven right — who had Harris down only 4 points in Iowa. Polls in Kansas and the 2nd Congressional District of Nebraska — where herding is less likely because these races aren’t expected to be close and they don’t get much attention — have also shown conspicuously strong Harris data. If Harris approaches the numbers the polls show in these places, she’ll probably win demographically similar states like Michigan and Wisconsin comfortably.”

That said, the most recent NYT/Siena poll had Harris-Trump tied nationally, and Atlas (the least likely to be herding, per his recent Substack) had Trump +2 nationally. Both of those, if true, are extremely bad for Harris (she needs about +2 nationally to win the electoral college). The most recent Atlas has Trump +1 in PA, and tied in WI and MI.

    TargaGTS in reply to dwb. | November 1, 2024 at 7:36 pm

    I legit don’t understand how/why he would call NYT/Siena ‘one of the best pollsters in the business.’ Siena (which is the actual pollster, underwritten by the NYT for their national polls), had Biden +9 in their final 2020 poll and through most of the earlier polls in that cycle, had Biden by even more than that, at one point as much as +14.

    Did Siena do any better in 2022? Not at all. In FL, for instance, their final poll had DeSantis winning by +9. DeSantis won by almost 20-points. Their final Ohio Senate poll had it tied. Vance won by 6.5-points.

      Herding is not about the accuracy of one pollster; it’s about groupthink.

      NYT may not be accurate in itself, but it’s the best “methodology-wise” because essentially they are more independent and less likely to be herding.

      Put differently: Suppose NYT/Siena is indeed wrong by 6%, but all the polls are herding to their result?

      In other words: I’d rather have 3 independent polls to average than 9 polls herding to a tenth (that’s one poll).

      Once you have a feel for who’s *not* herding, throw the others out. Average the non-herders. That’s most likely the answer.

        TargaGTS in reply to dwb. | November 2, 2024 at 6:56 am

        Yes, I absolutely understand the point of ‘herding.’ But the engineer in me has difficulty buying the idea that Siena (again, Siena is the pollster; NYT is the underwriter) has superior methodology when A) their accuracy has proven to be appreciably below average and B) in 5 of the last 6 election cycles, their error has favored Democrats, usually by more than 75%. That second one is really the important one. This is like saying Yugo had superior quality control even though their cars were hot garbage. There’s something fundamentally wrong with a methodology that reliably favors a particular outcome, just like there’s something fundamentally wrong with QC that routinely produces a bad product.

        Give me 3 independent pollsters of varying accuracy with the proviso that their errors are as close to 50/50 as possible. Unfortunately, there are very, very few of those kinds of pollsters, which I think underscores how unscientific polling really is.

          In terms of accuracy, I don’t think anyone has a lock on methodology. Generally I agree, polling is horseshit, which is why I look at averages and markets**. Polling is more useful for answering the question “Who needs to vote for me to win?” and for targeted marketing.

          **And vibes. Trump campaign looks confident. Harris’s campaign manager is posting “Nothing to see here, relax we got this” videos on X. lol.

Subotai Bahadur | November 1, 2024 at 6:51 pm

When there is a strong likelihood, based on history, that the actual electoral poll has been rigged at great expense, how much cheaper is it to rig the so called predictive polls? All we can do at this point is to vote in the electoral poll. And then decide if we believe in the reported results.

Subotai Bahadur

In other words, no one knows one way or the other. It sure looks like Harris is a failure. It’s hard not to see the reality of how fake and stupid she is. She will lead to destruction and human suffering. But manipulation is part of the game, and the integrity of voting is quite suspect. 2022 looked like there was momentum and the result showed otherwise. These people that truly believe Trump is Voldemort will stop at nothing to prove just how virtuous they are as humans. Whether you like it or not!


I’ve never been contacted for a poll, but then again I’ve also never been called for jury duty. I agree with Silver: if every poll has the same numbers, I’m throwing the flag on them. They can’t all be the same.

If I received one I probably thought it was a scam or donation bait so I ignored it.

I live in AZ. I get polling calls all the time. I hang up on them because they ask too many damn questions and I value my time.

I also got a text request from a polling firm. Stated I was voting Trump and that was the end of that.

With regard to all these polls, I just add +5 to any poll for Trump. That, in my view, will tell you how he’s really performing. In other words, he’s not losing any battleground state. The proof of this is how desperate the Demoncrat Party is, lying about Trump to turn anything into an issue against him. At this point, it isn’t to stop him from winning, it’s to stop his margin in the popular vote from increasing.

Tight… Like Kamalas vag…

I have never been contacted for a poll – at least not in a decade or more.

The thing to realize is that polling is an art – NOT a science. You are taking a small sample (800 or 1,000 people out of millions) and trying to extrapolate.
How do you correct what you get in your sample to approximate the actual voting behavior of the larger population? Say in your sample of 1,000 people you get 700 women, 450 of whom are Republican, and the remaining 300 men break down as 200 Democrats, 75 Republicans, and 25 Independents. You know that this does not match up with the presumed breakdown of the actual population… so you must adjust the value of one vote in each category in your sample. Do those 25 men out of a thousand actually mean an independent candidate will get 2.5% of the vote? No? Then what does it mean?

Do you adjust for the time of day the poll was taken? Are more Black males working at that time of day and less likely to answer the phone?

The adjustment to variables goes on and on until you are basing a guess on the value of other guesses.
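[The adjustment described above amounts to what pollsters call post-stratification weighting. A minimal sketch, with invented numbers: a sample of 1,000 that came back 70% women when the population is assumed to be 52% female.]

```python
# Post-stratification sketch with invented numbers: scale each
# respondent so the sample's demographic mix matches the population's.
population_share = {"women": 0.52, "men": 0.48}   # assumed true mix
sample_counts = {"women": 700, "men": 300}        # skewed sample of 1,000
total = sum(sample_counts.values())

weights = {
    group: population_share[group] / (sample_counts[group] / total)
    for group in sample_counts
}
# Over-sampled women get weight < 1 (0.52/0.70, about 0.74);
# under-sampled men get weight > 1 (0.48/0.30 = 1.6).
```

Every such weight is itself a guess about the true population mix, which is the commenter’s point: the adjustments stack guesses on guesses.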

All this is before you throw in the conscious and unconscious biases of the pollsters.

TOO LONG/DIDN’T READ VERSION:

I prefer looking at the betting market.

    Elizabeth Stauffer in reply to Hodge. | November 1, 2024 at 8:39 pm

    I think it’s getting harder and harder to conduct polls. That, and dishonesty, is probably why polls have been so wildly inaccurate for the last few election cycles. And all the adjustments and the judgment calls required along the way make them even more inaccurate. … Even things like caller ID and people ditching landlines have impacted pollsters. Tough job.

      Agree with all this 100%. I think the public’s distrust of the media – comfortably at the highest level it’s been in decades – also adds to the inability of pollsters, even earnest pollsters, to produce quality work. Most people view polling as an element of the news media; that’s the perception, irrespective of whether it really is or isn’t. Are you likely to participate in a process that you believe is fundamentally dishonest? Probably not.

      Also, pollsters recalibrate after every election. Unfortunately, they recalibrate to other polling, including exit polling. For a variety of reasons, the reliability of exit polling is also hugely suspect, which makes the recalibration process unreliable as well.

Suburban Farm Guy | November 1, 2024 at 10:51 pm

Is it past the margin of cheat? That is the wild card here.

Regarding sample sizes:

A “larger” sample size doesn’t guarantee legitimate results, and a “smaller” sample size doesn’t guarantee illegitimate results. (Read up on the infamous 1936 Literary Digest presidential poll.) It would be more precise to speak of sample sizes being either “adequate” or “inadequate,” depending on the desired margin of error and confidence level. Various assumptions (such as the distribution of the population, how the sample is determined and collected, etc.) are made in conducting polls, and any decent polling organization should make its assumptions known by disclosing its polling methodology.

A “smaller” sample size MAY be adequate if certain conditions are met – the key being if the sample is a representative sample of the population. The illustration I generally used when teaching gen ed stats was that of a cook making soup. The cook could dump all the ingredients in the stock pot, yet if he never stirred the pot (i.e., didn’t bother to get a representative sample) but simply drank an entire bowl of the stuff he might conclude (erroneously) that he needed to add more salt. On the other hand, if he first stirred the pot before tasting (pretty much ensuring a representative sample), all he would need to sample would be a small spoonful to determine whether or not he should add more salt.
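As a sketch of “adequate,” the standard margin-of-error formula can be inverted to ask how many respondents a desired margin requires — assuming, as the soup analogy stresses, that the sample is representative in the first place (illustrative numbers, my own):

```python
import math

# Smallest sample giving a desired 95% margin of error (in points),
# assuming simple random sampling of a proportion near 50%.
def required_n(moe_pts, p=0.5, z=1.96):
    e = moe_pts / 100
    return math.ceil(z * z * p * (1 - p) / (e * e))

print(required_n(3))   # about 1,068 respondents for a +/-3-point margin
```

Notably, the result does not depend on whether the population is one million or 300 million — the stirring, not the size of the pot, is what matters.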

I just figure the polls are providing cover for the cheating that will need to be done.

The secret I have found this time is to only discuss issues objectively. I never mention candidates names because that just puts them into defense or attack mode. What I will say is something like “I wonder how senior citizens will vote given that the 2025 SS increase is 2.5% and we all know that rent, utilities, and food at the stores is well into double digits the last 4 years.” Or, “of course there is a shortage of affordable rental units in Portland. All of the new arrivals have to live somewhere. Why are people so shocked that rents have gone up 50% in 4 years.” I just want to make sure that they know there is a real financial cost to their virtue signalling. Don’t get me started on the mandatory abortion tax, even if you are a guy or a woman who will never get one.

I participate in YouGov polls and I have since 2015.

Referencing only October polls with political inquiries, the last two polls I received which require my zip code (Georgia) have both failed to complete due to technical problems at the server level.

I’ve done this long enough to know that something is fishy. ( FYI, I live in a highly Republican County).

I worked for Dun & Bradstreet back when they owned Nielsen. D&B policy was that employees were not to take part in any survey groups. That was over 25 years ago, before D&B got broken apart.