Using Statistical Modeling to Predict Election Outcomes

In his presentation during the October MIT Sloan Alumni Online series, Arnold Barnett, PhD ’73 (George Eastman Professor of Management Science; Professor of Statistics), drew on his expertise in statistical sampling to examine the United States political system and the standards and biases of polling methodology.

More importantly, he asked, “Who’s going to win?”

Arnold Barnett, PhD ’73, George Eastman Professor of Management Science

Barnett started by looking back in time. After the perceived failures of the polls in both 2016 and 2020, he acknowledged the public's general frustration with, and distrust of, the polling system.

In 2020, polls in the swing states underestimated Donald Trump's vote share by roughly two percentage points. But was that shortfall a sign that pollsters were incompetent? Barnett noted that it was important to consider outside factors, such as the global pandemic underway in 2020. How many people were actually willing to go to the polls given the risk of contracting COVID-19? How many were unwilling to open their doors to pollsters in the midst of the pandemic? There were also other factors, like new ways of voting through drop boxes and mail-in ballots.

“With all those things at hand, getting it right to within two percentage points struck me as very good,” said Barnett.

A new path forward for polling

After 2016, some pollsters began re-weighting their data so that samples would better represent the electorate and account for the discrepancies seen that year.

RealClearPolitics, a reputable polling source, averages polling results from a given time period to make predictions. In contrast, Nate Silver’s Silver Bulletin—another respected polling source, formerly known as FiveThirtyEight—takes individual polling results and adjusts them, giving greater weight to polls that have historically been accurate. Barnett compared the results of the two strategies—simple averaging versus sophisticated aggregation—and found that the two methodologies produced similar numbers. Moreover, the local polls that were averaged together by FiveThirtyEight and RealClearPolitics did very well in their own right.
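To make the contrast concrete, here is a minimal sketch in Python using made-up poll margins and weights. It is not either organization's actual model, only an illustration of simple averaging versus accuracy-weighted aggregation.

```python
# Minimal sketch (hypothetical margins and weights, not either site's model):
# simple averaging versus accuracy-weighted aggregation of poll margins.
polls = [
    # (reported margin in percentage points, weight for the pollster's track record)
    (1.4, 1.0),
    (0.8, 2.5),   # a historically accurate pollster gets more weight
    (2.1, 0.5),
    (1.0, 1.5),
]

simple_average = sum(margin for margin, _ in polls) / len(polls)
weighted_average = sum(margin * weight for margin, weight in polls) / sum(
    weight for _, weight in polls
)

print(f"Simple average margin:   {simple_average:+.2f} points")
print(f"Weighted average margin: {weighted_average:+.2f} points")
```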

Bearing this in mind, Barnett regards 2020’s polling as a “big success.”

Although no one knew exactly why Trump's vote share was underestimated in 2020, many pollsters in 2024 re-weighted their data to avoid underestimating him again. Some effectively pushed their estimates of Trump's 2024 support above the levels actually reported by the people they questioned. The change was controversial, and some pollsters refused to adjust the data, opting to keep it raw despite fears of underestimation.
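The talk did not spell out how such re-weighting is done. One common approach is to weight respondents so the sample matches an assumed benchmark, for example the share who say they voted for each candidate last time. The sketch below uses made-up respondents and a made-up benchmark purely for illustration.

```python
# Minimal sketch (hypothetical data): re-weight respondents so the sample's
# share of self-reported 2020 Trump voters matches an assumed benchmark.
respondents = [
    # (2024 preference, voted for Trump in 2020?)
    ("Trump", True), ("Harris", False), ("Harris", False),
    ("Trump", True), ("Harris", True), ("Harris", False),
    ("Trump", False), ("Harris", False),
]

BENCHMARK_2020_TRUMP_SHARE = 0.47  # assumed population benchmark
sample_share = sum(past for _, past in respondents) / len(respondents)

def weight(past_trump_voter: bool) -> float:
    # Up-weight the under-represented group, down-weight the over-represented one.
    if past_trump_voter:
        return BENCHMARK_2020_TRUMP_SHARE / sample_share
    return (1 - BENCHMARK_2020_TRUMP_SHARE) / (1 - sample_share)

raw_share = sum(pref == "Trump" for pref, _ in respondents) / len(respondents)
weighted_share = sum(
    weight(past) for pref, past in respondents if pref == "Trump"
) / sum(weight(past) for _, past in respondents)

print(f"Raw Trump share:      {raw_share:.1%}")
print(f"Weighted Trump share: {weighted_share:.1%}")
```

Whether the weighted or the raw number is closer to the truth depends on whether the assumed benchmark is right, which is one reason the practice was controversial.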

What did this look like in 2024?

For the 2024 election, Silver Bulletin employed sophisticated aggregation to predict that Trump had a 51.2% chance of winning as of October 19.

However, RealClearPolitics’s method of using the simple averages of raw data generated a very different number.

The assumption was that the outcome of the election depended on what happened in the seven swing states—Arizona, Georgia, Michigan, Nevada, North Carolina, Pennsylvania, and Wisconsin—because the other 43 states were considered settled and split between Trump and Harris in a way that left neither with an Electoral College majority.

Barnett noted that focusing on the seven swing states would give the best indication of the likely election result. Starting with Arizona, RealClearPolitics's average showed Trump ahead by 1.4 percentage points.

After going through the polling reports for all seven swing states, Barnett shared that the polling numbers showed that Trump was slightly ahead in every swing state.

“I was so surprised that as I went from Arizona to Georgia to Michigan, it's like looking at the same set of numbers again and again and again,” he said.

Adding up all the information from the swing-state polls, and assuming that people would vote on November 5 as they had answered in the polls, Barnett found that the RealClearPolitics results implied a 96% chance of a Trump win, as opposed to the Silver Bulletin's 51.2%.
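Neither organization publishes a simple formula, but a small Monte Carlo sketch with assumed numbers shows how the same modest, uniform swing-state leads can produce very different overall probabilities depending on how polling errors are modeled: treated as independent, the leads compound into a very high combined probability; allowed to move together through a shared nationwide error, the probability falls back toward a toss-up. All margins and error sizes below are hypothetical.

```python
# Illustrative Monte Carlo sketch (assumed margins and error model; not the
# actual RealClearPolitics or Silver Bulletin methodology).
import random

swing_leads = [1.4, 1.6, 1.2, 1.8, 1.5, 1.0, 1.3]  # hypothetical Trump leads, in points
STATE_ERROR = 1.5   # per-state polling error (standard deviation, points)
SHARED_ERROR = 3.0  # nationwide error that shifts every state the same way (points)

def win_probability(shared_sd: float, trials: int = 100_000) -> float:
    wins = 0
    for _ in range(trials):
        shared = random.gauss(0, shared_sd)
        states_won = sum(
            lead + shared + random.gauss(0, STATE_ERROR) > 0
            for lead in swing_leads
        )
        wins += states_won >= 4  # rough proxy: carrying most swing states wins
    return wins / trials

print(f"Independent state errors only:  {win_probability(0.0):.0%}")
print(f"Adding a shared national error: {win_probability(SHARED_ERROR):.0%}")
```

The point is not that either outlet used this exact model, only that assumptions about how polling errors move together can swing an aggregate probability from near-certainty to something close to a coin flip, which helps explain how two respected sources could land so far apart.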

So, what do we do about the discrepancy between the aggregation and averaging methods? Both pointed toward a Trump victory, but one implied a near toss-up while the other implied a clear win.

After considering both methods, Barnett placed the odds of a Trump victory at 62%.

Other effects of polling

“It's also true that polls not only describe what's happening, but they can shape what's happening,” said Barnett.

At the time of his talk, Barnett reminded his listeners that nothing was inevitable. He also claimed that it was safe to say the numbers were moving in a certain direction. Now we know he and the polls were right.

One last thing that Barnett suggested at the end of his talk was, “maybe we should step back for a moment from the immediate issues of the election.” He remarked that there was a lot of tension in the air this fall due to the charged political atmosphere.

In a salute to the late Lester Thurow—who once answered “MIT” when asked, “Is there anything in America that will be around in 5,000 years?”—Barnett finished his talk with a toast.

“Why don't we take a moment away from the present crisis and just quietly have a little bit of a toast to the MIT Sloan School of Management 5,000 years from now?”

MIT Sloan Alumni Online: Professor Arnie Barnett, PhD '73

For more info: Andrew Husband, Sr. Associate Director, Content Strategy, OER, (617) 715-5933