Nate Silver, creator of the FiveThirtyEight blog, made statistics cool in 2012 with his confident and correct prediction that Barack Obama would win by a wider margin than pundits expected—and especially with his uncannily accurate state-by-state predictions.

But during the current Presidential election cycle, he stumbled badly with an early assessment that Donald Trump had only a two percent chance of winning the Republican nomination. We saw on the last night of the Republican convention how that turned out.

Of course, probabilities are, by their nature, about events that are uncertain, and improbable events happen all the time. That an improbable event occurred does not by itself prove that the original assessment of its probability was wrong. But Silver’s estimate of Trump’s chances still seems way off. Silver himself attributes the miss to his failure to use statistical models in his early estimates. Basically, he said, he was acting like a pundit, merely attaching subjective probabilities to off-the-cuff judgments.

Now Yaneer Bar-Yam and Taeer Bar-Yam, of the New England Complex Systems Institute and the MIT Media Lab, have published a more thorough and scholarly assessment of the two-percent affair, and they’ve concluded that Silver’s explanation is basically right, but superficial. Despite his expertise, he fell into one of the classic traps of non-statistical thinking: failure to take into account the property of dependence.

In calculating Trump’s odds of winning, Silver postulated a series of six hurdles (he called them “Stages of Doom”) that Trump had to overcome in order to get the nomination. One, for example, was “heightened scrutiny,” the point at which voters start to focus on a candidate’s strengths and weaknesses; another was “Endgame,” in which party leaders might pull out all the stops to torpedo his nomination.

Silver assigned a 50 percent probability to Trump’s surviving each stage and then multiplied the probabilities together: 50 percent times 50 percent times 50 percent, and so on—another way of saying 0.5 raised to the sixth power. The number that comes out is 0.0156, a bit less than two percent.
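The arithmetic behind the two-percent figure is easy to reproduce; a minimal sketch in Python:

```python
# Six "Stages of Doom", each assumed to be an independent hurdle
# with a 50 percent chance of survival.
stages = 6
p_survive_stage = 0.5

# Under independence, the joint probability is the product of the parts.
p_nomination = p_survive_stage ** stages
print(round(p_nomination, 4))  # 0.0156, i.e. a bit less than two percent
```

The entire calculation rests on two assumptions, a 50 percent chance at every stage and independence between stages, which the article goes on to question.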

But Silver’s choice of six stages (not seven, not four) was purely arbitrary. And even if they were the right ones, his assumption about the probabilities was flawed. Just because an event has two possible outcomes does not mean that each outcome has a 50 percent probability (unless you’re doing something like flipping coins). Finally, Silver erroneously assumed that the stages were independent of one another. In a political campaign, success at one stage can improve the odds of success at the next stage—or diminish them. The probability of success at any stage is dependent on prior outcomes. By the time the campaign got to the “Endgame” stage, for example, the probability that Trump’s nomination could be stopped by party leaders was drastically less than 50 percent. Silver wasn’t doing statistical modeling; he was doing back-of-the-envelope calculations.
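The effect of dependence is easy to see numerically. In the sketch below, the conditional probabilities are purely illustrative (they are not from Silver or the Bar-Yams): if momentum from clearing earlier hurdles raises the chance of clearing later ones, the joint probability can end up an order of magnitude higher than the independent calculation suggests.

```python
# Independent model: 50 percent at every stage, as in Silver's calculation.
independent = [0.5] * 6

# Dependent model (hypothetical numbers): each entry is the probability of
# clearing that stage GIVEN that all earlier stages were cleared.
dependent = [0.5, 0.6, 0.7, 0.8, 0.9, 0.95]

def joint(probs):
    """Multiply a chain of (conditional) probabilities into a joint probability."""
    result = 1.0
    for p in probs:
        result *= p
    return result

print(round(joint(independent), 4))  # 0.0156
print(round(joint(dependent), 3))    # 0.144 -- nearly ten times larger
```

The chain rule of probability always lets you write a joint probability as a product; the error is in plugging in unconditional 50-50 guesses where conditional probabilities belong.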

Most of the time, Silver actually does use statistical models, which generally deal with the aggregation and weighting of polling data—they’re snapshots of voter opinion. Flying more under the radar, perhaps because it is less well understood by political journalists, is the role of data mining analytics, which gives candidates the tools to actually do something about voter opinion. These statistical methods first appeared on the scene in the 2004 Kerry campaign. Before then, campaigns thought of targeting exclusively in terms of groups: one message might be targeted at women under thirty, another at recent immigrants, another at rural whites, and so on.

The application of data mining methods to politics means that messaging can be targeted to individuals—Irma Smith might get one message, Harold Jones another. These methods are the political cousins of techniques that have been used somewhat longer in business, government, military, medical, and other applications. Marketers use them to decide, for example, which online ad you are most likely to click on. Insurance companies use them to guess whether a claim is fraudulent. In politics, analysts use these techniques to guess two things about you: which candidate you favor, and whether you are likely to vote. In this way, persuasion messaging and get-out-the-vote efforts can be targeted where they will do the most good (and can avoid harm!).

Now you might think that one hardly needs statistics for this: just look up whether someone is registered as a Democrat or Republican, and how often they vote (both are publicly available). Indeed, just using those two variables buys you a lot of predictive power, and, in an election that is not close, you probably don’t need the added benefits of “microtargeting.”

However, in a close election, that extra bit of predictive power that you can get from bringing numerous variables into a statistical model can make a big difference. In addition to voting behavior, consultants will consider consumer data (e.g., subscriptions to newspapers), demographic information, and census information about the neighborhood in which the voter lives. Use of predictive statistical models is widely viewed as a key contributor to Obama’s victories in 2008 and 2012.
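A common workhorse for this kind of individual-level scoring is a logistic model. The sketch below is purely illustrative: the features, weights, and voter profiles are hypothetical, not drawn from any real campaign’s model, and real consultants would fit the coefficients to historical voter-file data rather than set them by hand.

```python
import math

def turnout_probability(features, weights, bias):
    """Logistic model: P(votes) = 1 / (1 + exp(-(bias + w . x)))."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

# Hypothetical features: [voted in last primary (0/1),
#                         newspaper subscriber (0/1),
#                         neighborhood turnout rate (0..1)]
weights = [2.0, 0.5, 1.5]  # illustrative coefficients
bias = -2.0

irma = [1, 1, 0.8]    # frequent voter in a high-turnout neighborhood
harold = [0, 0, 0.3]  # irregular voter in a low-turnout neighborhood

print(round(turnout_probability(irma, weights, bias), 2))    # high: ~0.85
print(round(turnout_probability(harold, weights, bias), 2))  # low:  ~0.18
```

Scores like these, computed for every voter in a district, are what let a campaign send its get-out-the-vote volunteers to Irma’s door and skip Harold’s—or vice versa, depending on which behavior the campaign wants to influence.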

What about 2016? Ted Cruz was the candidate who embraced these techniques earliest and most comprehensively. And he lost in a bitter contest with Trump, who seemed to be following the model of free publicity for celebrities: whether the coverage is positive or negative matters little, as long as they spell your name correctly.

But Trump did have an analytics operation in the primary season. Its job was to identify and turn out (though not necessarily persuade) disaffected citizens who are not regular voters.

Where did Trump get the statistical models to do this? Not from the Republicans—Trump was seeking his voters from outside the GOP fold. He used statistical models created by the same firm that produced them for Kerry and Obama: HayStaqDNA, the creation of Ken Strasma. Strasma was unaware of Trump’s use of the models, which were obtained via a third-party vendor.

Strasma, who continues to do political consulting but will share his methods with anyone in his online Persuasion Analytics course at Statistics.com, says: “Both Sanders and Trump defied expectations by appealing to non-traditional voters. Predictions made based on polling of traditional Democratic and Republican primary voters, especially early in the campaign, dramatically under-estimated their potential. Through microtargeting they were both able to find the non-traditional voters who would support them if they could be motivated to turn out for the primary.”

So what will be the future role of statistics in politics?

Nate Silver’s persona and site are now deeply embedded in the political scene, and the site’s owner, ESPN, will do its best to ensure that this continues to be the case. His misstep in this one case is not likely to greatly devalue ESPN’s asset.

Whether that means we can trust his projections is another matter. The demands of being a media property are great, and Silver must now produce “copy” on a daily basis, which does not always allow time for solid research. From a statistician’s perspective, he risks becoming just another pundit, albeit one whose opinions are cloaked in numbers.

As for the future of data mining analytics in politics, it’s hard to imagine that it will go away. In highly energized campaigns like those of Trump and Sanders, it can help identify and turn out non-traditional voters. In a traditional contest, it will operate most often “at the margin,” where the seesaw is finely balanced. It is in those situations where accurately targeted individual get-out-the-vote and persuasion efforts may tip the balance.