March Madness - Were the Markets Right?
Table Of Contents
March Madness has always been a test not just of college basketball teams, but of the world’s top forecasters. The tournament turns millions of fans into casual statisticians.
Every round becomes a running argument not just about what will happen, but how likely it is to happen. That distinction matters. When we look back at predictions in politics or economic news that people publicly got wrong, we often focus on what they missed, what information they didn’t account for. But that framing misses something important. Even if something only happens a small percentage of the time, in a single trial that outcome can still occur. Low probability does not mean impossible, it just means unlikely.
So when we ask whether prediction markets “got March Madness right,” we’re really asking something deeper: were the probabilities themselves accurate? Not whether every favorite won, but whether the odds reflected reality.
Did the Prediction Markets get March Madness Right?
If every game were priced at 90% and every favorite won, that wouldn’t mean the market was perfect, it would mean it was wrong. A 90% probability implies that upsets should happen about 10% of the time. If they never do, the model is underconfident.
So to answer whether or not the markets got March Madness right, we can’t rely on raw wins and losses. We need a way to evaluate whether the probabilities themselves made sense.
Enter the Brier Score...
What is the Brier Score?
The Brier score is a way of evaluating how good probabilistic predictions actually are. Instead of asking “did you get it right,” it asks “were your probabilities accurate over time?”
What is the Brier Score?
- A way to determine how accurate a probabilistic prediction is.
- The lower the score, the more accurate the prediction.
- A score of 0 would indicate a perfect prediction.
Think about it like predicting the weather. Imagine every day you give a probability that it will rain, 20%, 70%, 90%. Over time, those probabilities should match reality. When you say 70%, it should rain on roughly 70% of those days. When you say 20%, it should only rain 20% of those days.
The Brier score measures how well that alignment holds. For each prediction, you compare the probability to what actually happened, square the difference, and average it across many observations. Lower scores are better, like in golf.
What makes this useful is that it punishes you for being confidently wrong. Saying “90% chance of rain” on a sunny day hurts much more than saying “55%” and being slightly off. Over time, someone whose probabilities match reality, someone well calibrated, will outperform someone just making bold guesses.
That’s the key idea for March Madness analysis. The question isn’t whether the market picked every winner. It’s whether, across dozens of games, higher-probability teams actually won more often than lower-probability ones.
So were the markets right?
To answer this, I put together a dataset of all 63 games in the 2026 tournament using a combination of manual collection and automated analysis. That means there is certainly room for human error in the inputs, and this should be viewed less as a definitive measurement and more as a directional illustration of how well prediction markets perform.
For each game, I recorded the pre-game probability implied by market pricing and compared it to the actual outcome. From there, I calculated the Brier score across the entire tournament.
The result: An overall Brier score of 0.1536.
To put that into context, a completely uninformative model that assigns 50/50 odds to every game would score 0.25. A perfect forecast would score 0. So a result around 0.15 suggests that, on average, the probabilities were meaningfully better than random and broadly aligned with how often those outcomes occurred.
Breaking it down by round adds some texture:
| Round | Games | Brier score |
|---|---|---|
| Round of 64 | 32 | 0.1247 |
| Round of 32 | 16 | 0.1730 |
| Sweet 16 | 8 | 0.1856 |
| Elite 8 | 4 | 0.1694 |
| Final Four | 2 | 0.2665 |
| Championship | 1 | 0.2209 |
This pattern makes intuitive sense. Early rounds, where there are many heavy favorites, are easier to price and result in lower (better) Brier scores. Later rounds are closer matchups, with more uncertainty, and the errors increase.
The weakest stretch came in the Final Four, driven in part by outcomes like UConn beating Duke at roughly 32% implied probability and then beating Illinois at around 45%. Those aren’t impossible outcomes, but they are results that ding your score.
March Madness Odds Differences between Sportsbooks
While I did not compare the above prediction market Brier scores to sportsbooks, there’s another layer here that helps explain comparatively how prediction markets fared.
According to analysis from Citizens Capital Markets, Kalshi actually offered slightly better pricing than major sportsbooks across the tournament, with an average “vig” of 4.13% compared to roughly 4.3–4.5% for competitors.
That might sound trivial, but in a thin-margin environment like sports betting, even a fraction of a percent matters. In relative terms, that implies Kalshi’s take was about 7% lower than sportsbooks, meaning prices were slightly more efficient.
Final Verdict - Were the Markets Right?
Prediction markets are still relatively new to sports at scale, and this kind of analysis is still early. The dataset here is imperfect, the methodology is simple, and better versions of this will come.
But even with those limitations, the takeaway is fairly clear: the markets weren’t just guessing. They were, in a measurable sense, getting the probabilities right.
Prediction markets involve risk and are not suitable for everyone. While many platforms offer tools to make informed trades, outcomes are never guaranteed, and users should never risk more than they can afford to lose. Always trade responsibly. Additionally, platform availability and legal status vary by region. It is your responsibility to check local laws and verify that you are legally allowed to use a given platform before participating.
Read our full affiliate & risk disclosure.