Daily Guardian UAE
  • Home
  • UAE
  • What’s On
  • Business
  • World
  • Entertainment
  • Lifestyle
  • Sports
  • Technology
  • Travel
  • Web Stories
  • More
    • Editor’s Picks
    • Press Release
Technology

March Madness, Revisited: The AI Model Did Well. But Mad Things Still Happen

By dailyguardian.ae | March 27, 2026 | 6 Mins Read

(NOTE: This article is part of an ongoing series documenting an experiment with using AI to fill the NCAA brackets and see how it fares against years of human experience. The original article is as follows.)

A week ago, I wrote about entering an NCAA tournament pool with a more disciplined process than I usually use.

Instead of relying on mascots, vibes, or whatever team happened to look great on Saturday afternoon, I tried to think about the bracket the way an investor or analyst would: separate raw forecasting from expected value, build one bracket around the highest probability of success, build another around pool dynamics, and make decisions with at least some awareness of uncertainty.

That process produced two brackets. One was the “most likely” bracket, designed to maximize the odds of a strong score if the tournament followed a mostly rational path. The other was an EV bracket for a pool of roughly 70 entries — not a wild contrarian moonshot, but something designed to win a real contest rather than merely look sensible.
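The split between those two brackets can be made concrete. The sketch below is illustrative only: the team names, win probabilities, and pick rates are invented, and the EV formula is a simplification (it treats the pool as won or split by whoever picks the correct champion). It shows how a raw-probability choice and a pool-aware choice can diverge.

```python
# Hypothetical sketch: separating the "most likely" pick from the
# expected-value pick in a pool. All numbers are invented for illustration.

def ev_pick(candidates, pool_size=70):
    """Choose the champion pick that maximizes a rough expected pool value.

    candidates: list of (team, win_prob, pick_rate), where pick_rate is the
    fraction of the pool expected to pick that team.
    """
    best = None
    for team, win_prob, pick_rate in candidates:
        # If the team wins, you split credit with everyone who also picked it,
        # so expected value ~ P(win) / (expected number of co-pickers).
        expected_co_pickers = 1 + pick_rate * (pool_size - 1)
        ev = win_prob / expected_co_pickers
        if best is None or ev > best[1]:
            best = (team, ev)
    return best[0]

candidates = [
    ("Favorite A", 0.30, 0.45),   # strongest team, but heavily picked
    ("Contender B", 0.18, 0.10),  # weaker team, but lightly picked
]

most_likely = max(candidates, key=lambda c: c[1])[0]
print(most_likely)          # the raw-probability choice: Favorite A
print(ev_pick(candidates))  # the pool-aware choice: Contender B
```

Under these made-up numbers the heavily picked favorite is the right "most likely" pick but the wrong EV pick, which is exactly the tension the two brackets were built to separate.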

So how did that work out?

Pretty well, actually. Just not perfectly.

The model got 13 of the Sweet 16 teams right, which is objectively strong in a tournament designed to punish confidence and reward chaos. The overall architecture of the forecast held up. It identified most of the real heavyweights. It was directionally right about the teams most likely to survive the first weekend. It generally understood the shape of the field.

But as March tends to do, it also found the weak spots.

The most obvious misses were Ohio State, Wisconsin, and Florida. Ohio State lost a 66–64 game to TCU on a late layup. Wisconsin fell 83–82 to No. 12 High Point. Florida, the defending national champion and a No. 1 seed, lost 73–72 to Iowa on a go-ahead three-pointer in the closing seconds. Those were not slow, obvious collapses. They were one-possession losses, decided in the final moments, exactly the sort of outcomes that remind you that no tournament model gets to operate in a laboratory.  

That leaves two possible interpretations.

One is that the model was wrong.

The other is that the model was mostly right, but single-elimination basketball is a terrible environment for certainty.

The answer, as usual, is both.

The good news is that getting 13 of 16 Sweet 16 teams right suggests the basic framework was useful. It was not random. It was not decorative. It was not just using fancier words to arrive at the same intuitive guesses everyone else makes. At the level of identifying quality, it worked.

The less comforting news is that the misses were also informative.

Looking back, the process still leaned a little too heavily toward “the better team usually advances.” That is often true over a season. It is less true over 40 minutes in a neutral gym, especially when the underdog can create volatility. Wisconsin’s loss is the cleanest example. A stronger upset model would not necessarily have picked High Point to win, but it probably would have treated Wisconsin as more fragile than I did: more susceptible to the kind of game where an underdog gets hot from three, stretches the favorite, and turns the last two minutes into a coin flip.
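The "coin flip" effect of a streaky underdog can be illustrated with a toy simulation. Everything here is an assumption for illustration: scores are modeled as normal draws, and the means and standard deviations are invented. The point is only that raising the underdog's variance shrinks the favorite's single-game edge even when the average quality gap is unchanged.

```python
import random

# Toy model (all numbers invented): a three-point-heavy underdog has higher
# scoring variance, which narrows the favorite's edge over 40 minutes even
# though the average gap between the teams is identical.

def win_prob(fav_mean, dog_mean, dog_sd, fav_sd=8.0, trials=100_000, seed=1):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        fav = rng.gauss(fav_mean, fav_sd)
        dog = rng.gauss(dog_mean, dog_sd)
        wins += fav > dog
    return wins / trials

# Same 6-point average gap; only the underdog's volatility changes.
steady = win_prob(76, 70, dog_sd=8.0)
streaky = win_prob(76, 70, dog_sd=14.0)
print(f"vs steady underdog:  {steady:.2f}")
print(f"vs streaky underdog: {streaky:.2f}")
```

In this toy setup the favorite's win probability drops by several points against the higher-variance opponent, which is the fragility a stronger upset model would have priced into a team like Wisconsin.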

Florida’s loss says something similar at a higher level. A No. 1 seed is never supposed to be “likely” to lose early, but there is a difference between being strong and being invulnerable. The model was right to respect Florida. It was probably wrong to treat Florida as safe.

[Image: stills from NCAA games.]

That distinction matters if you are trying to win a pool rather than merely defend your dignity.

This is where the exercise gets interesting. In markets, in investing, and in bracket pools, there is a big difference between being broadly correct and being correctly positioned. A forecast can be intelligent and still fail to capture where the real fragility lives. The tournament does not award style points for having the best framework if you still underprice the possibility that a live underdog starts making shots.

So what would I change?

Not the core idea. I still think the right way to approach a bracket is to separate highest-probability forecasting from expected-value strategy. Most people blend those without realizing it. They pick a champion they think can win, but then make a few arbitrary upset picks to “spice things up,” which is really just another way of admitting they have no coherent process.

What I would improve is the volatility layer.

A better version of this approach would pay more attention to which favorites are genuinely sturdy and which merely look strong in a spreadsheet. It would more explicitly measure three-point variance, turnover risk, foul trouble, dependence on a single scorer, and how often a team’s outcomes swing wildly from game to game. It would still respect top seeds. It would just be more suspicious of them.
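One minimal way to sketch that volatility layer is a fragility score over the factors listed above. The weights and the two sample stat lines below are entirely invented; a real version would calibrate them against historical upsets. The sketch only shows the shape of the idea: two teams can look similarly strong on average while one is far more fragile.

```python
# Hypothetical "volatility layer": a weighted fragility score built from the
# factors discussed above. Weights and team stats are invented for illustration.

def fragility(three_pt_rate, turnover_rate, top_scorer_share, score_variance):
    """Higher = more fragile in a single-elimination game.

    three_pt_rate:     share of shots taken from three (variance driver)
    turnover_rate:     turnovers per possession
    top_scorer_share:  fraction of points from one player
    score_variance:    game-to-game scoring swing, normalized to [0, 1]
    """
    return (0.30 * three_pt_rate
            + 0.25 * turnover_rate
            + 0.20 * top_scorer_share
            + 0.25 * score_variance)

sturdy = fragility(0.30, 0.12, 0.22, 0.25)              # balanced, low-variance team
spreadsheet_strong = fragility(0.45, 0.18, 0.35, 0.55)  # looks strong, swings wildly
print(round(sturdy, 3), round(spreadsheet_strong, 3))
```

A seed line would stay in the model; a score like this would simply discount how "safe" that seed is treated when the tournament starts.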

That matters even more now because, of course, the original brackets are locked.

At this point, no one gets to claim they “would have had Iowa” unless they actually had Iowa. That is part of the beauty and cruelty of the whole enterprise. Once the games start, your brilliant framework becomes a historical document.

But that does not mean the process stops being useful.

For one thing, there may be second-chance pools. Many contests reset at the Sweet 16 or the Final Four, which is really a gift to anyone who likes process. A second-chance pool strips away the theater of pretending we know everything in advance. Now we have new information, a smaller field, and a fresh opportunity to separate the truly strong teams from the merely surviving ones.

More importantly, the exercise still offers the main lesson I was hoping to explore in this series: disciplined forecasting is not about eliminating uncertainty. It is about making uncertainty legible.

The model did well. March still had other ideas.

That is not a failure. It is the point.

And if there is a second-chance pool, I’ll be right back in it — older, wiser, and slightly less willing to trust a vulnerable favorite just because its seed says I should.



© 2026 Daily Guardian UAE. All Rights Reserved.