The Housewife Who Beat the CIA
Episode 35 • 10th February 2026 • Dumbify — Get Smarter by Thinking Dumber • David Carson

Shownotes

Can a Domino’s Pizza tracker predict a military invasion better than a Pentagon analyst? In this episode of Dumbify, host David Carson investigates a reality where "confusion beats certainty" and a retired accountant in suburban Ohio can outperform the world's top geopolitical experts. We dive deep into the story of the Good Judgment Project, a government-sponsored tournament where a ragtag team of amateurs—armed with nothing but Google and open minds—humiliated intelligence agencies by predicting global events with 30% more accuracy than analysts with access to classified data.

Sign up for the Dumbify newsletter: https://www.david-carson.com/

Dumbify celebrates ideas so weird, wrong, or wildly impractical… they just might be brilliant. Hosted by David Carson, a serial entrepreneur behind multiple hundred-million-dollar companies and the go-to secret weapon for companies looking to unlock new markets through unconventional thinking. Dumbify dives into the messy, counter-intuitive side of creativity — the “dumb” ideas that built empires, broke rules, and ended up changing everything.

Transcripts

::

[mellow music] There's a Twitter account called Pentagon Pizza. Its entire purpose is to monitor how busy the Domino's Pizza is near the Pentagon. And the theory is, if analysts are working late, they're ordering pizza, and if they're ordering a lot of pizza, then something big must be happening at the Pentagon. And last month, someone used this information, this pizza delivery pattern, to predict an American military incursion into Venezuela. They placed a bet on a prediction market just hours before the news broke, and they made nearly half a million dollars. And they didn't do that from classified intelligence. They didn't do that from years of studying Latin American politics. They did that from watching a pizza tracker. Meanwhile, there's a guy named Ian Bremmer, and Ian runs the Eurasia Group, which is one of the world's most respected geopolitical consulting firms. And Ian Bremmer has a PhD from Stanford. He advises heads of state. He's on TV constantly explaining what's going to happen next in global politics. And there's this research suggesting that there's a retired accountant in suburban Ohio with no credentials, no PhD, no access to classified information, who can predict geopolitical events more accurately than Ian Bremmer. Not because the accountant knows more, mind you, but because she knows she doesn't know. And it turns out that that thought is kind of the whole game. Welcome to Dumbify. I'm your host, David Carson, and today we're going back to an idea we explored a few episodes ago, that confusion beats certainty. So many of you emailed me about that episode, and we had such a great conversation that it made me want to go even deeper, because there's a lot of interest in this distinction between probabilistic thinking and binary thinking, between saying, "I'm sixty percent sure," and saying, "I absolutely know." So today we're gonna explore why admitting you don't know something might be the most powerful competitive advantage in a world that worships certainty, and why the experts who get paid to sound confident are often the worst at predicting what happens next. So let's do that. Let's get dumb.

::

[singing] Dumbify, let your neurons dance. Put your brain in backwards pants. Genius hides in daft disguise. Brilliance wears those googly eyes. So honk your nose and chase that spark. Dumb is just smart in the dark. Dumbify. Yelling like a goose! It's thinking wrong on purpose with juice.

::

[upbeat music] So here's a phrase that will tank your credibility in almost any professional setting: "I'm about sixty percent sure." Go ahead, try it in your next meeting. When your boss asks for your assessment of the Q3 projections, say, "I think there's maybe a fifty-five to sixty-five percent chance we hit our numbers, but I could be wrong, and I'd want to update that estimate as we get more data." Just watch your boss's face. Watch them turn to whoever is willing to say, "We're gonna crush it," with complete conviction. In this country, in this world, we worship certainty. We promote people who project confidence. We watch pundits who speak in these declarative sentences: "The market will correct," "Russia will invade," "This candidate cannot win." And the problem with this is, certainty is just a performance, and the people best at performing it are often the worst at actually predicting what happens next. [mellow music] For example, in two thousand and five, a psychologist named Philip Tetlock published a book with a delightfully brutal title. It's called Expert Political Judgment: How Good Is It? How Can We Know? And the answer, after tracking eighty-two thousand predictions from two hundred and eighty-four experts over two decades, was essentially not very good. Basically, [chuckles] not good at all. These experts, the people who appeared on television, advised governments, wrote op-eds in major newspapers, performed barely better than chance, and in some cases, they performed worse than chance. You could have out-predicted them by literally flipping a coin. So Tetlock called them hedgehogs, people who know one big thing and see everything through that lens. The China expert who predicted everything through the framework of Chinese political development, the economist who saw every trend as confirmation of their preferred theory. And these experts, you know, it's not like they're stupid, but they were trapped, trapped by their own expertise. In twenty eleven, the US intelligence community did something really unusual. They admitted that they had a problem: their analysts, the people with security clearances, access to classified satellite imagery, intercepted communications, human intelligence from assets around the world, kept getting blindsided. The Arab Spring, the fall of the Soviet Union decades earlier, just over and over, the intelligence community failed to predict major geopolitical shifts. So IARPA, the Intelligence Advanced Research Projects Activity, basically DARPA for spies, launched a forecasting tournament, and they wanted to find out if anyone could actually predict the future better than their analysts. So they invited academics, think tanks, and research teams to compete. And this guy, Philip Tetlock, entered a team called the Good Judgment Project.

::

And here's what made GJP different. They didn't recruit experts. They recruited curious people, retired engineers, insurance adjusters, a former ballroom dance instructor, people who read the news and thought carefully about probability. Then they trained them, not in geopolitics, but in thinking. How to overcome cognitive biases, how to start with base rates, how to update estimates when new information emerges. Essentially, how to say, "I don't know," and mean it as a starting point, not an admission of defeat. And the questions were hard. Real intelligence questions that analysts were working on. Things like, will North Korea conduct a nuclear test before December thirty-first? Will Greece leave the Eurozone? Or will the President of Tunisia flee the country? And the results were, frankly, embarrassing for the intelligence community. Tetlock's amateurs didn't just beat the other academic teams, they beat analysts with access to classified intelligence by about thirty percent. They beat prediction markets. After two years, IARPA shut down funding for all of the other teams and just funded Tetlock. A woman named Elaine Rich became one of the most celebrated superforecasters, and her background? She's a pharmacist from Maryland. No political science training, no security clearance, just a newspaper subscription, some Google, and a willingness to say, "I'm seventy percent confident, but let me check that assumption." [music] So what do superforecasters have that the experts don't? Tetlock borrowed a distinction from the philosopher Isaiah Berlin, hedgehogs versus foxes. The hedgehog knows one big thing, but the fox knows many small things. Hedgehogs are great on television. They have a thesis. They can explain everything through their framework. They are confident and compelling and wrong about as often as a coin flip. But foxes are terrible on television. They say things like, "On the other hand," or, "It depends," and, "I'd estimate somewhere between forty and sixty percent." That's not exactly quotable TV. They don't write best-selling books with punchy titles. They're also dramatically more accurate. The superforecasters shared certain traits. They thought in probabilities, not in outcomes. So instead of, "Russia will invade Ukraine," they'd say, "There's a thirty-five percent chance in the next six months." And this sounds like hedging. It is hedging, but hedging is accurate. They updated constantly. When new information emerged, they changed their estimates. They didn't defend their previous positions. They didn't explain why the new data actually confirmed what they already believed. They just updated. They also started with base rates. What that means is, before asking, "Will this specific peace negotiation succeed?" they asked, "Historically, what percentage of peace negotiations actually succeed?" And then they adjusted from there. They also did this thing called hunting for disconfirming evidence, which means that instead of reading news that confirmed their beliefs, they actively sought out information that might prove them wrong. And most importantly, they admitted what they didn't know constantly, and they did this explicitly and without shame. But the experts, meanwhile, were stuck. Their reputations depended on their frameworks. Their careers depended on their absolute certainty. Ian Bremmer can't go on CNN and say, "I don't know, this is really hard to predict. Maybe check back in a month."
He'd stop getting invited to the shows. This is the trap. Expertise becomes a prison. The more you know about one thing, the more you see everything through that lens, and the more your status depends on being right, the harder it becomes to admit when you're wrong. Now, you might be thinking, this is all very nice in an academic tournament setting, but does it hold up when real money is on the line?
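If you want to see the "start with a base rate, then update" move in something more concrete than prose, here's a minimal Python sketch of it. Every number in it is invented for illustration; none of this comes from the Good Judgment Project's actual training materials.

```python
# A minimal sketch of the superforecaster move: start from a base rate,
# then revise it with Bayes' rule as evidence arrives.
# All numbers below are invented for illustration.

def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Return P(event | evidence) given a prior and two likelihoods."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Base rate: suppose historically ~20% of negotiations like this succeed.
estimate = 0.20

# New information: both sides appoint senior negotiators. Say that
# happens in 70% of talks that eventually succeed, but only 30% of
# talks that fail.
estimate = bayes_update(estimate, p_evidence_if_true=0.70,
                        p_evidence_if_false=0.30)

print(f"Updated estimate: {estimate:.0%}")  # ~37%, not "it will succeed"
```

The point of the sketch is the shape of the reasoning: the answer is never "yes" or "no," it's a number that moves a little every time new information shows up.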

::

So let's talk about prediction markets... If you haven't encountered these yet, prediction markets are exactly what they sound like. They're marketplaces where you can bet real money on future events. Will there be a government shutdown by March? Will Taylor Swift announce a new album before summer? Will the Fed raise rates? You buy shares in outcomes, and if you're right, you get paid. The biggest platforms right now are Polymarket and Kalshi, and combined, they're processing hundreds of millions of dollars in bets. And here's what makes them really fascinating for our purposes. Prediction markets are essentially superforecasters, but on steroids. Just think about how they work. A market opens knowing absolutely nothing. Maybe the odds start at fifty/fifty. Then, as information flows in, in the form of bets, someone knows something or thinks they know something, and they put a little money on it, and then that makes the price move. Someone else disagrees and bets on the other side, and the price adjusts again. That price is a probability. If a prediction market says there's a seventy percent chance of something happening, well, that's because the price is seventy cents. And if you think that forecast is wrong, if you think the real odds are higher or lower, you can always bet against it. Buy the other side for thirty cents and see if you're right. The market is just constantly updating based on new information, and it doesn't really care where that information comes from. A Pentagon analyst, a Stanford professor, a guy tracking Domino's Pizza deliveries near military bases, it just aggregates everything into one number that keeps getting refined. Sound familiar? It's the Tetlock method scaled up. Start with the base rate, update constantly, think in probabilities, not binaries, and crucially, the market has no ego. It doesn't defend its previous position. It doesn't perform confidence for the cameras. It just updates. And I guess this is why prediction markets have flipped one of our sacred financial taboos. In stock markets, insider trading is illegal. Remember Martha Stewart? She went to prison for it. But in prediction markets, insiders are actually welcome. Brian Armstrong, the CEO of Coinbase, was asked about this directly, and his [chuckles] answer was fascinating. He said, "Yeah, you want people with inside information trading in the prediction markets, because that's what makes the probability more accurate." By the way, someone once put eighty-four thousand dollars' worth of bets on which words Armstrong would say during a Coinbase earnings call. Words like Bitcoin, Ethereum, blockchain. And [chuckles] Armstrong found out and thought this was pretty funny, so he pulled a prank of his own, and at the end of the meeting, he just read all of the words off that list, so everyone betting would win. Which raised an interesting question about what predicting actually means when the subject of your prediction can just do whatever they want. Turns out, when you put enough money on a question, it stops being a prediction and becomes a request. Prediction markets, I guess they're the world's most complicated tip jar.
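If the price-equals-probability idea feels abstract, here's the arithmetic in a toy Python sketch. The prices and beliefs are hypothetical, and it ignores the fees, spreads, and slippage that real platforms like Polymarket and Kalshi add on top.

```python
# Toy arithmetic for a binary prediction market where a winning share
# pays $1. Prices and beliefs below are hypothetical.

price_yes = 0.70    # market price: 70 cents, i.e. 70% implied probability
my_estimate = 0.80  # I think the market is underpricing YES

# Expected value per $1 YES share, if my estimate is right:
# gain (1 - price) with probability p, lose the price with probability (1 - p).
ev_yes = my_estimate * (1 - price_yes) - (1 - my_estimate) * price_yes
print(f"EV per YES share: {ev_yes:+.2f}")  # +0.10 -> positive EV, buy YES

# If I instead thought the real odds were only 60%, the other side is
# the buy: a NO share costs 30 cents and pays $1 if the event fails.
price_no = 1 - price_yes
ev_no = (1 - 0.60) * (1 - price_no) - 0.60 * price_no
print(f"EV per NO share: {ev_no:+.2f}")    # +0.10 -> positive EV, buy NO
```

That's the whole mechanism: whenever someone's private estimate disagrees with the price enough to make one of those numbers positive, they have a financial incentive to trade, and the trade drags the price toward their information.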

::

Time for science. Time to get unnecessarily nerdy with it, 'cause nerding out is what we do, and we're not going to apologize for it. Get ready for science. [upbeat music]

::

So what's actually happening in your brain that makes certainty so toxic to accuracy? The research is wild. It points to something called the earned dogmatism effect, and psychologists Victor Ottati and Erika Price published a study in twenty fifteen showing that when people feel like experts in a domain, they actually become more closed-minded about that domain. That feeling of expertise, not the expertise itself, but the feeling, triggers a psychological permission slip to just stop listening. You've paid your dues, you've earned the right to your opinions, why would you update based on what some amateur thinks? There's also what Tetlock calls the cognitive style distinction: some people naturally think in terms of possibilities, while others think in terms of probabilities. [gentle music] The first group sees the world as a series of potential outcomes. Things either will or won't happen. And the second group sees the world as a distribution, things that have various likelihoods of happening. And, you know, this sounds subtle, but it's massive. When you think in possibilities, you're prone to what psychologists call the conjunction fallacy, which is believing that specific, detailed scenarios are more likely than general ones because they feel more vivid. When you think in probabilities, you're naturally calibrated against this. Researcher Barbara Mellers, who worked with Tetlock on the Good Judgment Project, found that superforecasters had a specific trait she called active open-mindedness, and they treated their beliefs more or less as hypotheses to be tested, not positions to be defended. And when asked to consider why they might be wrong, they didn't resist, they leaned in. The neuroscience supports this, too. When people encounter information that confirms their beliefs, they get a dopamine hit, right? It feels good to be right. But when they encounter disconfirming information, the brain's threat response just goes berserk. Being wrong feels like being attacked. And the superforecasters had somehow short-circuited this response. So being wrong didn't feel like a threat, it felt like information, an opportunity to get more accurate.

::

Dum, dum, dum, dum, Dumb Word of the Day. Dumb Word of the Day. It's a word. It's dumb. Use responsibly.

::

[upbeat music] Oh, yeah, it's time for my favorite part of the show. It's time for Dumb Word of the Day, and today's dumb word is [upbeat jingle] zetetic, spelled Z-E-T-E-T-I-C, zetetic. Zetetic means proceeding by inquiry or seeking. A zetetic is a seeker, someone who proceeds by asking questions rather than asserting answers. It comes from the Greek "zētein," to seek. The ancient skeptic philosophers called themselves zetetics before they settled on skeptics, because their whole approach was based on inquiry rather than conclusion. They didn't claim to have figured out the truth, they just kept asking questions. The superforecasters are zetetics. They're not in the business of knowing things. They're in the business of seeking, constantly updating, constantly questioning, constantly admitting that they might be wrong.

::

Let's use it in a sentence. [upbeat jingle] "My boss asked for my confident opinion on the market outlook, but I went full zetetic and said there was a forty percent chance I had no idea what I was talking about. Anyway, I'm updating my LinkedIn." Zetetic, add it to your vocabulary. Use it with reckless moderation. [upbeat music] Okay, here's your challenge for this week. I call it the sixty percent experiment. For the next five days, whenever you make a prediction about anything, attach a probability to it. Not, "I think it's going to rain tomorrow." Instead, "I'm seventy percent confident it's going to rain tomorrow." Not, "This meeting is going to run long." Instead, maybe, "There's an eighty percent chance this meeting goes past three PM." Not, "The Sixers will beat the Knicks." Instead, "I give the Sixers maybe a fifty-five percent chance." And then, and this is the important part, track whether you're right. At the end of the week, look at all the predictions you gave seventy percent confidence. Were you right about seventy percent of them? If you were right ninety percent of the time, you were underconfident. If you were right fifty percent of the time, you were overconfident. The goal isn't to be certain. The goal is to be calibrated, to have your confidence match your actual accuracy. Most people find that they're overconfident: the stuff they're sure about, they're wrong more often than they think, and the stuff they're uncertain about, they're actually about right. The sixty percent experiment isn't about learning to hedge, really. It's more about learning that hedging is knowing, that admitting uncertainty isn't weakness, it's the whole skill. Bonus points: pick one prediction you made with high confidence and actively search for reasons you might be wrong. Just for five minutes, see how it changes your estimate. [upbeat music] And that's our show. Thank you for getting dumb with me today. I'm David Carson, and if you want more probabilistic wisdom, subscribe to the Dumbify newsletter at david-carson.com. Every week, another counterintuitive idea that might be sixty percent brilliant. And if you could do me a favor: if you liked this episode, I'd love it if you'd give it a rating and a review. Every bit helps. Until next time, stay curious, stay calibrated, and remember, the person who says, "I don't know," is often the only one in the room who does. [upbeat music]
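A footnote for anyone who wants to score the sixty percent experiment with something sturdier than a sticky note: here's a minimal Python sketch that buckets your predictions by the confidence you stated, then compares each bucket to your actual hit rate. The sample predictions below are invented; swap in your own week of forecasts.

```python
# Scoring the sixty percent experiment: group predictions by stated
# confidence and compare stated confidence to the actual hit rate.
# The sample data is invented for illustration.
from collections import defaultdict

# (stated confidence, did it actually happen?)
predictions = [
    (0.70, True), (0.70, True), (0.70, False),
    (0.60, True), (0.60, False),
    (0.90, True), (0.90, True), (0.90, False),
]

buckets = defaultdict(list)
for confidence, outcome in predictions:
    buckets[confidence].append(outcome)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"Said {confidence:.0%} -> right {hit_rate:.0%} of the time "
          f"({len(outcomes)} predictions)")

# Calibrated means the two percentages roughly match. If you said 90%
# and were right 67% of the time, you're overconfident; adjust down.
```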
