In this episode of The Deep Dig, we explore Khayyam Wakil's provocative analysis that challenges the prevailing narrative about artificial intelligence and democracy. Rather than accepting the common panic that AI is destroying democratic institutions, Wakil argues that AI is merely the stress test revealing decades of structural decay. Using the metaphor of termites and an earthquake—where everyone blames the earthquake for the collapse while ignoring the termites that had been eating away at the foundation for 40 years—this episode traces the systematic hollowing out of three critical pillars: public trust, higher education, and journalism. Through compelling data and historical analysis, we examine how neoliberal policy choices from the 1980s onward dismantled the very institutions that could have protected us from technological disruption. The episode concludes with Wakil's prescription for rebuilding democratic resilience through structural reinvestment rather than superficial tech regulation.
Category/Topics/Subjects
Tech Industry Critique
Systemic Decay and Institutional Collapse
Democracy and Public Trust
Higher Education Crisis and Adjunctification
Journalism and the Information Ecosystem
Neoliberal Policy and Economic Philosophy
AI Ethics and Regulation Debates
Structural vs. Technological Solutions
Social Isolation and Civic Decline
Power Concentration and Monopolies
Public Goods and Infrastructure Investment
Best Quotes
"It's like blaming the thermometer for giving you a fever. The fever was there the whole time. The thermometer just gave you the number."
— On AI as diagnostic rather than cause
"In 1964, public trust in the US government was at 77%. By 2019, it had dropped to 17%. The bots aren't even talking yet, and we've already lost 60 points of trust."
— Documenting the trust cliff
"Regulating AI without fixing the institutions is like installing sprinklers in a house that's already ash."
— Khayyam Wakil
"We spent 40 years actively gutting our own public institutions. AI didn't do any of that. It just showed up and walked into the wreckage."
— On structural policy failure
"Power does not voluntarily redistribute itself, ever. You have to confront it."
— On addressing tech monopolies
"When historians look back at this moment, they won't see AI as the villain. They'll see it as the stress test that exposed what we'd spent decades denying."
— Khayyam Wakil
Three Major Areas of Critical Thinking
1. The Termite vs. Earthquake Framework: Diagnosing the Real Disease
Examine why the conventional narrative—that AI is breaking democracy—is fundamentally a misdiagnosis that allows us to avoid confronting uncomfortable truths about structural policy failures. Analyze the three pillars of institutional decay:
Trust Collapse: The 60-point drop in public trust (from 77% in 1964 to 17% in 2019) occurred entirely before AI became mainstream, creating an environment where disinformation could thrive because citizens already believed institutions were lying to them.
Education Hollowing: The 40% decline in state funding per student (1980-2020), the adjunctification crisis (tenure-track faculty dropping from 57% to 24%), and the $1.7 trillion student debt bomb created an intellectual infrastructure incapable of deep engagement—long before chatbots could write essays.
Journalism Extinction: The 82% collapse in newspaper ad revenue (2005-2020), the closure of 2,500 local papers, and the creation of 1,800 news deserts meant the watchdog was already dead when AI-generated content arrived.
Critical Questions: Why is it psychologically and politically easier to blame new technology than to confront 40 years of bipartisan policy choices? What does it mean that the "neoliberal consensus"—the belief that free markets solve everything and government is the problem—was embraced by both Reagan and Clinton, Bush and Obama? How does focusing on the earthquake (AI) allow tech companies, policymakers, and the public to avoid accountability for the termites (systematic defunding and privatization)?
2. The Rerun Thesis: AI as Amplifier, Not Inventor
Challenge the assumption that AI introduces fundamentally new harms by examining how the fears we associate with AI—opacity, isolation, manipulation—are actually "reruns" of problems that were already endemic to human systems. We just called them different names:
Black Box Algorithms: The opacity we fear in AI decision-making (loan denials, hiring, criminal sentencing) mirrors the existing opacity of prosecutorial discretion, plea bargaining (90%+ of criminal cases), and the decision not to prosecute banking executives after 2008. We normalized human black boxes under the label of "discretion."
Social Isolation: Robert Putnam's "Bowling Alone" (2000) documented the massive decline in civic participation—PTA membership cut in half, bowling leagues down 40%, union membership collapsed—decades before smartphones or AI companions. The causes were structural: car-dependent suburbs, longer work hours, economic precarity, and the unraveling safety net. AI boyfriends aren't creating loneliness; they're selling a band-aid for a wound created by policy.
Algorithmic Slop: The low-quality, churned-out content we blame AI for producing was already corporate policy when hedge funds and private equity firms bought struggling newspapers and demanded cheap, fast content to maximize profit extraction.
Critical Questions: If these harms already existed in human systems, why does adding "AI" to them suddenly make them visible and worthy of panic? What does this reveal about our capacity for denial when human institutions are the perpetrators versus technological ones? Does our focus on AI ethics allow us to avoid the harder work of confronting human accountability, corporate power, and policy failure?
3. Prescription for Structural Resilience: Rebuilding vs. Regulating
Evaluate Wakil's argument that the question must shift from "How do we stop AI from breaking democracy?" to "How do we rebuild democratic institutions strong enough to govern AI?" This requires a complete reversal of 40 years of policy:
Refund Education: Restore state per-student funding to 1980 levels, reversing the 40% decline of the past four decades; end adjunctification by creating stable tenure-track positions; and cancel student debt as restitution for a policy failure that forced individuals to bear the cost of public disinvestment.
Treat Journalism as Public Good: Fund nonprofit newsrooms, public broadcasting, and local outlets so they serve citizens rather than advertisers, ending dependence on the clickbait economy that destroyed investigative capacity.
Break Up Tech Monopolies: Use real antitrust enforcement to structurally separate Google, Facebook, and other concentrated powers that dominate information flow and advertising revenue, rather than relying on performative ethics panels.
Address Inequality Through Progressive Taxation: Tax wealth and capital gains at rates comparable to wages, using revenue to rebuild starved public goods—libraries, parks, community centers, public transit—that create the physical infrastructure for civic life.
Critical Questions: Is Wakil's prescription politically feasible in an environment where both parties have embraced market fundamentalism for decades? What would it take to generate the political will for such a fundamental reversal? If we continue to focus regulatory energy on AI while ignoring institutional decay, what happens when the next technological shock arrives? Beyond trust, education, and journalism, what other systems (healthcare, infrastructure, climate response) are currently being "eaten hollow by termites" and waiting for their own earthquake to expose the rot?
Final Provocation
Wakil leaves us with a challenge: AI isn't the villain—it's the mirror. It reflects back the consequences of decades of choices we made to defund, privatize, and commodify public goods. The real work isn't regulating algorithms; it's confronting the uncomfortable truth that we systematically dismantled our own civic immune system, and now we're shocked that we're vulnerable to infection. The question isn't whether we can control AI—it's whether we have the courage to rebuild what we destroyed.
For A Closer Look, follow the link to our weekly collection.
W03 — AI Didn't Break Democracy. We Did. Four Decades Ago.