Shownotes
AI in education is evolving at a pace that often overwhelms teachers, school leaders, and policymakers. New tools appear weekly. Research lags behind practice. Hype fills the gap.
So how do we make good decisions when certainty is impossible?
In this episode of Education Futures, Svenia is joined by Chris Agnew, who leads the AI Hub for Education at the Stanford Accelerator for Learning.
Chris brings a rare perspective to the AI conversation. With a background in environmental and experiential education, from outdoor classrooms to apprenticeship-based learning, he has spent decades trying to bridge relevance, rigor, and access. Today, his role is to translate cutting-edge AI research into practical guidance for superintendents, state leaders, and education systems making decisions right now.
In our conversation, we explore:
- Why the biggest challenge is not innovation, but sense-making
- How the speed of AI creates noise, confusion, and decision paralysis
- The persistent research-to-practice gap, and why it’s even harder with AI
- What current evidence actually tells us (and doesn’t) about AI in K–12
- Why most research today shows promise, not certainty
- How leaders can think in short-cycle experiments instead of long-term predictions
- The difference between using AI for efficiency, for better outcomes, and for reimagining school itself
- Why personalization has too often turned into isolation, and how AI could help reverse that
- A vision of future schools built around collaboration, real-world learning, and apprenticeship-like experiences
Chris also explains why banning AI from schools is unrealistic, why adopting it blindly is equally risky, and why adult judgment, not students' technical skill, will matter most in the years ahead.
This episode is not about finding definitive answers; it's about building the capacity to learn, adapt, and decide well even when the future remains uncertain.
Learn more about the hub here: https://scale.stanford.edu/ai