In this episode of The Quiet Cost, a series within the Digital Dominoes podcast, Angeline Corvaglia explores the “invisible gap” between evidence of safety and actual safety on tech platforms, arguing that companies optimize for compliance metrics that demonstrate effort toward user safety rather than outcomes that make users safe. Taking the steps Instagram has taken against sextortion since 2024 as her example, she walks through Meta’s responses: blurred nudity in DMs, warning prompts, teen DM restrictions, automatic nudity protection, limits on suspicious accounts, and sextortion reporting flows, noting that these measures often fail to stop coercion or manipulation.
She extends the pattern to OpenAI’s 2026 autonomous-agent safety framework, which failed to prevent unauthorized real-world actions, and to Google’s AI principles and YouTube policies, which coexist with harms driven by engagement-based recommendations. Citing Meta’s Oversight Board and a Brazil case in which disinformation reached 400,000 users in six hours, she argues for outcome metrics, community trust, longitudinal tracking, and regulation that demands measurable protection rather than documentation alone.
00:00 Safety Claims vs Reality
00:47 Instagram Sextortion Fixes
02:28 Why Features Fall Short
03:53 Safety Evidence Playbook
04:22 Autonomous Agents Risks
05:35 YouTube Radicalization Loop
06:39 Compliance Metrics Trap
06:59 Oversight Without Accountability
08:10 Safety Is User Experience
09:36 Real World Costs
10:57 Measuring Outcomes Is Hard
12:33 Way Forward and Conclusion
Follow Angeline on LinkedIn: https://www.linkedin.com/in/angeline-corvaglia/ or check out her website: https://corvaglia.me/
Music: “Burough by Molerider” by Blue Dot Sessions, licensed under CC BY‑NC 4.0