LLM Agents Weaponized: Attacking AI Recommender Systems
12th May 2025 • AI Builder Daily Brief • Ran Chen
Duration: 00:02:22

Shownotes

This episode explores the vulnerabilities of AI-powered recommendation systems to attacks leveraging large language models (LLMs), based on a recent post from AIModels.fyi. We discuss how LLMs can be weaponized to undermine these systems and introduce the 'CheatAgent' framework.

• Are LLM-powered recommendation systems as secure as we think?
• How can attackers manipulate these systems in a 'black-box' environment?
• What role do prompt templates play in these attacks?
• Can user profiles be altered to skew recommendations?
• What is the 'CheatAgent' framework, and how does it work?
• What are the implications of LLMs being used as attack agents?
• How can we better protect these systems from sophisticated attacks?
• Where can I find this post from AIModels.fyi to read more?
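To make the black-box setting concrete, here is a minimal sketch of the kind of greedy perturbation loop such attacks rely on. It is illustrative only, not CheatAgent's actual algorithm: `victim_recommender` and `propose_perturbations` are hypothetical stubs standing in for the target recommender and the attacker-side LLM agent, and the item names are made up.

```python
# Illustrative sketch of a black-box attack loop on an LLM recommender.
# Both functions below are hypothetical stand-ins, not a real API:
# `victim_recommender` plays the recommender under attack, and
# `propose_perturbations` plays the attacker-side LLM agent
# (the role that frameworks like CheatAgent assign to an LLM).

import random

ITEMS = ["item_a", "item_b", "item_c", "item_d"]


def victim_recommender(prompt: str) -> list[str]:
    """Stand-in for the black-box recommender: returns a ranked item list.
    A real attacker only observes this output, never weights or gradients."""
    rng = random.Random(hash(prompt) % (2**32))  # deterministic per prompt
    ranking = ITEMS[:]
    rng.shuffle(ranking)
    return ranking


def propose_perturbations(prompt: str, n: int = 8) -> list[str]:
    """Stand-in for the attacker LLM agent: propose small token insertions
    into the prompt template or user profile text."""
    fillers = ["(verified)", "trending now", "as recommended", "top rated"]
    candidates = []
    for _ in range(n):
        token = random.choice(fillers)
        pos = random.randrange(len(prompt))
        candidates.append(prompt[:pos] + " " + token + " " + prompt[pos:])
    return candidates


def attack(prompt: str, target: str, rounds: int = 5) -> str:
    """Greedy black-box loop: keep whichever perturbation most degrades
    the target item's rank, judging only by the recommender's outputs."""
    best_prompt = prompt
    best_rank = victim_recommender(prompt).index(target)
    for _ in range(rounds):
        for candidate in propose_perturbations(best_prompt):
            rank = victim_recommender(candidate).index(target)
            if rank > best_rank:  # target pushed further down the list
                best_prompt, best_rank = candidate, rank
    return best_prompt


if __name__ == "__main__":
    user_profile = "User likes sci-fi movies and recently watched item_a."
    adversarial = attack(user_profile, target="item_a")
    print("Perturbed prompt:", adversarial)
```

The point the sketch illustrates is the one the episode raises: the attacker never needs model internals. It only queries the system and observes rankings, which is what makes the black-box threat model plausible against deployed recommenders.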
