The AppSec Insiders

LLM Vulnerabilities and Prompt Injection: AppSec News Deep Dive | The AppSec Insiders Podcast Ep.19

Farshad Abasi Season 1 Episode 19

In this episode, we explore the emerging security risks of AI and LLMs in modern applications. Iman shares real-world experiences bypassing AI guardrails like LlamaGuard and OpenAI Shield, while the team discusses prompt injection attacks, system prompt exposure, excessive agency vulnerabilities, and data poisoning. Learn about the OWASP Top 10 for LLM Applications, why AI usage policies are critical, and how attackers are exploiting everything from calendar invites to resume processors with hidden prompts.