Agentic AI and LLMs are shipping into production faster than most security teams can threat model them. Guardrails look solid in a demo, then collapse under real users, hostile inputs, and messy data. This session takes a clear-eyed look at how attackers are already abusing AI systems and what defending them actually requires.
We will walk through current attack patterns, from prompt injection that turns copilots into phishing interns, to jailbreaks that tore through DeepSeek’s R1 safety layer with a 100 percent success rate, to agents that quietly pivot across internal tools once they are granted a single risky permission.
We will examine AI-specific supply chain risk, including poisoned or malware-laden models on open platforms like Hugging Face, as well as backdoored fine-tunes and unsafe LoRA adapters that slide past traditional SDLC controls. We will cover model theft and exfiltration research showing how production LLM components and models hosted on Vertex AI can be stolen or abused, with direct impact on both IP and embedded training data. We will use side-channel work like Microsoft’s recent Whisper Leak findings to highlight why “encrypted” is not the same as “safe” once traffic patterns are in play.
Attendees will leave with a concrete AI threat model template, example abuse cases they can red-team against their own stacks, and a short, opinionated control set focused on isolation, permissions, logging, and monitoring for agentic AI. The goal is simple: give security leaders enough detail to sit down with engineering next week and start removing real AI risk, so they can enjoy a cold one at the Old Triangle in peace.