Appearances
Conference talks, panels, podcasts, and other public appearances on AI security, LLM protection, and machine learning safety.
Real-world patterns and failures from building specialized agents on shared infrastructure, covering capability bounding, prompt injection detection, memory isolation, and the OAuth device flow
LLMs Will Never Be Fully Secure
Podcast: Discussion on malicious MCP servers, recurring security mistakes in AI tooling, prompt injection persistence, and practical strategies for safe AI deployment
Security audit of MCP servers and their OAuth implementations, showing that roughly 90% of the vulnerabilities found reflect longstanding security principles, amplified in impact by AI agents
A three-layer defensive framework for catching security mistakes introduced by LLM-assisted code generation before they reach production
Panel discussion on how AI-driven threats evolved in 2025 and what defenders should prepare for in 2026
Technical deep-dive on implementing LLM security controls at scale using Lakera Guard for Dropbox's AI features