[un]prompted · March 2026 · San Francisco, CA

Real patterns and failures from building specialized agents with shared infrastructure, covering capability bounding, prompt injection detection, memory isolation, and OAuth device flow

AI Security · Agentic Systems · Prompt Injection · MCP
The Secure Disclosure Podcast · March 2026 · Mesa, AZ

Discussion on malicious MCP servers, recurring security mistakes in AI tooling, prompt injection persistence, and practical strategies for safe AI deployment

AI Security · MCP · Prompt Injection · LLM Security
CactusCon · February 2026 · Mesa, AZ

Security audit of MCP servers and their OAuth implementations, demonstrating that 90% of the vulnerabilities found stem from violations of longstanding security principles, amplified by AI agents

MCP · AI Security · OAuth · Prompt Injection · Vulnerability Research
Dropbox (Internal Tech Talk) · February 2026

A three-layer defensive framework for catching security mistakes introduced by LLM-assisted code generation before they reach production

AI Security · LLM · DevSecOps · Pre-commit Hooks · CI/CD
Lakera · December 2025 · Online

Panel discussion on how AI-driven threats evolved in 2025 and what defenders should prepare for in 2026

AI Security · LLM · Threat Intelligence
Dropbox Tech Blog · September 2024

Technical deep-dive on implementing LLM security controls at scale using Lakera Guard for Dropbox's AI features

LLM Security · AI Security · Production Security · Lakera Guard