Brooks McMillin
Infrastructure Security Engineer at Dropbox
I lead a team focused on AI agent security, LLM development tooling, and securing production AI systems. We build the frameworks that help engineers ship AI features safely.
Building Secure Agentic Systems: The Six Layers
Six layers of security architecture for running LLM agents as daily drivers, with each design decision backed by production stats and companion code.
Current Focus
AI Agent & Infrastructure Security
Making sure AI agents don't do things they shouldn't, and giving engineers the tools to ship AI features without creating new attack surface.
- Sandboxing, permissions, and runtime controls for autonomous AI agents
- LLM security tooling that fits into existing developer workflows
- Threat modeling for MCP, multi-agent systems, and tool-use patterns in production
- Identity, access control, and data protection for AI/ML infrastructure
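As a concrete illustration of the first bullet, capability bounding can be as simple as an allowlist gate in front of every tool call an agent makes. This is a minimal hypothetical sketch, not code from any project listed here; `ToolCall`, `ALLOWED_TOOLS`, and `gate` are names I've made up for the example.

```python
# Hypothetical sketch: a minimal allowlist gate for agent tool calls.
# All names here (ToolCall, ALLOWED_TOOLS, gate) are illustrative.
from dataclasses import dataclass, field


@dataclass
class ToolCall:
    name: str
    args: dict = field(default_factory=dict)


# Per-agent capability bound: the only tools this agent may invoke.
ALLOWED_TOOLS = {"search_docs", "create_task"}


def gate(call: ToolCall) -> ToolCall:
    """Reject any tool call outside the agent's declared capabilities."""
    if call.name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {call.name!r} not permitted for this agent")
    return call
```

The point of declaring the bound per agent, rather than globally, is that a prompt-injected agent can then only misuse tools it legitimately needed in the first place.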
Featured Projects
Agent Framework (production)
Framework for building LLM agents with MCP, OAuth 2.0 with PKCE, persistent memory, and an extensible tool architecture. Runs 19 agents in production.
- Runs 19 specialized agents as daily drivers
- Full OAuth 2.0 with PKCE and dynamic client registration
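For readers unfamiliar with PKCE: the client generates a random `code_verifier` and sends its SHA-256 hash (the `code_challenge`) with the authorization request, proving possession of the verifier when exchanging the code. A minimal sketch of the S256 pair generation from RFC 7636, not taken from this framework's codebase:

```python
# Sketch of PKCE (RFC 7636) code_verifier / code_challenge generation
# using the S256 method. Illustrative only, not this project's code.
import base64
import hashlib
import secrets


def make_pkce_pair() -> tuple[str, str]:
    # code_verifier: high-entropy random string, 43-128 unreserved chars.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # code_challenge = BASE64URL(SHA256(verifier)), padding stripped.
    challenge = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode("ascii")).digest()
    ).rstrip(b"=").decode()
    return verifier, challenge
```

The authorization server stores the challenge, and later recomputes it from the verifier presented at the token endpoint; a stolen authorization code is useless without the verifier.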
TaskManager (production)
Task management platform with a full OAuth 2.0 authorization server, a Python SDK, and an MCP server so my AI agents can manage tasks too.
- Full OAuth 2.0 authorization server with PKCE support
- Security testing suite with Vitest
SMS Communications Suite (production)
Send and receive SMS through GSM modems. Implemented in both Go and Python, with CLI tools for testing and operational use.
- Cross-platform GSM modem interface (Go + Python)
- Interactive CLI chat interface
ReMarkable Research Toolkit (production)
Tools for managing research papers on reMarkable tablets, with AI-powered classification and automated organization.
- AI-powered research paper classification
- Zero-config rmapi binary management
Recent Appearances
Real patterns and failures from building specialized agents on shared infrastructure, covering capability bounding, prompt injection detection, memory isolation, and the OAuth device flow.
LLMs Will Never Be Fully Secure
Podcast discussion on malicious MCP servers, recurring security mistakes in AI tooling, prompt injection persistence, and practical strategies for safe AI deployment.
Security audit of MCP servers and their OAuth implementations, demonstrating that 90% of the vulnerabilities are violations of longstanding security principles, amplified by AI agents.
A three-layer defensive framework for catching security mistakes introduced by LLM-assisted code generation before they reach production.
Let's Connect
Interested in AI security, collaboration, or speaking engagements? I'd love to hear from you.
More About Me