Brooks McMillin
  • Home
  • About
  • Projects
  • Appearances
  • Blog

AI Security Research Blog

Deep dives into AI security, adversarial machine learning, LLM vulnerabilities, agent safety, and cutting-edge ML defense research.

Browse by Category

All Posts · Agentic Systems · DevOps · LLM Security · engineering · mcp

mcp-authflow: OAuth 2.0 for Production MCP Servers

April 30, 2026 12 min read

Open-sourcing mcp-authflow and mcp-authflow-resource: an RFC-compliant OAuth 2.0 framework for MCP servers, plus a one-command example server. Why MCP deployments need real auth, what the two packages do, and three non-obvious gotchas from production.

#mcp #oauth #security #open-source #starlette #python
Read article →

The MCP stdio Problem: Why I Rebuilt My Auth Proxy as a Persistent HTTP Service

April 9, 2026 6 min read

Claude Code silently kills stdio MCP servers during idle periods, forcing manual reconnection. How I converted a fragile stdio bridge into a persistent Starlette HTTP reverse proxy — and the obscure SDK crash that followed.

#mcp #claude-code #oauth #starlette #systemd #devtools
Read article →

Building Secure Agentic Systems: The Six Layers

March 24, 2026 19 min read

Six layers of security architecture for running LLM agents as daily drivers — every design decision with production stats and companion code.

#security #AI #agents #MCP #prompt-injection #SSRF #observability
Read article →

A Beginner's Guide to Safe LLM-Assisted Development

March 11, 2026 20 min read

A complete beginner's guide to setting up every safety layer from the Coding Safer with LLMs post: pre-commit hooks, local review agents, CI workflows, and CLAUDE.md — starting from scratch.

#security #AI #LLM #ci-cd #pre-commit #code-review #claude-code #tutorial
Read article →

Does Your System Prompt Actually Stop Prompt Injection? We Tested 10,000 Times to Find Out

February 26, 2026 13 min read

An empirical study of 10,080 prompt injection attempts across 8 models, 6 defense strategies, and 7 attack types. The results challenge common assumptions about prompt-level defenses.

#security #AI #LLM #prompt-injection #ai-security #benchmark
Read article →

Defense in Depth for AI-Assisted Development: Pre-commit Hooks, Review Agents, and CI That Catch LLM Mistakes

January 28, 2026 14 min read

Practical strategies for safer AI-assisted development: automated review agents, layered security checks, and context management that prevents catastrophic mistakes.

#security #AI #LLM #ci-cd #pre-commit #code-review #MCP
Read article →

The Call Is Coming from Inside the House: When Your Agentic Coder Writes Dangerous Code

September 7, 2025 4 min read

An introduction to the flaws in security testing for AI-generated code.

#security #AI #LLM #vibe-coding #ai-security
Read article →

Get new posts by email

No spam; unsubscribe at any time.

Or follow via RSS

© 2026 Brooks McMillin