Defense in Depth for AI-Assisted Development
Internal tech talk presenting a layered defense strategy for AI-assisted development. Covers real examples of LLM-generated security mistakes — plaintext OAuth tokens, removed CSRF protection, committed credentials — and walks through three defensive layers to catch them: pre-commit hooks with static analysis, AI-powered code review agents on pull requests, and CI workflows enforcing security scanning as a merge gate.
Based on my blog post Defense in Depth for AI-Assisted Development.
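The first layer above, pre-commit hooks with static analysis, can be sketched as a `.pre-commit-config.yaml`. The talk does not specify which tools it uses; gitleaks (secret scanning) and semgrep (static analysis) are illustrative choices here, and the pinned `rev` versions are placeholders:

```yaml
# .pre-commit-config.yaml — illustrative sketch of layer one:
# scan every commit for secrets and security anti-patterns
# before AI-generated changes can land in the repo.
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.4          # placeholder; pin to a current release
    hooks:
      - id: gitleaks      # blocks commits containing credentials or tokens
  - repo: https://github.com/returntocorp/semgrep
    rev: v1.78.0          # placeholder; pin to a current release
    hooks:
      - id: semgrep
        args: ["--config", "p/security-audit", "--error"]
```

Running `pre-commit install` once per clone wires these checks into `git commit`, so mistakes like committed credentials are caught locally before the pull-request and CI layers ever see them.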