LLMs Will Never Be Fully Secure
Live from CactusCon, I joined The Secure Disclosure podcast to break down why we’re back in the “wild west” — only this time, the apps can be social engineered at machine speed. We covered malicious MCP servers, why we’re repeating the same security mistakes (broken access control, eval vulnerabilities), and why prompt injection probably isn’t going away.
Topics included practical guidance on what to lock down, how to roll out AI tooling safely starting read-only with reduced blast radius, and why “AI lipstick” doesn’t change the underlying enterprise risk game. We also dug into tool poisoning, alert fatigue, and data exfiltration risks from malicious MCP servers.