December 9, 2025
The Shoggoth of AI Risk
We often discuss "AI Risk" as if it were a single, shapeless shoggoth. In reality, risk comes from specific sources, each requiring a different defense. This article dismantles the monolith, breaking the danger into six distinct vectors: Adversaries, Unforced Errors, Misaligned Goals, Dependencies, Societal Impact, and Emergent Behavior. Learn to distinguish between these threats so you can move from panic to precise preparation.
Read more
October 27, 2025
Beyond Code Exploits: Red Teaming the New AI Attack Surface
Your cybersecurity playbook is obsolete. In the age of AI, the greatest risks aren't traditional code exploits but unpredictable model behaviors, from prompt injections and data leakage to emergent misuse. Drawing on insights from live red teaming exercises with members of Congress, NATO, and the UK Parliament, AI security expert David Campbell explains why we must treat the model itself as the new attack surface. This post unveils an enterprise playbook for proactive AI red teaming, moving beyond static checks to continuously test systems like an adversary. Learn how to map, score, and measure AI risks to get ahead of the threat before an incident occurs.
Read more
August 4, 2025
The AI Risk Matrix: Evolving AI Safety and Security for Today
The shift from reactive models to agentic systems fundamentally alters the AI risk landscape, rendering incomplete any framework that focuses only on user intent and model output. To address this gap, we've evolved the AI Risk Matrix by adding a crucial third dimension: Model Agency. This article breaks agency down into three tiers (Tools, Agents, and Collectives), using concrete examples to illustrate how complex failures can now emerge from the system itself. We argue that this systemic view, which connects model behavior to traditional AppSec vulnerabilities, is essential for building the next generation of safe and reliable AI.
Read more