Lessons from Building AI Security Tools

March 20, 2026

The Journey

Building AI-powered security tools isn’t like building a typical SaaS product. The stakes are higher, the feedback loops are longer, and the adversaries are constantly adapting.

After spending 20+ years in cybersecurity — from MITRE to the Federal Reserve Board to founding Qerberos — here’s what I’ve learned about building AI tools that actually work in production.

Lesson 1: Start with the Analyst, Not the Algorithm

The biggest mistake I see in AI security startups is building the model first and figuring out the user experience later. The best AI in the world is useless if a SOC analyst can’t act on its output in under 30 seconds.

At Qerberos, we started by shadowing analysts. We watched how they triaged alerts, what made them trust or distrust a tool, and where they wasted the most time. The AI came second.

Lesson 2: False Positives Kill Adoption

In security, a false positive isn’t just annoying — it erodes trust. If your AI flags 100 things and 95 are noise, analysts will start ignoring it within a week. We optimized relentlessly for precision over recall, which is counterintuitive in a field obsessed with “catching everything.”
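
To make the tradeoff concrete, here's a minimal sketch of precision-first threshold selection: among all score thresholds that clear a precision floor, keep the one that preserves the most recall. The scores, labels, and floor below are made-up illustration data, not anything from Qerberos.

```python
# Hypothetical sketch: picking an alert threshold that favors precision.
# All data below is toy illustration, not real detection output.

def precision_recall(scores, labels, threshold):
    """Precision and recall for alerts scored at or above a threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def pick_threshold(scores, labels, min_precision=0.95):
    """Among thresholds meeting the precision floor, keep the one
    that preserves the most recall."""
    candidates = []
    for t in sorted(set(scores), reverse=True):
        p, r = precision_recall(scores, labels, t)
        if p >= min_precision:
            candidates.append((r, t, p))
    if not candidates:
        return None
    r, t, p = max(candidates)
    return {"threshold": t, "precision": p, "recall": r}

# Toy data: model scores for 10 alerts, 1 = truly malicious.
scores = [0.99, 0.97, 0.91, 0.85, 0.80, 0.72, 0.60, 0.55, 0.40, 0.20]
labels = [1,    1,    1,    0,    1,    0,    0,    1,    0,    0]
print(pick_threshold(scores, labels, min_precision=0.75))
# -> {'threshold': 0.8, 'precision': 0.8, 'recall': 0.8}
```

In practice the precision floor is a product decision, set by how much noise your analysts will tolerate, not by what the model happens to score.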

Lesson 3: Explainability Isn’t Optional

When a security tool says “this is malicious,” the analyst needs to know why. Black-box models don’t work in security. Every detection in QR Trust comes with a confidence score and a breakdown of contributing factors.
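
To illustrate the shape such output can take, here's a hedged sketch of a detection record that carries a confidence score alongside weighted contributing factors. The field names and factors are hypothetical, not QR Trust's actual schema.

```python
from dataclasses import dataclass, field

# Illustrative only: these field names and factors are hypothetical,
# not QR Trust's real output format.

@dataclass
class Factor:
    name: str       # e.g. "domain_age_days"
    value: str      # the observed evidence
    weight: float   # signed contribution toward the verdict

@dataclass
class Detection:
    verdict: str                 # "malicious" | "suspicious" | "benign"
    confidence: float            # calibrated probability, 0.0 to 1.0
    factors: list[Factor] = field(default_factory=list)

    def explain(self) -> str:
        """Render the breakdown an analyst sees next to the verdict."""
        lines = [f"{self.verdict} ({self.confidence:.0%} confidence)"]
        for f in sorted(self.factors, key=lambda x: -abs(x.weight)):
            sign = "+" if f.weight >= 0 else "-"
            lines.append(f"  {sign} {f.name}: {f.value} (weight {f.weight:+.2f})")
        return "\n".join(lines)

alert = Detection(
    verdict="malicious",
    confidence=0.93,
    factors=[
        Factor("domain_age_days", "2", +0.41),
        Factor("redirect_chain_length", "4", +0.27),
        Factor("tls_certificate", "valid", -0.08),
    ],
)
print(alert.explain())
```

The exact schema matters less than the principle: the "why" ships with the verdict, in the same payload.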

What’s Next

The intersection of AI and security is still in its early innings. We’re moving toward agentic systems that don’t just detect — they investigate, triage, and respond. The human stays in the loop, but the loop gets tighter.
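
For a sense of what a tighter loop can mean in practice, here's a speculative sketch of a response pipeline where low-risk actions run unattended and anything riskier waits for analyst approval. Every function, name, and threshold here is a placeholder, not a real Qerberos component.

```python
# Speculative sketch of a human-in-the-loop response pipeline.
# All stage functions are stubs, not a real API.

RISK_AUTO_APPROVE = 0.3  # assumed policy: low-risk actions run unattended

def investigate(alert: dict) -> dict:
    """Stub: enrich the alert with context (whois, EDR, log history)."""
    return {**alert, "context": "enriched"}

def propose_response(enriched: dict) -> tuple[str, float]:
    """Stub: return a proposed action and its estimated risk."""
    return "isolate_host", 0.7

def handle(alert: dict, approve) -> str:
    enriched = investigate(alert)
    action, risk = propose_response(enriched)
    # The agent investigates and proposes; a human gates risky actions.
    if risk <= RISK_AUTO_APPROVE or approve(action, enriched):
        return f"executed: {action}"
    return f"queued for analyst review: {action}"

# Stub approver that always defers; a real one would prompt an analyst.
print(handle({"id": "alert-42"}, approve=lambda action, ctx: False))
# -> queued for analyst review: isolate_host
```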

If you’re building in this space, I’d love to connect. Find me at [email protected].