Enterprise Security

AI + Security & Safety

Don Bosco Durai, creator of Apache Ranger, reveals why security—not functionality—is the real production blocker for enterprise AI agents. Learn the three-layer defense framework that helped a top-3 credit bureau identify critical deployment barriers.

The Production Paradox

"They built a lot of agents, but the biggest challenge right now is to take it to production for them."

— Don Bosco Durai, on a top-3 credit bureau (00:04:37)

Watch key moment
18 min
Duration
Don Bosco Durai
Speaker
Apache Ranger Creator
Credibility
Priv
Company

The Production Paradox

Enterprises are building AI agents at a furious pace. They're getting agents to work. They're solving complex business problems. But they're not going to production.

"Most of the agent frameworks today run as a single process. What that really means is the agent, the task, the tools - they are in the same process. If the tool needs access to database, it needs to have the credentials. Those credentials are generally service user credentials with super admin privileges. Since they're all in the same process, one tool can technically access some other credentials."

Don Bosco Durai, revealing the fundamental architectural flaw (00:02:10)

130

The Hidden Danger

Single-process architecture violates zero trust principles: Tools can access each other's credentials in memory. Third-party libraries can read sensitive prompts. There are no security boundaries between components. Any compromised component jeopardizes the entire system.

The Single-Process Security Flaw

Most AI agent frameworks share a fundamental architectural flaw that makes them fundamentally insecure for enterprise use.

Single-Process Architecture

  • • Credentials in shared memory
  • • No isolation between components
  • • Tools can access each other's data
  • • Third-party libraries read prompts
  • • Zero trust violation

Gateway/Proxy Architecture

  • • Isolated components
  • • External credential management
  • • Zero trust enforcement
  • • Policy-based access control
  • • Runtime protection

Why This Matters

  • Credential Leakage: Tools can access each other's credentials in memory
  • Prompt Exposure: Third-party libraries can read sensitive prompts
  • No Security Boundaries: Any compromised component jeopardizes the entire system
  • Privilege Escalation: Super admin credentials unnecessarily exposed
Watch explanation (00:02:51)

Real-World Production Blocker

Don shared a sobering story from a recent conversation with a customer: one of the top three credit bureaus (likely Experian, Equifax, or TransUnion).

"They built a lot of agents, but the biggest challenge right now is to take it to production."

A top-3 credit bureau has working AI agents that solve real business problems, but they cannot deploy to production due to security and compliance gaps.

Watch (00:04:30)

The Regulatory Maze

CCPA

California Consumer Privacy Act - Residents' data requires explicit consent

GDPR

General Data Protection Regulation - EU data has strict access controls and residency requirements

FCRA

Fair Credit Reporting Act - Federal regulations on credit data access

Data Residency Laws

Various state-level data protection and residency requirements

The Problem: Treating Agents Like Software

Key Insight: The credit bureau treats AI agents like traditional software. But they should treat them like human employees.

When they onboard a human employee: they go through training programs, learn regulations and compliance requirements, are granted access to specific data based on need-to-know, and are subject to audit and oversight. AI agents need the same treatment.

The "Unknown Unknowns" Challenge

AI agents are autonomous by definition—they reason, they create their own workflows, they decide which tools to call. This creates security challenges that traditional software doesn't face.

"Agents, by definition, are autonomous. That means it will call their own, make up their own workflow depending upon the task. This brings in another set of challenges which we call in security 'unknown unknowns.' You really don't know what the agent is going to do, so it's very non-deterministic."

Don Bosco Durai, on why traditional security doesn't work for agents (00:03:12)

192

Non-Determinism

Same input can produce different outputs. Agents can take unexpected paths.

Attack Vector Increase

Because of autonomy, attack vectors in agents are much higher than traditional software.

Unauthorized Access Risk

Can lead to unauthorized access, data leakages of sensitive information.

Three-Layer Defense Framework

There's no silver bullet for AI agent security. The best approach is multiple layers of defense working together.

"There's no silver bullet. The best way is to have multiple layers."

Don introduces his three-layer defense framework: evaluation, enforcement, and observability.

Watch (00:05:40)
1

Security-Focused Evals

Establish criteria to promote agents to production with risk scoring

Five Security Eval Categories:

  • Use Case & Baseline Testing
  • Third-Party LLM Safety
  • CVE Scanning (Vulnerabilities)
  • Prompt Injection Testing
  • Data Leakage Testing
Watch (00:06:05)
2

Gateway Architecture

Real-time security controls during agent execution

Zero Trust Principles:

  • Don't trust any component, even internal ones
  • Isolate components from each other
  • Gateway intercepts all requests/responses
  • Policy engine makes authorization decisions
Watch (00:11:05)
3

Production Monitoring

Monitor real-world usage and react to unexpected behaviors

Three Pillars:

  • Logging: What did agent do, what data accessed?
  • Monitoring: Real-time health, anomaly detection
  • Auditing: Compliance reporting, forensics
Watch (00:06:58)

How Layers Work Together

Each layer is necessary. None is sufficient alone. Evals establish promotion criteria. Enforcement provides runtime protection. Observability enables continuous improvement and compliance. Together, they create a defense-in-depth approach tailored for autonomous, non-deterministic AI systems.

EvalsEnforcementObservabilityFeedback Loop

Treat Agents Like Employees, Not Software

Don's most powerful insight is conceptual: Treat AI agents like human employees, not software. This paradigm shift changes everything.

"They consider an AI agent as similar to a human user. When they onboard a human user, they go through training and have lot of regulations."

Don Bosco Durai, on the credit bureau's approach (00:04:42)

282

Traditional Software Mindset

  • • Code is static
  • • Behavior is deterministic
  • • Testing covers most scenarios
  • • Security = perimeter defense
  • • Deploy when features work

Agent-as-Employee Mindset

  • • Agents are autonomous
  • • Behavior is non-deterministic
  • • "Unknown unknowns" exist
  • • Security = governance + training + oversight
  • • Deploy when "trained" and "compliant"

If You Treat Agents Like Employees, You Need:

  • Onboarding Process: Define role, grant appropriate access, set expectations
  • Compliance Training: Teach regulations (GDPR, CCPA), data handling best practices
  • Oversight & Auditing: Log all actions, review performance, investigate anomalies
  • Access Controls: Role-based permissions, data boundaries, temporal restrictions

Top 7 Quotes

Direct quotes from the YouTube video with timestamped links for verification.

"Most of the agent frameworks today run as a single process. What that really means is the agent, the task, the tools - they are in the same process. If the tool needs access to database, it needs to have the credentials. Those credentials are generally service user credentials with super admin privileges."

Don Bosco Durai (00:02:10)

Single-process security flaw

130
"They built a lot of agents, but the biggest challenge right now is to take it to production for them."

Don Bosco Durai (00:04:37)

Production blocker

277
"Agents, by definition, are autonomous. That means it will call their own, make up their own workflow depending upon the task. This brings in another set of challenges which we call in security 'unknown unknowns.'"

Don Bosco Durai (00:03:12)

Agent autonomy challenge

192
"There's no silver bullet. The best way is to have multiple layers."

Don Bosco Durai (00:05:40)

Three-layer framework

340
"We talk about evals, but most of them we only talking about evals for how good your models, how good your responses is. But you also need to have evals which are more security and safety focused."

Don Bosco Durai (00:06:05)

Security-focused evals

365
"You need to have a gateway which sits in front of your agent, intercepts all requests, responses. It talks to your policy engine to make decisions. The agent doesn't need to know about policies."

Don Bosco Durai (00:11:05)

Gateway architecture

665
"Observability is particularly important in the world of agents because there's so many variables involved. You cannot really catch all of them during development or initial testing. You have to keep track of how it is used in real world."

Don Bosco Durai (00:06:58)

Observability necessity

418

Key Takeaways

Actionable insights for engineers, security professionals, and enterprise leaders.

For Engineers

  • Security, not features, is the production blocker
  • Audit your agent architecture for isolation
  • Add security evals to your pipeline
  • Implement gateway architecture
  • Deploy observability before production

For Security Pros

  • Learn AI-specific threats (prompt injection, model poisoning)
  • Adopt existing frameworks (NIST AI RMF, OWASP LLM Top 10)
  • Bridge the gap: Partner with AI teams early
  • Apply zero trust within agent processes
  • Create reusable security patterns

For Leaders

  • Recognize the shift: agents are employees, not software
  • Invest in governance (security tooling, hiring)
  • Start now (don't wait for a breach)
  • Learn from the credit bureau example
  • Security is a competitive differentiator

Source Video

AI + Security & Safety

Don Bosco Durai • Co-founder & CTO of Priv • Creator of Apache Ranger • AI Engineer Conference

AI Security
Enterprise AI
Zero Trust
Agent Safety
Apache Ranger
Video ID: G7aSH6N7qY4Duration: ~18 minutes
Watch on YouTube

Analysis based on Don Bosco Durai's talk 'AI + Security & Safety' at AI Engineer Conference. All quotes and timestamps verified from the full transcript. Don is the Co-founder & CTO of Priv and creator of Apache Ranger (data governance framework used by AWS, GCP, and Azure).