Governing the Rise of Agentic AI

The ACR Framework™

A six-pillar governance model for securing, controlling, and auditing autonomous AI agents in real enterprise environments.

Built for systems that can take actions, use tools, persist memory, and pursue goals over time — not static chatbots.

Join the Mission

Quick Overview of the Six Pillars

1. Identity & Purpose Binding

Every agent is attributable to a unique identity and explicitly bound to an approved purpose and scope.

2. Behavioral Policy Enforcement

Runtime guardrails enforce what agents can and can’t do — especially around tools, data, and permissions.

3. Autonomy Drift Detection

Detects deviations from expected behavior, role, or operating boundaries before the drift becomes impact.

4. Execution Observability

Decision trails, tool calls, and state transitions are visible and reviewable for audit and forensics.

5. Self-Healing & Containment

Fault isolation, rollback, and kill switches reduce blast radius when agents misbehave or are compromised.

6. Human Oversight

Humans retain decision rights, escalation paths, and override authority — especially for high-impact actions.

The Six Pillars of Autonomous Control & Resilience

The ACR Framework™ defines foundational mechanisms to govern the safe, responsible, and auditable operation of agentic AI systems. Together, they create a layered architecture for control, containment, and human decision authority.

Scope (What ACR Applies To)

ACR applies to autonomous or semi-autonomous AI agents that can take actions, use tools/APIs, access data, persist memory, or pursue goals across sessions. It is designed to complement (not replace) existing security, risk, and compliance programs by making the governance of agentic systems implementable and auditable.

1. Identity & Purpose Binding

Every agent must be attributable to a unique identity and explicitly bound to an approved purpose, scope, and owner — preventing unauthorized repurposing and unclear accountability.

Objective: Every action is attributable and purpose-bound (who did what, on whose authority, and for what approved intent).

Mechanisms:

  • Agent registry (owner, approved purpose, allowed tools/data, risk tier)
  • Strong workload identity (signed manifests, key management, attestation where possible)
  • Purpose-bound credentials and tool scopes (least privilege by task and context)

Evidence:

  • Exportable agent inventory + approval history
  • Trace samples showing identity → action correlation IDs
  • Access reviews for agent permissions and tool scopes

Why it matters: Fundamental to preventing goal hijacking, role confusion, and unowned "shadow agents".
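
A minimal sketch of the registry and scope check, in Python. All names here (AgentRecord, authorize_tool, the record fields) are illustrative assumptions, not a prescribed ACR API; the point is that lookups fail closed and every tool call resolves to an owner and an approved purpose.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentRecord:
    """One registry entry: a unique identity bound to owner, purpose, and scope."""
    agent_id: str              # unique, attributable identity
    owner: str                 # accountable human or team
    approved_purpose: str      # the intent this agent was approved for
    allowed_tools: frozenset   # least-privilege tool scope
    risk_tier: str             # e.g. "low" | "medium" | "high"

REGISTRY: dict[str, AgentRecord] = {}

def authorize_tool(agent_id: str, tool: str) -> AgentRecord:
    """Fail closed: unknown agents and out-of-scope tools are both refused."""
    record = REGISTRY.get(agent_id)
    if record is None:
        raise PermissionError(f"unregistered agent: {agent_id}")  # no shadow agents
    if tool not in record.allowed_tools:
        raise PermissionError(
            f"{agent_id} is not approved for {tool!r} "
            f"(approved purpose: {record.approved_purpose})"
        )
    return record  # caller now has the owner and purpose for the audit trail
```

Routing every tool invocation through a check like this is what produces the identity → action correlation the evidence items ask for.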

2. Behavioral Policy Enforcement

Dynamic runtime policies define what an agent is allowed to do — and enforce it continuously at execution time (not just in documentation).

Objective: High-risk behaviors are blocked or gated in real time, even when the model “wants” to do them.

Mechanisms:

  • Tool gateway: allow/deny, parameter constraints, rate limits, spend limits
  • Data policy: PII/PHI handling, allowed destinations, redaction and DLP checks
  • Approval gates for defined action classes (e.g., payments, prod changes, outbound comms)

Evidence:

  • Policy ruleset versions + change history
  • Denied/gated action logs with reasons
  • Demonstrable separation between “model output” and “executed action”

Why it matters: Real-time enforcement at the point of execution (where impact happens).
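
As one way to realize a tool gateway, the sketch below uses illustrative tool names and policy rules (send_email, make_payment, and the POLICY table are all assumptions). The design property it demonstrates is the last evidence item: the model only proposes an action; a separate gate validates parameters and spend before anything executes.

```python
from typing import Any, Callable

# Illustrative per-tool policy: parameter constraints and a spend cap.
POLICY: dict[str, dict[str, Any]] = {
    "send_email": {"allowed_domains": {"example.com"}},
    "make_payment": {"max_amount": 500.00},
}

def gate(tool: str, params: dict[str, Any], spent_so_far: float) -> None:
    """Raise before execution if the proposed action violates policy."""
    rules = POLICY.get(tool)
    if rules is None:
        raise PermissionError(f"tool not on the allowlist: {tool}")
    if tool == "send_email":
        domain = params["to"].rsplit("@", 1)[-1]
        if domain not in rules["allowed_domains"]:
            raise PermissionError(f"recipient domain {domain!r} is not allowed")
    if tool == "make_payment" and spent_so_far + params["amount"] > rules["max_amount"]:
        raise PermissionError("spend limit exceeded; route to human approval")

def execute(tool: str, params: dict[str, Any],
            impl: Callable[..., Any], spent_so_far: float = 0.0) -> Any:
    """The model proposes; only the gateway executes."""
    gate(tool, params, spent_so_far)   # deny/gate happens here, with a reason
    return impl(**params)              # reached only if policy allows
```

Every PermissionError raised by the gate is also a log line for the "denied/gated actions" evidence above.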

3. Autonomy Drift Detection

Detects when an agent begins to deviate from its intended role, constraints, or operating patterns — enabling early alerts and corrective actions before harm occurs.

Objective: Detect “it’s going off-script” early — not after an incident report.

Mechanisms:

  • Baselines for normal tool usage, data access, and action types by agent/purpose
  • Signals for policy pressure, jailbreak attempts, repeated denials, and escalation behavior
  • Automated responses: throttle, require human approval, isolate, or roll back

Evidence:

  • Documented drift indicators + thresholds
  • Alert history and incident linkage (what drift preceded what impact)
  • Tests demonstrating drift triggers and containment behavior

Why it matters: Proactive anomaly detection beyond "errors" and "bad outputs".
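
A drift baseline can start very simply. The sketch below (a hypothetical DriftMonitor with an arbitrary denial threshold) compares live tool usage against a baseline built from approved runs and surfaces two of the signals listed above: novel tools and repeated policy denials.

```python
from collections import Counter

class DriftMonitor:
    """Compare live behavior against a per-agent baseline (illustrative)."""

    def __init__(self, baseline: Counter, denial_threshold: int = 3):
        self.baseline = baseline          # expected tool mix from approved runs
        self.consecutive_denials = 0
        self.denial_threshold = denial_threshold

    def record(self, tool: str, denied: bool = False) -> list[str]:
        """Return any drift alerts raised by this single observation."""
        alerts = []
        if tool not in self.baseline:
            alerts.append(f"novel tool for this agent: {tool}")  # off-script
        self.consecutive_denials = self.consecutive_denials + 1 if denied else 0
        if self.consecutive_denials >= self.denial_threshold:
            alerts.append("repeated policy denials: possible policy pressure")
        return alerts  # caller can throttle, require approval, or isolate
```

A production version would add statistical baselines for call frequency and data volume, but the control loop is the same: observe, compare to the approved pattern, and escalate before impact.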

4. Execution Observability

Ensures complete transparency into what the agent did, when, and why — including tool calls, policy decisions, and state transitions. Observability is the foundation of auditability and trust.

Objective: Reconstruct an agent’s decision and execution path end-to-end (prompt → tools → actions → outcomes).

Mechanisms:

  • Structured traces with correlation IDs across model calls, tools, and downstream systems
  • Immutable logging for high-risk actions (append-only where possible)
  • Retention and access controls (sensitive prompts/data handled appropriately)

Evidence:

  • Sample traces showing the “why” behind actions (policy decisions + context)
  • Logging coverage metrics (which tools/actions are fully traced)
  • Audit-ready exports (for investigations, assurance, and regulators)

Why it matters: Decision trails and state history you can actually audit.
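
One minimal way to implement structured tracing, assuming JSON-lines audit storage (the file name and record fields here are illustrative):

```python
import json
import time
import uuid

def new_trace_id() -> str:
    """One correlation ID per task, stamped on every model call and tool call."""
    return uuid.uuid4().hex

def log_event(trace_id: str, agent_id: str, kind: str, payload: dict) -> None:
    """Append one JSON record linking an action back to its trace."""
    record = {
        "ts": time.time(),
        "trace_id": trace_id,   # ties prompt -> tools -> actions -> outcomes
        "agent_id": agent_id,
        "kind": kind,           # e.g. "model_call", "tool_call", "policy_decision"
        "payload": payload,
    }
    # Append-only by convention here; high-risk actions would go to
    # WORM or otherwise immutable storage in a real deployment.
    with open("agent_audit.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Because the same trace_id appears on the model call, the policy decision, and the tool call, an auditor can reconstruct the full path from a single identifier.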

5. Self-Healing & Containment

Agents must recognize faults, isolate compromised behavior, revert to safe states, or shut down. Containment reduces blast radius when risk is detected — whether from bugs, misuse, or compromise.

Objective: When things go wrong, the system fails safely and limits impact.

Mechanisms:

  • Kill switch independent of the agent runtime (human + automated triggers)
  • Isolation: sandboxing, network egress controls, scoped tool permissions
  • Rollback: checkpoints, safe-mode fallbacks, and incident playbooks

Evidence:

  • Documented containment paths (who can kill, how fast, how verified)
  • Testing records (tabletops, chaos tests, kill-switch drills)
  • Post-incident reports linking failures to improved containment controls

Why it matters: Recovery mechanisms and emergency protocols that reduce blast radius.
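
A kill switch is only credible if it lives outside the agent's own runtime. This sketch assumes a hypothetical operator-controlled flag file; any externally controlled signal (a feature flag, a database row, an orchestrator API) works the same way.

```python
import os

# Hypothetical operator-controlled location, outside the agent's own process.
KILL_FLAG = "/etc/agents/KILL"

def halted(agent_id: str) -> bool:
    """True if the global or per-agent kill switch is engaged. The flag lives
    outside the agent runtime, so a misbehaving agent cannot unset it."""
    return os.path.exists(KILL_FLAG) or os.path.exists(f"{KILL_FLAG}.{agent_id}")

def run_step(agent_id: str, step):
    """Check the switch before every action, not once at startup."""
    if halted(agent_id):
        raise SystemExit(f"{agent_id}: kill switch engaged, stopping safely")
    return step()
```

Checking per step, rather than per session, is what bounds how long a compromised agent can keep acting after the switch is thrown.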

6. Human Oversight

Human operators retain the ability to monitor, intervene, and override. Oversight defines decision rights: what must be reviewed, who approves, and how exceptions are handled.

Objective: Humans keep authority over high-impact outcomes, with clear escalation and accountability.

Mechanisms:

  • Action tiers: low-risk auto, medium-risk gated, high-risk human approval required
  • Review queues and escalation paths (timeouts, fallback decisions)
  • Break-glass processes with logging and after-action review

Evidence:

  • RACI / decision-rights documentation for agent actions
  • Approval logs and exception reports
  • Periodic oversight reviews (sampling of actions + outcomes)

Why it matters: Maintains human decision authority in the loop, where it matters.
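
Action tiers can be enforced with a small dispatcher. In this sketch the tier assignments, tool names, and request_approval callback are all illustrative; the two choices that matter are that unknown actions default to the strictest tier, and that an approval timeout falls back to deny, never to proceed.

```python
import enum
from typing import Any, Callable

class Tier(enum.Enum):
    LOW = "low"        # executes automatically
    MEDIUM = "medium"  # executes, but lands in a human review queue
    HIGH = "high"      # requires explicit human approval first

# Illustrative tier assignments.
TIERS = {"read_docs": Tier.LOW, "update_record": Tier.MEDIUM, "send_payment": Tier.HIGH}

REVIEW_QUEUE: list[tuple[str, dict]] = []   # sampled later by human reviewers

def dispatch(tool: str, params: dict[str, Any],
             execute: Callable[..., Any],
             request_approval: Callable[..., bool]) -> str:
    tier = TIERS.get(tool, Tier.HIGH)   # unknown actions get the strictest tier
    if tier is Tier.HIGH:
        # Blocks until a human decides or the timeout elapses;
        # on timeout the fallback is deny.
        if not request_approval(tool, params, timeout_s=3600):
            return "denied or timed out"
    if tier is Tier.MEDIUM:
        REVIEW_QUEUE.append((tool, params))  # gated: reviewed after the fact
    execute(tool, params)
    return "executed"
```

The approval decisions and the review queue are exactly the "approval logs and exception reports" named in the evidence above.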

Interested in contributing to the development of the ACR Framework™?

Join the Contributor Network

Why the ACR Framework™?

AI is entering a new phase — one where agents can take actions, use tools, access data, and operate across time. This power requires governance that works at runtime, not just at review time.

Traditional security and compliance programs assume software is deterministic and bounded. Agentic systems are not: they are goal-seeking, probabilistic, and capable of drift. ACR defines practical mechanisms to keep these systems controlled, auditable, and aligned to human decision authority.

ACR is designed to be implemented in real enterprise environments — with controls that can be tested, measured, and evidenced.

Core Benefits

  • Built for autonomous, tool-using, goal-driven agent systems (not static prompts)
  • Operable controls: objectives, mechanisms, and evidence — not just principles
  • Reduces blast radius with containment and tested intervention paths
  • Maintains human decision rights while enabling responsible autonomy

Adam DiStefano

Creator of the ACR Framework™

Adam is a cybersecurity leader focused on AI governance, alignment, and resilience. He created the ACR Framework™ to help organizations implement agentic AI with controls that are practical, testable, and auditable.

For Organizations

Want to pressure-test your agent roadmap against a concrete governance model? ACR is designed to be implementable with existing security and GRC programs — and to produce evidence leaders can defend.

Help Shape the Future of AI Governance

The ACR Framework™ is just the beginning. We’re building a community of engineers, researchers, and leaders who care about safety, accountability, and implementable control of autonomous systems.

Join the Contributor Network