Anthropic Shipped the Runtime. Who Ships the Governance?
On April 8, 2026, Anthropic launched Claude Managed Agents into public beta --- a cloud-hosted runtime where Claude operates as an autonomous agent: reading files, executing shell commands, searching the web, and connecting to external tool providers through MCP servers, all inside sandboxed containers that Anthropic manages.1 The blog post promised “10x faster” agent deployment. Notion is delegating coding and content production to parallel agent sessions. Rakuten shipped specialist agents across product, sales, marketing, and finance in under a week. Asana built AI Teammates that pick up assigned tasks inside projects and return deliverables.2
The velocity is real. So is the liability.
Every action a Managed Agent takes --- every file it reads, every command it runs, every customer interaction it remembers --- is an action the deployer authorized. Under the Restatement (Third) of Agency, an agency relationship requires “one person” to “manifest assent to another person” --- and “person” means an entity with “legal capacity to possess rights and incur obligations.”3 AI has none of these attributes. It is not a legal agent. It is a product the deployer configured and released into production. And the deployer who configured it without a governance framework is not operating in a regulatory gray area. The deployer is building a plaintiff’s case --- in a market where the EU AI Act’s high-risk deployer obligations take effect on August 2, 2026,4 California’s privacy risk assessment rules are live today,5 and three federal financial regulators have already imposed $187 million (Schwab), $920 million (JPMorgan), and $250,000 (Wealthfront) in penalties against algorithmic systems that map directly to agent-enabled transactions.6
This article applies the Know Your Agent (KYA) framework --- a five-pillar governance model for AI agent deployment --- to the specific product surface of Claude Managed Agents. Each pillar maps to a Managed Agents feature. Some features make the pillar easier to satisfy. Some make it harder. And one feature --- the persistent Memory tool --- creates a data-mapping challenge that most deployers have not yet considered.
Key Takeaways
- Managed Agents does not change the underlying law. It changes what you can no longer credibly claim you did not know was possible --- including sandboxed tool execution, persistent sessions, and cross-session memory.
- The KYA Five Pillars (Identity, Authority, Monitoring, Incident Response, Compliance) each map to a specific Managed Agents product feature --- giving deployers a concrete checklist rather than an abstract governance aspiration.
- Session persistence is a spoliation trap. If the only copy of your agent’s decision history lives in Anthropic’s containers, subject to Anthropic’s retention policies, you do not have a litigation-ready record under FRCP 37(e).
- The Memory tool becomes a personal-data processing activity the moment the agent stores anything identifying a natural person --- and because the agent decides what to store, the deployer may not know when that threshold is crossed.
- Kill switches are now API calls, not policy documents. EU AI Act Article 26(5) requires deployers of high-risk AI systems to suspend use when risks emerge. Managed Agents gives you the product primitive. The question is whether your organization has defined who makes that call and how fast.
Why Does Managed Agents Change the Compliance Conversation?
Managed Agents does not change what the law requires. It changes who is building agents --- and how little time they have to encounter the governance questions before their product is live.
Before April 8, deploying an autonomous AI agent in production required building the agent loop from scratch: orchestration logic, sandboxed execution, credential management, state persistence, tool authentication, and error recovery. The infrastructure cost was a natural filter. Companies that invested months of engineering had time to encounter at least some of the governance questions along the way.
Managed Agents compresses that cycle to near zero. The runtime costs $0.08 per session-hour, metered to the millisecond, accruing only while the session is running.7 A developer can define an agent, configure an environment with pre-installed packages, start a session, and have an autonomous system executing shell commands and browsing the web --- all in an afternoon.8 The infrastructure that previously took months now takes hours.
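The metering model above is simple enough to sanity-check in a few lines. A minimal sketch of the billing arithmetic, assuming only the published rate; the function name and millisecond input are ours for illustration, not part of Anthropic's API:

```python
RATE_PER_SESSION_HOUR = 0.08  # USD, per the pricing cited in this article

def session_cost(running_ms: int) -> float:
    """Cost in USD for a session that spent `running_ms` milliseconds running.

    Per the metering description, idle time (waiting for user input or tool
    confirmation) does not accrue, so it is excluded before this is called.
    """
    hours = running_ms / 3_600_000  # milliseconds -> hours
    return RATE_PER_SESSION_HOUR * hours

# A session with 45 minutes of active runtime: 45 * 60 * 1000 ms
print(round(session_cost(45 * 60 * 1000), 4))  # 0.06
```

At these prices, cost is no longer the governance filter: a full working day of continuous agent runtime is well under a dollar.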
This is a governance problem, not a developer-tools story. When the barrier to deploying an autonomous agent drops by an order of magnitude, the population of deployers changes. The companies shipping agents after April 8 include teams that have never considered agent authority boundaries, decision logging, incident response workflows, or compliance documentation --- because they never had to build the infrastructure that would have surfaced those questions.
| Framework | Status | Key deployer obligation |
|---|---|---|
| California CCPA/CPRA | Privacy risk assessments effective January 1, 2026; ADMT implementing regulations effective January 1, 2027 | Risk assessment for significant ADMT uses; pre-use notice (2027) |
| SEC / CFTC / FinCEN | Actively enforced | Registration, fiduciary duties, AML/KYC for agents making financial decisions |
| EU AI Act Article 26 | High-risk deployer obligations effective August 2, 2026 | Risk management, human oversight, FRIA, incident reporting |
| NIST NCCoE agent identity standards | Comment period closed April 2, 2026 | Emerging best practice for agent identification and authorization |
The regulatory calendar did not adjust for the product launch.
If you deployed your first Managed Agent this week without reviewing these four frameworks, your compliance gap is already open.
What Are the KYA Five Pillars?
KYA is a five-pillar framework for AI agent governance that maps deployment architecture to regulatory obligation.9 Each pillar asks one question about the agent you deployed. Together, the five questions form the minimum viable governance surface for any autonomous system.
KYA-ID (Identity): Who deployed this agent, and can anyone prove it? The governing standard is the NIST NCCoE concept paper on software and AI agent identity, published February 5, 2026, which asks industry how organizations should “identify, authenticate and control software and artificial intelligence agents that can access enterprise systems and take actions with limited human supervision.”10
KYA-AUTH (Authority): What is this agent permitted to do, and where are the boundaries? The OWASP Top 10 for Agentic Applications establishes the “least agency” principle as the controlling design standard: grant only the minimum permissions necessary for the task.11
KYA-MON (Monitoring): What did this agent actually do, in a record you can produce in court? Under Federal Rule of Civil Procedure 37(e) and Zubulake v. UBS Warburg LLC, parties who fail to preserve electronically stored information face adverse inference at trial.12
KYA-IR (Incident Response): When something goes wrong, can you stop the agent and notify the right people fast enough? EU AI Act Article 26(5) requires deployers to “suspend the use” of high-risk AI systems when risks emerge and to inform providers and market surveillance authorities “without undue delay.”13
KYA-COMP (Compliance): Is the control environment documented well enough to survive an examiner? The EU AI Act’s deployer obligations for high-risk systems go into full effect on August 2, 2026, including conformity assessment, post-market monitoring, incident reporting, and --- for deployments processing personal data in sensitive sectors --- a Fundamental Rights Impact Assessment under Article 27, stacked with a GDPR Article 35 Data Protection Impact Assessment.14
How Does KYA-ID Apply to Managed Agents?
Managed Agents gives you a runtime identity for your agent --- an Agent configuration tied to a Session running inside a container --- but it does not give you a legal identity for your deployment. That gap is the one regulators care about.
The NIST NCCoE concept paper, whose comment period closed April 2, 2026, asks the foundational question: how should organizations attribute actions taken by software agents to the legal persons responsible for deploying them?10 Managed Agents provides API-key authentication and scoped permissions. It does not provide attribution from the runtime identity to a real-world human who authorized the deployment and is accountable for its outcomes.
Van Loon v. Department of the Treasury (5th Cir., November 2024) confirmed that deployers remain the proper enforcement target for agent-facilitated violations --- strict liability, regardless of intent.15 The NIST concept paper frames the problem precisely: organizations need solutions for “identification, authorization, auditing and non-repudiation of AI agents, as well as controls to prevent and mitigate prompt injection techniques.”10 Financial institutions solved the identity problem for human customers decades ago with CIP. The agent identity problem is structurally identical, and the standards work is underway.
If you cannot trace every action taken inside your Managed Agents sessions back to a named human at your company who authorized the deployment, KYA-ID is open.
What Does “Least Agency” Mean for Managed Agents Tool Scoping?
Managed Agents gives you narrower tools than raw API access, and that narrowness is a compliance asset --- if you use it deliberately.
The supported tools are Bash (shell commands in the container), file operations (read, write, edit, glob, grep), web search and fetch, and MCP servers for external tool providers.1 Each tool can be scoped at configuration. Every tool you enable expands the agent’s operational surface --- and with it, the deployer’s liability surface.
The OWASP Top 10 for Agentic Applications “least agency” principle provides the controlling standard of care: grant only the minimum permissions necessary for the specific task.11 California AB 316, effective January 1, 2026, codifies the other side of this equation: it prohibits any defendant from asserting that “AI autonomously caused the harm” as a defense.16 Every action a Managed Agent takes through a tool the deployer enabled is an action the deployer authorized.
The design obligation under Liriano v. Hobart Corp. (N.Y. 1998) extends this further: manufacturers are liable even when third parties misuse a product in foreseeable ways.17 Prompt injection is a known, cataloged attack vector.11 If your agent has web access and an attacker injects a malicious instruction through a fetched page, the question is not whether the agent was tricked. The question is whether the deployer designed against a foreseeable attack.
Every tool you enable on a Managed Agent is a tool you will have to justify if the agent misuses it. Start from zero and add only what the use case requires.
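The "start from zero" discipline can be made mechanical. A hedged sketch of a least-agency tool manifest: the field names and structure are ours, not Anthropic's configuration schema, but the pattern is the point: every enabled tool carries a documented justification and a named approver, and anything absent stays disabled.

```python
ALL_TOOLS = {"bash", "file_ops", "web_search", "web_fetch", "mcp"}

agent_manifest = {
    "agent": "report-summarizer",
    # tool -> justification and approver; tools not listed here stay disabled
    "tools": {
        "file_ops": {
            "justification": "read source documents mounted in the container",
            "approved_by": "j.doe",
        },
    },
}

def audit_tool_scope(manifest: dict) -> list[str]:
    """Return enabled tools missing a justification or approver (KYA-AUTH gaps)."""
    gaps = []
    for tool, meta in manifest["tools"].items():
        if not meta.get("justification") or not meta.get("approved_by"):
            gaps.append(tool)
    return gaps

# Everything not explicitly enabled and justified is off by default.
disabled = sorted(ALL_TOOLS - set(agent_manifest["tools"]))
```

A manifest like this doubles as the documentation the checklist at the end of this article calls for: the justification column is written before the tool is enabled, not reconstructed after an incident.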
Why Is Session Persistence a Spoliation Trap?
Anthropic’s promise that Managed Agents sessions persist through disconnections is a productivity feature --- and a litigation trap if you do not independently log what the agent did.
Sessions run autonomously for minutes or hours, executing multiple tool calls, and progress persists even through disconnections. Event history is stored server-side and can be fetched in full.1 This is useful for long-running tasks. It is dangerous for litigation readiness.
Under FRCP 37(e), a party that fails to take reasonable steps to preserve electronically stored information faces sanctions --- including adverse inference at trial, where the court instructs the jury to presume the missing evidence was unfavorable.12 Zubulake established that the duty to preserve attaches when litigation is “reasonably anticipated,” and it extends to information in the party’s custody or control.
Managed Agents session state lives on Anthropic’s infrastructure, subject to Anthropic’s retention policies --- not the deployer’s. If your litigation hold policy does not explicitly cover Managed Agents session logs exported to your own retention system, you have a gap. And if the only copy of your agent’s decision chain exists in someone else’s containers, you do not control whether that record survives until trial.
The practical play is straightforward: pull the server-sent events stream into your own SIEM or audit log infrastructure in real time. Do not rely on a third-party platform as your system of record for decisions your company will have to defend.
If the only copy of your agent’s decision history lives in Anthropic’s containers, you do not have a decision history. You have a dependency.
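The mirroring play described above can be sketched in a few lines. The event shape and the source of the stream are assumptions here (you would feed this from whatever SSE client you run against the session's event endpoint); the design point is that the copy is written to storage you control, in an append-only form:

```python
import json
import time

def mirror_events(event_iter, log_path: str) -> int:
    """Append each session event, with a local receipt timestamp, to a JSONL audit log.

    `event_iter` yields already-parsed event dicts. Writing the copy locally
    is the point: the record survives independent of the provider's retention
    policy. Returns the number of events mirrored.
    """
    count = 0
    with open(log_path, "a", encoding="utf-8") as log:
        for event in event_iter:
            record = {"received_at": time.time(), "event": event}
            log.write(json.dumps(record) + "\n")  # one event per line
            log.flush()  # do not buffer evidence
            count += 1
    return count
```

In production this would ship to your SIEM rather than a local file, but the retention posture is the same: the audit copy is under your litigation hold, not Anthropic's.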
How Should Your Organization Handle Kill-Switch Authority?
Before April 8, “kill the agent” was a policy you wrote on paper. After April 8, it is an API call --- which means your policy has to specify who can make it and how fast.
Managed Agents exposes session steering and interruption as runtime capabilities: you can send events to redirect a running agent or terminate its session entirely.1 That operationalizes incident response in a way that was not previously possible for most deployers building their own agent loops.
EU AI Act Article 26(5) requires deployers of high-risk AI systems to “monitor the operation of the high-risk AI system on the basis of the instructions for use” and, when the system presents a risk, to “inform the provider or distributor and the relevant market surveillance authority and suspend the use of that system” without undue delay.13 “Suspend the use” now has a literal product primitive. The regulator can ask why you did not use it.
The governance question is not technical. It is organizational: Who in your company has the authority to terminate a Managed Agents session? Is that authority assigned by role, or does someone need to find a manager? What is the maximum acceptable response time between identifying a risk and executing the kill? Is the response time documented, rehearsed, and auditable?
If the board does not know the answers to these questions, that is a Caremark issue --- the duty of loyalty requires directors to implement and actively monitor reporting systems for mission-critical risks, and AI agent governance qualifies for companies deploying agents at scale.18
The governance question is not whether you can stop the agent. It is whether your organization knows who has the authority to stop it, and how many seconds that takes.
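The role-based answer to those questions can be encoded directly in the code path that calls the termination API. A sketch under stated assumptions: `terminate_fn` stands in for the actual session-termination call, which is not reproduced here, and the role names are illustrative:

```python
import time

# Roles with standing kill-switch authority: assigned by role, not by name.
KILL_AUTHORIZED_ROLES = {"incident-commander", "compliance-officer"}

def request_termination(session_id, requester_role, terminate_fn, audit_log):
    """Terminate the session only if the requester's role is authorized.

    Every request, allowed or denied, is appended to `audit_log` with a
    timestamp, so the identify-to-kill response time is auditable afterward.
    """
    allowed = requester_role in KILL_AUTHORIZED_ROLES
    audit_log.append({
        "session": session_id,
        "role": requester_role,
        "allowed": allowed,
        "at": time.time(),
    })
    if allowed:
        terminate_fn(session_id)
    return allowed
```

The audit trail is what turns a rehearsed policy into evidence: when a regulator asks how fast you suspended use, the answer is a timestamp, not a recollection.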
What Documentation Must Be Ready by August 2, 2026?
The EU AI Act’s deployer obligations for high-risk systems take full effect on August 2, 2026, and Managed Agents deployments in sensitive sectors will likely qualify.
The obligations are comprehensive: risk management systems, data governance, technical documentation, record-keeping, transparency to affected persons, human oversight, accuracy, robustness, cybersecurity, conformity assessment, post-market monitoring, and incident reporting.4 For deployers processing personal data in high-risk use cases --- creditworthiness assessment, insurance pricing, hiring, public-sector decision-making --- two separate impact assessments are required before deployment: a Fundamental Rights Impact Assessment under AI Act Article 27 and a Data Protection Impact Assessment under GDPR Article 35.14
The penalty structure stacks. The AI Act imposes fines up to 35 million euros or 7% of global annual turnover.4 GDPR adds up to 20 million euros or 4% of global turnover separately.19 A single Managed Agent processing personal data in Europe can trigger violations under both frameworks simultaneously.
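The stacking is worth making concrete. A back-of-envelope sketch, assuming each regime's ceiling applies at the greater of the fixed amount or the turnover percentage (the "whichever is higher" formulation both instruments use for their most serious violations); this is exposure arithmetic, not a prediction of actual fines:

```python
def max_exposure(turnover_eur: float) -> float:
    """Worst-case stacked ceiling: AI Act (EUR 35M or 7% of global annual
    turnover, whichever is higher) plus GDPR (EUR 20M or 4%, whichever is
    higher) for a single deployment violating both frameworks."""
    ai_act = max(35_000_000, 0.07 * turnover_eur)
    gdpr = max(20_000_000, 0.04 * turnover_eur)
    return ai_act + gdpr

# A deployer with EUR 1B global turnover faces a combined ceiling of EUR 110M.
```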
California’s regulatory calendar runs in parallel. Privacy risk assessments for significant automated decision-making uses are required starting January 1, 2026. The ADMT implementing regulations --- including the detailed pre-use notice requirements for automated decision-making technology --- take effect January 1, 2027.5 AB 316 design-defect liability is already live.16
August 2, 2026 is not a planning deadline. It is the date your examiner expects the binder to be complete.
How Do the KYA Pillars Map to Claude Managed Agents?
| KYA Pillar | Managed Agents surface | What the deployer must do |
|---|---|---|
| KYA-ID (Identity) | Agent/Session identity, API-key auth, scoped permissions | Map each Session back to a named human; document the authorization chain |
| KYA-AUTH (Authority) | Bash, file ops, web search/fetch, MCP servers --- each scoped at configuration | Enable only the tools the use case requires; document why each is necessary |
| KYA-MON (Monitoring) | Session persistence, SSE event stream, in-container file system | Mirror the event stream to your own SIEM; preserve under your litigation hold |
| KYA-IR (Incident Response) | Session steering, interruption, and termination via API | Assign kill-switch authority by role; define max response time; rehearse |
| KYA-COMP (Compliance) | Documented beta terms, concept-paper-aligned primitives | Produce EU AI Act Article 27 FRIA + GDPR Article 35 DPIA before go-live |
What Should You Do Before Your First Production Session?
Before you run a paying customer’s task through Managed Agents, complete these seven items:
- Map every tool to a business justification. For each of the four built-in tool categories (Bash, file operations, web search/fetch, MCP servers), document why the use case requires it and who authorized enabling it.
- Mirror the event stream. Pull the SSE stream into your own audit log infrastructure in real time. This is your litigation-ready record, not the session state on Anthropic’s servers.
- Assign kill-switch authority. Define by role --- not by name --- who can terminate a session. Document the maximum acceptable response time.
- Run the EU AI Act Article 27 FRIA if any EU-based user or data subject is in scope for a high-risk use case.
- Run the GDPR Article 35 DPIA if any personal data is processed.
- Conduct the CCPA privacy risk assessment if any California consumer’s data is processed through automated decision-making technology.
- Document the KYA Five Pillars as your baseline governance framework and present it to the board before the deployment is reviewed.
What Happens to Companies That Deploy Claude Managed Agents Without Governance?
Anthropic did not create AI agent liability on April 8, 2026. Courts have been holding AI systems to products liability standards since Garcia v. Character Technologies in May 2025.20 The SEC has been enforcing against automated advisory platforms since Wealthfront’s $250,000 penalty in 2018.21 The Caremark oversight duty has applied to mission-critical compliance risks since Marchand v. Barnhill in 2019.18
What Anthropic created is the runtime that makes the liability easy enough to incur that the universe of deployers just expanded by an order of magnitude. The KYA framework now has product surfaces to map against, not just hypotheticals to discuss. Every pillar has a concrete Managed Agents feature to audit. Every gap has a specific remediation.
The companies that deploy Managed Agents with a governance framework in place --- before the first production session, not after the first enforcement action --- will have two advantages: lower regulatory risk and a documented control environment that becomes a competitive moat as the market matures.
The ones that ship first and govern later will be building the case studies that the next wave of articles cites.
Next in this series: The Memory tool creates a novel data-mapping problem for privacy compliance. Read “You Cannot Delete What You Do Not Know the Agent Stored” for the right-to-delete failure mode and the remediation architecture deployers need.
Disclaimer: This article provides general information for educational purposes only and does not constitute legal advice. AI agent governance regulation is evolving rapidly. Consult qualified legal counsel for advice on your specific situation.
Footnotes
1. Anthropic, “Claude Managed Agents overview,” Claude Platform Documentation, https://platform.claude.com/docs/en/managed-agents/overview (accessed April 10, 2026).
2. Anthropic, “Claude Managed Agents: get to production 10x faster,” Claude Blog (April 8, 2026), https://claude.com/blog/claude-managed-agents. Early adopters include Notion (parallel coding and content production), Rakuten (specialist agents across product, sales, marketing, and finance via Slack and Teams), Asana (AI Teammates inside projects), Vibecode (prompt-to-deployed-app), and Sentry (debugging agents that write patches and open PRs).
3. Restatement (Third) of Agency § 1.01 (2006). For extended analysis of why AI systems are not legal agents and why this creates more deployer liability, not less, see Chante Eliaszadeh, “Not an Agent. Not a Defense: Seven Doctrines That Already Hold AI Deployers Liable,” Astraea Counsel (March 18, 2026).
4. Regulation (EU) 2024/1689 (EU Artificial Intelligence Act). Full deployer obligations for high-risk systems effective August 2, 2026. Article 26 (deployer obligations): https://artificialintelligenceact.eu/article/26/. Penalties: up to 35 million euros or 7% of global annual turnover.
5. California Privacy Protection Agency, Final Regulations on Privacy Risk Assessments, Cybersecurity Audits, and Automated Decision-Making Technology (finalized October 9, 2025). Privacy risk assessments effective January 1, 2026. ADMT implementing regulations (pre-use notice requirements) effective January 1, 2027. See Wiley Rein LLP, “California Finalizes Pivotal CCPA Regulations on AI, Cyber Audits, and Risk Governance,” https://www.wiley.law/alert-California-Finalizes-Pivotal-CCPA-Regulations-on-AI-Cyber-Audits-and-Risk-Governance.
6. For the three-framework analysis (SEC, CFTC, FinCEN) applied to AI agents in DeFi, see Chante Eliaszadeh, “AI Agent Liability in DeFi: Who’s Responsible When the Bot Trades?,” Astraea Counsel (April 5, 2026).
7. Anthropic, “Pricing,” Claude Platform Documentation, https://platform.claude.com/docs/en/about-claude/pricing (accessed April 10, 2026). Session runtime billed at $0.08 per session-hour, metered to the millisecond, accruing only while the session status is “running.” Idle time (waiting for user input or tool confirmation) does not count.
8. Anthropic, “Claude Managed Agents overview,” supra note 1. The four core concepts: Agent (model, system prompt, tools, MCP servers, skills), Environment (configured container template), Session (running instance), and Events (messages exchanged via SSE).
9. The KYA (Know Your Agent) framework was developed by Chante Eliaszadeh at Astraea Counsel to provide a systematic governance model for AI agent deployment. For the foundation of KYA in deployer liability doctrine, see Eliaszadeh, “Not an Agent. Not a Defense,” supra note 3.
10. National Institute of Standards and Technology, National Cybersecurity Center of Excellence, “Accelerating the Adoption of Software and Artificial Intelligence Agent Identity and Authorization” (concept paper, February 5, 2026), https://csrc.nist.gov/pubs/other/2026/02/05/accelerating-the-adoption-of-software-and-ai-agent/ipd. Public comment period closed April 2, 2026.
11. OWASP Foundation, “OWASP Top 10 for Agentic Applications,” https://owasp.org/www-project-top-10-for-large-language-model-applications/. The “least agency” principle requires that AI agents be granted only the minimum permissions and access necessary to complete their assigned tasks.
12. Fed. R. Civ. P. 37(e); Zubulake v. UBS Warburg LLC, 229 F.R.D. 422, 431-32 (S.D.N.Y. 2004) (establishing the duty to preserve electronically stored information when litigation is reasonably anticipated and the framework for adverse inference sanctions when preservation fails).
13. Regulation (EU) 2024/1689, Article 26(5). Deployers “shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, when relevant, inform providers in accordance with Article 72.” Where the system presents a risk, deployers shall “without undue delay, inform the provider or distributor and the relevant market surveillance authority and suspend the use of that system.”
14. Regulation (EU) 2024/1689, Article 27 (Fundamental Rights Impact Assessment); Regulation (EU) 2016/679 (GDPR), Article 35 (Data Protection Impact Assessment). For high-risk AI systems processing personal data in sensitive use cases, both assessments are required before deployment.
15. Van Loon v. Department of the Treasury, No. 23-50669 (5th Cir. 2024) (confirming that deployers remain the proper enforcement target for agent-facilitated violations under IEEPA, with strict liability regardless of intent).
16. California AB 316, effective January 1, 2026, codifying that no defendant may assert that AI autonomously caused harm as a defense to liability.
17. Liriano v. Hobart Corp., 92 N.Y.2d 232 (1998) (holding manufacturer liable for injuries caused by foreseeable product misuse, including modification by third parties, when the manufacturer failed to warn against known risks).
18. In re Caremark International Inc. Derivative Litigation, 698 A.2d 959 (Del. Ch. 1996); Marchand v. Barnhill, 212 A.3d 805 (Del. 2019) (establishing that directors have a non-delegable duty to implement and actively monitor reporting systems for “mission critical” compliance risks, with liability sounding in duty of loyalty, not care --- business judgment rule does not apply).
19. Regulation (EU) 2016/679 (GDPR), Article 83 (penalties up to 20 million euros or 4% of global annual turnover for the most serious violations).
20. Garcia v. Character Technologies, Inc. (M.D. Fla., May 2025) (holding that an AI chatbot is a “product” for products liability purposes and that design defect claims are actionable for the absence of safety guardrails).
21. U.S. Securities and Exchange Commission, In the Matter of Wealthfront Advisers LLC, Investment Advisers Act Release No. 5086 (December 21, 2018) ($250,000 penalty for false claims about tax-loss harvesting monitoring).