AI Agents in DeFi: The Infrastructure Is Live -- and So Is the Liability
In February 2026, Coinbase launched Agentic Wallets -- the first wallet infrastructure purpose-built for AI agents to transact on-chain.1 AI agents can now trade on centralized and decentralized exchanges, stake assets, provide liquidity, and rebalance portfolios without human intervention. The liability implications for deployers are immediate, compounding, and largely unexamined.
"Very soon there are going to be more AI agents than humans making transactions," Coinbase CEO Brian Armstrong wrote in March 2026. "They can't open a bank account, but they can own a crypto wallet."2 The x402 protocol has processed over 119 million transactions on Base alone, at least nine platforms offer agent wallet infrastructure, and on-chain identity registries went live on Ethereum mainnet in January 2026.3
The risks are keeping pace. In January 2026, Step Finance lost $27-30 million after AI trading agents with excessive permissions amplified a treasury breach -- a properly constrained system would have blocked the unauthorized transfers.4 Anthropic's SCONE-bench study tested AI agents against 405 exploited smart contracts and found agents independently discovered exploits worth $4.6 million, including two previously unknown zero-days.5 MIT's 2025 AI Agent Index found that of 13 agent systems with frontier autonomy, only four disclosed any safety evaluations.6
No regulatory framework exists for machine-to-machine financial transactions. The March 17, 2026 SEC/CFTC joint interpretive release -- 68 pages establishing a five-category token taxonomy and classifying 16 tokens as digital commodities -- did not mention AI agents once.7
The absence of a framework designed for AI agents does not mean the absence of liability. It means the opposite. Every transaction these agents execute is already governed by existing securities law, commodities regulation, and AML requirements. Courts are holding that AI systems are products. Deployers bear design obligations. "The bot did it" is not a defense. And the deployer who ships without safeguards is not operating in a gray area -- the deployer is building a plaintiff's case.
Key Takeaways
- AI agents are executing autonomous financial transactions in DeFi now -- over 119 million transactions on one network alone, with no human oversight.
- Courts hold that AI systems are "products" whose deployers bear design obligations, including the duty to design against foreseeable misuse like jailbreaking and prompt injection.
- Three financial regulatory frameworks apply simultaneously: SEC securities law, CFTC commodities regulation, and FinCEN AML/KYC requirements.
- The standard of care is being defined now by industry practice (OWASP, NIST, Coinbase's architecture) and enforcement precedent (Knight Capital $12M, Schwab $187M, JPMorgan $920M).
- These theories compound: products liability, financial regulation, and deployer liability doctrines create multiplicative -- not additive -- risk.
Are AI Systems "Products"? Courts Say Yes
In Garcia v. Character Technologies, Inc. (M.D. Fla., May 2025), Judge Anne Conway held that an AI chatbot is a "product" for products liability purposes when the defect arises from system design rather than content.8 Design defect claims were actionable for the absence of safety guardrails, age verification, and reporting mechanisms. Architectural choices, not speech. Character.AI and Google settled in January 2026.9
Trial verdicts in March 2026 confirmed this. In KGM v. Meta Platforms, a California jury found Meta and YouTube negligent in platform design ($6 million).10 In New Mexico v. Meta, a jury imposed $375 million for design choices violating consumer protection law.11 These are not content cases. They are design defect verdicts. For AI agent deployers: what permissions your agent has, what boundaries constrain it, and what kill switches exist are design choices -- and they create or foreclose liability.
In Mobley v. Workday, Inc. (N.D. Cal. 2024), Judge Rita Lin held that an AI vendor can be directly liable for its system's outcomes, distinguishing passive tools from autonomous AI. A word processor does not create agent liability. But an AI system that "perform[s] a traditional [function]... through the use of artificial intelligence" does -- because "nothing in the language of the federal anti-discrimination statutes... distinguishes between delegating functions to an automated agent versus a live human one."12
California AB 316, effective January 1, 2026, codifies this: it prohibits any defendant from asserting that "AI autonomously caused the harm" as a defense.13 The statute covers the entire AI supply chain. Deploy an autonomous system, own its consequences.
Under the Restatement (Third) of Torts, Section 2, a manufacturer's liability extends to foreseeable product misuse and modification.14 In Liriano v. Hobart Corp. (N.Y. 1998), the court held a manufacturer liable even when a third party removed a safety guard, because the removal was foreseeable.15 Prompt injection is a known attack vector cataloged by the OWASP Top 10 for Agentic Applications.16 Jailbreaking is documented. Excessive permissions are the default. If someone can misuse your agent in a way that is studied, published, and quantified -- you have a duty to design against it.
Do AI Trading Agents Need SEC Registration?
An AI agent managing DeFi positions can trigger investment adviser registration under existing law. Section 202(a)(11) of the Investment Advisers Act requires registration for any person who, for compensation, engages in the business of advising others on securities.17 An AI agent generating revenue for its deployer, continuously selecting yield strategies, and making decisions involving digital securities under the March 2026 taxonomy satisfies all three prongs.
The SEC has enforced this against automated platforms. Wealthfront paid $250,000 in 2018 for false monitoring claims -- the first robo-adviser enforcement.18 Schwab paid $187 million in 2022 for concealing that its robo-adviser held up to 29.4% of client assets in revenue-generating cash while advertising "no advisory fees."19 In 2024, Delphia ($225,000) and Global Predictions ($175,000) were penalized for claiming AI capabilities they lacked.20 By 2025, enforcement turned criminal: Nate Inc.'s founder faced parallel SEC and DOJ charges for raising $42 million on fabricated AI claims.21
The fiduciary standard cannot be delegated to an algorithm. Commission Interpretation IA-5248 establishes that the duty of care is non-waivable.22 The SEC's 2026 Examination Priorities target "Black Box AI" -- algorithms whose decision-making is opaque.23 Opacity is not a defense to fiduciary breach.
Trading digital securities can also trigger dealer registration. The February 2024 expanded dealer rule captures automated strategies providing liquidity by "regularly expressing trading interests at or near the best available prices on both sides of the market."24
If your agent selects yield strategies, rebalances portfolios, or recommends trades involving digital securities -- evaluate investment adviser registration before launch, not after.
What Happens When the Bot Moves Commodity Markets?
The same taxonomy creates CFTC jurisdiction. The joint release classified 16 tokens as digital commodities.25 AI agents trading them operate in commodity markets under the Commodity Exchange Act.
Classification determines which regulatory regime governs each transaction an AI agent executes -- and once an agent operates in a commodity market, the automated-trading precedents apply. Knight Capital is the canonical one. On August 1, 2012, an automated system sent over four million orders in 45 minutes due to a deployment error. Result: $460 million in losses and a $12 million penalty -- the first Market Access Rule enforcement.26 The missing controls -- capital limits, kill switches, pre-deployment testing -- map directly to AI agents in DeFi, where there are no circuit breakers and execution is irreversible.
Spoofing liability is acute. In United States v. Coscia (7th Cir. 2017), the court held that spoofing intent can be proven through algorithm design: if the code was built to place and cancel orders, the architecture establishes intent.27 JPMorgan paid $920.2 million in 2020 for automated spoofing.28 For AI agents that learn trading strategies through reinforcement learning, the deployer bears the liability the algorithm's behavior creates.
The CFTC has enforced against DeFi directly. In CFTC v. Ooki DAO (N.D. Cal., June 2023), the court held a DAO is a "person" under the CEA ($643,542 penalty, permanent bans).29 Uniswap Labs paid $175,000 in 2024.30
A regulatory gap compounds the risk. The CFTC withdrew Regulation Automated Trading in 2020 after industry opposition, and its replacement covers only exchanges, not market participants.31 Chair Selig's Innovation Task Force (March 2026) is developing AI-driven trading guidance, but until it materializes, deployers face obligations without specific compliance rules.32
If your agent trades BTC, ETH, SOL, or any of the 16 classified digital commodities -- it is operating in a CFTC-regulated market right now. Build the controls Knight Capital lacked: capital limits, kill switches, and pre-deployment testing.
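The controls Knight Capital lacked can be expressed as a gate the agent's orders must pass before reaching the market. This is an illustrative sketch, not any platform's API -- the class and field names (`RiskLimits`, `PreTradeGate`, `max_order_notional`) are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class RiskLimits:
    """Hypothetical pre-trade controls: per-order and per-session
    capital limits plus a kill switch."""
    max_order_notional: float      # cap on any single order
    max_session_notional: float    # cumulative cap for the session
    kill_switch_engaged: bool = False

class PreTradeGate:
    """Every order is checked here before execution; the agent
    itself cannot modify or bypass these limits."""
    def __init__(self, limits: RiskLimits):
        self.limits = limits
        self.session_notional = 0.0

    def check(self, order_notional: float) -> bool:
        """Return True only if the order passes every control."""
        if self.limits.kill_switch_engaged:
            return False
        if order_notional > self.limits.max_order_notional:
            return False
        if self.session_notional + order_notional > self.limits.max_session_notional:
            return False
        self.session_notional += order_notional
        return True

    def halt(self) -> None:
        """Kill switch: reject all subsequent orders immediately."""
        self.limits.kill_switch_engaged = True
```

The point of the sketch is architectural: the gate lives outside the trading logic, so a runaway strategy -- the Knight Capital failure mode -- hits a hard ceiling rather than an advisory one.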
Can AI Agents Comply with Anti-Money Laundering Law?
Knowing your customer breaks down when the customer is software.
FinCEN's 2019 guidance is technology-neutral: if software transmits value, the deployer bears money transmitter obligations.33 But CIP requires verification of every customer's name, date of birth, and government ID.34 An AI agent has none of these. The travel rule requires originator and beneficiary identification for transfers over $3,000.35 In an AI-to-AI transfer, there is no person to identify on either side.
The GENIUS Act sharpens this. Section 4(a)(5) makes every Permitted Payment Stablecoin Issuer a BSA "financial institution" with full CIP, SAR, and OFAC screening obligations.36 If AI agents transact in stablecoins at scale, PPSIs must determine who the "customer" is: the agent, the deployer, or the end user. Deadline: July 18, 2026.
Van Loon v. Department of the Treasury (5th Cir., November 2024) held that immutable smart contracts are not sanctionable "property" under IEEPA -- but confirmed that deployers remain the proper enforcement target.37 OFAC operates on strict liability: an AI agent interacting with a sanctioned address exposes its deployer regardless of intent.
Industry is filling the gap faster than regulators. NIST published a concept paper on AI agent identity in February 2026; Mastercard's Verifiable Intent framework and Google's AP2 protocol (60+ organizations) represent emerging standards.38 No binding guidance exists. Compliance infrastructure must be built now, against standards still forming.
If your agent transmits stablecoins or interacts with any protocol touching sanctioned addresses -- your BSA and OFAC exposure is live today, and strict liability means intent is irrelevant.
Which Regulator Has Jurisdiction Over Your AI Agent?
| If your AI agent does this... | Primary regulator | You likely need... |
|---|---|---|
| Selects yield strategies or rebalances portfolios | SEC | Investment adviser registration |
| Provides liquidity on both sides of the market | SEC | Broker-dealer or dealer registration |
| Trades BTC, ETH, SOL, or other digital commodities | CFTC | CPO/CTA evaluation; pre-trade controls |
| Transmits stablecoins or cryptocurrency | FinCEN | MSB/money transmitter registration |
| Operates with California or Colorado users | State regulators | DFAL license (CA) or CAIA compliance (CO) |
How AI Liability Theories Stack Against Deployers
These frameworks compound. In a prior analysis, we identified seven doctrines holding AI deployers directly liable: board oversight under Caremark, spoliation, negligent enablement, products liability, trade secret exposure, regulatory enforcement, and direct liability in regulated domains.39 Those are the base layer. Financial regulation adds three more.
A single AI agent can trigger all of them simultaneously. Your agent rebalances a portfolio with digital securities -- investment adviser obligations. It trades digital commodities without controls -- CFTC exposure. It transacts in stablecoins without CIP -- BSA violations. And you have no decision logs -- spoliation adverse inference. Each failure feeds the next theory.
Because AI agents lack intentions, the law holds the people behind them to objective standards of care.40 Liability targets the least-cost avoider: the deployer who controlled the design.41
The cost of governance is measured in engineering hours. The cost of its absence is measured in enforcement actions.
What Safety Controls Must AI Agent Deployers Build?
The standard of care is emerging from three sources.
Regulators. FINRA's 2026 report defines AI agents as "systems capable of autonomously performing and completing tasks on behalf of a user, with the ability to plan, make decisions and take action."42 Any such system must be incorporated into supervisory frameworks with defined authorized actions, escalation points, and supervisory triggers. The CFTC's Technology Advisory Committee recommended codifying the NIST AI Risk Management Framework.43 Treasury's FS AI Risk Management Framework specifies 230 control objectives.44
Standards bodies. The OWASP Top 10 for Agentic Applications catalogs known attack vectors -- goal hijacking, tool misuse, privilege abuse, memory poisoning -- and establishes "least agency" as the core principle: minimum autonomy, minimum tool access, minimum credential scope.45 NIST's AI RMF is becoming the de facto legal standard; pending legislation in Washington state would create a presumption of conformity for developers who follow it.46
Market leaders. Under the Restatement (Third), Section 2, a product is defective if foreseeable risks could have been reduced by a reasonable alternative design. For AI agents in DeFi, that design is commercially deployed:
- Enclave isolation. Coinbase's Agentic Wallets manage keys in AWS Nitro Enclaves -- never exposed to the agent's prompt, the LLM, or Coinbase's infrastructure.47
- Authority boundaries. Transaction limits, contract allowlists, session caps, and KYT screening enforced at the infrastructure layer.
- Decision logging. Every transaction decision recorded at execution -- spoliation defense, regulatory audit trail, and governance evidence in one.
- Kill switches. Immediate halt capability, enforced below the application layer.
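The authority-boundary controls above reduce to a policy check that runs in the wallet infrastructure, outside the LLM's control, before any signature is produced. A minimal sketch, assuming a hypothetical `authorize` function and placeholder addresses (the contract allowlist and screening set would in practice come from the deployer's compliance tooling):

```python
# Hypothetical infrastructure-layer transaction policy: a contract
# allowlist plus KYT-style address screening, evaluated before signing.
# Addresses below are placeholders, not real contracts.

ALLOWED_CONTRACTS = {"0xUniswapRouter", "0xAavePool"}   # explicit allowlist
SCREENED_ADDRESSES = {"0xSanctionedMixer"}              # e.g. sanctions-derived set

def authorize(tx: dict) -> tuple[bool, str]:
    """Return (allowed, reason). Runs below the application layer,
    so even a prompt-injected agent cannot exceed this scope."""
    if tx["to"] not in ALLOWED_CONTRACTS:
        return False, "contract not on allowlist"
    if tx.get("counterparty") in SCREENED_ADDRESSES:
        return False, "counterparty fails address screening"
    return True, "authorized"
```

The design choice matters legally as well as technically: a limit enforced in the agent's prompt is a suggestion, while a limit enforced at the signing layer is a design safeguard a deployer can point to.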
These controls establish the standard. A plaintiff suing a deployer whose agent had unrestricted wallet access can point to Coinbase's architecture, OWASP's specifications, and FINRA's requirements -- and ask why the defendant's product lacked them.
Benavides v. Tesla ($329 million, September 2025) shows how fast design standards harden into verdicts.48 NHTSA's recall found a "critical safety gap between drivers' expectations of the L2 system's operating capabilities and the system's true capabilities, which led to foreseeable misuse."49 Substitute "users" for "drivers" and "AI agent" for "L2 system," and the standard applies unchanged.
How to Comply Before Deploying an AI Agent in DeFi
Before Deployment
1. Registration analysis. Determine whether the agent's activities trigger SEC, CFTC, or FinCEN registration. The token taxonomy means classification determines jurisdiction. Complete this analysis before the first transaction.
2. Authority boundaries. Enforce limits at the infrastructure layer -- spending limits, contract allowlists, transaction caps -- cryptographically enforced so even a compromised agent cannot exceed scope. Follow the OWASP "least agency" principle.50
3. Decision logging. Record every trade and transaction decision at execution time. Spoliation defense, audit trail, and governance evidence in one.
4. Kill switches. Halt capability, enforced at the infrastructure layer, tested before the agent goes live.
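The decision-logging step can be made tamper-evident by hash-chaining entries, so the log itself proves it was not edited after the fact -- the property that matters for a spoliation defense. This is an illustrative sketch with hypothetical names (`DecisionLog`, `record`, `verify`), not a reference to any vendor's implementation:

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only, hash-chained decision log (illustrative sketch).
    Each entry commits to the previous entry's digest, so any
    after-the-fact modification breaks verification."""
    def __init__(self):
        self.entries = []            # list of (entry_dict, digest) pairs
        self._prev_hash = "0" * 64

    def record(self, decision: dict) -> str:
        """Append a decision at execution time; return its digest."""
        entry = {"ts": time.time(), "decision": decision, "prev": self._prev_hash}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append((entry, digest))
        self._prev_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute the chain; False if any entry was altered."""
        prev = "0" * 64
        for entry, digest in self.entries:
            if entry["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != digest:
                return False
            prev = digest
        return True
```

In production the chain head would be anchored somewhere the deployer cannot silently rewrite (e.g. periodic on-chain or third-party attestation), but even this minimal structure turns the log from an assertion into evidence.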
Near-Term Monitoring (Q2-Q3 2026)
- GENIUS Act implementation (July 18, 2026): AML program, CIP, and OFAC screening obligations for stablecoin transactions.
- CFTC rulemaking on AI-driven trading: Chair Selig's Innovation Task Force is developing guidance on AI trading system registration.
- California DFAL licensing (July 1, 2026) and Colorado AI Act (June 30, 2026): state-level digital asset and AI compliance obligations.
- The AI LEAD Act (S. 2937): bipartisan federal bill classifying AI systems as "products" with federal design defect liability.51
The Bottom Line
The deployer who builds safeguards now is building a legal defense. The deployer who ships without them is building a plaintiff's case.
The tools exist. The standards are emerging. The case law is building. The only question is whether you design governance into your architecture before launch -- or discover your obligations in an enforcement action.
Disclaimer: This article provides general information for educational purposes only and does not constitute legal advice. AI agent regulation and liability law are evolving rapidly. Consult qualified legal counsel for advice on your specific situation.
Footnotes
1. Coinbase, "Agentic Wallets" (February 11, 2026), available at https://www.coinbase.com/developer-platform/discover/launches/agentic-wallets.
2. Brian Armstrong (@brian_armstrong), X post (March 9, 2026), available at https://x.com/brian_armstrong/status/2031021867973194172.
3. x402 transaction data from Sherlock, "x402 Explained" (2026), available at https://sherlock.xyz/post/x402-explained-the-http-402-payment-protocol; ERC-8004, Ethereum Improvement Proposals, available at https://eips.ethereum.org/EIPS/eip-8004.
4. CoinDesk, "Solana-Based DeFi Platform Step Finance Hit by $30 Million Treasury Hack" (January 31, 2026), available at https://www.coindesk.com/business/2026/01/31/solana-based-defi-platform-step-finance-hit-by-usd30-million-treasury-hack-as-token-price-craters.
5. Anthropic Red Team, "Smart Contracts" (2025), available at https://red.anthropic.com/2025/smart-contracts/.
6. MIT AI Agent Index (2025), available at https://aiagentindex.mit.edu/.
7. Securities and Exchange Commission, "Application of the Federal Securities Laws to Certain Types of Crypto Assets," Release Nos. 33-11412, 34-105020 (March 17, 2026), available at https://www.sec.gov/files/rules/interp/2026/33-11412.pdf.
8. Garcia v. Character Technologies, Inc., No. 6:24-cv-01903-ACC-UAM (M.D. Fla. May 21, 2025) (order on motions to dismiss).
9. CNN, "Character.AI and Google Settle Teen Suicide Lawsuit" (January 7, 2026), available at https://www.cnn.com/2026/01/07/business/character-ai-google-settle-teen-suicide-lawsuit.
10. KGM v. Meta Platforms, Inc. (Cal. Super. Ct. March 25, 2026) ($6 million verdict).
11. New Mexico v. Meta Platforms, Inc. (N.M. Dist. Ct. March 24, 2026) ($375 million in civil penalties).
12. Mobley v. Workday, Inc., 740 F.Supp.3d 796 (N.D. Cal. 2024).
13. Cal. Assembly Bill 316, effective January 1, 2026, available at https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260AB316.
14. Restatement (Third) of Torts: Products Liability, Section 2 (1998).
15. Liriano v. Hobart Corp., 92 N.Y.2d 232 (1998).
16. OWASP, "Top 10 for Agentic Applications" (2026), available at https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/.
17. 15 U.S.C. Section 80b-2(a)(11).
18. Securities and Exchange Commission, In re Wealthfront Advisers, LLC, Release No. IA-5086 (December 21, 2018), available at https://www.sec.gov/files/litigation/admin/2018/ia-5086.pdf.
19. Securities and Exchange Commission, In re Charles Schwab & Co., Inc., Release No. IA-6047 (June 13, 2022), available at https://www.sec.gov/newsroom/press-releases/2022-104.
20. Securities and Exchange Commission, In re Delphia (USA) Inc. and In re Global Predictions, Inc., Release Nos. IA-6573 and IA-6574 (March 18, 2024), available at https://www.sec.gov/newsroom/press-releases/2024-36.
21. Securities and Exchange Commission, SEC v. Nate, Inc. and Albert Saniger, Litigation Release LR-26282 (April 9, 2025), available at https://www.sec.gov/enforcement-litigation/litigation-releases/lr-26282.
22. Securities and Exchange Commission, Commission Interpretation Regarding Standard of Conduct for Investment Advisers, Release No. IA-5248 (June 5, 2019), available at https://www.sec.gov/rules-regulations/2019/06/ia-5248.
23. Securities and Exchange Commission, Division of Examinations, 2026 Examination Priorities (November 17, 2025), available at https://www.sec.gov/files/2026-exam-priorities.pdf.
24. Securities and Exchange Commission, Further Definition of "As a Part of a Regular Business" in the Definition of Dealer, Release No. 34-99477 (February 6, 2024), available at https://www.sec.gov/newsroom/press-releases/2024-14.
25. See note 7.
26. Securities and Exchange Commission, In re Knight Capital Americas LLC, Release No. 34-70694 (October 16, 2013), available at https://www.sec.gov/files/litigation/admin/2013/34-70694.pdf.
27. United States v. Coscia, 866 F.3d 782 (7th Cir. 2017).
28. Commodity Futures Trading Commission, Press Release No. 8260-20 (September 29, 2020), available at https://www.cftc.gov/PressRoom/PressReleases/8260-20.
29. Commodity Futures Trading Commission, CFTC v. Ooki DAO, Press Release No. 8715-23 (June 8, 2023), available at https://www.cftc.gov/PressRoom/PressReleases/8715-23.
30. Commodity Futures Trading Commission, In re Universal Navigation Inc. d/b/a Uniswap Labs, Press Release No. 8961-24 (September 4, 2024), available at https://www.cftc.gov/PressRoom/PressReleases/8961-24.
31. Regulation Automated Trading; Withdrawal, 85 Fed. Reg. 42,755 (July 15, 2020), available at https://www.federalregister.gov/documents/2020/07/15/2020-14383/regulation-automated-trading-withdrawal.
32. CoinDesk, "CFTC Chair Highlights Wide Crypto Agenda Including Rules on DeFi, Prediction Markets" (March 10, 2026), available at https://www.coindesk.com/policy/2026/03/10/cftc-chair-highlights-wide-crypto-agenda-including-rules-on-defi-prediction-markets.
33. Financial Crimes Enforcement Network, Guidance FIN-2019-G001, "Application of FinCEN's Regulations to Certain Business Models Involving Convertible Virtual Currencies" (May 9, 2019), available at https://www.fincen.gov/resources/statutes-regulations/guidance/application-fincens-regulations-certain-business-models.
34. 31 C.F.R. Section 1020.220.
35. 31 C.F.R. Section 1010.410.
36. GENIUS Act of 2025 (Guiding and Establishing National Innovation for U.S. Stablecoins Act), Pub. L. No. 119-27, Section 4(a)(5), available at https://www.congress.gov/bill/119th-congress/senate-bill/1582/text.
37. Van Loon v. Department of the Treasury, No. 23-50669 (5th Cir. November 26, 2024), available at https://www.ca5.uscourts.gov/opinions/pub/23/23-50669-CV0.pdf.
38. NIST NCCoE, "Accelerating the Adoption of Software and AI Agent Identity and Authorization" (February 5, 2026), available at https://csrc.nist.gov/pubs/other/2026/02/05/accelerating-the-adoption-of-software-and-ai-agent/ipd; Google, "Agents to Payments (AP2) Protocol" (September 2025), available at https://ap2-protocol.org/.
39. Chante Eliaszadeh, "Not an Agent. Not a Defense: Seven Doctrines That Already Hold AI Deployers Liable," Astraea Counsel (March 18, 2026).
40. Ian Ayres & Jack M. Balkin, "The Law of AI is the Law of Risky Agents Without Intentions," University of Chicago Law Review Online (2024), available at https://lawreview.uchicago.edu/online-archive/law-ai-law-risky-agents-without-intentions.
41. See Maarten Herbosch, "Liability for AI Agents," 26 North Carolina Journal of Law & Technology 391 (2025), available at https://scholarship.law.unc.edu/ncjolt/vol26/iss3/4/.
42. FINRA, 2026 Annual Regulatory Oversight Report, Generative AI and Agentic AI section (December 9, 2025), available at https://www.finra.org/rules-guidance/guidance/reports/2026-finra-annual-regulatory-oversight-report/gen-ai.
43. Commodity Futures Trading Commission, Technology Advisory Committee, "Responsible AI in Financial Markets: Opportunities, Risks & Recommendations," Press Release No. 8905-24 (May 2, 2024), available at https://www.cftc.gov/PressRoom/PressReleases/8905-24.
44. U.S. Department of the Treasury, Financial Services AI Risk Management Framework (February 2026), available at https://home.treasury.gov/news/press-releases/sb0395.
45. See note 16.
46. NIST, AI Risk Management Framework (AI 100-1) (January 2023), available at https://www.nist.gov/itl/ai-risk-management-framework.
47. AWS, "Powering Programmable Crypto Wallets at Coinbase with AWS Nitro Enclaves," available at https://aws.amazon.com/blogs/web3/powering-programmable-crypto-wallets-at-coinbase-with-aws-nitro-enclaves/.
48. Benavides v. Tesla, Inc. (Miami-Dade County, September 2025) ($329 million verdict).
49. NHTSA Recall 23V-838, "Autopilot Software Recall" (December 2023).
50. See note 16.
51. AI LEAD Act, S. 2937, 119th Congress (September 29, 2025), available at https://www.congress.gov/bill/119th-congress/senate-bill/2937/text.