Things Are Not As They Seem
The technology industry calls them agents. The word carries weight --- legal weight. Agency is one of the oldest doctrines in common law, governing the relationship between a principal and the person authorized to act on its behalf. It comes with a body of settled law: duties of loyalty, apparent authority, vicarious liability, scope of employment, and a century of defenses that limit when a principal can be held responsible for an agent's conduct. When companies deploy what they call "AI agents," they inherit the connotation of that entire legal framework. The implication is that the AI acts on behalf of the company, within a relationship the law recognizes and regulates.
It does not. When the technology industry says "agent," it means something the law does not recognize. And the distinction matters more than most deployers realize.
Under the Restatement (Third) of Agency --- the authoritative synthesis of American agency law --- an agency relationship arises "when one person (a 'principal') manifests assent to another person (an 'agent') that the agent shall act on the principal's behalf and subject to the principal's control."1 The operative word is "person." The Restatement does not leave this term undefined. It specifies: a person is "an individual," "an organization or association that has legal capacity to possess rights and incur obligations," a government entity, or "any other entity that has legal capacity to possess rights and incur obligations."2
Every prong of that definition requires legal capacity --- the ability to hold rights, to bear obligations, to be a party to a legal relationship. An AI system has none of these attributes. It cannot consent to a fiduciary relationship. It cannot bear obligations. It cannot be sued. No court has ever held that an AI system is a "person" for purposes of agency law, and the Restatement's own definition explains why: agency is a relationship between legal persons, and AI is not one.
The industry calls them agents. The law calls them tools.
This article examines why that distinction does not protect the companies deploying AI --- and why, in most cases, it makes their legal exposure worse. The doctrines that apply when AI causes harm are not the structured, defense-rich framework of agency law. They are negligence, products liability, trade secret misappropriation, spoliation, regulatory enforcement, direct liability for AI decision-making, and board-level oversight obligations --- doctrines that impose direct liability on the deployer without the intermediary that agency law would provide.
If agency law applied, deployers would have defenses. It does not, and they do not.
The Pivot: Why "Not an Agent" Is Bad News
Most companies hear that their AI is not a legal agent and feel relief. They should not.
Agency law is old, well-litigated, and --- critically --- full of defenses. If AI agents were legal agents under the Restatement, deployers would inherit a framework that centuries of common law developed to limit a principal's exposure.
The frolic-and-detour defense allows a principal to argue that an agent departed from its assigned duties for personal purposes, taking the agent's conduct outside the scope of employment and relieving the principal of liability. The scope-of-employment limitation allows a principal to draw boundaries around what the agent was authorized to do and disclaim responsibility for conduct beyond those boundaries. The independent-contractor classification allows a principal who controls the result but not the method to avoid vicarious liability entirely.3
None of these defenses are available when the actor is not an agent. They are not available because they presuppose a relationship between two legal persons --- a principal and an agent --- in which the agent has independent legal existence, capacity for volition, and the ability to depart from instructions for its own reasons. AI has none of these attributes. It is not a person who can go on a frolic. It is not an agent who can exceed its scope. It is not a contractor who controls its own methods.
It is a tool. And the entity that built, configured, deployed, and equipped that tool is not a principal managing an agent. It is an operator responsible for everything the tool does --- directly, without the intermediary that agency law would provide.
When a human employee causes harm, the employer has a structured body of law that allocates liability between principal and agent, that provides defenses based on scope and authorization, and that has been refined across thousands of cases. When an AI system causes harm, there is no allocation. There is no intermediary. There is the deployer, its tool, and the damage.
The doctrines that fill this space are not more lenient than agency law. They are less.
The Doctrines That Do Apply
Seven existing legal doctrines create direct deployer liability for AI agent conduct --- none of which require a court to treat AI as a person or an agent.
A. Board Oversight: Caremark Meets AI
In 1996, the Delaware Court of Chancery established in In re Caremark International Inc. Derivative Litigation that corporate directors face personal liability for a "sustained or systematic failure to exercise oversight" --- a failure so fundamental that it constitutes bad faith.4 For two decades after Caremark, these claims were nearly impossible to win. Directors could point to the existence of compliance programs, however perfunctory, and survive dismissal.
Then the Delaware Supreme Court changed the calculus.
In Marchand v. Barnhill (2019), the court held that Blue Bell Creameries' directors breached their oversight duty by utterly failing to implement any reporting system for food safety --- the company's "essential and mission critical" compliance risk. Marchand established a two-part test that every board must satisfy for its central compliance risks: first, the board must implement a reporting system; second, the board must actually monitor what that system reports. Blue Bell's directors did neither. The court found that sufficient to state a claim for bad faith.5
The Delaware Court of Chancery applied this standard in In re Boeing Co. Derivative Litigation (2021), denying a motion to dismiss oversight claims against Boeing's board after the 737 MAX crashes. The board had delegated safety oversight to management committees --- but delegation alone was not enough. The court held that aircraft safety was "mission critical" to Boeing's business, and the board's failure to monitor its own delegation satisfied Marchand's second prong.6
Two features of this doctrine make it particularly significant for AI agent governance.
First, Caremark claims sound in the duty of loyalty, not the duty of care. The Delaware Supreme Court confirmed this in Stone v. Ritter (2006): a knowing failure to implement oversight constitutes bad faith, which is a breach of loyalty.7 This distinction matters because the business judgment rule --- the presumption that protects directors from duty-of-care claims --- does not protect against breaches of loyalty. A director cannot invoke business judgment to excuse a sustained failure to monitor a risk the board knew or should have known was mission critical.
Second, directors must inform themselves before making decisions. Under Smith v. Van Gorkom (1985), the business judgment rule only applies when directors act on an informed basis. A board that authorizes AI agent deployment without reviewing the publicly available risk literature --- the OWASP Top 10 for Agentic Applications, the documented breach history, the regulatory enforcement trajectory --- has not made an informed decision. Van Gorkom strips the presumption of protection from uninformed decisions, even those made in good faith.8
For companies deploying AI agents at scale, agent governance is becoming a central compliance risk under the Marchand framework. A board that deploys agents in production without any governance reporting system --- no agent inventory, no monitoring, no incident reporting pathway to the board --- fails Marchand's first prong. A board that has some AI governance structure on paper but never reviews its output, never asks management about agent incidents, and never adjusts deployment based on risk findings fails the second.
No court has yet applied Caremark to AI agent oversight. The doctrinal extension is natural, and it is a question of when --- not whether --- a plaintiffs' firm makes this argument. When that happens, directors who cannot answer three questions --- "What AI agents are operating in our company? What are they authorized to do? How do we know they are doing it?" --- face the same exposure that Blue Bell's directors faced.
And that exposure is personal. Because Caremark claims sound in loyalty, D&O insurance policies that exclude coverage for bad faith conduct may not cover the loss. The director is personally liable and potentially uninsured.
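What would let directors answer those three questions is a reporting artifact, not a legal theory: an agent inventory. The Python sketch below shows one way such an inventory might be structured; the field names and review logic are illustrative assumptions, not requirements drawn from any case or framework.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AgentRecord:
    """One entry in a board-reportable agent inventory (illustrative fields)."""
    agent_id: str                # stable identifier for the agent
    owner: str                   # accountable business unit or person
    purpose: str                 # what the agent is deployed to do
    authorized_tools: list[str]  # explicit grant of capabilities
    monitoring_endpoint: str     # where its actions are observable
    last_reviewed: datetime      # when governance last examined it

class AgentInventory:
    """Answers: what agents run here, what may they do, how do we know?"""

    def __init__(self) -> None:
        self._records: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._records[record.agent_id] = record

    def unreviewed_since(self, cutoff: datetime) -> list[AgentRecord]:
        # Surfaces stale entries for the board's periodic oversight report --
        # the kind of monitoring output Marchand's second prong contemplates.
        return [r for r in self._records.values() if r.last_reviewed < cutoff]
```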
B. The Evidentiary Trap: Spoliation, Adverse Inference, and the Cost of Not Logging
Every legal dispute involving an AI agent --- regulatory investigation, customer lawsuit, employment discrimination claim, breach notification --- will begin with the same demand from opposing counsel: produce the agent's decision chain. What inputs did it receive? What tools did it call? What outputs did it produce? What was the sequence? Can you prove it?
For most companies deploying AI agents today, the answer to that last question is no. And that creates a compounding problem that is procedural, not substantive --- meaning it applies regardless of the underlying theory of liability and regardless of whether the agent actually did anything wrong.
Under Federal Rule of Civil Procedure 37(e), when electronically stored information "that should have been preserved in the anticipation or conduct of litigation is lost because a party failed to take reasonable steps to preserve it," the court has two tiers of response. Upon finding prejudice to another party from the loss, it may order measures no greater than necessary to cure the prejudice. And upon finding that the party acted with intent to deprive another of the information's use, it may presume the lost information was unfavorable, instruct the jury accordingly, or dismiss the action entirely.9
The preservation duty is not triggered by litigation itself. It is triggered by the reasonable anticipation of litigation. Judge Scheindlin's seminal opinions in Zubulake v. UBS Warburg (S.D.N.Y. 2003) established the standard: a party must preserve all relevant documents --- including electronically stored information --- when it "reasonably anticipates litigation." The duty includes an affirmative obligation to implement a litigation hold and prevent the destruction of relevant evidence.10
For companies deploying AI agents, the preservation duty is already triggered. Any company using AI in employment screening reasonably anticipates discrimination litigation after Mobley v. Workday.11 Any company deploying high-risk AI systems that affect EU residents reasonably anticipates regulatory investigation once the EU AI Act becomes enforceable on August 2, 2026. Any company that has experienced or read about AI security incidents reasonably anticipates breach litigation. The universe of companies deploying AI agents for which litigation is not reasonably foreseeable is vanishingly small.
The question, then, is whether the company took "reasonable steps" to preserve the relevant information. For AI agent decision chains, this analysis is devastating for companies without logging infrastructure. Agent actions are ephemeral by default. Context windows are overwritten. Tool calls go unrecorded unless logging is affirmatively implemented. And unlike financial transactions, which leave traces in ledgers and blockchains, AI agent decisions are non-reproducible --- the stochastic nature of large language model outputs means the same inputs will not produce the same outputs. If you did not log it contemporaneously, it is gone.
The "reasonable steps" standard is informed by what is available and what it costs. Open-source agent logging frameworks provide action-level monitoring with negligible performance overhead at reasonable cost. A court evaluating whether a company took reasonable steps will ask why it did not implement logging --- and the answer "we did not think we needed to" will be measured against an industry that published specific monitoring standards in the OWASP Top 10 for Agentic Applications and the NIST AI Agent Standards Initiative.12 The cost of agent logging is trivial relative to the consequences of its absence.
But the spoliation problem is only half of it. The other half is narrative control. Without contemporaneous records, the plaintiff controls the story. They characterize the agent's actions in whatever light serves their case, and the company has no evidence to contradict them. Every gap in the record is filled by the worst plausible inference --- not because the court is hostile, but because the absence of evidence is itself evidence of the absence of controls.
This dynamic is not theoretical. In the FTX bankruptcy, the debtors' new management spent months reconstructing transactions from fragmentary data because FTX maintained no coherent audit trail, and every gap in that record was filled by the worst plausible inference. The same dynamic will apply to AI agent disputes, with one critical difference: FTX at least had the blockchain as a partial record.13 An AI agent that was never logged has no record at all.
The practical implication is straightforward: agent logging is not a technical feature. It is litigation infrastructure. The company that cannot produce its agent's decision chain will not necessarily lose because the agent acted wrongly. It will lose because the company cannot prove the agent acted rightly.
C. Negligent Enablement: Deploying Without Proportionate Controls
When a company deploys an AI agent with access to customer data, payment systems, and communication platforms --- but without monitoring, without authority boundaries, and without a kill switch --- and that agent causes harm, the legal analysis is ordinary negligence. No novel theory is required. The elements are duty, breach, causation, and damages, applied to facts that are new but to a framework that is not.
The duty exists wherever harm is foreseeable. A company that deploys an AI agent owes a duty of care to every person foreseeably affected by the agent's actions: customers whose data it processes, counterparties with whom it transacts, employees whose applications it screens, and third parties who interact with it or are affected by its outputs. The foreseeability of harm from AI agents operating without adequate controls is no longer a matter of speculation. It is a matter of public record.
The breach is deploying an agent with capabilities that exceed your governance controls' ability to constrain them. The standard of care is informed --- though not established --- by the accumulating body of industry standards that describes what reasonable AI agent governance looks like. The OWASP Top 10 for Agentic Applications, the NIST AI Agent Standards Initiative, and the Singapore IMDA Agentic AI Framework each identify specific controls: agent identity verification, authority boundaries, real-time monitoring, kill switches, incident response protocols.14 These frameworks are not binding law. But they are the kind of evidence a plaintiff's expert will present to a jury as the standard of care --- and the kind of evidence a defendant will struggle to explain away if it implemented none of them.
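To illustrate what two of those named controls can amount to in code, the following sketch pairs an allow-list authority boundary with an operator kill switch. It is a minimal illustration under assumed names and parameters, not an implementation of any framework's specification.

```python
class KillSwitchTripped(Exception):
    """Raised when the agent has been halted by an operator."""

class AuthorityBoundary:
    """A minimal sketch of an allow-list boundary plus a kill switch."""

    def __init__(self, allowed_tools: set[str], spend_limit: float) -> None:
        self.allowed_tools = allowed_tools
        self.spend_limit = spend_limit
        self.halted = False

    def halt(self) -> None:
        # Kill switch: immediately stops all further agent actions.
        self.halted = True

    def authorize(self, tool: str, amount: float = 0.0) -> None:
        # Called before every tool execution; refusal blocks the action.
        if self.halted:
            raise KillSwitchTripped("agent halted by operator")
        if tool not in self.allowed_tools:
            raise PermissionError(f"tool '{tool}' outside authority boundary")
        if amount > self.spend_limit:
            raise PermissionError(f"amount {amount} exceeds spend limit")

# Usage: gate every tool call through the boundary before executing it.
boundary = AuthorityBoundary({"send_email", "create_invoice"}, spend_limit=500.0)
boundary.authorize("create_invoice", amount=120.0)  # permitted
# boundary.authorize("wire_transfer")               # would raise PermissionError
```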
In jurisdictions where statutes establish specific standards of conduct, the analysis sharpens further. The Colorado AI Act (SB 24-205), effective June 30, 2026, requires deployers of high-risk AI systems to implement risk management programs, conduct impact assessments, and report algorithmic discrimination to the Attorney General. Violations are enforceable as unfair trade practices under Colorado consumer protection law, carrying penalties up to $20,000 per violation.15 Violation of a statutory standard of conduct can constitute negligence per se --- negligence as a matter of law, without the need for a jury to determine reasonableness. As additional states enact AI governance requirements, the floor of acceptable conduct rises.
Causation connects the absence of controls to the harm. If authority boundaries would have blocked the harmful action, if monitoring would have detected the anomaly, if a kill switch would have terminated the agent before the damage propagated --- the absence of those controls is the but-for cause of the harm.
A defendant will argue that the intervening act of an attacker --- a prompt injection, a supply chain compromise --- breaks the causal chain. Under established tort law, an intervening act breaks the causal chain only when it is both unforeseeable and extraordinary.16 Prompt injection attacks, supply chain vulnerabilities, and emergent agent behavior are neither. They are the exact risks that governance controls are designed to mitigate and that public frameworks like the OWASP Top 10 specifically document. An attacker exploiting the absence of controls you should have implemented does not break the causal chain. It completes it.
This is not a theoretical framework awaiting its first application. Regulators are already imposing liability on this basis.
In 2013, the SEC penalized Knight Capital Americas $12 million after the firm lost $460 million in 45 minutes when dormant trading code activated and generated millions of erroneous orders. The SEC did not find fault with the algorithm itself. It found that Knight deployed automated systems without adequate technology controls, without procedures to prevent activation of obsolete code, without second-technician review, and without automated alerts for deployment discrepancies. The penalty was for the absence of controls, not for the malfunction.17
In 2023, the FTC banned Rite Aid from using facial recognition technology for five years after the company deployed AI-powered surveillance in its stores without assessing the system's accuracy or potential for bias. The system generated thousands of false matches, leading to wrongful accusations, detentions, and police calls --- disproportionately in stores serving communities of color. The FTC ordered Rite Aid to destroy all data, models, and algorithms derived from the program, a remedy known as algorithmic disgorgement. Rite Aid did not develop the facial recognition system. It deployed it. The liability was for reckless deployment without proportionate safeguards.18
In 2019, the National Transportation Safety Board found that Uber's "inadequate safety risk assessment procedures," "ineffective oversight of vehicle operators," and "lack of adequate mechanisms for addressing automation complacency" were contributing causes of a fatal autonomous vehicle crash in Tempe, Arizona. Uber had disabled the vehicle's built-in automatic emergency braking system. The NTSB's findings centered not on the autonomous technology itself, but on the deployer's failure to supervise it.19
The pattern across these enforcement actions is consistent: the entity that deployed the automated system bears liability for deploying it without controls proportionate to its capabilities. The technology is not on trial. The governance decision is.
D. Products Liability: AI as Product, Not Person
If AI is not a legal agent --- not a person --- then what is it? One answer is emerging from federal courts: it is a product. And products liability doctrine carries consequences that agency law does not.
Under traditional products liability, a manufacturer or seller of a defective product faces strict liability for harm caused by that defect. The plaintiff does not need to prove negligence. The plaintiff needs to prove that the product was defective in design, that the defect made the product unreasonably dangerous, and that the defect caused the harm. The standard is strict: if the product is defective, the manufacturer is liable regardless of the care it exercised.
In May 2025, a federal court in the Middle District of Florida became one of the first to hold that an AI chatbot is a "product" for purposes of strict product liability.20 In Garcia v. Character Technologies, Inc., the court drew a distinction that may prove foundational: the content an AI generates --- its outputs, its words --- is not a product. But the design of the AI system itself --- the choices about how it processes inputs, what guardrails it includes, what behaviors it permits --- is. Design defect claims against an AI system are actionable under products liability.
The court's reasoning addressed and rejected Character.AI's First Amendment defense, holding that AI systems lack the human traits of intent, purpose, and awareness that are central to traditional free speech protections. The court also held that Google could face liability as a "component part manufacturer" for providing the underlying large language model on which the chatbot was built --- extending the liability chain beyond the deployer to the model provider.
Garcia is at the motion-to-dismiss stage, not a final judgment. The court held that the claims can proceed; it has not yet ruled on their merits. But the holding is significant: a federal court engaged in sustained analysis of whether AI systems fit within products liability doctrine and concluded that they do. The reasoning --- that design choices are actionable even when outputs are not --- provides a framework that other courts can follow.
The Northern District of California reached a similar conclusion in the social media addiction litigation, allowing design defect and failure-to-warn claims against Meta, Google, ByteDance, and Snap to proceed. The court evaluated specific algorithmic design features --- content recommendation algorithms, age verification systems, engagement optimization --- and found them analogous to product design choices subject to products liability analysis.21
Federal legislation is moving in the same direction. The proposed AI LEAD Act would explicitly classify AI systems as "products" under federal law, creating a federal cause of action for AI-related product liability including claims for defective design, failure to warn, and strict liability.22
The implications for AI agent governance are direct. If an AI agent is a product, then its design choices --- what tools it can access, what authority boundaries constrain it, what monitoring observes it, what kill switches can terminate it --- are all potential design defect claims. An agent deployed without authority boundaries is an agent with a design defect. An agent deployed without monitoring is an agent with a failure-to-warn problem. The governance framework is not merely a compliance exercise. It is the product safety architecture that products liability will evaluate.23
E. Trade Secret Exposure: The Defend Trade Secrets Act
The previous doctrines address liability arising from an AI agent's actions --- its decisions, its outputs, its malfunctions. Trade secret exposure is different. It arises from what the agent knows, not from what it does. And for companies whose AI agents process confidential business information --- which is to say, nearly every enterprise deployment --- the exposure is architectural.
The Defend Trade Secrets Act (18 U.S.C. Sections 1831-1839) creates federal civil and criminal liability for misappropriation of trade secrets.24 The statute defines misappropriation to include both the acquisition of a trade secret by improper means and the disclosure of a trade secret without consent. That second prong --- disclosure without consent --- does not require intent. It requires that the disclosure occurred and that it was unauthorized. The mechanism is irrelevant.
AI agents create three distinct patterns of trade secret exposure.
The first is cross-client contamination. An agent processes Client A's confidential business information, retains fragments in context or memory, and later surfaces those fragments in outputs visible to Client B. In multi-tenant retrieval-augmented generation systems and shared tool server environments, this is not a hypothetical risk --- it is an architectural characteristic of how context windows and vector databases function. The information moves between client environments not because anyone chose to disclose it, but because the system's architecture does not prevent it. Under the DTSA, the result is the same: disclosure without consent.
The second is intra-organizational leakage. A multi-agent system passes information between agents operated by different departments with different confidentiality obligations. Research data flows into a sales agent's context. Merger analysis surfaces in a customer-facing tool. In industries that rely on information barriers --- financial services, law firms, healthcare --- this cross-contamination violates the ethical walls that protect confidential information. The AI agent does silently what no human in the organization would be permitted to do.
The third is exfiltration through prompt injection. An agent with access to confidential information processes a poisoned input that causes it to include trade secret data in an externally visible output. The trade secret was acquired by the agent through authorized access and disclosed to a third party through unauthorized output. The statutory elements are satisfied.
The most immediate practical exposure comes from the DTSA's injunctive relief provision. Under Section 1836(b)(3)(A), a court may issue an injunction to prevent "actual or threatened misappropriation"25 --- misappropriation that has not yet occurred but is architecturally likely given the deployer's system design. A trade secret owner whose information is processed by your AI agents can seek injunctive relief on the theory that your architecture --- absent context isolation, absent data classification, absent access controls --- constitutes threatened misappropriation.
The practical scenario is concrete: your counterparty's outside counsel files a DTSA claim and a TRO motion arguing that your AI agents process their client's trade secrets under an NDA but lack the access controls necessary to prevent cross-contamination. They point to the absence of context isolation, the absence of data classification, and the documented risk of context leakage in multi-tenant AI systems. They demand an injunction requiring you to implement agent-level data classification, context isolation, and logging --- immediately, under threat of contempt.
You now have to build governance infrastructure under a court order, on an emergency timeline, with opposing counsel reviewing your architecture.
The deployer's obligation runs in two directions. Under the NDA and under the Restatement's requirement that trade secret owners take "reasonable precautions" to maintain secrecy, the deployer who processes trade secrets has an affirmative duty to maintain confidentiality through adequate technical controls. Choosing an architecture that predictably leaks confidential information across client boundaries is a breach of that duty --- not because anyone intended the leak, but because the deployer selected a system design that made it foreseeable.
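What one such technical control, context isolation, might look like at the application layer is sketched below. The tenant tags and retrieval filter are illustrative assumptions; a production system would enforce the same boundary in the vector store's query layer as well, not only in application code.

```python
class TenantScopedStore:
    """A minimal sketch of context isolation in a multi-tenant system.

    Every document carries a tenant tag, and retrieval is refused unless
    the requesting session is bound to the same tenant -- so Client A's
    confidential information cannot surface in Client B's context.
    """

    def __init__(self) -> None:
        self._docs: list[tuple[str, str]] = []  # (tenant_id, text)

    def add(self, tenant_id: str, text: str) -> None:
        self._docs.append((tenant_id, text))

    def retrieve(self, session_tenant: str, query: str) -> list[str]:
        results = []
        for tenant_id, text in self._docs:
            if tenant_id != session_tenant:
                continue  # cross-tenant documents never enter the context
            if query.lower() in text.lower():
                results.append(text)
        return results

# Usage: the agent's session is bound to one tenant before any retrieval.
store = TenantScopedStore()
store.add("client-a", "Client A merger analysis: target valuation draft")
store.add("client-b", "Client B pricing roadmap")
print(store.retrieve("client-b", "merger"))  # [] -- Client A's data stays walled off
```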
F. Regulatory Enforcement: The No-Exemption Principle
Three federal regulators --- the FTC, the SEC, and the CFPB --- have each articulated the same principle from their respective statutory mandates: there is no AI exemption from existing regulatory obligations. Automation does not dilute the standard of care. If anything, it concentrates scrutiny on the design choices the deployer made.
The FTC. In September 2024, the Commission launched Operation AI Comply, bringing enforcement actions against five companies under Section 5(a) of the FTC Act for AI-related unfair and deceptive practices --- from a company marketing an "AI Lawyer" service that employed no attorneys and never tested its outputs, to a company selling AI tools designed to generate fake consumer reviews.26 But the more significant enforcement signal is the remedy. In the Rite Aid action discussed above, the FTC did not merely penalize a bad outcome --- it ordered algorithmic disgorgement, unwinding the deployment itself by requiring the destruction of all data, models, and algorithms derived from the program.27 For companies deploying AI agents in consumer-facing contexts, the message is that the decision to deploy without governance controls may itself constitute the unfair practice, independent of any specific harm the agent causes.
The SEC. The Commission has been enforcing the no-exemption principle against automated advisory platforms for nearly a decade. In 2017, the SEC's Division of Investment Management confirmed that robo-advisers are subject to the same fiduciary duties as human advisers under the Investment Advisers Act --- the duty of care and the duty of loyalty apply in full.28 In 2019, the SEC issued a formal interpretation confirming that the federal fiduciary duty cannot be waived, including by automated systems.29
The enforcement actions that followed demonstrate these are not abstract obligations. In 2018, the SEC penalized Wealthfront $250,000 for falsely claiming it monitored all client accounts for wash sale violations when it did not --- wash sales occurred in over 31 percent of enrolled accounts for three years.30 In 2022, the SEC imposed a $187 million penalty on Charles Schwab after finding that its "Intelligent Portfolios" robo-adviser told clients their cash allocations were set by a "disciplined portfolio construction methodology" seeking "optimal returns," while internal analyses showed the high cash allocations would produce lower returns under most market conditions. Schwab had swept client cash to an affiliate bank and profited nearly $46 million from the interest rate spread.31
The CFPB. The Bureau has stated directly that creditors cannot justify noncompliance with the Equal Credit Opportunity Act by claiming their AI is too complex or too opaque to explain. There is no "black box" exemption from fair lending obligations.32 Creditors must provide specific, accurate adverse action notices regardless of whether the decision was made by an algorithm or a human loan officer.
The common thread across all three regulators: each assessed the deployer's governance architecture, not the AI's output. The question was not "did the AI make the right decision?" It was "did the deployer build the infrastructure to ensure it could?"
G. Direct Liability for AI Decision-Making
The previous sections address liability for how AI agents are deployed --- the governance architecture, the controls, the oversight. This section addresses liability for what AI agents do --- the decisions they make, the recommendations they issue, the people they screen out.
In every regulated domain where courts have examined AI decision-making, they have reached the same conclusion: the deployer bears the same standard of care as if a human performed the function. Automation does not reduce the obligation. It increases scrutiny, because automated systems apply their biases at a scale and speed no individual human can match.
Employment. In Mobley v. Workday, Inc. (N.D. Cal. 2024), a federal court held that Workday --- an AI vendor providing algorithmic hiring tools to employers --- could face direct liability for employment discrimination under Title VII, the ADA, and the ADEA.33 The court's analysis turned on whether Workday functioned as a statutory "agent" of the employers it served --- a definition specific to employment discrimination law and distinct from the Restatement of Agency. The court found that because Workday's AI was "participating in the decision-making process by recommending some candidates to move forward and rejecting others," it operated as an agent of the employer and could be sued directly.
In May 2025, the court granted preliminary collective certification of age discrimination claims. Workday's own filings disclosed the scale: approximately 1.1 billion applications had been rejected through its system.34 The EEOC filed an amicus brief supporting the theory that AI vendors are covered entities subject to federal anti-discrimination law.35
The implications extend beyond the vendor. Employers who delegate screening decisions to AI tools remain independently liable for discriminatory outcomes. The liability is not delegable --- the employer cannot point to the vendor, and the vendor cannot point to the employer. Both are exposed.
Housing. In Louis v. SafeRent Solutions, LLC (D. Mass.), the court rejected SafeRent's argument that it could not be liable under the Fair Housing Act because it did not make final housing decisions. SafeRent's algorithm scored rental applicants but did not account for the financial benefit of housing vouchers, resulting in disparate impact on Black and Hispanic applicants. The court held that the algorithm "automated human judgment" and that the deployer bore liability for its design choices. The case settled in November 2024 for $2.275 million, and SafeRent was required to stop issuing approve-or-decline recommendations unless its system was validated for fairness by independent civil rights experts.36
The pattern is consistent. Where courts have examined AI decision-making in regulated domains, the standard tracks the activity, not the actor. The deployer who automates a function governed by anti-discrimination law, fiduciary duty, or consumer protection does not escape the standard by automating it. The standard was designed for the decision, not for who --- or what --- makes it.
The One Exception: Electronic Agents Under UETA and E-SIGN
There is one body of law that does contemplate non-human actors in a legal context --- and it does not help deployers. It binds them.
The Uniform Electronic Transactions Act, adopted in 49 states, defines an "electronic agent" as "a computer program or an electronic or other automated means used independently to initiate an action or respond to electronic records or performances in whole or in part, without review or action by an individual."37 That definition is broad enough to encompass AI systems.
UETA Section 14 provides that a contract may be formed by the interaction of electronic agents "even if no individual was aware of or reviewed the electronic agents' actions or the resulting terms and agreements."38 The Electronic Signatures in Global and National Commerce Act (15 U.S.C. Section 7001) confirms this at the federal level: a contract may not be denied legal effect solely because its formation involved the action of electronic agents, "so long as the action of any such electronic agent is legally attributable to the person to be bound."39
The practical consequence is that your AI agent can bind you. If it commits to a price, accepts terms, executes a purchase order, or confirms a transaction, that commitment is legally enforceable against your organization. The scope of what your agent can transact defines the outer boundary of your transactional exposure. Every API key is a grant of transactional capability. Every tool permission is a representation of authority. Every database connection is a scope definition that a court can examine if the transaction goes wrong.
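The point can be made concrete in code. The sketch below shows how a deployer might scope a transactional tool grant so that the credential itself, rather than the agent's judgment, defines the outer boundary of enforceable commitments. The names and parameters are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolGrant:
    """A credential issued to an agent, scoped to what the deployer
    is prepared to be bound to (illustrative fields)."""
    tool_name: str
    max_commitment: float     # largest transaction the grant permits
    counterparties: set[str]  # who the agent may transact with

def execute_purchase(grant: ToolGrant, vendor: str, amount: float) -> str:
    # The grant, not the agent's judgment, sets the transactional boundary:
    # anything the credential permits is something UETA can bind you to.
    if grant.tool_name != "purchase_order":
        raise PermissionError("grant does not cover purchase orders")
    if vendor not in grant.counterparties:
        raise PermissionError(f"vendor '{vendor}' outside granted scope")
    if amount > grant.max_commitment:
        raise PermissionError(f"{amount} exceeds granted commitment cap")
    return f"PO issued to {vendor} for {amount}"  # enforceable against you

grant = ToolGrant("purchase_order", max_commitment=10_000.0,
                  counterparties={"acme-supplies"})
execute_purchase(grant, "acme-supplies", 2_500.0)  # binds the deployer
```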
A company might attempt to disclaim its agent's transactional authority in terms of service. But a disclaimer that contradicts the agent's observable capabilities faces a fundamental credibility problem. If you deploy an agent with access to your payment system, your contract management platform, and your customer communication tools, and a counterparty interacts with that agent in a commercial context, the agent's capabilities are a louder statement about its authority than the fine print in your terms of service. Courts evaluating the totality of circumstances will weigh what the deployer showed the world against what the deployer buried in a clickwrap agreement.
The Open Question: Deterministic and Stochastic Systems
UETA was drafted in 1999 for a different kind of electronic agent. The systems its drafters contemplated were deterministic: shopping carts, automated purchase order systems, vending machines. Systems that execute pre-programmed logic on pre-defined inputs and produce predictable, reproducible outputs. A shopping cart that adds an item to an order does the same thing every time.
AI agents are stochastic. They interpret ambiguous inputs. They exercise something that functions like judgment. They produce outputs that are non-reproducible --- the same inputs will not generate the same outputs twice. Whether UETA Section 14 extends to non-deterministic AI systems that exercise independent judgment in forming contractual commitments is an open question that no court has resolved.
The statutory text is broad enough to cover them. A court seeking to hold a deployer to a commitment its AI agent made has a clear pathway through Section 14. But a court skeptical of extending a 1999 statute to a 2026 technology could find that UETA's drafters contemplated a fundamentally different kind of automation --- and that stochastic AI agents fall outside the statutory framework until the legislature updates it.
E-SIGN adds its own layer of uncertainty. The statute's requirement that electronic agent actions be "legally attributable to the person to be bound" raises the question of when a stochastic AI output --- one the deployer did not specifically program or anticipate --- is "attributable" to the deployer. The answer is probably yes in most cases: the deployer chose to deploy the system, configured its parameters, granted its access, and set it loose on transactions. But "probably yes" is not the foundation on which companies should build their compliance strategy.
This uncertainty is not an argument for inaction. It is the opposite. Open legal questions are resolved by courts, in litigation, after the harm has occurred. A company that deployed an AI agent without governance controls and now argues to a court that UETA does not apply to stochastic systems is not making a legal argument. It is making an admission that it deployed a system it did not understand into a legal framework it had not analyzed. That is not a defense. It is the plaintiff's case.
Why the Exposure Compounds
If these were seven independent risks, a general counsel could triage. Handle spoliation first. Address trade secrets next quarter. Get to products liability when the budget allows.
They are not independent. They are a system --- and the system's most dangerous property is that each governance gap makes every other doctrine's case easier for the plaintiff.
Start with the evidence you never created. A negligent enablement plaintiff argues you deployed without proportionate controls. Your defense is that you had controls --- authority boundaries, monitoring, kill switches. But if you never logged your agent's actions, you cannot prove those controls existed or functioned. The products liability plaintiff argues your AI's design was defective. Your defense is that the design included safety architecture. But without decision chain records, you cannot demonstrate what the design actually did in practice. The direct liability plaintiff argues your screening algorithm discriminated. Your defense is that the outputs were fair. But without contemporaneous logs, you have no evidence of what the outputs were. In each case, the spoliation problem is not an additional claim. It is the mechanism that defeats your defense to the other claims.
Now reverse the direction. Trade secret exposure creates the foreseeability that triggers the preservation duty. Once cross-client contamination is a known architectural risk in multi-tenant AI systems --- and it is --- litigation arising from that risk is "reasonably foreseeable" under Zubulake. Which means you should have been logging the agent actions that would reveal whether contamination occurred. The trade secret problem creates the spoliation problem.
The board's failure to implement oversight is itself evidence of breach in the negligence claim. A plaintiff can point to the Marchand analysis --- no reporting system, no monitoring, no board-level engagement with AI risk --- and argue that the deployer's governance failure is direct proof it breached the standard of care. The Caremark problem does not just expose directors personally. It proves the negligence case against the company.
Regulatory enforcement compounds the exposure further. Every FTC action, every SEC penalty, every CFPB enforcement position documented in this article enters the evidentiary record that a plaintiff's expert can present to a jury as the standard of care the deployer failed to meet. The Rite Aid disgorgement. The Schwab penalty. The Wealthfront action. These are not just regulatory outcomes. They are the benchmarks against which a negligence jury will measure your governance decisions --- and the benchmarks you cannot distinguish if you implemented none of the controls those regulators found absent.
And UETA binds the deployer to whatever its agent transacted --- while the absence of logging means the deployer may not even know what was transacted until opposing counsel tells them.
This is what it means to face these doctrines without agency law's defenses. A principal with a human agent can point to the agent's independent conduct --- the frolic, the exceeded scope, the unauthorized act --- and argue that the harm should be allocated to the agent, not the principal. A deployer with an AI tool has no such intermediary. Every doctrine lands directly on the deployer, and every gap in governance feeds every other doctrine's case. There is no buffer. There is no allocation. There is the deployer, and there is the full weight of seven doctrines converging on the same set of facts.
The only rational response to a compounding system is a comprehensive one. Fixing one gap reduces exposure across multiple doctrines. Leaving one open increases it across all of them.
The Imperative: You Need Certainty, Not Hope
Every month of governed AI operation is a month of defensible evidence. Every month without it is a month of exposure that cannot be remediated after the fact.
That is the through-line connecting every doctrine in this article. The board oversight that Caremark demands, the decision chains that spoliation law presumes you preserved, the proportionate controls that negligence law requires, the product safety architecture that Garcia evaluates, the access controls that prevent trade secret exposure, the compliance documentation that regulators will request --- each one asks the same question of the deployer: can you show what your AI did, and can you show that you governed it?
Emerging regulation sharpens the question but does not create it. The EU AI Act, enforceable August 2, 2026, imposes fines up to three percent of global annual turnover for high-risk AI violations and seven percent for prohibited practices.40 The Colorado AI Act, effective June 30, 2026, makes violations enforceable as unfair trade practices with penalties up to $20,000 per violation.41 These statutes are a sword, not a shield --- they create liability for deployers who do not comply without creating safe harbors for deployers who do. And in U.S. courts, their violation may constitute negligence per se.
Waiting for Congress to clarify the framework is not a strategy. It is a bet that no plaintiff's attorney, no regulator, and no court will apply existing law to your AI agent's conduct before the legislature acts. That bet has already lost. The FTC has brought enforcement actions. The SEC has imposed nine-figure penalties. Federal courts have held that AI systems are products for purposes of strict liability and have granted collective certification of claims covering over a billion rejected applications.
The question for every general counsel is not "what regulation applies?" It is: if opposing counsel walked into my office tomorrow and made any one of these arguments, could I defend?
That question is what the Know Your Agent framework is designed to answer --- agent identity verification, authority boundaries, continuous monitoring, incident response, and compliance mapping that addresses each liability exposure analyzed above. Not to eliminate legal risk. No framework can. But to provide the documented, defensible foundation that every doctrine discussed here will demand.
What Comes Next
This article is part of Astraea Counsel's Know Your Agent (KYA) governance framework, which provides the operational architecture --- identity verification, authority boundaries, monitoring, incident response, and compliance mapping --- that addresses each of the liability exposures analyzed above.
If your organization deploys AI agents and you need help assessing your governance posture against the doctrines discussed in this article, schedule a consultation.
This article provides general information for educational purposes only and does not constitute legal advice. AI governance regulation is evolving rapidly. Consult qualified legal counsel for advice on your specific situation.
Footnotes
1. Restatement (Third) of Agency, Sec. 1.01 (2006) (available at https://guides.jenkinslaw.org/restatement-agency).

2. Restatement (Third) of Agency, Sec. 1.04(5) (2006) (available at https://www.ali.org/publications/restatement-law-third/agency).

3. Restatement (Second) of Agency, Secs. 219-220, 228 (1958) (available at https://www.law.cornell.edu/wex/respondeat_superior).

4. In re Caremark Int'l Inc. Derivative Litig., 698 A.2d 959 (Del. Ch. 1996) (available at https://law.justia.com/cases/delaware/court-of-chancery/1996/13670-3.html).

5. Marchand v. Barnhill, 212 A.3d 805 (Del. 2019) (available at https://law.justia.com/cases/delaware/supreme-court/2019/533-2018.html).

6. In re Boeing Co. Derivative Litig., C.A. No. 2019-0907-MTZ (Del. Ch. Sept. 7, 2021) (available at https://law.justia.com/cases/delaware/court-of-chancery/2021/c-a-no-2019-0907-mtz-0.html).

7. Stone v. Ritter, 911 A.2d 362 (Del. 2006) (available at https://law.justia.com/cases/delaware/supreme-court/2006/84060.html).

8. Smith v. Van Gorkom, 488 A.2d 858 (Del. 1985) (available at https://law.justia.com/cases/delaware/supreme-court/1985/488-a-2d-858-4.html).

9. Fed. R. Civ. P. 37(e) (available at https://www.law.cornell.edu/rules/frcp/rule_37).

10. Zubulake v. UBS Warburg LLC, 220 F.R.D. 212 (S.D.N.Y. 2003) (available at https://www.courtlistener.com/opinion/2410862/zubulake-v-ubs-warburg-llc/).

11. Mobley v. Workday, Inc., No. 3:23-cv-00770 (N.D. Cal.) (available at https://www.courtlistener.com/docket/66831340/mobley-v-workday-inc/).

12. OWASP, Top 10 for Agentic Applications (2026) (available at https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/); NIST, AI Agent Standards Initiative (available at https://www.nist.gov/caisi/ai-agent-standards-initiative).

13. Testimony of John J. Ray III, CEO of FTX Debtors, before the U.S. House Financial Services Committee (Dec. 13, 2022) (available at https://democrats-financialservices.house.gov/uploadedfiles/hhrg-117-ba00-wstate-rayj-20221213.pdf).

14. OWASP, Top 10 for Agentic Applications (2026) (available at https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/); NIST, AI Agent Standards Initiative (available at https://www.nist.gov/caisi/ai-agent-standards-initiative); Singapore IMDA, Model AI Governance Framework for Agentic AI (2026) (available at https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2026/new-model-ai-governance-framework-for-agentic-ai).

15. Colorado AI Act, SB 24-205 (effective June 30, 2026); Colo. Rev. Stat. 6-1-113 (available at https://leg.colorado.gov/bills/sb24-205).

16. Restatement (Third) of Torts: Liability for Physical and Emotional Harm, Secs. 29, 34 (2010) (available at https://www.ali.org/publications/restatement-law-third/torts-liability-physical-and-emotional-harm).

17. SEC, In re Knight Capital Americas LLC, Admin. Proc. File No. 3-15570 (Oct. 16, 2013) (available at https://www.sec.gov/newsroom/press-releases/2013-222).

18. FTC, In re Rite Aid Corp. (Dec. 2023) (available at https://www.ftc.gov/news-events/news/press-releases/2023/12/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without).

19. NTSB, Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian, Tempe, Arizona, Report No. HAR-19/03 (2019) (available at https://www.ntsb.gov/investigations/accidentreports/reports/har1903.pdf).

20. Garcia v. Character Technologies, Inc., No. 6:24-cv-01903-ACC-UAM, Doc. 115 (M.D. Fla. May 21, 2025) (order on motions to dismiss) (available at https://scholarblogs.emory.edu/proflawrence/files/2025/05/Garcia-v.-Character-Technologies-Inc.-et-al-Entry-115.pdf).

21. In re Social Media Adolescent Addiction/Personal Injury Products Liability Litigation, MDL No. 3047 (N.D. Cal.) (available at https://www.courtlistener.com/docket/65407433/in-re-social-media-adolescent-addictionpersonal-injury-products-liability/).

22. AI Liability Enforcement and Accountability in Deployment Act (AI LEAD Act), S. 2937, 119th Congress (proposed) (available at https://www.congress.gov/bill/119th-congress/senate-bill/2937/text).

23. Juries are already reaching nine-figure verdicts against deployers of autonomous systems. In Benavides v. Tesla, Inc. (S.D. Fla., Aug. 2025), a jury awarded over $240 million after finding Tesla's Autopilot system defective as a product --- the first jury verdict holding an autonomous driving system liable under products liability. The court found Tesla allowed activation of the system in conditions it was not designed to handle and "dangerously oversold" its capabilities (available at https://www.courtlistener.com/docket/59932667/benavides-v-tesla-inc/).

24. Defend Trade Secrets Act, 18 U.S.C. Secs. 1831-1839 (available at https://www.law.cornell.edu/uscode/text/18/part-I/chapter-90).

25. 18 U.S.C. Sec. 1836(b)(3)(A) (authorizing injunctive relief to prevent "actual or threatened misappropriation") (available at https://www.law.cornell.edu/uscode/text/18/1836).

26. FTC, "FTC Announces Crackdown on Deceptive AI Claims and Schemes" (Operation AI Comply, Sept. 2024) (available at https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes).

27. FTC, In re Rite Aid Corp. (2023). The Commission ordered algorithmic disgorgement --- destruction of all data, models, and algorithms derived from Rite Aid's AI-powered facial recognition program --- in addition to a five-year ban on the technology (available at https://www.ftc.gov/news-events/news/press-releases/2023/12/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without).

28. SEC Division of Investment Management, IM Guidance Update No. 2017-02 (Feb. 2017) (available at https://www.sec.gov/investment/im-guidance-2017-02.pdf).

29. SEC, Interpretation Regarding Standard of Conduct for Investment Advisers, Release No. IA-5248 (June 2019) (available at https://www.sec.gov/rules-regulations/2019/06/ia-5248).

30. SEC, In re Wealthfront Advisers LLC, Release No. IA-5086 (2018) (available at https://www.sec.gov/newsroom/press-releases/2018-300).

31. SEC, In re Charles Schwab Corp. (2022) (available at https://www.sec.gov/newsroom/press-releases/2022-104).

32. CFPB, Circular 2022-03, "Adverse Action Notification Requirements in Connection with Credit Decisions Based on Complex Algorithms" (2022) (available at https://www.consumerfinance.gov/compliance/circulars/circular-2022-03-adverse-action-notification-requirements-in-connection-with-credit-decisions-based-on-complex-algorithms/).

33. Mobley v. Workday, Inc., No. 3:23-cv-00770 (N.D. Cal. 2024) (available at https://www.courtlistener.com/docket/66831340/mobley-v-workday-inc/).

34. Mobley v. Workday, Inc., No. 3:23-cv-00770 (N.D. Cal. May 2025) (order granting preliminary collective certification of age discrimination claims) (available at https://www.courtlistener.com/docket/66831340/mobley-v-workday-inc/).

35. EEOC, Amicus Brief in Mobley v. Workday, Inc., No. 3:23-cv-00770 (N.D. Cal. Apr. 2024) (available at https://www.eeoc.gov/litigation/briefs/mobley-v-workday-inc).

36. Louis v. SafeRent Solutions, LLC (D. Mass.), settled Nov. 2024 for $2.275 million (available at https://www.courtlistener.com/docket/63335697/louis-v-saferent-solutions-llc/).

37. Uniform Electronic Transactions Act, Sec. 2(6), enacted in Washington as RCW 1.80 (available at https://app.leg.wa.gov/RCW/default.aspx?cite=1.80&full=true).

38. Uniform Electronic Transactions Act, Sec. 14, enacted in Washington as RCW 1.80 (available at https://app.leg.wa.gov/RCW/default.aspx?cite=1.80&full=true).

39. Electronic Signatures in Global and National Commerce Act (E-SIGN), 15 U.S.C. Sec. 7001(h) (available at https://www.law.cornell.edu/uscode/text/15/7001).

40. EU AI Act, Regulation (EU) 2024/1689 (enforceable Aug. 2, 2026). Article 99 imposes fines of up to 3% of global annual turnover for high-risk AI violations and up to 7% for prohibited practices (available at https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng).

41. Colorado AI Act (SB 24-205), effective June 30, 2026. Violations are enforceable as unfair trade practices under the Colorado Consumer Protection Act, with penalties up to $20,000 per violation under CRS 6-1-113 (available at https://leg.colorado.gov/bills/sb24-205).