Legal Update

Federal AI Regulation Landscape: What's Coming in 2025-2026

Chanté Eliaszadeh
AI Regulation · Federal Law · Algorithmic Accountability · NIST · Compliance

By Chanté Eliaszadeh | October 8, 2025

While California Governor Gavin Newsom vetoed SB 1047—which would have been the nation's most comprehensive AI regulation—federal AI legislation continues advancing through Congress with bipartisan momentum. The question is no longer whether federal AI regulation will happen, but when and in what form.

For AI companies navigating this uncertainty, the stakes are clear: comprehensive federal regulation appears inevitable, with multiple bills pending, aggressive agency rulemaking underway, and a 273-page bipartisan Congressional task force report recommending over 85 specific policy actions. Companies that wait for final regulations before building compliance infrastructure will find themselves scrambling to catch up.

This article maps the federal AI regulatory landscape as it stands in October 2025, tracks pending legislation through Congress, analyzes agency enforcement priorities, and provides strategic guidance for AI companies preparing for the regulatory regime that's rapidly taking shape.

The Post-Executive Order Landscape: What Changed in 2025

On January 20, 2025, President Trump rescinded Biden's Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence—the most comprehensive federal AI governance framework to date.1 The order had directed over 50 federal entities to engage in more than 100 specific actions across eight policy areas, with significant progress made before its revocation.

What Was Lost:

Biden's Executive Order established critical AI governance infrastructure, including requirements for AI developers to share safety test results with the federal government before releasing systems that pose serious risks to national security, economic security, or public health. The order mandated watermarking of AI-generated content, established standards for AI safety and security, and required federal agencies to assess AI risks in their operations.

The revocation created immediate uncertainty for companies that had begun implementing EO 14110-inspired policies, with many actions already initiated by both public and private sectors now in regulatory limbo.

What Remains:

The Trump administration replaced the Biden framework with a new executive order on January 23, 2025, titled "Removing Barriers to American Leadership in Artificial Intelligence," which focuses on AI dominance and innovation rather than safety guardrails.2 This new order requires an action plan within 180 days and directs the Assistant to the President for Science and Technology, the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs to review all policies under the revoked Executive Order 14110, identifying actions inconsistent with the new administration's AI competitiveness agenda.

Critical Takeaway for AI Companies:

Executive orders are inherently unstable regulatory foundations, subject to immediate reversal with each administration change. Federal legislation—once passed—provides durable, predictable requirements that survive political transitions. This instability accelerates the push for Congressional action to establish permanent AI governance frameworks.

Pending Federal Legislation: The Bills to Watch

1. Algorithmic Accountability Act of 2025 (S. 2164)

Status: Introduced June 25, 2025, by Senator Ron Wyden; referred to Senate Committee on Commerce, Science, and Transportation3

What It Does:

The Algorithmic Accountability Act represents the most comprehensive federal approach to AI transparency and accountability. The bill directs the Federal Trade Commission to establish mandatory impact assessment requirements for "automated decision systems" and "augmented critical decision processes"—essentially any system using AI to inform decisions affecting individuals.

Key Provisions:

Broad Definitional Scope: The Act defines "automated decision system" expansively as any system, software, or process derived from machine learning, statistics, or other data processing or AI techniques that uses computation to inform decisions or judgments. This reaches far beyond narrow AI applications to encompass virtually all algorithmic decision-making tools.

Impact Assessment Requirements: Companies deploying covered systems must conduct comprehensive impact assessments evaluating:

  • Disparate impact on protected classes (race, gender, age, disability, etc.)
  • Data quality, accuracy, and representativeness
  • Training methodologies and validation procedures
  • Privacy implications and data security measures
  • Procedures for human review and override capabilities
  • Transparency mechanisms for affected individuals

FTC Enforcement Authority: The Commission receives enforcement authority equivalent to its existing FTC Act powers, including the ability to investigate violations, issue subpoenas, seek injunctive relief, and impose civil penalties. The bill authorizes 25 additional enforcement personnel dedicated specifically to algorithmic accountability enforcement.

Agency Coordination: The FTC must negotiate information-sharing agreements with other federal agencies (EEOC, CFPB, HHS, DOJ) to coordinate enforcement across jurisdictional boundaries—addressing the current fragmented regulatory landscape where AI discrimination falls under multiple agencies' purview depending on context.

Timeline Implications:

The bill remains in committee with no scheduled markup. However, its reintroduction in the 119th Congress (following similar bills in the 117th and 118th Congresses) demonstrates sustained legislative interest. Expect committee hearings in late 2025 or early 2026, with potential floor consideration contingent on bipartisan negotiation progress.

2. AI Foundation Model Transparency Act (H.R. 6881)

Status: Introduced December 22, 2023, by Representatives Don Beyer (D-VA) and Anna Eshoo (D-CA); no committee markup scheduled4

What It Does:

This legislation targets foundation model developers specifically, requiring public transparency disclosures about training data, model documentation, and alignment with federal AI standards. Unlike the Algorithmic Accountability Act's focus on deployment and impact assessment, this bill addresses AI development and pre-deployment transparency.

Key Requirements:

Training Data Transparency: Foundation model developers must publicly disclose:

  • Sources of training data, including specific identification of copyrighted materials used
  • Data labeling methodologies and validation processes
  • Information about data curation, filtering, and preprocessing
  • Known biases, limitations, and data quality issues

This provision directly addresses the "black box" problem plaguing current foundation models, where developers provide minimal information about what data trained their systems—creating risks for copyright infringement, bias amplification, and unpredictable model behavior.

Model Documentation Standards: Required disclosures include:

  • Intended purposes and use cases (both recommended and prohibited)
  • Foreseen limitations, failure modes, and risks
  • Model version history and change logs
  • Performance benchmarks and evaluation results
  • Description of alignment efforts with NIST AI Risk Management Framework

Copyright and Attribution: The bill's training data disclosure requirements directly implicate ongoing copyright litigation against AI companies. Developers would need to identify copyrighted works used in training—potentially creating evidence for copyright infringement claims while simultaneously providing transparency for rights holders.

Enforcement: The FTC receives rulemaking authority to establish specific standards within nine months of enactment, with enforcement authority equivalent to FTC Act violations. The bill mandates alignment with NIST AI Risk Management Framework standards, creating cross-referencing between voluntary standards and mandatory legal requirements.

Practical Implications:

Foundation model developers (OpenAI, Anthropic, Google, Meta, etc.) would face significant compliance burdens, potentially requiring disclosure of proprietary training methodologies currently treated as trade secrets. Expect fierce industry opposition based on competitive concerns, though transparency advocates argue public disclosure is essential for AI safety and accountability.

3. CREATE AI Act of 2025 (H.R. 2385 / S. 2714)

Status: House Science, Space, and Technology Committee approved September 11, 2024; Senate version (S. 2714) introduced5

What It Does:

Unlike the previous bills focused on AI regulation and transparency, the CREATE AI Act (Creating Resources for Every American to Experiment with Artificial Intelligence) establishes federal infrastructure to democratize AI research and development. The bill authorizes $2.6 billion over six years for the National Artificial Intelligence Research Resource (NAIRR).

Core Framework:

National AI Research Resource (NAIRR): Creates a shared national research infrastructure providing researchers, students, and small companies access to:

  • High-performance computing resources for AI training and experimentation
  • Large-scale datasets for AI research (curated by federal agencies)
  • Educational tools and training resources
  • Technical support and expertise

Program Management Office: Establishes an NSF-administered office to oversee NAIRR operations, coordinate with federal agencies providing computing resources, and manage access allocation.

Advisory Committees: Requires diverse stakeholder representation from academia, industry, government, and public interest groups to guide NAIRR priorities and ensure equitable access.

Why This Matters:

Currently, cutting-edge AI research requires computing resources and datasets available primarily to large technology companies (Google, Microsoft, Meta, OpenAI). This creates competitive moats where only well-capitalized firms can develop state-of-the-art models. NAIRR aims to level the playing field, enabling university researchers, startups, and independent scientists to conduct frontier AI research.

Bipartisan Support: The bill passed the House Science Committee with 66 bipartisan cosponsors—rare in the current polarized Congress. This suggests a strong likelihood of eventual passage, though the timeline remains uncertain given competing legislative priorities.

Congressional AI Task Force: The Blueprint for Federal Regulation

In December 2024, the bipartisan House Task Force on Artificial Intelligence delivered a 273-page report to Speaker Mike Johnson and Democratic Leader Hakeem Jeffries, representing nearly 10 months of work by 24 members (12 Republicans, 12 Democrats) led by co-chairs Jay Obernolte (R-CA) and Ted Lieu (D-CA).6

Report Scope:

The task force conducted over a dozen hearings and roundtable discussions with government officials, industry leaders, academic researchers, and civil society organizations. The resulting report provides 66 key findings and 85 recommendations across 15 issue areas—the most comprehensive Congressional analysis of AI policy to date.

High-Level Principles:

The task force rejected creating a comprehensive AI regulatory framework all at once, instead recommending a sectoral, use-case-specific approach that maintains flexibility as technology evolves. Key principles include:

  1. Avoid Stifling Innovation: No premature comprehensive regulation; focus on addressing specific harms in specific contexts
  2. Leverage Existing Authorities: Use current agency powers (FTC, EEOC, CFPB, FDA, etc.) rather than creating new regulatory bureaucracy
  3. Public-Private Partnerships: Government should enable and facilitate AI development, not solely restrict it
  4. Protect Innovation Ecosystem: Recognize U.S. competitive advantage in AI and avoid regulations that disadvantage American companies
  5. International Coordination: Engage with allies (EU, UK, Japan) to harmonize approaches while maintaining U.S. leadership

Key Recommendations by Issue Area:

National Security: Congress should prioritize AI adoption by Department of Defense and intelligence agencies to maintain military technological superiority against adversaries (China, Russia) actively militarizing AI capabilities.

Data Privacy: Current fragmented state privacy laws create compliance challenges for AI development; federal comprehensive privacy legislation (with preemption of state laws) would provide clarity and consistency.

Research Leadership: Maintain U.S. advantage in fundamental AI research through sustained federal funding, immigration policies attracting global AI talent, and infrastructure like NAIRR.

Government Use of AI: Federal agencies should accelerate AI adoption to improve efficiency and mission delivery, with appropriate transparency and accountability safeguards.

What This Means for Pending Legislation:

The task force report provides political cover for moderate, sectoral legislation (like the Algorithmic Accountability Act focused on specific algorithmic harms) while creating skepticism toward comprehensive, prescriptive frameworks (like California's vetoed SB 1047). Expect 2026 legislation to follow task force guidance: targeted interventions addressing specific risks rather than economy-wide AI regulation.

Agency Rulemaking and Enforcement: The Real Action

While Congress debates legislation, federal agencies are actively regulating AI through existing statutory authorities, enforcement actions, and guidance documents. For AI companies, agency action creates immediate compliance obligations—no need to wait for new laws.

Federal Trade Commission: Operation AI Comply

In September 2024, the FTC launched "Operation AI Comply"—a law enforcement sweep targeting companies using AI to "trick, mislead, or defraud people."7 FTC Chair Lina Khan emphasized bluntly: "Using AI tools to trick, mislead, or defraud people is illegal. There is no AI exemption from the laws on the books."

Key Enforcement Actions:

Rytr (September 2024): The FTC filed a complaint against Rytr, an AI-powered content generation service, alleging the company facilitated creation of fake reviews and testimonials violating FTC Act Section 5 (unfair or deceptive practices). In December 2024, the FTC approved a final order prohibiting Rytr from advertising or selling services generating reviews and testimonials.

DoNotPay (January 2025): The FTC settled an enforcement action against DoNotPay, Inc., which marketed itself as offering "the world's first robot lawyer" without employing actual attorneys or providing legitimate legal services. The settlement required disgorgement of profits and permanent injunctions against false AI capability claims.

Evolv Technologies (November 2024): The FTC alleged that Evolv's statements about its AI-powered weapons detection systems' ability to distinguish between personal items and weapons were deceptive, demonstrating FTC scrutiny of AI marketing claims in security and safety contexts.

Enforcement Theories:

The FTC is using three primary legal theories to regulate AI:

  1. Deceptive Practices (FTC Act § 5): False or misleading claims about AI capabilities, accuracy, or functionality
  2. Unfair Practices (FTC Act § 5): AI systems causing substantial consumer harm that consumers cannot reasonably avoid and that is not outweighed by benefits
  3. Algorithmic Discrimination: AI systems producing discriminatory outcomes violating fair lending laws, fair housing laws, or equal employment laws (enforced jointly with CFPB, HUD, EEOC)

Compliance Priorities:

Transparency Requirements: The FTC expects companies to maintain internal transparency about AI systems even when external explainability is technically challenging. Claims that "employees don't understand how the AI works" will not serve as viable defenses against enforcement charges.

Training and Oversight: Companies must invest in training personnel to understand AI systems' operation, limitations, and failure modes. Lack of internal expertise demonstrates inadequate governance.

Privacy Commitments: Model-as-a-service providers making privacy commitments (e.g., "we won't use your data to train our models") face FTC enforcement for violations. Several companies have faced scrutiny for quietly changing terms of service to permit training on customer data.

Data Use Limitations: The FTC has emphasized that companies cannot repurpose consumer data for AI training without clear consent, particularly when original collection occurred under different terms or for different purposes.

Equal Employment Opportunity Commission: AI in Hiring

The EEOC issued comprehensive guidance in May 2023 on using AI in employment selection to comply with Title VII, focusing on disparate impact concerns.8 In May 2022, the EEOC similarly addressed the Americans with Disabilities Act and AI in employment decisions.

However, in 2025, both guidance documents were removed from EEOC's website following President Trump's executive order requiring agencies to roll back existing AI policies. Despite the guidance removal, federal anti-discrimination laws still apply to AI use in employment—Title VII, the ADA, and the ADEA remain in force.

Key Legal Principles (Still Applicable):

Employer Liability for Vendor AI Tools: Even when using third-party AI hiring tools, employers remain liable for any discriminatory impact. You cannot outsource legal compliance to vendors.

Four-Fifths Rule: EEOC guidance referenced the four-fifths rule to identify substantial differences in selection rates between protected groups. A selection rate is considered substantially different if the ratio is less than 80% (e.g., if a hiring algorithm selects white candidates at a 50% rate, it should select candidates from other racial groups at a rate of at least 40%).
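
The arithmetic behind this screen is simple enough to automate. The sketch below (group labels and counts are hypothetical) computes each group's selection rate and flags any ratio below the 80% benchmark; it illustrates the rule only and is not a substitute for a validated adverse impact analysis.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compute selection rate (selected / applicants) for each group."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]], threshold: float = 0.80):
    """Flag groups whose selection rate falls below 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    benchmark = max(rates.values())  # highest-selected group as the reference point
    return {group: {"rate": round(rate, 3),
                    "ratio_to_benchmark": round(rate / benchmark, 3),
                    "flag": rate / benchmark < threshold}
            for group, rate in rates.items()}

# Hypothetical applicant pools: (selected, total applicants)
example = {"group_a": (50, 100), "group_b": (30, 100)}
print(four_fifths_check(example))
# group_b: rate 0.30, ratio 0.60 -> flagged (below the 0.80 benchmark)
```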

Reasonable Accommodation: AI systems making employment decisions must provide reasonable accommodations for individuals with disabilities, potentially including alternative assessment methods when AI tools are inaccessible.

What Employers Should Do:

Even without active EEOC AI guidance, best practices include:

  • Conduct disparate impact analyses before deploying AI hiring tools
  • Maintain audit trails of algorithmic decisions for potential enforcement defense
  • Implement human review for material employment decisions (hiring, promotion, termination)
  • Document validation studies demonstrating job-relatedness and business necessity
  • Create complaint mechanisms for individuals to challenge algorithmic decisions

NIST AI Risk Management Framework: Voluntary Becoming Mandatory

NIST released the AI Risk Management Framework (AI RMF 1.0) on January 26, 2023, followed by the Generative AI Profile on July 26, 2024.9 While technically voluntary, the framework is rapidly becoming the de facto standard for AI governance, incorporated by reference into federal procurement requirements, state legislation, and private sector AI governance programs.

Framework Structure:

The AI RMF provides four core functions to help organizations address AI risks:

  1. GOVERN: Establish AI governance structures, policies, and accountability mechanisms
  2. MAP: Identify and document AI system context, stakeholders, and potential impacts
  3. MEASURE: Assess AI risks quantitatively and qualitatively throughout the lifecycle
  4. MANAGE: Implement controls to mitigate identified risks and monitor effectiveness

Why It Matters:

Multiple pending federal bills (including the AI Foundation Model Transparency Act) require alignment with NIST AI RMF standards. Federal procurement guidance references the framework. And in practical terms, demonstrating NIST AI RMF compliance provides evidentiary support in litigation or enforcement proceedings that your organization exercised reasonable care in AI development and deployment.

Generative AI Profile:

The July 2024 Generative AI Profile addresses unique risks of large language models and generative systems, including:

  • Confabulation and hallucination risks
  • Data poisoning and adversarial attacks
  • Harmful content generation
  • Intellectual property and privacy risks from training data
  • Dangerous or violent content generation

Department of Health and Human Services: Healthcare AI

HHS has been particularly active in AI regulation for healthcare applications, issuing multiple guidance documents and final rules in 2024-2025.10

Section 1557 Final Rule (May 2024): The ACA Section 1557 Final Rule clarified that use of biased clinical algorithms—including AI tools—could violate civil rights protections in federally funded health programs. As of July 5, 2024, new requirements protect consumers from discrimination when AI tools are used in healthcare.

HHS Office for Civil Rights "Dear Colleague" Letter: OCR issued guidance requiring regulated organizations to:

  • Establish written policies governing AI tool use in healthcare
  • Monitor AI tools' impacts on protected populations
  • Develop mechanisms to address discrimination complaints
  • Use tools allowing qualified human staff to override discriminatory AI decisions ("human in the loop")
  • Disclose to patients the use of AI in patient care decision support tools posing discrimination risks

CMS Prior Authorization (February 2024): CMS confirmed Medicare Advantage Organizations can utilize AI in prior authorization processes, provided they ensure compliance with MA rules and cannot rely solely on AI for medical necessity determinations—human physician review remains required.

FDA Predetermined Change Control Plans (December 2024): The FDA finalized guidance allowing manufacturers of AI-enabled medical devices to implement pre-approved changes without submitting new marketing applications, supporting adaptive AI tools that improve through continuous learning while maintaining safety oversight.

Department of Defense and Federal Procurement

The April 2025 OMB memoranda M-25-21 ("Accelerating Federal Use of AI through Innovation, Governance, and Public Trust") and M-25-22 ("Driving Efficient Acquisition of Artificial Intelligence in Government") replaced Biden-era procurement guidance with new frameworks emphasizing AI adoption and innovation.11

Chief AI Officers: Agencies must designate Chief AI Officers within 60 days to lead AI governance, risk management, and strategic adoption efforts.

AI Procurement Toolbox: GSA is creating a standardized AI procurement framework allowing federal agencies to easily select among multiple AI models in a manner compliant with privacy, data governance, and transparency requirements.

National Security Systems: The Department of Defense retains separate guidance for AI use in national security systems, given unique operational requirements and threat environments.

Comparison to California's Approach: Why SB 1047 Was Vetoed

California's SB 1047 represented a fundamentally different regulatory approach than emerging federal frameworks—and its veto by Governor Newsom provides valuable lessons about what AI regulation will (and won't) look like nationally.12

SB 1047's Framework:

The bill would have applied to AI models costing more than $100 million to train (using computing power greater than 10^26 operations) and required:

  • Public transparency documentation about capabilities, limitations, training data, and safety testing
  • Severe incident reporting within 72 hours for mass casualties, critical infrastructure disruption, or cybersecurity incidents causing at least $500 million in damages
  • Pre-deployment safety testing to identify catastrophic risks
  • Capability to promptly shut down models in emergencies
  • Whistleblower protections for employees reporting safety concerns

Why Newsom Vetoed:

In his veto message, Governor Newsom raised several concerns that align with the Congressional AI Task Force's skepticism toward comprehensive frameworks:

  1. Overly Narrow Focus: The bill's focus on high-cost, large-scale models provided a "false sense of security," as smaller, specialized models could pose equally significant risks in specific contexts.

  2. Lack of Context-Specific Risk Assessment: The bill "does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data"—exactly the context-specific, sectoral approach the Congressional task force recommends.

  3. Arbitrary Thresholds: The $100 million training cost threshold created a bright-line rule that might quickly become obsolete as computing costs decline and model efficiency improves.

  4. Federal Preemption Concerns: Newsom noted "a California-only approach may well be warranted—especially absent federal action by Congress—but it must be based on empirical evidence and science." Translation: California risks creating compliance burdens for AI companies that federal legislation might soon supersede.

What California Did Instead:

While vetoing SB 1047, Governor Newsom signed 17 other AI-related bills addressing specific use cases:

  • AI-generated content disclosure requirements
  • Restrictions on AI use in political advertising
  • Protections against AI-enabled deepfakes
  • Data privacy requirements for AI systems
  • Sector-specific AI regulations for insurance, healthcare, and education

This piecemeal approach mirrors the federal sectoral strategy and demonstrates regulatory preference for targeted interventions rather than economy-wide frameworks.

Lessons for Federal Legislation:

The SB 1047 veto suggests that comprehensive federal AI regulation faces significant political and practical challenges. Expect federal legislation to follow California's revised approach: targeted bills addressing specific AI harms in specific contexts, rather than sweeping frameworks regulating AI development broadly.

Timeline Projections: When Regulation Takes Effect

Based on legislative momentum, agency priorities, and political dynamics, here are realistic timeline projections for federal AI regulation:

Q4 2025: Committee Activity and Hearings

Likely: Senate Commerce Committee schedules hearings on Algorithmic Accountability Act, AI Foundation Model Transparency Act, and CREATE AI Act.

Possible: House markups of CREATE AI Act (already approved by House Science Committee) and consideration on House floor.

Agency Action: Continued FTC enforcement actions under Operation AI Comply; additional EEOC guidance on AI employment discrimination despite earlier guidance removal.

Q1-Q2 2026: Legislative Floor Consideration

Likely: CREATE AI Act passes both chambers with bipartisan support (least controversial of pending bills, focused on research infrastructure rather than regulation).

Possible: Algorithmic Accountability Act advances to Senate floor after committee markup, potentially with amendments addressing industry concerns about compliance burdens.

Uncertain: AI Foundation Model Transparency Act faces strongest industry opposition; passage contingent on negotiated compromises regarding proprietary training data disclosure.

Q3-Q4 2026: Potential Enactment and Rulemaking

Optimistic Scenario: One or more AI bills pass Congress and are signed into law, triggering agency rulemaking processes. FTC, EEOC, and other agencies issue Notices of Proposed Rulemaking (NPRMs) to implement statutory requirements, beginning 6-12 month comment and finalization processes.

Realistic Scenario: CREATE AI Act passes; Algorithmic Accountability Act stalls in Senate requiring reintroduction in 120th Congress (2027-2028); AI Foundation Model Transparency Act remains in committee.

Pessimistic Scenario: Partisan gridlock prevents any AI legislation from passing in 119th Congress; regulatory action occurs exclusively through agency enforcement and guidance rather than new legislation.

2027: Compliance Deadlines Begin

If Legislation Passes in 2026: Final rules implementing statutory requirements take effect in 2027, with phased compliance deadlines:

  • Large AI companies (>$100M revenue): 12-18 months after final rules
  • Mid-size companies ($10-100M revenue): 24 months
  • Small companies (<$10M revenue): 36 months or exemptions

Regardless of Legislation: Agency enforcement of existing laws (FTC Act, Title VII, ADA, etc.) as applied to AI continues ramping up, with test case litigation establishing precedents and safe harbors.

2027-2028: International Harmonization Efforts

EU AI Act Implementation: Europe's comprehensive AI regulation takes full effect, creating compliance obligations for U.S. companies operating in EU markets or serving EU customers.

Cross-Border Coordination: U.S. works with EU, UK, Japan, and allies to harmonize AI safety standards, testing protocols, and transparency requirements—similar to privacy framework convergence seen with GDPR influencing global practices.

Competitive Dynamics: China's AI governance framework (emphasizing state control and surveillance applications) diverges from democratic nations' approaches, creating tensions in international standard-setting bodies.

Strategic Guidance: Preparing for Federal AI Regulation Now

AI companies cannot afford to wait for final regulations before building compliance infrastructure. The regulatory trajectory is clear even if specific requirements remain uncertain. Here's how to prepare strategically:

1. Implement NIST AI Risk Management Framework

Why: NIST AI RMF is becoming the de facto standard, referenced in pending legislation, federal procurement, and agency guidance. Early adoption positions you for compliance regardless of which specific bills pass.

How:

  • Conduct initial AI inventory identifying all systems using algorithmic decision-making
  • Perform risk assessments for each system using NIST's GOVERN-MAP-MEASURE-MANAGE framework
  • Document governance structures, accountability mechanisms, and oversight processes
  • Establish metrics for measuring AI system performance, accuracy, and fairness
  • Create AI risk register tracking identified risks and mitigation measures

Investment: 40-80 hours for initial implementation; ongoing monitoring and updates quarterly. Consider engaging specialized consultants for complex systems or high-risk applications.
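
To make the inventory and risk-register bullets above concrete, here is a minimal sketch of a register entry loosely keyed to the four RMF functions. The field names and structure are illustrative assumptions, not a NIST-prescribed schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class AIRiskEntry:
    """Illustrative risk-register entry loosely keyed to NIST AI RMF functions."""
    system_name: str
    owner: str                      # GOVERN: accountable individual or team
    context: str                    # MAP: use case and affected stakeholders
    identified_risks: list[str]     # MAP: documented risks (bias, privacy, drift)
    metrics: dict[str, float]       # MEASURE: tracked fairness/accuracy metrics
    mitigations: list[str]          # MANAGE: controls in place
    last_reviewed: str = field(default_factory=lambda: date.today().isoformat())

register = [
    AIRiskEntry(
        system_name="resume-screening-v2",   # hypothetical system
        owner="People Analytics",
        context="Ranks inbound applicants for recruiter review",
        identified_risks=["disparate impact", "training data drift"],
        metrics={"selection_rate_ratio": 0.86, "auc": 0.79},
        mitigations=["quarterly disparate impact audit", "human review of rejections"],
    )
]

print(json.dumps([asdict(entry) for entry in register], indent=2))
```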

2. Build Algorithmic Impact Assessment Capabilities

Why: Algorithmic Accountability Act and similar bills will likely require impact assessments demonstrating AI systems don't produce discriminatory outcomes or unfair harm.

How:

  • Develop statistical methodologies for disparate impact testing across protected classes
  • Establish baseline performance metrics before AI deployment
  • Create control groups or A/B testing frameworks comparing algorithmic decisions to human decisions
  • Document data quality, representativeness, and known limitations
  • Implement ongoing monitoring detecting performance degradation or emerging biases

Investment: Requires data science expertise and statistical rigor. Budget $50,000-$200,000 for initial assessment infrastructure, depending on number and complexity of AI systems.
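
The four-fifths ratio shown earlier is a screening heuristic; a fuller assessment typically pairs it with a significance test comparing selection rates. Below is a minimal sketch of a two-proportion z-test using only the standard library, with hypothetical counts; real assessments would also consider sample size, intersectional groups, and job-relatedness evidence.

```python
import math

def two_proportion_z_test(selected_a: int, total_a: int,
                          selected_b: int, total_b: int) -> tuple[float, float]:
    """Two-sided z-test for a difference in selection rates between two groups."""
    p_a, p_b = selected_a / total_a, selected_b / total_b
    pooled = (selected_a + selected_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical outcomes: 120 of 400 group A applicants selected vs. 80 of 400 group B
z, p = two_proportion_z_test(120, 400, 80, 400)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the gap is unlikely to be chance
```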

3. Enhance Transparency and Documentation

Why: Multiple pending bills require public transparency about AI capabilities, limitations, training data, and safety testing. Building documentation now avoids scrambling when requirements take effect.

How:

  • Create model cards documenting intended use cases, training data sources, known limitations, and performance benchmarks
  • Publish transparency reports explaining how AI systems make decisions (to the extent technically feasible)
  • Develop plain-language explanations of AI functionality for non-technical audiences
  • Maintain detailed internal documentation of model development, validation, and updates
  • Establish version control and change management for AI systems

Investment: Minimal incremental cost if integrated into development workflow; retroactive documentation for existing systems requires 20-60 hours per system.
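
A model card can start as nothing more than a structured document checked into version control alongside the model. The fields below are a hypothetical starting point, not a required federal format; adapt them to your systems and counsel's guidance.

```python
import json

# Hypothetical model card; field names and values are illustrative only.
model_card = {
    "model_name": "claims-triage-v1",
    "version": "1.4.0",
    "intended_uses": ["prioritize incoming insurance claims for adjuster review"],
    "prohibited_uses": ["fully automated claim denial without human review"],
    "training_data": {
        "sources": ["internal claims 2019-2023 (de-identified)"],
        "known_gaps": ["sparse data for claims filed in languages other than English"],
    },
    "performance": {"precision": 0.91, "recall": 0.84, "eval_date": "2025-09-15"},
    "fairness_testing": {"method": "selection-rate ratio by protected class",
                         "last_run": "2025-09-15"},
    "limitations": ["degrades on claim types introduced after the training cutoff"],
    "human_oversight": "adjuster sign-off required before any adverse action",
    "change_log": ["1.4.0: retrained on 2023 data; recalibrated thresholds"],
}

with open("model_card_claims_triage.json", "w") as fh:
    json.dump(model_card, fh, indent=2)
```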

4. Establish Human Review and Override Mechanisms

Why: Agency guidance (HHS, EEOC, FTC) consistently emphasizes "human in the loop" requirements for consequential decisions. Pure algorithmic decision-making faces heightened scrutiny.

How:

  • Design AI systems to present recommendations to human decision-makers rather than making autonomous decisions
  • Create escalation procedures for edge cases, ambiguous situations, or high-stakes decisions
  • Train personnel on AI system limitations, failure modes, and appropriate override situations
  • Document instances where humans override algorithmic recommendations and reasons for overrides
  • Implement quality assurance sampling to audit both algorithmic and human decision quality

Investment: Varies significantly by use case. High-volume, low-stakes decisions (e.g., content moderation) may use sampling-based review; high-stakes decisions (e.g., credit denial, employment termination) may require case-by-case human review.
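
As a rough illustration of the recommend-then-review pattern and the override audit trail described above, the sketch below routes low-confidence or adverse recommendations to a human and records who made the final call and why. All thresholds, field names, and cases are hypothetical assumptions for the sketch.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ReviewedDecision:
    """Audit record pairing a model recommendation with the human's final call."""
    case_id: str
    model_recommendation: str
    model_score: float
    final_decision: str
    reviewer: str
    override_reason: str | None
    timestamp: str

AUTO_APPROVE_THRESHOLD = 0.95  # hypothetical: only high-confidence approvals skip review

def route_decision(case_id: str, recommendation: str, score: float,
                   reviewer_decision=None) -> ReviewedDecision:
    """High-confidence approvals pass through; everything else requires a reviewer."""
    if recommendation == "approve" and score >= AUTO_APPROVE_THRESHOLD:
        final, reviewer, reason = recommendation, "system", None
    else:
        # Escalate: a human supplies the final decision and, if it differs, a reason.
        human_call, human_name, human_reason = reviewer_decision
        final, reviewer = human_call, human_name
        reason = human_reason if human_call != recommendation else None
    return ReviewedDecision(case_id, recommendation, score, final, reviewer, reason,
                            datetime.now(timezone.utc).isoformat())

log = [
    route_decision("C-1001", "approve", 0.98),
    route_decision("C-1002", "deny", 0.71,
                   reviewer_decision=("approve", "j.rivera", "income docs verified manually")),
]
for entry in log:
    print(entry)
```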

5. Conduct Privacy and Data Governance Reviews

Why: FTC has aggressively enforced privacy commitments by AI companies, particularly regarding training data use. Data governance failures create both legal exposure and reputational harm.

How:

  • Audit all data used for AI training, validation, and operation—ensure you have appropriate rights and permissions
  • Review privacy policies and terms of service for consistency with actual AI data practices
  • Implement data minimization principles (collect only necessary data; retain only as long as needed)
  • Establish data security controls protecting training data, model weights, and inference data
  • Create clear consent mechanisms if using customer data for AI training or improvement

Investment: Privacy audit by external counsel: $25,000-$75,000 depending on data complexity. Remediation costs vary based on findings.
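
As a concrete illustration of the consent and data-minimization bullets above, the sketch below filters a training set down to records with affirmative consent within a retention window and strips fields not needed for training. The record schema, retention period, and flag names are assumptions for the sketch, not a regulatory standard.

```python
from datetime import date

# Hypothetical customer records with provenance and consent metadata.
records = [
    {"id": 1, "source": "support_tickets", "training_consent": True,
     "collected": "2024-03-02", "fields": {"text": "...", "email": "a@example.com"}},
    {"id": 2, "source": "support_tickets", "training_consent": False,
     "collected": "2021-07-19", "fields": {"text": "...", "email": "b@example.com"}},
]

RETENTION_YEARS = 3
ALLOWED_FIELDS = {"text"}  # data minimization: drop identifiers before training

def eligible_for_training(record: dict, today: date) -> bool:
    """Keep only records with affirmative consent that are within the retention window."""
    collected = date.fromisoformat(record["collected"])
    within_retention = (today - collected).days <= RETENTION_YEARS * 365
    return record["training_consent"] and within_retention

def minimized(record: dict) -> dict:
    """Strip everything except the fields actually needed for training."""
    return {k: v for k, v in record["fields"].items() if k in ALLOWED_FIELDS}

today = date(2025, 10, 8)
training_set = [minimized(r) for r in records if eligible_for_training(r, today)]
print(f"{len(training_set)} of {len(records)} records eligible for training")
```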

6. Monitor Legislative and Regulatory Developments Proactively

Why: AI regulation is evolving rapidly. Companies that wait for final rules to begin compliance planning will be behind competitors who started earlier.

How:

  • Assign internal responsibility for AI regulatory monitoring (legal, compliance, or government affairs)
  • Subscribe to regulatory tracking services covering AI legislation and agency actions
  • Join industry trade associations (TechNet, Chamber of Progress, BSA | The Software Alliance) participating in policy development
  • Engage with standard-setting bodies (NIST, IEEE, ISO) developing AI technical standards
  • Consider participating in agency listening sessions, comment periods, and stakeholder consultations

Investment: 5-10 hours monthly for tracking; additional time for comment submissions or advocacy as warranted.

7. Engage Legal Counsel with AI Regulatory Expertise

Why: AI law is highly specialized, intersecting technology, intellectual property, privacy, employment law, consumer protection, and sector-specific regulation. Generalist attorneys often lack the technical understanding necessary for effective AI compliance guidance.

When to Engage:

  • Before deploying AI systems making consequential decisions about individuals (credit, employment, healthcare, insurance, housing)
  • When using training data from third-party sources or copyrighted materials
  • If receiving FTC, EEOC, or other agency inquiries about AI practices
  • When structuring AI product terms of service and privacy policies
  • For negotiating AI vendor agreements and allocating liability

What to Look For:

  • Technical understanding of AI/ML systems architecture and operation
  • Experience with algorithmic accountability, bias testing, and fairness metrics
  • Knowledge of sector-specific AI regulation (healthcare, financial services, employment)
  • Relationships with regulators and participation in AI policy development

The Bottom Line: Regulation Is Coming—Build Compliance Infrastructure Now

Federal AI regulation will happen. The only questions are timing and specific requirements. Companies that begin building compliance infrastructure now—implementing NIST frameworks, conducting impact assessments, establishing transparency practices, and monitoring regulatory developments—will have significant competitive advantages when regulations take effect.

The alternative is reactive scrambling: waiting for final rules, then rushing to achieve compliance under tight deadlines while competitors who started earlier capture market share by demonstrating regulatory readiness to enterprise customers and government agencies.

California's SB 1047 veto teaches an important lesson: comprehensive, prescriptive AI frameworks face political and practical obstacles. But targeted, sectoral regulation addressing specific harms in specific contexts is advancing with bipartisan support. The federal framework emerging in 2026-2027 will likely require:

  • Algorithmic impact assessments for high-risk AI applications
  • Transparency about AI capabilities, limitations, and training data (particularly for foundation models)
  • Human oversight mechanisms for consequential decisions
  • Ongoing monitoring and auditing of AI system performance and fairness
  • Disclosure when AI makes material decisions affecting individuals

These requirements are predictable and achievable. Start preparing now.

Need AI Regulatory Compliance Guidance?

Astraea Counsel advises AI companies on federal and state regulatory compliance, algorithmic transparency requirements, and emerging AI governance frameworks. Explore our AI & Emerging Tech legal services.

Footnotes

  1. Executive Order 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Oct. 30, 2023), revoked by Executive Order 14148 (Jan. 20, 2025), available at https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence

  2. Executive Order, Removing Barriers to American Leadership in Artificial Intelligence (Jan. 23, 2025), available at https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/

  3. S. 2164, Algorithmic Accountability Act of 2025, 119th Cong. (2025), available at https://www.congress.gov/bill/119th-congress/senate-bill/2164

  4. H.R. 6881, AI Foundation Model Transparency Act of 2023, 118th Cong. (2023), available at https://www.congress.gov/bill/118th-congress/house-bill/6881

  5. H.R. 2385, CREATE AI Act of 2025, 119th Cong. (2025), available at https://www.congress.gov/bill/119th-congress/house-bill/2385; S. 2714, CREATE AI Act of 2024, 118th Cong. (2024), available at https://www.congress.gov/bill/118th-congress/senate-bill/2714

  6. Bipartisan House Task Force on Artificial Intelligence, Report of the Task Force on Artificial Intelligence (Dec. 2024), available at https://www.speaker.gov/wp-content/uploads/2024/12/AI-Task-Force-Report-FINAL.pdf

  7. Federal Trade Commission, FTC Announces Crackdown on Deceptive AI Claims and Schemes (Sept. 25, 2024), available at https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes

  8. Equal Employment Opportunity Commission, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (May 12, 2022); EEOC, Employment Discrimination and AI for Workers (Apr. 2024) [Note: Both guidance documents were removed from EEOC website in February 2025 following Executive Order 14179]

  9. National Institute of Standards and Technology, AI Risk Management Framework (AI RMF 1.0) (Jan. 26, 2023), available at https://www.nist.gov/itl/ai-risk-management-framework; NIST, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (July 26, 2024), available at https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.600-1.pdf

  10. Department of Health and Human Services, Office for Civil Rights, "Dear Colleague" Letter on AI Use in Healthcare (2024); Centers for Medicare & Medicaid Services, Medicare Advantage Prior Authorization and AI (Feb. 2024); Food and Drug Administration, Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices: Guiding Principles (Dec. 2024)

  11. Office of Management and Budget, Memorandum M-25-21, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (Apr. 3, 2025); OMB, Memorandum M-25-22, Driving Efficient Acquisition of Artificial Intelligence in Government (Apr. 3, 2025)

  12. Governor Gavin Newsom, Veto Message on SB 1047 (Sept. 29, 2024), available at https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf; Cal. S.B. 1047, Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (2024)

Chanté Eliaszadeh

Principal Attorney, Astraea Counsel APC

Chanté advises AI companies on federal and state regulatory compliance, algorithmic transparency, and emerging tech law. She helps AI startups prepare for evolving regulatory frameworks.

Get in Touch →

Legal Disclaimer: This article provides general information for educational purposes only and does not constitute legal advice. The law changes frequently, and the information provided may not reflect the most current legal developments. No attorney-client relationship is created by reading this content. For advice about your specific situation, please consult with a qualified attorney.
