⚡ Quick Summary
This Future of Privacy Forum report maps how U.S. states actually legislated on AI in 2025. It tracks 210 bills (42 states), noting only ~9% were enrolled or enacted, and most of those targeted government use rather than imposing new private-sector duties. Where states did regulate industry, they moved away from broad CAIA-style frameworks and toward tighter, disclosure-centric rules tied to specific uses (healthcare, employment) or technologies (chatbots, generative AI). A third lane tested liability/accountability ideas—affirmative defenses, sandboxes, “right to compute,” and AG investigative powers. Looking to 2026, the report flags definitional drift (AI, generative, chatbots, frontier), the policy puzzle of agentic AI, and rising interest in algorithmic pricing. (pp. 3, 5, 6–7, 15–19)
🧩 What’s Covered
- Landscape and totals. FPF narrows its scope to bills that could directly affect private-sector AI. Out of 210 bills, only ~20 (~9%) became law or were enrolled—most of these concerned government use or strategy, and industry-facing enactments were rarer still. (pp. 5–6, 17)
- Four thematic approaches. The report groups bills into: (1) use/context-specific; (2) technology-specific; (3) liability & accountability; and (4) government use/strategy (included because vendor obligations flow through procurement). Chart 1 on p. 6 visualizes category shares; Chart 2 on p. 7 shows the industry-obligation subset where use/context (≈42%) and tech-specific (≈29%) dominate. (pp. 6–7)
- Use/context regulation (healthcare leads). States focused on sensitive contexts—especially mental-health and “companion” interactions—requiring disclosures and limiting AI’s role in diagnosis/treatment. High-risk/ADMT frameworks advanced mainly via amendments (e.g., CT SB 1295; UT SB 226) rather than new standalone bills. The center of gravity shifted from impact assessments to user-facing notices. (pp. 8–10)
- Technology-specific rules (chatbots, generative, frontier).
  - Chatbots: Five new laws emphasize clear and conspicuous identity disclosures; several add suicide/self-harm protocols for companions. (pp. 10–12)
  - Generative AI: Labeling and provenance/watermarking dominate (e.g., CA AB 853 enrolled). (pp. 12–13)
  - Frontier/foundation models: CA SB 53 (TFAIA) and NY’s RAISE Act target catastrophic/critical risks using compute thresholds (>10^26 ops) and streamlined governance (protocols, transparency reports, whistleblower protections), dropping 2024’s heavier audit/shutdown ideas. (pp. 11–13, 16–17)
- Liability & accountability. States experimented with:
  - Affirmative defenses/rebuttable presumptions (e.g., UT HB 452; TX HB 149 (TRAIGA)); proposals to use certified third-party audits as defenses (CA SB 813).
  - Clarifications in existing privacy/tort statutes (e.g., CA AB 316 clarifies no “AI autonomy” shield; TX updates biometric law for AI training).
  - Innovation tools: sandboxes (TX, DE; UT’s first sandbox agreement), “right to compute” (MT SB 212), expanded AG civil investigative demands, and whistleblower protections. (pp. 13–16)
- What’s next (2026 watchlist).
  - Definitions: Most states borrow the OECD AI baseline but diverge on generative, chatbot, and frontier thresholds (e.g., both CA and NY use >10^26 ops; cost qualifiers differ).
  - Agentic AI: Early steps (DE sandbox; VA pilot) hint that current risk-assessment models may not fit multi-node agent behaviors.
  - Algorithmic pricing: New York mandates disclosure for “personalized algorithmic pricing”; other states test bans/limits tied to surveillance data or competitor collusion. (pp. 15–19)
💡 Why It Matters
For practitioners, this report separates noise from signal. It shows where compliance is actually landing: disclosures over assessments; sector-specific guardrails (health, chatbots) over omnibus laws; and liability architecture (defenses, sandboxes, AG tools) shaping practical risk. It also gives early markers for frontier thresholds and definitional seams that will drive scope, vendor selection, and cross-state patchwork risk in 2026. (pp. 3, 8–13, 15–19)
❓ What’s Missing
- Limited quantitative enforcement picture (e.g., AG actions, penalties, private rights of action) beyond tool descriptions.
- Minimal operational guidance on how to implement disclosure UX that meets varying state triggers (timing, frequency, minors).
- Early but thin treatment of agentic AI controls (e.g., role-based capabilities, delegated authority limits, chain-of-actions logging).
- Interplay with federal sectoral rules (FTC UDAP, health, finance) is noted only indirectly and is not mapped for conflicts, preemption, or duplication. (pp. 10–16, 18)
👥 Best For
- General counsel, policy leads, and privacy/AI governance teams building 2026 state compliance roadmaps.
- Product and safety leads for chatbots/companions, healthcare tooling, and foundation model developers tracking threshold-based obligations.
- Public-policy teams calibrating advocacy on sandboxes, defenses, and “right to compute” language. (pp. 8–16)
📄 Source Details
Future of Privacy Forum (FPF). The State of State AI: Legislative Approaches to AI in 2025. Authors: Justine Gluck, Beth Do, Tatiana Rice. October 2025. Includes executive summary, charts (pp. 6–7), and trends for 2026 (definitions, agentic AI, algorithmic pricing).
📝 Thanks to
Thanks to the authors and the FPF team for a rigorous taxonomy and concrete bill references; Chart 1 on page 6 and Chart 2 on page 7 make the legislative drift visually clear, and the definitional appendix notes are especially useful for 2026 tracking. (pp. 6–7, 15–18)