State AI Law Patchwork: 50-State Compliance Map 2025
By Chanté Eliaszadeh | December 18, 2025
On July 1, 2025, the United States Senate stripped the proposed 10-year AI regulatory moratorium from the federal budget bill in a near-unanimous 99-1 vote. The provision that would have blocked states from enforcing AI regulations died on the Senate floor, unleashing a regulatory gold rush that fundamentally transformed the American AI legal landscape.
The result: 118 new state AI laws enacted in 2025 alone, with every single state plus the District of Columbia, Puerto Rico, and the Virgin Islands introducing AI-related legislation. Over half the states now have some form of AI regulation on the books. For AI companies operating nationally, this creates a compliance nightmare: more than 50 jurisdictions, each with distinct requirements, effective dates, penalties, and enforcement mechanisms.
This comprehensive guide maps the entire state AI regulatory landscape, identifies the 10 most active jurisdictions with detailed requirements, provides a state-by-state summary table covering all 50 states, and delivers a practical compliance framework for companies navigating multi-state operations.
Why the Federal Moratorium Failed—And What It Means for AI Companies
The proposed federal AI moratorium would have preempted state and local AI regulations for a full decade, creating uniform national standards and eliminating the state-by-state patchwork. Tech companies and industry associations strongly supported the measure, arguing that inconsistent state requirements would stifle innovation and create impossible compliance burdens.
But the coalition opposing the moratorium proved unstoppable:
- 17 Republican governors urged Congress to preserve state sovereignty over AI regulation
- 40 state attorneys general (from both parties) warned that federal preemption would leave citizens unprotected
- Civil liberties organizations opposed industry-led deregulation
- Consumer advocacy groups demanded immediate protections, not a 10-year delay
When the Senate voted 99-1 to strip the moratorium provisions on July 1, 2025, it sent an unmistakable message: states will lead AI regulation in America. With little indication of comprehensive federal AI legislation on the horizon, companies must now navigate this complex multi-jurisdictional landscape as the new permanent reality.
Strategic implications:
- State law is here to stay - Even if federal legislation eventually passes, it will likely establish regulatory floors, not ceilings, allowing states to impose additional requirements
- California sets the standard - As the world's fifth-largest economy with 39 million residents, California makes its AI laws de facto national requirements: few companies find it economical to maintain separate California-specific and national AI systems
- Compliance complexity compounds - Each new state law creates overlapping obligations, conflicting definitions, and multiplicative compliance costs
- Enterprise procurement drives adoption - Major customers demand compliance with the most stringent state requirements regardless of legal necessity
- Early compliance = competitive advantage - Companies that proactively build robust compliance infrastructure position themselves as trusted partners and attract top AI safety talent
The 10 Most Active AI Regulatory States: Deep Dive
1. California: The Comprehensive Leader
California's AI regulatory framework spans multiple laws addressing different aspects of AI development, deployment, and use. While Governor Newsom vetoed the controversial SB 1047 (targeting frontier AI models), California enacted numerous other AI laws that collectively create the nation's most comprehensive state-level framework.
Key California AI Laws:
AB 1008 (Effective January 1, 2025) - Amended the California Consumer Privacy Act (CCPA) to include AI systems capable of outputting personal information in the definition of "personal information." This seemingly small change brings AI systems into CCPA's requirements for notice, consent, data subject rights, and reasonable security controls.1
SB 420 (Pending) - Would establish an AI "bill of rights" requiring impact assessments and transparency measures for AI systems deployed in California.
SB 243 (Pending) - Aims to protect minors from manipulative chatbot systems through disclosure requirements and design restrictions.
AB 1018 (Pending) - Would require strict disclosure and fairness requirements for AI used in high-stakes decisions like hiring, housing, credit, and insurance.
Bot Disclosure Law - Requires disclosure when using bots to sell goods/services or influence elections. Penalties include a $1,000 fine per violation, with civil enforcement by state and local prosecutors (no private right of action).
Deepfake Laws - Multiple statutes addressing deepfakes in different contexts (elections, intimate images, defamation), with enforcement ranging from civil penalties to criminal charges depending on severity.
California Compliance Costs (Estimated):
- Small AI companies (<50 employees): $75,000-$150,000 annually (compliance staff time, legal counsel, documentation systems)
- Medium AI companies (50-250 employees): $200,000-$500,000 annually (dedicated compliance officer, third-party audits, enhanced documentation)
- Large AI companies (>250 employees): $750,000-$2,000,000 annually (full compliance team, continuous monitoring, legal representation)
Penalties: Vary by law; CCPA violations up to $7,500 per intentional violation; bot disclosure $1,000 per violation; deepfake laws range from civil penalties to criminal prosecution.
Strategic considerations: California law effectively sets minimum national standards. Even non-California companies should comply if serving California customers (which most national companies do). Federal legislation, if enacted, will likely adopt California's framework.
2. Colorado: First Comprehensive AI Anti-Discrimination Law
On May 17, 2024, Colorado enacted SB 24-205, the Colorado Anti-Discrimination in AI Law (ADAI), making Colorado the first state in the nation to enact broad restrictions on private companies using AI. After subsequent amendments, the law's effective date was delayed to June 30, 2026.2
Coverage: "High-risk AI systems" - AI systems that make or are a substantial factor in making "consequential decisions" concerning consumers in areas including:
- Education, employment, financial services, healthcare
- Housing, insurance, legal services
- Essential government services
Key Requirements:
For Developers (those who develop or substantially modify high-risk AI):
- Use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination
- Provide detailed documentation to deployers about:
  - The system's intended uses and known limitations
  - Data types used for training
  - Description of transparency measures
  - How deployers can use the system to minimize algorithmic discrimination risks
- Make annual statements to Attorney General documenting compliance
- Disclose known or reasonably foreseeable algorithmic discrimination risks within 90 days of discovery
For Deployers (those who deploy high-risk AI affecting Colorado consumers):
- Use reasonable care to protect consumers from algorithmic discrimination
- Conduct annual impact assessments reviewing whether system causes algorithmic discrimination
- Provide clear notice to consumers when high-risk AI makes or substantially contributes to consequential decisions
- Implement reasonable management policies and practices governing use
- Provide opportunity for consumers to appeal adverse decisions and access human review
Algorithmic Discrimination Defined: Any condition where deployment of AI system results in unlawful differential treatment or impact that disfavors individuals based on protected classifications (race, color, ancestry, religion, sex, national origin, disability, age, sexual orientation).
Enforcement: Exclusive enforcement by the Colorado Attorney General. Violations constitute unfair trade practices under the Colorado Consumer Protection Act, with penalties up to $20,000 per violation. Each day of non-compliance may constitute a separate violation. No private right of action.
Compliance Timeline:
- Now - June 30, 2026: Prepare impact assessment procedures, transparency documentation, notice systems
- June 30, 2026: Compliance required for all covered systems
- Ongoing: Annual impact assessments, continuous monitoring, Attorney General statements
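The developer-to-deployer documentation obligations listed above can be sketched as a simple record. The class, field names, and completeness check below are illustrative assumptions tracking the summary in this article, not language from the statute:

```python
# Illustrative record of the deployer-facing documentation Colorado SB 24-205
# requires from developers of high-risk AI. Field names track the summary
# above; the data structure itself is hypothetical.
from dataclasses import dataclass

@dataclass
class DeployerDocumentation:
    intended_uses: list[str]
    known_limitations: list[str]
    training_data_types: list[str]       # e.g. "resume text", "credit history"
    transparency_measures: str
    discrimination_risk_guidance: str    # how deployers minimize discrimination risk

    def is_complete(self) -> bool:
        # Deployer-side sanity check: every required section is non-empty.
        return all([self.intended_uses, self.known_limitations,
                    self.training_data_types, self.transparency_measures,
                    self.discrimination_risk_guidance])

doc = DeployerDocumentation(
    intended_uses=["resume screening"],
    known_limitations=["not validated for non-English resumes"],
    training_data_types=["historical hiring records"],
    transparency_measures="model card shared with deployers",
    discrimination_risk_guidance="run quarterly disparate-impact checks",
)
print(doc.is_complete())  # True
```

A deployer receiving incomplete documentation has a concrete gap to flag back to the developer before the June 30, 2026 compliance date.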
Colorado Compliance Costs:
- Small deployers: $50,000-$100,000 (initial impact assessment, documentation, notice implementation)
- Medium deployers: $150,000-$300,000 (dedicated resources, third-party assessments, enhanced monitoring)
- Large deployers/Developers: $400,000-$800,000 (comprehensive compliance program, legal counsel, continuous auditing)
3. Texas: Responsible AI Governance Act (TRAIGA)
On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law, effective January 1, 2026 - six months before Colorado's law takes effect. While narrower than earlier drafts, TRAIGA establishes significant guardrails for AI development and government use.3
Key Prohibitions:
TRAIGA prohibits the intentional development or deployment of AI systems to:
- Produce or disseminate child sexual abuse material
- Create unlawful sexually explicit deepfake content
- Generate explicit text-based conversations impersonating minors
- Discriminate unlawfully against individuals
- Impair constitutional rights
- Incite harmful or criminal acts
Transparency Requirements:
For Government Use: State agencies must provide "clear and conspicuous notice" to individuals when they are interacting with an AI system. This creates transparency obligations for government AI deployment but not private sector use.
For Private Sector: TRAIGA does not impose similar notification requirements on private companies, creating asymmetry between public and private sector obligations.
Texas Deepfake Laws:
Senate Bill 441 (SB 441) - Raises the offense of creating sexually explicit deepfakes, or threatening someone with them, to a Class A misdemeanor. Criminalizes threatening to create intimate deepfakes to coerce, extort, harass, or intimidate.
House Bill 581 (HB 581) - Assigns civil liability to operators of websites or applications used to create deepfakes of minors. First law in the United States to hold platform operators liable for deepfake creation tools.
Enforcement: Criminal prosecution for prohibited AI uses; civil liability for platform operators enabling minor deepfakes. No specified civil penalties for TRAIGA violations, but potential criminal charges for intentional development of prohibited systems.
Compliance Requirements:
- Audit AI systems for prohibited purposes (discrimination, constitutional violations, criminal incitement)
- If operating government-facing AI, implement notice systems
- If operating deepfake creation tools, implement age verification and content restrictions
Texas Compliance Costs:
- AI developers: $40,000-$80,000 (audit for prohibited purposes, documentation)
- Government contractors: $60,000-$120,000 (notice implementation, compliance verification)
- Platform operators: $100,000-$250,000 (content moderation systems, age verification, legal liability assessment)
Penalties:
- Criminal prosecution for prohibited AI development (penalties vary by specific violation)
- Civil liability for platform operators (damages determined by court)
- Potential injunctive relief prohibiting deployment
4. New York: Employment AI Bias Audits
New York City enacted Local Law 144, the nation's first AI hiring bias audit requirement, effective July 5, 2023. While this is a city ordinance rather than a statewide law, NYC's size and influence make it effectively mandatory for companies hiring in the New York metropolitan area.4
Coverage: "Automated employment decision tools" (AEDTs) - any computational process derived from machine learning, statistical modeling, data analytics, or AI that issues simplified output used to substantially assist or replace discretionary decision-making for:
- Hiring employees or independent contractors
- Promoting current employees
Key Requirements:
Bias Audits: Employers using AEDTs must have tools audited annually by an independent auditor for bias. Audits must assess differential impact by race/ethnicity and sex, calculating selection rates and impact ratios.
Public Disclosure: Organizations must publicly post on employment section of website:
- Date of most recent bias audit
- Summary of audit results (selection rates, impact ratios)
- Distribution date of AEDT
Candidate Notice: Employers must notify candidates or employees at least 10 days before using an AEDT in an employment decision.
Alternative Process: Must provide opportunity for individuals to request alternative selection process or accommodation.
Data Retention: Must retain audit documentation and make available upon request.
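The selection-rate and impact-ratio arithmetic these audits report can be sketched in a few lines. The function name and input shape below are illustrative, not drawn from any official audit tooling:

```python
# Sketch of the selection-rate / impact-ratio arithmetic reported in a
# Local Law 144 bias audit. A selection rate is selected / total applicants
# for a demographic category; an impact ratio divides each group's selection
# rate by the highest group's selection rate.

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps a demographic category to (selected, total_applicants)."""
    rates = {group: sel / total for group, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

ratios = impact_ratios({
    "group_a": (40, 100),   # 40% selection rate
    "group_b": (20, 100),   # 20% selection rate
})
print(ratios)  # {'group_a': 1.0, 'group_b': 0.5}
```

An impact ratio well below 1.0 for a group is the kind of differential impact the audit summary must disclose publicly.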
Enforcement: NYC Department of Consumer and Worker Protection. Civil penalties:
- First violation: $500 per instance
- Subsequent violations: Up to $1,500 per instance
- Each day of continued violation = separate instance
2025 State-Level Developments:
New York State Legislature is considering bills that would:
- Extend bias audit requirements statewide (beyond NYC)
- Increase transparency requirements (algorithm explanation, training data disclosure)
- Create private right of action (allowing applicants/employees to sue employers and AI vendors)
- Expand coverage to promotion, performance evaluation, and termination decisions
New York Compliance Costs:
- Small employers (using vendor AEDT): $15,000-$30,000 (audit costs, notice implementation, documentation)
- Medium employers (custom AEDT): $40,000-$75,000 (independent audit, legal review, process modifications)
- Large employers (multiple AEDTs): $100,000-$200,000 (comprehensive audit program, dedicated compliance resources)
- AI vendors selling to NY employers: $50,000-$150,000 (audit certifications, customer documentation, legal counsel)
Strategic Considerations: Even without statewide requirement, many employers adopt NYC standards company-wide to avoid maintaining different hiring systems. Major AI hiring platforms (HireVue, Pymetrics, etc.) now include bias audit certifications as standard offerings.
5. Illinois: BIPA Extended to AI Systems
Illinois's Biometric Information Privacy Act (BIPA), originally enacted in 2008, remains the nation's most protective biometric privacy law and the only one allowing private lawsuits (California's CCPA, by contrast, limits private actions to certain data breaches).5
Coverage: Any entity that collects, captures, purchases, or otherwise obtains biometric identifiers or biometric information.
Biometric Identifiers Include:
- Retina or iris scans
- Fingerprints, voiceprints, scans of hand or face geometry
- Any other identifier based on individual's biological characteristics
AI Systems Using Biometric Data: Many AI systems—particularly facial recognition, emotion detection, and identity verification tools—fall squarely within BIPA's scope. The law's application to AI has led to major enforcement actions and settlements.
Key Requirements:
Written Policy: Must publicly post written policy establishing retention schedule and destruction guidelines for biometric data.
Informed Consent: Must obtain informed written consent before collecting biometric data, including:
- Specific disclosure of what biometric data is being collected
- Specific purpose and length of collection
- Written release from individual
No Sale or Profit: Prohibited from selling, leasing, trading, or otherwise profiting from biometric data.
Data Security: Must use reasonable standard of care (at least same standard as for other confidential information) to protect biometric data from disclosure.
Major 2024 Amendment:
Senate Bill 2979 (effective August 2024) amended BIPA's damages provision to limit recovery to a single violation for the same method of collection, drastically reducing potential damages exposure. Previously, each scan or collection constituted a separate violation, enabling multi-million-dollar damages in class actions.
2025 Clearview AI Settlement:
In March 2025, a federal judge approved a landmark $51.75 million settlement in class action against Clearview AI for BIPA violations. Settlement also barred Clearview from granting access to Illinois state and local agencies for five years.6
Enforcement: Private right of action (individuals can sue directly). Statutory damages:
- Negligent violation: $1,000 per violation
- Intentional/reckless violation: $5,000 per violation
- Attorney's fees and costs to prevailing plaintiff
Post-Amendment Impact: While the 2024 amendment limits damages (a single violation per collection method rather than per instance), BIPA remains the most expensive biometric privacy law due to its private right of action and statutory damages.
Illinois Compliance Costs:
- AI companies using facial recognition: $80,000-$150,000 (consent systems, policy documentation, data security audit, legal counsel)
- Emotion detection/biometric AI: $100,000-$200,000 (comprehensive consent infrastructure, enhanced security, risk assessment)
- Settlement/litigation risk reserve: $500,000-$5,000,000 (depending on deployment scale and user base)
Strategic Considerations: Many AI companies avoid collecting biometric data from Illinois users entirely. Others implement geofencing to disable biometric features for Illinois users. Class action risk remains substantial despite 2024 amendment.
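The "avoid Illinois entirely" and consent-first strategies just described amount to a feature gate. The region codes, function name, and consent flag below are assumptions for illustration, not a reference implementation:

```python
# Illustrative feature gate for the BIPA strategies described above:
# geofence Illinois out of biometric features entirely, and elsewhere
# require informed written consent before any biometric collection.

BIOMETRIC_RESTRICTED_REGIONS = {"US-IL"}  # BIPA: disable features for Illinois users

def biometric_features_allowed(user_region: str, has_written_consent: bool) -> bool:
    if user_region in BIOMETRIC_RESTRICTED_REGIONS:
        return False                 # geofenced regardless of consent
    return has_written_consent       # consent-first everywhere else

print(biometric_features_allowed("US-IL", True))   # False: geofenced
print(biometric_features_allowed("US-TX", False))  # False: no written consent yet
print(biometric_features_allowed("US-TX", True))   # True
```

In practice the region check would rely on account data rather than IP geolocation alone, since misclassifying an Illinois resident defeats the geofence.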
6. Connecticut: Consumer Protection Framework
Connecticut is advancing SB 2, the state legislature's second attempt at comprehensive AI regulation ("An Act Concerning Artificial Intelligence"). The bill regulates private sector use of AI with a focus on consumer protection and algorithmic discrimination.7
Key Provisions (Pending - Various Effective Dates Starting July 1, 2025):
Developer Requirements:
- Protect consumers against algorithmic discrimination in high-risk AI systems
- Conduct impact assessments before deployment
- Implement AI risk mitigation policies
- Provide transparency disclosures regarding:
  - System capabilities and limitations
  - Training data sources and characteristics
  - Known risks and mitigation measures
Deployer Requirements:
- Conduct impact assessments for high-risk systems
- Provide notice to consumers when AI makes consequential decisions
- Implement management and oversight procedures
- Maintain documentation of AI system use and impacts
High-Risk AI Definition: Systems that make, or are a substantial factor in making, decisions with legal or similarly significant effects concerning:
- Education, employment, financial services
- Healthcare, housing, insurance
- Access to essential services
Algorithmic Discrimination: Similar to Colorado definition - unlawful differential treatment or impact based on protected characteristics.
Connecticut Compliance Timeline & Costs:
- July 1, 2025: Initial provisions effective (pending final passage)
- Initial compliance (if enacted): $60,000-$120,000 (impact assessments, documentation systems, notice implementation)
- Ongoing annual costs: $40,000-$80,000 (continuous monitoring, updated impact assessments, legal review)
Status: Pending final legislative votes. Connecticut's framework closely mirrors Colorado's approach, suggesting multi-state convergence on common requirements.
7. Massachusetts: Attorney General Guidance
Massachusetts has not enacted AI-specific legislation, but Attorney General Andrea Joy Campbell issued comprehensive guidance on April 16, 2024, outlining how existing consumer protection laws apply to AI systems.8
Key Guidance Provisions:
Developer Obligations:
- Cannot falsely advertise AI capabilities or reliability
- Must ensure AI systems perform as represented
- Prohibited from misrepresenting safety or accuracy
- Must disclose material limitations
Supplier/Vendor Obligations:
- Accurate marketing of AI tools and services
- Transparent disclosure of system limitations
- Proper training and support for deployers
User/Deployer Obligations:
- Cannot deploy AI in ways that violate consumer protection laws
- Responsible for outcomes of AI-driven decisions
- Must maintain human oversight for material decisions
Enforcement: Massachusetts Consumer Protection Act (Chapter 93A) violations can result in:
- Civil penalties up to $5,000 per violation
- Injunctive relief
- Actual damages (in private actions)
- Attorney's fees to prevailing plaintiffs
- Treble damages for willful violations
Massachusetts Strategic Approach:
Rather than enacting new AI-specific laws, Massachusetts applies its existing, robust consumer protection framework to AI systems. This approach offers:
- Immediate applicability - No waiting for legislation
- Flexibility - Adapts to evolving AI technologies
- Enforcement precedent - Decades of consumer protection case law
- Broader scope - Covers AI uses not addressed by targeted legislation
Massachusetts Compliance Costs:
- AI developers/vendors: $30,000-$60,000 (marketing review, capability testing, disclosure development)
- AI deployers: $20,000-$40,000 (oversight procedures, documentation, vendor due diligence)
- Litigation risk management: $50,000-$150,000 (legal counsel, compliance audit, Chapter 93A risk assessment)
Strategic Considerations: Massachusetts demonstrates that comprehensive AI-specific legislation is not necessary for robust regulation. Other states may follow this "apply existing law" approach.
8. Utah: Disclosure-Focused Framework
Utah enacted two AI laws in 2024-2025 establishing a disclosure-based regulatory framework:9
SB 149: Artificial Intelligence Policy Act
Key Requirements:
- Disclosure to consumers when interacting with generative AI products
- Notification must be clear and conspicuous
- Applies to consumer-facing generative AI (chatbots, content generators, etc.)
HB 452: AI-Supported Mental Health Chatbot Regulations
Specific Requirements for Mental Health AI:
- Advertising ban: Prohibited from advertising products or services during user interactions
- Data privacy: Cannot share users' personal information with third parties
- Disclosure requirements: Must clearly identify as AI system (not human therapist)
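As a toy illustration, the three HB 452 rules above reduce to a simple session-policy check. The function name and session fields are hypothetical, not drawn from the statute or any real system:

```python
# Toy policy check for the mental-health-chatbot rules summarized above:
# AI disclosure shown, no in-session advertising, no third-party sharing
# of users' personal information. All names here are hypothetical.

def hb452_compliant(session: dict) -> bool:
    return (
        session.get("ai_disclosure_shown", False)     # must identify as AI
        and not session.get("ads_served", False)      # advertising ban
        and not session.get("shares_personal_data", False)  # no third-party sharing
    )

print(hb452_compliant({"ai_disclosure_shown": True}))                      # True
print(hb452_compliant({"ai_disclosure_shown": True, "ads_served": True}))  # False
```

A check like this belongs in session setup, so a non-compliant configuration fails before any user interaction occurs.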
Unique Utah Innovation: Regulatory Mitigation Agreements
Utah created an Office of Artificial Intelligence Policy empowered to negotiate regulatory mitigation agreements with companies. Benefits include:
- Reduced fines for violations
- Cure periods before penalties assessed
- Collaborative compliance approach
- Regulatory certainty through formal agreements
Enforcement: Utah Division of Consumer Protection. Penalties up to $2,500 per violation.
Utah Compliance Costs:
- Generative AI products: $15,000-$30,000 (disclosure implementation, user interface modifications)
- Mental health chatbots: $40,000-$80,000 (enhanced privacy controls, advertising restrictions, regulatory agreement negotiation)
- Regulatory mitigation agreement: $10,000-$25,000 (legal counsel for negotiation)
Strategic Considerations: Utah's collaborative approach (regulatory mitigation agreements) offers potential model for reducing compliance costs while achieving regulatory goals. Companies operating in Utah should proactively engage with Office of AI Policy.
9. Virginia: Privacy Law Extensions
Virginia has not enacted standalone AI legislation but extended its comprehensive consumer privacy law (Virginia Consumer Data Protection Act - VCDPA) to cover AI systems that process personal data.
Key AI-Related Provisions:
Data Protection Impact Assessments Required for:
- Processing personal data for targeted advertising
- Sale of personal data
- Profiling that presents a reasonably foreseeable risk of:
  - Unfair or deceptive treatment
  - Financial, physical, or reputational injury
  - Intrusion upon seclusion or private affairs
  - Other substantial injury
AI Systems Frequently Trigger Assessment Requirements:
- Algorithmic decision-making systems
- Automated profiling for credit, employment, housing
- Personalization engines using sensitive data
- Predictive analytics affecting consumers
Consumer Rights:
- Right to access data used in AI decision-making
- Right to correct inaccurate data
- Right to delete personal data
- Right to opt-out of profiling/targeted advertising
- Right to obtain meaningful information about processing logic
Enforcement: Virginia Attorney General. Civil penalties up to $7,500 per violation.
Virginia Compliance Costs:
- AI systems processing VA consumer data: $50,000-$100,000 (data protection impact assessments, privacy infrastructure, consumer rights implementation)
- Ongoing annual costs: $30,000-$60,000 (updated assessments, rights request handling, documentation maintenance)
Strategic Considerations: Virginia's approach shows AI regulation can be achieved through privacy law extensions rather than AI-specific legislation. Companies complying with comprehensive state privacy laws (CA, VA, CO) are partially prepared for AI-specific requirements.
10. Washington: Opposition to Federal Preemption
Washington has not yet enacted comprehensive AI legislation but has been highly active in opposing federal preemption efforts. Democratic Senator Maria Cantwell and Attorney General Nick Brown have strongly opposed federal moratorium proposals, signaling Washington's intention to regulate AI at the state level.
Current Legislative Activity:
- Multiple AI bills introduced in 2025 session
- Focus areas: algorithmic discrimination, deepfakes, data privacy
- Expected passage of AI-specific legislation in 2026 session
Washington's Expected Framework (Based on Legislative Proposals):
- Algorithmic bias testing requirements
- Transparency and disclosure obligations
- Data protection for AI training data
- Restrictions on AI use in sensitive domains (law enforcement, healthcare, education)
Timeline Prediction:
- 2026: Comprehensive AI legislation likely passes
- Effective dates: Staggered implementation (2027-2028)
Washington Compliance Costs (Estimated for Future Requirements):
- Similar to Colorado/Connecticut framework: $75,000-$150,000 initial compliance
- Ongoing: $50,000-$100,000 annually
Strategic Considerations: Washington's strong opposition to federal preemption and active legislative development suggest major AI regulation coming in 2026. Companies should monitor Washington legislative developments closely.
50-State AI Regulation Summary Table
The following table summarizes AI-related laws and regulatory status for all 50 states, DC, Puerto Rico, and the U.S. Virgin Islands as of October 2025:
| State | Primary AI Law(s) | Effective Date | Key Requirements | Penalties | Status |
|---|---|---|---|---|---|
| Alabama | None enacted | - | None (monitoring federal developments) | - | No current regulation |
| Alaska | None enacted | - | None (introduced bills did not pass) | - | No current regulation |
| Arizona | Deepfake disclosure law | Jan 1, 2025 | Disclosure required for political deepfakes 90 days before election | Civil penalties | Enacted |
| Arkansas | None enacted | - | None (2025 bills pending) | - | Legislation pending |
| California | Multiple (AB 1008, Bot Disclosure, Deepfakes) | Various (2025-2026) | CCPA extension to AI, bot disclosure, deepfake restrictions, high-stakes decision transparency | $1,000-$7,500 per violation depending on law | Multiple laws enacted |
| Colorado | SB 24-205 (ADAI) | June 30, 2026 | Algorithmic discrimination prevention, impact assessments, transparency notices | $20,000 per violation | Enacted |
| Connecticut | SB 2 (pending) | July 1, 2025 (proposed) | Impact assessments, transparency, anti-discrimination | To be determined | Pending final passage |
| Delaware | None enacted | - | None (study commission created) | - | Study phase |
| Florida | Deepfake restrictions | July 1, 2025 | Political and intimate deepfake prohibitions | Criminal penalties | Enacted |
| Georgia | None enacted | - | None (2025 proposals under consideration) | - | Legislation pending |
| Hawaii | None enacted | - | None | - | No current regulation |
| Idaho | None enacted | - | None | - | No current regulation |
| Illinois | BIPA (extended to AI) | In effect (2008, amended 2024) | Biometric data consent, security, no sale | $1,000-$5,000 per violation (private right of action) | Enacted |
| Indiana | None enacted | - | None (2025 bills introduced) | - | Legislation pending |
| Iowa | None enacted | - | None | - | No current regulation |
| Kansas | None enacted | - | None | - | No current regulation |
| Kentucky | Limited AI disclosure | Jan 1, 2025 | Disclosure for AI-generated content in certain contexts | Civil penalties | Enacted |
| Louisiana | None enacted | - | None (study ongoing) | - | Study phase |
| Maine | None enacted | - | None (consumer protection focus) | - | No AI-specific law |
| Maryland | None enacted | - | None (2025 bills pending) | - | Legislation pending |
| Massachusetts | AG Guidance | April 16, 2024 | Existing consumer protection law applies to AI (no false advertising, performance guarantees) | $5,000 per violation (Chapter 93A) | Guidance issued |
| Michigan | None enacted | - | None (introduced bills did not pass) | - | No current regulation |
| Minnesota | Deepfake law | Aug 1, 2024 | Restrictions on deepfakes in elections and intimate imagery | Civil and criminal penalties | Enacted |
| Mississippi | None enacted | - | None | - | No current regulation |
| Missouri | None enacted | - | None | - | No current regulation |
| Montana | None enacted | - | None (2025 bills introduced) | - | Legislation pending |
| Nebraska | LB 504 | Jan 1, 2026 (enforcement July 1, 2026) | Consumer protection provisions for AI | $50,000 per violation | Enacted |
| Nevada | None enacted | - | None (privacy law may extend to AI) | - | No AI-specific law |
| New Hampshire | None enacted | - | None | - | No current regulation |
| New Jersey | None enacted | - | None (multiple 2025 bills pending) | - | Legislation pending |
| New Mexico | None enacted | - | None | - | No current regulation |
| New York | NYC Local Law 144 | July 5, 2023 | AI hiring bias audits (NYC only); state bills pending | $500-$1,500 per instance | NYC enacted; state bills pending |
| North Carolina | None enacted | - | None (study commission active) | - | Study phase |
| North Dakota | None enacted | - | None | - | No current regulation |
| Ohio | None enacted | - | None (2025 bills introduced) | - | Legislation pending |
| Oklahoma | None enacted | - | None | - | No current regulation |
| Oregon | None enacted | - | None (2025 bills pending) | - | Legislation pending |
| Pennsylvania | None enacted | - | None (study ongoing) | - | Study phase |
| Rhode Island | None enacted | - | None (2025 bills introduced) | - | Legislation pending |
| South Carolina | None enacted | - | None | - | No current regulation |
| South Dakota | None enacted | - | None | - | No current regulation |
| Tennessee | Deepfake law | July 1, 2024 | ELVIS Act - protects voice and likeness from AI replication | Civil penalties | Enacted |
| Texas | TRAIGA, SB 441, HB 581 | Jan 1, 2026 (TRAIGA) | Prohibited AI uses, government transparency, deepfake restrictions, platform liability | Criminal (prohibited uses); civil (platform liability) | Enacted |
| Utah | SB 149, HB 452 | In effect (2024-2025) | Generative AI disclosure, mental health chatbot restrictions | $2,500 per violation | Enacted |
| Vermont | None enacted | - | None (consumer protection focus) | - | No AI-specific law |
| Virginia | VCDPA extensions | In effect | Data protection impact assessments for AI profiling | $7,500 per violation | Enacted (through privacy law) |
| Washington | None enacted | - | None (major legislation expected 2026) | - | Legislation expected |
| West Virginia | None enacted | - | None | - | No current regulation |
| Wisconsin | None enacted | - | None (2025 bills introduced) | - | Legislation pending |
| Wyoming | None enacted | - | None | - | No current regulation |
| District of Columbia | None enacted | - | None (bills introduced) | - | Legislation pending |
| Puerto Rico | None enacted | - | None (bills introduced) | - | Legislation pending |
| U.S. Virgin Islands | None enacted | - | None (bills introduced) | - | Legislation pending |
Key Insights from 50-State Analysis:
- Geographic Concentration: Most comprehensive AI regulation concentrated in California, Colorado, Texas, New York, and Illinois - representing 40%+ of U.S. population
- Common Requirements Emerging: Transparency/disclosure, bias testing, impact assessments appearing across multiple state frameworks
- Deepfake Focus: Narrowest regulations (20+ states) address deepfakes in elections and intimate imagery - relatively easy consensus issue
- Study Commissions: 8+ states created study commissions rather than immediate legislation, suggesting more comprehensive laws coming 2026-2027
- Privacy Law Extensions: Several states (Virginia, Nevada) regulating AI through existing comprehensive privacy laws rather than AI-specific legislation
Common Requirements Across State AI Laws
Despite variation in specifics, several core requirements appear across multiple state AI frameworks:
1. Transparency and Disclosure
What It Requires:
- Clear notification when AI makes or substantially contributes to consequential decisions
- Disclosure of AI system capabilities and limitations
- Transparency about training data sources and characteristics
- Publication of bias testing results (in some jurisdictions)
Appears In: California (various laws), Colorado, Connecticut, Texas (government use), Utah, New York (employment), Nebraska
Implementation:
- User-facing notice systems (in-app notifications, website disclosures)
- Public transparency reports (posted on company website)
- Individual decision explanations (upon request or automatically)
Cost Range: $20,000-$80,000 for disclosure systems; $10,000-$30,000 annually for transparency reporting
2. Bias Testing and Impact Assessments
What It Requires:
- Pre-deployment testing for algorithmic discrimination
- Assessment of disparate impact by protected characteristics
- Evaluation of risks to consumer rights and safety
- Documentation of testing methodology and results
- Annual re-assessment of deployed systems
Appears In: Colorado, Connecticut, New York (employment), Virginia (data protection assessments), California (pending bills)
Implementation:
- Internal testing using representative datasets
- Third-party audits by independent assessors
- Statistical analysis of selection rates and impact ratios
- Documentation systems for audit trails
Cost Range:
- Internal testing: $30,000-$75,000 per system
- Third-party audits: $50,000-$150,000 per system annually
- Comprehensive program (multiple systems): $200,000-$500,000 annually
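The "statistical analysis of selection rates and impact ratios" step can be sketched numerically. Below is a minimal illustration of the four-fifths rule commonly used as a benchmark in employment bias audits; the data and 0.8 threshold are illustrative only, not a statement of any state's legal standard:

```python
from collections import Counter

def impact_ratios(outcomes):
    """Compute each group's selection rate relative to the highest-rate group.

    `outcomes` is a list of (group, was_selected) pairs from an AI screening
    system. Under the four-fifths rule, a ratio below 0.8 flags potential
    adverse impact warranting further review.
    """
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += was_selected
    rates = {g: selected[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical audit sample: group A selected 60/100, group B selected 40/100.
outcomes = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 40 + [("B", 0)] * 60
ratios = impact_ratios(outcomes)
flagged = sorted(g for g, r in ratios.items() if r < 0.8)
print(ratios, flagged)  # B's ratio is 0.4/0.6 = 0.667, below the 0.8 threshold
```

A real program would pair this screening statistic with significance testing and documentation of methodology, as the recordkeeping requirements below contemplate.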
3. Human Review and Appeal Rights
What It Requires:
- Opportunity for human review of AI-driven decisions
- Appeal process for adverse decisions
- Alternative selection processes (in employment context)
- Meaningful human oversight of automated systems
Appears In: Colorado, New York (employment), Connecticut (pending), California (pending)
Implementation:
- Human-in-the-loop workflows for material decisions
- Appeal submission and review processes
- Escalation procedures for AI system overrides
- Training for human reviewers on AI limitations
Cost Range: $40,000-$100,000 for appeal infrastructure; $50,000-$150,000 annually for staffing human review
4. Data Protection and Security
What It Requires:
- Reasonable security measures for AI systems and training data
- Protection of personal information processed by AI
- Data minimization (collect only necessary information)
- Retention limits and deletion procedures
- Security breach notification
Appears In: Illinois (BIPA), Virginia (VCDPA), California (CCPA/CPRA), Massachusetts (consumer protection)
Implementation:
- Encryption of AI training data and model weights
- Access controls and authentication
- Regular security audits
- Incident response procedures
- Data inventory and retention schedules
Cost Range: $60,000-$150,000 initial security infrastructure; $40,000-$100,000 annually for maintenance and audits
5. Prohibited Uses and Content Restrictions
What It Requires:
- Restrictions on AI use for unlawful discrimination
- Prohibitions on deepfakes (elections, intimate imagery)
- Restrictions on biometric data collection
- Prohibitions on child exploitation content
- Limits on AI in sensitive domains (sometimes)
Appears In: Texas (prohibited AI development), Illinois (biometric restrictions), 20+ states (deepfake laws), California (various restrictions)
Implementation:
- Use case audits and restrictions
- Content moderation systems (for platforms)
- Age verification and access controls
- Prohibited use monitoring and enforcement
- Legal review of deployment contexts
Cost Range: $25,000-$75,000 for use case restrictions; $100,000-$300,000 for platform content moderation systems
6. Documentation and Recordkeeping
What It Requires:
- Maintenance of impact assessment records
- Documentation of bias testing methodologies and results
- Records of AI system modifications and updates
- Training data documentation
- Consumer notice records
- Retention of documentation for regulatory inspection (typically 3+ years)
Appears In: Colorado, Connecticut, New York, California, Virginia
Implementation:
- Document management systems
- Automated recordkeeping for AI decisions
- Audit trail infrastructure
- Regular documentation reviews and updates
Cost Range: $30,000-$70,000 for documentation systems; $20,000-$50,000 annually for maintenance
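As one illustration of "automated recordkeeping for AI decisions," here is a sketch of a tamper-evident per-decision audit record. All field names and the retention figure are hypothetical assumptions for illustration, not elements drawn from any particular statute:

```python
import json, hashlib
from datetime import datetime, timedelta, timezone

# Illustrative retention window; several state frameworks contemplate 3+ years.
RETENTION = timedelta(days=3 * 365)

def log_decision(system_id, inputs_summary, outcome, notice_sent):
    """Build an audit record for one AI-driven decision.

    A SHA-256 digest over the canonical JSON makes later tampering detectable.
    Summarize inputs rather than storing raw personal data, consistent with
    data-minimization requirements.
    """
    record = {
        "system_id": system_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs_summary": inputs_summary,
        "outcome": outcome,
        "consumer_notice_sent": notice_sent,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    return record

rec = log_decision("credit-model-v3", "income + history features", "declined", True)
```

A production schema should track the specific documentation elements each applicable state law enumerates, plus the retention and deletion schedule counsel specifies.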
Compliance Framework for Multi-State Operations
Navigating AI laws across 50-plus jurisdictions requires a strategic compliance framework that balances legal obligations, operational efficiency, and business goals.
Strategy 1: Adopt Highest Common Denominator
Approach: Comply with most stringent state requirements across all operations, creating uniform national standards.
When It Works:
- Uniform products/services: When you cannot economically maintain state-specific AI systems
- California operations: If serving the California market (which most national companies do), California requirements effectively become the national floor
- Enterprise customers: When major customers demand compliance with strictest standards regardless of legal necessity
- Brand positioning: When positioning as AI safety/ethics leader
Advantages:
- Operational simplicity: Single compliance program, no geographic complexity
- Future-proofing: Prepared for additional state laws and eventual federal legislation
- Competitive positioning: "Certified compliant" with strictest standards
- Risk reduction: Eliminates risk of state-specific non-compliance
Disadvantages:
- Higher costs: Paying for strictest requirements even in states without legal obligation
- Slower innovation: Most burdensome requirements may slow product development
- Overinvestment: May exceed legal requirements in many jurisdictions
Recommended For: Large AI companies, companies serving California + 10+ other states, enterprise-focused companies, companies seeking industry leadership positioning
Implementation Costs:
- Initial: $300,000-$750,000 (comprehensive compliance infrastructure)
- Annual: $200,000-$500,000 (ongoing monitoring, audits, documentation, legal counsel)
Strategy 2: Tiered Compliance by State
Approach: Implement different compliance levels based on state requirements, maintaining separate systems/processes for different jurisdictions.
When It Works:
- Distinct product lines: When different AI systems serve different markets
- Geographic targeting: When you can reliably identify user location
- Technical feasibility: When you can geofence features or maintain state-specific versions
- Cost sensitivity: When compliance costs would be prohibitive at highest common denominator
Advantages:
- Cost optimization: Pay for compliance only where legally required
- Faster innovation: Can deploy advanced features in less-regulated states first
- Tailored approach: Customize compliance to specific state frameworks
Disadvantages:
- Operational complexity: Managing multiple compliance programs simultaneously
- Technical overhead: Geofencing, state-specific features, version control
- User experience: Inconsistent features across states may confuse users
- Regulatory risk: Geolocation failures could expose to liability in stricter states
Recommended For: Mid-sized AI companies, companies with distinct product lines, companies with primarily local/regional user bases, cost-constrained startups
Implementation Costs:
- Initial: $150,000-$400,000 (tiered compliance infrastructure, geofencing, legal analysis)
- Annual: $100,000-$300,000 (multi-state monitoring, state-specific audits, technical maintenance)
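Operationally, a tiered approach reduces to mapping each resolved user location to an obligation set, with a safe fallback when geolocation fails. A minimal sketch follows; the state assignments and obligation labels are illustrative assumptions, not legal conclusions:

```python
# Hypothetical tier map: which compliance obligations attach in each state.
STATE_OBLIGATIONS = {
    "CO": {"impact_assessment", "consumer_notice", "appeal_process"},
    "TX": {"prohibited_use_review", "deepfake_restrictions"},
    "IL": {"biometric_consent"},
}
BASELINE = {"security_controls", "recordkeeping"}  # enforced everywhere

def obligations_for(state: str) -> set[str]:
    """Return the obligation set to enforce for a user resolved to `state`.

    Unknown or unresolvable locations fall back to the union of all state
    obligations, so a geolocation failure degrades toward over-compliance
    rather than exposure in a stricter jurisdiction.
    """
    if state in STATE_OBLIGATIONS:
        return BASELINE | STATE_OBLIGATIONS[state]
    strictest = set().union(*STATE_OBLIGATIONS.values())
    return BASELINE | strictest

print(obligations_for("CO"))
print(obligations_for("??"))  # unresolved location -> strictest profile
```

The fail-closed fallback addresses the "geolocation failures could expose to liability" risk noted above: the cost of a lookup failure is extra compliance work, not a violation.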
Strategy 3: Strategic Market Selection
Approach: Limit operations to states with favorable (or no) AI regulation, avoiding strictest jurisdictions.
When It Works:
- Early-stage startups: Testing product-market fit with limited resources
- Niche applications: Serving specific industries or use cases with concentrated geography
- B2B focused: When customers are located in specific states
- High compliance sensitivity: When AI use case particularly susceptible to regulation (e.g., emotion detection, biometric identification)
Advantages:
- Minimized compliance costs: Avoid most expensive regulatory regimes
- Faster time-to-market: Launch without comprehensive compliance infrastructure
- Resource focus: Concentrate resources on product development rather than compliance
Disadvantages:
- Limited market access: Excluding California, New York, Texas, and Illinois eliminates 40%+ of the U.S. market
- Scaling challenges: Eventually must address major markets
- Competitive disadvantage: Competitors serving full national market have scale advantages
- Investor concerns: Geographic limitations may reduce valuation and investment appeal
Recommended For: Pre-seed/seed-stage startups, companies testing novel AI applications, B2B companies with geographically concentrated customer base, companies developing AI for eventual acquisition
Implementation Costs:
- Initial: $25,000-$75,000 (basic compliance for selected states, legal terms of service restrictions)
- Annual: $20,000-$60,000 (monitoring selected states, limited auditing)
Strategy 4: Compliance-as-Competitive-Advantage
Approach: Exceed legal requirements, obtaining third-party certifications and building compliance into brand positioning.
When It Works:
- Enterprise sales: When selling to highly regulated industries (healthcare, finance, government)
- Ethical AI positioning: When targeting customers with strong AI ethics/safety values
- Investor appeal: When raising capital from VCs focused on responsible AI
- Talent attraction: When recruiting AI researchers and engineers who prioritize safety/ethics
Advantages:
- Differentiation: Stand out in crowded AI market
- Customer trust: Demonstrate commitment to responsible AI
- Premium pricing: Command higher prices for certified-compliant solutions
- Risk reduction: Proactive compliance reduces enforcement risk
- Future-proofing: Prepared for regulatory evolution
Disadvantages:
- Highest costs: Exceeding legal requirements = maximum investment
- Ongoing commitment: Must maintain certification standards continuously
- Competitive disclosure: Transparency requirements may reveal proprietary information
- Slower iteration: Compliance processes may slow product development
Recommended For: Enterprise AI vendors, companies in regulated industries, AI safety-focused companies, companies seeking premium market positioning
Implementation Costs:
- Initial: $500,000-$1,500,000 (comprehensive compliance program, third-party certifications, audit infrastructure)
- Annual: $300,000-$800,000 (continuous auditing, certification maintenance, enhanced documentation, legal counsel)
Recommended Multi-State Compliance Roadmap
Phase 1: Foundation (Months 1-3)
Objective: Establish baseline understanding and compliance infrastructure
Actions:
1. Jurisdictional Analysis ($15,000-$30,000 legal counsel)
   - Identify all states where you deploy AI systems or serve customers
   - Determine which state laws apply to your specific AI use cases
   - Assess conflicting requirements and compliance gaps
2. Current State Assessment ($20,000-$50,000 internal + external audit)
   - Inventory all AI systems (models, applications, uses)
   - Document current compliance status by jurisdiction
   - Identify high-risk systems requiring immediate attention
3. Strategy Selection ($10,000-$25,000 legal + business consultation)
   - Choose compliance strategy (highest common denominator, tiered, market selection, or competitive advantage)
   - Develop multi-year compliance roadmap
   - Secure executive and board approval with budget
4. Governance Structure ($15,000-$40,000 policy development)
   - Designate compliance officer or cross-functional committee
   - Establish reporting lines and accountability
   - Create escalation procedures for compliance issues
Phase 1 Total Investment: $60,000-$145,000
Phase 2: Implementation (Months 4-9)
Objective: Build compliance infrastructure and implement required systems
Actions:
1. Transparency and Disclosure Systems ($30,000-$100,000)
   - Develop user-facing notice systems
   - Create public transparency reports
   - Implement individual decision explanations
2. Bias Testing and Impact Assessments ($75,000-$200,000)
   - Design testing protocols and methodologies
   - Conduct initial impact assessments for high-risk systems
   - Engage third-party auditors (if required)
   - Document results and mitigation plans
3. Human Review Infrastructure ($50,000-$150,000)
   - Build appeal and review processes
   - Train human reviewers on AI system limitations
   - Implement escalation workflows
4. Data Protection and Security ($60,000-$150,000)
   - Encrypt AI systems and training data
   - Implement access controls
   - Conduct security audits
   - Develop incident response procedures
5. Documentation Systems ($40,000-$80,000)
   - Implement document management for compliance records
   - Create automated recordkeeping for AI decisions
   - Establish retention schedules
Phase 2 Total Investment: $255,000-$680,000
Phase 3: Operationalization (Months 10-12)
Objective: Integrate compliance into ongoing operations
Actions:
1. Continuous Monitoring ($30,000-$80,000)
   - Deploy real-time monitoring for AI system performance
   - Implement automated bias detection
   - Create dashboards for compliance metrics
2. Training and Awareness ($20,000-$50,000)
   - Train development teams on compliance requirements
   - Educate customer-facing teams on disclosure obligations
   - Create compliance culture throughout organization
3. Vendor Management ($15,000-$40,000)
   - Audit third-party AI vendors for compliance
   - Negotiate contractual compliance obligations
   - Establish vendor oversight procedures
4. Regulatory Relations ($25,000-$60,000)
   - Engage with state regulators proactively
   - Participate in industry working groups
   - Monitor emerging legislation and rulemaking
Phase 3 Total Investment: $90,000-$230,000
Year 1 Total Investment: $405,000-$1,055,000
Ongoing Annual Costs (Years 2+): $200,000-$600,000
- Annual impact assessments and bias testing
- Third-party audits
- Continuous monitoring and documentation
- Legal counsel and regulatory updates
- Training and awareness programs
- Vendor oversight
Federal Preemption Prospects: 2026 Legislative Outlook
With the federal AI moratorium defeated in July 2025, the question becomes: will comprehensive federal AI legislation eventually preempt state laws?
Current Federal Legislative Landscape:
Pending Bills (118th Congress):
- Algorithmic Accountability Act (S. 2892) - Requires impact assessments for automated decision systems, similar to state frameworks
- AI Foundation Model Transparency Act (H.R. 8670) - Transparency requirements for foundation model developers
- AI Training Act - Workforce development for AI oversight
- Various sector-specific bills - Healthcare AI, law enforcement AI, education AI
None have advanced to floor votes as of October 2025.
Prospects for 2026:
Optimistic Scenario (30% Probability):
- Bipartisan compromise legislation passes creating federal AI regulatory framework
- Legislation establishes minimum national standards but preserves state authority to impose additional requirements
- Federal framework largely adopts California/Colorado model (transparency, impact assessments, anti-discrimination)
- Effective date: 2027-2028 with staggered implementation
- Impact: Modest harmonization, but state laws remain relevant
Moderate Scenario (50% Probability):
- Narrow federal legislation passes addressing specific high-risk uses (law enforcement AI, healthcare AI)
- No comprehensive framework; states retain primary regulatory authority
- Federal legislation creates sector-specific preemption in limited areas
- Impact: State patchwork continues for most commercial AI applications
Pessimistic Scenario (20% Probability):
- No significant federal AI legislation passes in 2026
- Continued gridlock due to partisan disagreement on scope and approach
- States accelerate legislation to fill federal void
- 2027-2028: 30+ states with comprehensive AI laws
- Impact: Increased compliance complexity, potential Constitutional conflicts, industry pressure for federal action intensifies
Strategic Implications:
1. Do Not Delay Compliance Hoping for Federal Rescue
   - Even the optimistic scenario (federal legislation in 2026) means 2027-2028 effective dates
   - State compliance required NOW (Colorado June 2026, Texas January 2026)
   - Federal law unlikely to provide complete preemption
2. Federal Legislation Will Likely Adopt State Frameworks
   - Compliance with California/Colorado/Connecticut standards positions for federal compliance
   - Investment in state compliance not wasted even if federal law passes
3. Industry Should Support Federal Legislation
   - Uniform national standards reduce compliance complexity
   - Federal framework provides certainty for product development
   - Preemption (even partial) reduces multi-state compliance burden
4. Constitutional Challenges Likely
   - As state laws proliferate, Commerce Clause challenges probable
   - Courts may strike down most burdensome state requirements as unconstitutional burden on interstate commerce
   - Constitutional litigation timeline: 3-5 years to resolution
5. International Harmonization Matters
   - EU AI Act effective 2026, creating global compliance standard
   - U.S. federal legislation likely to harmonize with EU framework
   - Companies complying with EU AI Act partially prepared for eventual U.S. federal law
Practical Next Steps: What AI Companies Should Do Now
Immediate Actions (Next 30 Days)
1. Conduct Jurisdictional Audit ($5,000-$15,000 internal/legal review)
Identify where you operate and which state laws apply:
- Where are your servers and infrastructure located?
- Which states do your customers reside in?
- Where do you have employees or contractors?
- Which states have specific laws applicable to your AI systems?
Create spreadsheet mapping: AI System → Jurisdictions → Applicable Laws → Compliance Status
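That mapping can start as a script-generated CSV rather than a hand-maintained spreadsheet, which also makes gap reporting trivial. A sketch, with entirely hypothetical systems, laws, and statuses:

```python
import csv, io

# Illustrative inventory rows: AI System -> Jurisdiction -> Law -> Status.
inventory = [
    {"ai_system": "resume-screener", "jurisdiction": "NY (NYC)",
     "applicable_law": "Local Law 144", "compliance_status": "audit scheduled"},
    {"ai_system": "resume-screener", "jurisdiction": "CO",
     "applicable_law": "SB 24-205", "compliance_status": "gap: impact assessment"},
    {"ai_system": "support-chatbot", "jurisdiction": "UT",
     "applicable_law": "SB 149", "compliance_status": "disclosure live"},
]

# Emit the compliance map as CSV for counsel and auditors.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["ai_system", "jurisdiction",
                                         "applicable_law", "compliance_status"])
writer.writeheader()
writer.writerows(inventory)
print(buf.getvalue())

# Quick gap report: anything marked as a gap, ready for triage.
gaps = [r for r in inventory if r["compliance_status"].startswith("gap")]
```

Regenerating this file from a systems inventory each quarter keeps the jurisdictional map current as new laws take effect.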
2. Prioritize High-Risk Systems (Internal analysis)
Focus compliance efforts on systems most likely to trigger state requirements:
- Employment decisions (hiring, promotion, termination)
- Credit, lending, insurance underwriting
- Housing access and tenant screening
- Healthcare diagnosis or treatment recommendations
- Education admissions or student evaluation
- Law enforcement or government benefits
- Biometric identification or emotion detection
3. Designate Compliance Officer (Internal resource allocation)
Assign responsibility for AI compliance to specific individual or cross-functional committee:
- VP of Legal/Compliance (if large company)
- General Counsel (if mid-sized)
- CEO/Founder + outside counsel (if startup)
Ensure adequate budget and authority to implement compliance measures.
4. Establish Legal Monitoring ($2,000-$5,000/month for legal updates)
Subscribe to legal update services tracking state AI legislation:
- State legislature monitoring services
- Law firm regulatory alerts
- Trade association updates (if member)
- Set Google Alerts for key state bills
Short-Term Planning (Months 2-6)
5. Develop Transparency Infrastructure ($30,000-$100,000)
Build systems to provide required disclosures:
- User-facing notifications when AI makes consequential decisions
- Public transparency reports on company website
- Individual decision explanations (upon request)
- Regular updates as systems change
6. Implement Bias Testing Program ($50,000-$150,000)
Establish testing protocols:
- Define protected characteristics and test datasets
- Conduct initial bias assessments for high-risk systems
- Document methodology and results
- Engage third-party auditors (if required by jurisdiction)
- Create remediation plans for identified bias
7. Document Compliance Status ($15,000-$40,000)
Create comprehensive compliance documentation:
- Impact assessments for high-risk systems
- Data protection and security measures
- Training data sources and characteristics
- Human oversight procedures
- Recordkeeping and retention policies
8. Update Terms of Service and Privacy Policy ($10,000-$25,000 legal drafting)
Ensure customer-facing documents address:
- AI system disclosures
- Data collection for AI training
- Consumer rights (access, deletion, opt-out)
- Biometric data handling (if applicable)
- State-specific requirements
Long-Term Positioning (Months 7-12)
9. Build Compliance into Product Development (Process integration)
Integrate compliance into engineering workflow:
- Pre-deployment impact assessments
- Bias testing before launch
- Transparency documentation as part of release process
- Security audits for new AI systems
10. Engage with Regulators ($25,000-$60,000 legal counsel + participation)
Proactive regulatory engagement:
- Respond to state Attorney General requests for information
- Participate in industry working groups
- Submit comments on proposed regulations
- Consider regulatory mitigation agreements (Utah model)
11. Monitor Federal Developments (Ongoing legal counsel)
Track federal legislation and prepare for eventual national framework:
- Analyze pending bills and their likely path
- Assess potential preemption impact
- Engage in industry advocacy for favorable federal framework
12. Consider Certification ($100,000-$300,000 for third-party certification)
Pursue independent compliance certifications if:
- Selling to enterprise customers with procurement requirements
- Seeking competitive differentiation
- Preparing for highly regulated industry deployment
- Raising capital from responsible AI-focused investors
Looking Ahead: The Future of State AI Regulation
State AI regulation is not a temporary phenomenon awaiting federal rescue. It represents the new permanent reality of AI governance in America.
Expect Continued Expansion:
- 2026: 15-20 additional states enact comprehensive AI frameworks
- 2027: 30+ states with substantive AI regulation
- 2028: Potential federal legislation creates floor, not ceiling
- 2029+: State laws evolve to address emerging AI capabilities (AGI, autonomous systems)
Trends to Watch:
1. Convergence on Common Requirements - As more states legislate, frameworks increasingly resemble Colorado/California models (transparency, bias testing, impact assessments)
2. Sector-Specific Regulation - Targeted laws for AI in healthcare, education, law enforcement, financial services with stricter requirements than general commercial AI
3. Private Rights of Action - Pressure to create private lawsuits (following Illinois BIPA model) rather than exclusive government enforcement
4. Criminal Penalties - Expansion of criminal liability for prohibited AI uses (following Texas deepfake approach)
5. Licensing Regimes - Potential state-level licensing for high-risk AI developers (similar to money transmitter licensing for fintech)
6. Interstate Compacts - Possible multi-state agreements creating uniform requirements to reduce compliance complexity
Strategic Imperatives:
- Compliance is Competitive Advantage - Companies that embrace regulation as opportunity will outperform those treating it as burden
- Transparency Builds Trust - Publishing comprehensive AI safety documentation attracts customers and talent
- Federal Engagement Matters - Industry must actively shape federal legislation to achieve workable national framework
- Investment in Safety Infrastructure Pays - Robust compliance systems reduce enforcement risk, enable premium pricing, and attract capital
The state AI regulatory patchwork is complex, expensive, and operationally challenging. But it also represents the market demanding accountability, transparency, and safety in AI systems. Companies that lead in compliance will lead in the AI economy.
Need Multi-State AI Compliance Guidance?
Astraea Counsel advises AI companies on navigating the 50-state regulatory landscape, developing efficient multi-state compliance frameworks, and positioning compliance as competitive advantage.
Related Resources:
- California AI Law SB 1047 - Deep dive on California's comprehensive framework
- Federal AI Regulation Landscape - Track federal legislative developments
- AI & Emerging Technology Services - Comprehensive AI legal counsel
- Regulatory Compliance Practice - Multi-jurisdictional compliance strategy
Disclaimer: This article provides general information for educational purposes only and does not constitute legal advice. AI regulation is evolving rapidly at both state and federal levels. Consult qualified legal counsel for advice on your specific situation and compliance obligations.
Footnotes
1. California Assembly Bill 1008, Amendments to California Consumer Privacy Act (CCPA) (2024), effective January 1, 2025, available at https://leginfo.legislature.ca.gov/
2. Colorado Senate Bill 24-205, Consumer Protections for Artificial Intelligence (Colorado AI Act), signed May 17, 2024, effective date delayed to June 30, 2026, available at https://leg.colorado.gov/bills/sb24-205
3. Texas Responsible AI Governance Act (TRAIGA), signed June 22, 2025, effective January 1, 2026; Texas Senate Bill 441 (SB 441) and House Bill 581 (HB 581) (deepfake laws), signed June 22, 2025
4. New York City Local Law 144-21, Automated Employment Decision Tools (AEDT Bias Audit Law), effective July 5, 2023, available at https://www.nyc.gov/
5. Illinois Biometric Information Privacy Act (BIPA), 740 ILCS 14/, enacted 2008, amended by Senate Bill 2979 (August 2024), available at https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=3004
6. Clearview AI Biometric Privacy Litigation, settlement approved March 2025, Northern District of Illinois, $51.75 million settlement with 5-year ban on Illinois government access
7. Connecticut Senate Bill 2 (SB 2), "An Act Concerning Artificial Intelligence," pending final passage with various effective dates proposed starting July 1, 2025
8. Massachusetts Attorney General Andrea Joy Campbell, Guidance on Artificial Intelligence and Consumer Protection Laws (April 16, 2024), available at https://www.mass.gov/
9. Utah Senate Bill 149 (Artificial Intelligence Policy Act) and House Bill 452 (AI-Supported Mental Health Chatbot Regulations), enacted 2024-2025