Last summer, the Republican-controlled Congress failed to include in its “One Big Beautiful Bill” a major provision addressing national artificial intelligence (AI) regulation. The House had inserted language into its version of the budget reconciliation bill that would have banned state-level AI regulation for 10 years, but it ran afoul of Senate rules prohibiting “extraneous matters” in the legislation. Though Sen. Ted Cruz (R–TX) tried to find a work-around and soften the measure to gain his colleagues’ support, the Senate voted 99–1 (Sen. Thom Tillis of North Carolina casting the lone dissenting vote) in support of a motion by Sen. Marsha Blackburn (R–TN) to remove the AI provision.
“This provision could allow Big Tech to continue to exploit kids, creators, and conservatives,” claimed Blackburn. “Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can’t block states from making laws that protect their children.” She added, “What we know is this: This body [Congress] has proven that they cannot legislate on emerging technology.”
With no federal legislation regulating AI technologies, state legislatures are making their own efforts to advance data privacy and security in AI tools. I summarized some earlier efforts in an article last year in Regulation. Here is an update.
Model legislation
Before looking at individual state efforts, I note that several of them appear to be influenced by model AI legislation offered by the American Legislative Exchange Council (ALEC). The “Model State Artificial Intelligence Act,” finalized in August 2024, encourages AI technology innovators to work with state governments to develop applications and to inform “best practice” regulatory approaches through partnerships that mitigate regulatory safety risks.
The model legislation would allow technology innovators to apply for “regulatory mitigation agreements” that reduce state regulatory burdens for a specified duration, scope, and number of users. This so-called “regulatory sandbox” would assist technology innovators by reducing the regulatory risk of experimentation and would help state governments determine which AI regulations are and are not necessary. Notably, the Federalist Society’s Regulatory Transparency Project has cited the ALEC model legislation as “the best option for states to ensure life-saving AI innovations can come about” and to show leadership on emerging AI technology policy.
ALEC is a conservative organization; its blue-leaning counterpart, the National Conference of State Legislatures (NCSL), is likewise becoming active in this policy area. It has formed a working group to coordinate approaches to regulating AI technologies and to inform state legislatures about potential actions that balance AI governance with innovation.
Recent legislation
The year 2025 saw several different regulatory approaches enacted at the state level. Over 1,000 AI-focused bills were introduced in state legislatures in 2025, more than twice the number in 2024, according to Chelsea Canada, program principal in the NCSL’s Financial Services, Technology and Communication Program. In the 2025 legislative session, all 50 states, Puerto Rico, the Virgin Islands, and Washington, DC, considered AI-focused legislation, and some 38 states adopted or enacted approximately 100 legislative measures. In addition, 34 states introduced over 250 healthcare-related AI bills, which generally fall into four categories: disclosure requirements, consumer protection, insurers’ use of AI, and clinicians’ use of AI.
One interesting aspect of state AI legislation in 2025 is that legislators often packaged separate “guardrails” for different AI technologies into distinct provisions of a single bill, rather than imposing a uniform set of requirements across AI systems. Also, some state AI governance bills included provisions for AI risk management programs and/or AI risk or impact assessments that offer some form of “safe harbor”—that is, a presumption against liability—for companies that voluntarily adopt these measures.
Examples of AI legislation enacted in 2025 include:
Arkansas passed legislation clarifying who owns AI-generated content: either the person who provides the data or input to train a generative AI model or, if the content is generated as part of employment duties, the employer. The legislation also specifies that the generated content must not infringe on existing copyrights.
Montana passed a “Right to Compute” law establishing safety requirements for critical infrastructure controlled by an AI system. The deployer must develop a risk management policy that considers guidance from a list of specified standards, such as the latest version of the AI Risk Management Framework from the National Institute of Standards and Technology (NIST).
New Jersey adopted a resolution urging generative AI companies to make voluntary commitments regarding employee whistleblower protections.
New York enacted a law requiring state agencies to publish detailed information about their automated decision-making tools on their public websites through an inventory created and maintained by the state’s Office of Information Technology. The law also amends the state’s civil service law to provide that an AI system cannot limit employees’ existing rights under a collective bargaining agreement and cannot result in the displacement or loss of a position.
North Dakota passed an AI law prohibiting individuals from using an AI-powered robot to stalk or harass another individual, expanding current harassment and stalking laws.
Oregon enacted a law specifying that a non-human entity, including an AI-powered agent, cannot use the titles of specific licensed and certified medical professionals, such as “registered nurse” or “certified medication aide.”
Texas passed the Texas Responsible Artificial Intelligence Governance Act, which includes a “safe harbor” provision whereby an entity will not be prosecuted if it discovers a violation of the law through an internal review process. The safe harbor is conditional on the entity demonstrating a risk management program that “substantially complies” with NIST’s AI Risk Management Framework, NIST’s Generative Artificial Intelligence Profile, or another nationally or internationally recognized AI risk management framework. Texas joins Utah in establishing an AI “regulatory sandbox,” although the two programs have significant structural differences.
By 2025, nearly every state had adopted privacy and data security legislation that, to varying degrees, affects AI systems.
White House effects on state legislation
Despite the failure of Congress to pass AI legislation, there was activity at the federal level in 2025. In January, President Trump signed Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” which called for the development of an “AI Action Plan.” Then, in July, the White House unveiled “Winning the Race: America’s AI Action Plan,” which has three pillars:
- Accelerate AI innovation
- Build American infrastructure
- Lead in international AI diplomacy and security
Pillar I’s components promote the development and distribution of new AI technologies across every field and industry faster—and more comprehensively—than America’s competitors. The pillar also proposes to remove unnecessary regulatory barriers that hinder private-sector AI implementation. Pillar II focuses on the nation’s need to build and maintain vast AI infrastructure and the energy sources to power it by easing environmental hurdles and eliminating bureaucratic red tape. Pillar III reflects policy initiatives that would establish American AI, from advanced semiconductors to AI application models, as the “gold standard” for global AI.
From the perspective of state-level AI legislation, Pillar I states, “The Federal government should not allow AI-related federal funding to be directed toward states with burdensome AI regulations that waste these funds but should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.” Since the Cruz provision preempting state-level AI regulation was not included in the “One Big Beautiful Bill,” this policy statement now takes on greater importance in influencing state-level AI legislation.
Will the failure of the Cruz AI provision deter Big Tech from lobbying for federal AI legislation? Certainly not. To eliminate the fragmented, state-by-state approach to AI regulatory requirements, the tech industry remains resolute in its efforts to secure stand-alone federal AI data privacy and security legislation in 2026. In addition, US tech firms must contend with the European Union’s “Artificial Intelligence Act,” which took effect in 2025. Darren Kimura, CEO and president of the AI platform AI Squared, stated succinctly in the Wall Street Journal, “[A] federal AI framework is essential” (Loten, 2025). Such consistent federal standards would help promote “secure, fair development of AI,” according to an Amazon.com spokesperson quoted in the article.
As midterm election efforts ramp up, the federal legislative “window” for passing AI legislation is narrow. Further complicating matters, passage requires a delicate balancing of states’ rights against the economic benefits of national standardization of AI regulation.