Discussions about artificial intelligence (AI) have taken on an increasingly pro-regulation tone. At the state and local level over the past two years, for example, lawmakers have proposed hundreds of pieces of legislation that would impact AI-powered technologies. In this paper I investigate the opportunity costs associated with new state and local regulations that are intended to control the creation or use of AI. Utilizing data and existing research on the economic impacts of regulation and innovation, I demonstrate why state and local governments should consider a more restrained regulatory approach.

My paper draws from, and builds upon, existing research exploring the relationship between government institutions, regulatory regimes, and emerging technologies. Understanding the opportunity costs of different regulatory regimes for AI can help lawmakers evaluate tradeoffs and promote policies that support a permissive regulatory environment for public and private experimentation with AI-powered tools.

Introduction

The term artificial intelligence (AI) first emerged in the 1950s, but it burst into the public consciousness with the release of OpenAI’s generative AI model ChatGPT in November 2022.1 Generative AI models, or large language models, have become shorthand for discussing software that relies on machine learning (ML) techniques such as deep learning, a computational technique that allows software to improve itself through a process called “training.”2 While generative AI has captured public attention, it is just one area where AI/ML technologies can be, and are being, deployed.
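To make the idea of “training” concrete, the sketch below fits a single parameter to a handful of data points by gradient descent, the basic loop that, scaled up enormously, underlies deep learning. The data, learning rate, and step count are purely illustrative assumptions, not drawn from any real system.

```python
# Minimal illustration of "training": fit y = w * x to example data by
# repeatedly nudging the parameter w to reduce prediction error.
# All numbers here are illustrative, not drawn from any real system.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, observed output) pairs
w = 0.0             # the model's single parameter, initially a guess
learning_rate = 0.01

for step in range(1000):
    # Gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad  # nudge w in the error-reducing direction

print(f"learned w = {w:.3f}")  # converges near 2, the slope implied by the data
```

Production systems repeat this same adjust-to-reduce-error loop over billions of parameters and vast datasets, which is what makes training computationally expensive.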

Firms are pioneering the use of AI/ML techniques to promote human flourishing. Examples include accelerating drug development, improving personalized diagnosis and health treatment plans, creating platforms that enable AI-supported tutors tailored to individual students’ progress, and building AI-enabled robotics platforms to support industrial automation.3 These firms, and the scores of others leaning into the AI revolution, are developing software and data-enabled processes to bring down costs and accelerate progress in industries and products in the physical world.

As companies continue to hype their own AI offerings, public sentiment around AI has remained a mixed bag. The majority of Americans are equally excited and nervous about an AI future. Concerns about fraudulent or deepfake images proliferating online, AI being substituted for human labor throughout the economy, and the further empowerment of large corporations are prevalent across the country.4 While federal lawmakers are considering various regulatory approaches, state and local government regulators are drafting, passing, and implementing new laws covering the development and deployment of AI tools.5

Since 2022, more than 800 new bills focusing on the creation or use of AI tools have been proposed at the state and federal level.6 Proposed legislation ranges from directives for governments to learn more about AI, to evaluations of existing regulatory tools for the age of AI, to measures that would restrict or outlaw certain types of AI from being developed or made available to the public. This legislative approach, particularly the latter category of proposals, represents a shift in regulatory tone from the policy that accompanied the rise of the internet. Previously, states were less active, and the federal government worked to prevent a balkanization of regulations for cyberspace when necessary.

Not all regulation will halt innovation, just as not all innovations require new regulation. People muse that states are “laboratories of democracy,” meaning the states’ experiments in regulation can help lawmakers identify better policy options. But in an increasingly interconnected and digital world, such experiments can bubble over and have a disruptive impact beyond state borders. As noted above, the level of activity by state lawmakers on technology policy issues is a departure from the earlier years of the internet. This shift is likely a product of the ubiquity of mobile phones and the various products and experiences that more and more Americans, at increasingly younger ages, are exposed to. Generative AI models mark the next iterative phase of consumer technology, which has made them the proxy for policy discussions about AI across all levels of government.

The consequences of overzealous AI regulation from state and local governments are best analyzed through the lens of opportunity cost. Opportunity cost is an economic principle: the value of the next-best alternative forgone when a choice is made. If I have a limited research and development budget, there are only so many projects I can fund. The opportunity cost is represented by the projects I could have funded but did not.

In the context of AI regulation, the opportunity cost of regulation in the short run is both the cost of administering that regulation as well as productivity or output that is forfeited as a result. In the long run, it is the projects, products, and innovations that never materialize because regulations stifled their emergence. In other words, overly prescriptive regulation of AI risks stifling the emergence of new technology that could potentially improve society.
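As a stylized illustration of this arithmetic, the sketch below uses entirely hypothetical dollar figures to show how a per-project compliance cost shrinks the number of projects a fixed R&D budget can fund; the forgone projects are the opportunity cost.

```python
# Stylized opportunity-cost arithmetic using hypothetical dollar figures.
# Short run: a compliance rule raises the cost of each project.
# Long run: the opportunity cost is the projects that are never pursued.

rd_budget = 1_000_000                   # hypothetical annual R&D budget ($)
project_cost = 250_000                  # hypothetical cost to fund one project ($)
compliance_cost_per_project = 150_000   # hypothetical cost of new audits/filings ($)

projects_without_rule = rd_budget // project_cost
projects_with_rule = rd_budget // (project_cost + compliance_cost_per_project)

print(f"Projects funded without the rule: {projects_without_rule}")  # 4
print(f"Projects funded with the rule:    {projects_with_rule}")     # 2
forgone = projects_without_rule - projects_with_rule
print(f"Opportunity cost: {forgone} projects that never materialize")
```

The unfunded projects never show up in any compliance ledger, which is precisely why this cost is easy for lawmakers to overlook.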

In this paper I will discuss how opportunity cost can be a useful lens for evaluating state and local governments’ policy approach to the development and deployment of AI-powered tools. Rather than rushing to expand the government’s regulatory reach, lawmakers should take stock of currently available regulatory tools and seek to understand how states and localities have traditionally regulated comparable general-purpose technologies.7 State and local governments should also focus on engendering experimentation with AI products. Regulatory sandboxes and pilot programs can help boost local innovation while enabling officials to identify gaps in regulation in real time. These approaches would allow public and private sector actors to integrate AI into their workflows or products with some protection from liability, creating an opportunity to learn about the tradeoffs that different use cases present.

Federal Versus State Regulations

Before investigating the types of regulatory approaches states are taking to AI, it is important to clarify where state governments are best positioned to regulate vis-à-vis the federal government. Given the rapid rate of innovation in AI technologies, government action may be perpetually reacting to new developments, and its policies may quickly become obsolete—a phenomenon referred to as the “pacing problem.” This is a feature not just of federal action but of state action as well.

General-Purpose Technology and the Role of Regulation

AI is heralded by some as a general-purpose technology, meaning that it can be applied across many sectors and tasks. Other notable general-purpose technologies, such as the steam engine, electricity, and interconnected telecommunications, were generic technologies that improved other production processes, creating new industries and positive spillovers.8 Beyond iterative improvements in existing market processes, general-purpose technologies create new forms of production that can spill over to the rest of the economy. Each of these technologies was, in part, responsible for an industrial revolution.

Researchers argue that general-purpose technologies are invention-stimulating innovations, meaning that they create a new platform for innovation that can be applied to every sector of society.9 This is true when thinking of AI models as research assistants or force multipliers for new innovations, particularly in scientific discovery and applications such as drug development and materials science. Failing to consider the variety of applications for underlying technology such as AI could lead to misplaced regulation that restricts innovation and undercuts benefits for society writ large.

Federal Function

The United States divides regulatory powers across different levels of government in a system known as federalism. The Constitution enumerates specific powers for the federal government while leaving nonenumerated powers to state and local governments. Two constitutional provisions—the Commerce Clause and the Supremacy Clause—are relevant when examining the relationship between federal and state regulation.

The Commerce Clause specifies that Congress is responsible for passing laws that regulate interstate commerce.10 Interstate commerce refers to the commercial interactions that occur between states, such as a farmer in Iowa selling his corn to a popcorn company in Louisiana. This exchange crosses state lines and is regulated by the federal government to reduce transaction costs. In the context of AI systems, transaction costs include the resources expended on negotiating data-access agreements for training sets, ensuring regulatory compliance, and establishing contractual frameworks for model deployment and use.

The Supremacy Clause stipulates that federal law takes precedence over state law if a conflict arises between the two.11 The clause establishes a clear ordering of legal authority for scrutiny and liability: if there is a federal law, that is the law of the land; if there is no federal law, then states are free to regulate as they see fit. The Supremacy Clause is most impactful in the AI context with regard to federal preemption. If many states begin passing laws related to AI auditing but do so in a manner that conflicts with one another, a federal law could preempt all of the existing state laws, creating a uniform standard.

Beyond constitutional considerations, Congress and the Biden administration made their desire to control the technology through regulation quite clear. That approach had significant consequences for both technological development and deployment, and it differs from the tenor and early actions of the second Trump administration.

The 118th Congress introduced more than 100 bills focusing on AI, with varying action in the Senate and House of Representatives.12 Then Senate majority leader Chuck Schumer assembled a package of bills following the conclusion of a series of closed-door AI Insight Forums.13 The Senate Committee on Commerce, Science, and Transportation advanced several AI-related bills that would have allocated money for programs to subsidize access to the hardware necessary for training and running AI models, improve public education about AI and its uses, and help small businesses leverage new tools.14 These bills were primarily focused on spending taxpayer money to start up new programs, but they also included regulatory components through program qualifications and requirements for private firms receiving such funds.

On the other side of the Hill, the House worked on legislation related to AI and sought ways to use the technology to support internal functions. The House Committee on Science, Space, and Technology and the Committee on House Administration focused on supporting AI research and development and providing clarity on how members of Congress and their staffs can leverage AI.

The Biden administration also staked its claim on AI regulation. In October 2023, President Biden signed Executive Order 14110 (“Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”)—the longest executive order ever issued—which focused on the regulation of AI models.15 The order adapted legislative proposals from Congress; tasked federal agencies with evaluating how to safely integrate AI; and established compute thresholds, which are reporting limits for companies developing models using certain amounts of computational power, as triggers for government oversight. This followed the release of the National Institute of Standards and Technology’s Risk Management Framework for AI and the White House Office of Science and Technology Policy’s AI Bill of Rights.
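To illustrate how a compute threshold functions as a regulatory trigger: the order’s general reporting threshold was set at 10^26 integer or floating-point operations, and the sketch below checks a hypothetical model against it. The 6 × parameters × training tokens estimate is a common rule of thumb for dense-model training compute, assumed here for illustration rather than specified by the order.

```python
# Rough sketch of a compute-threshold reporting check. EO 14110 set its
# general reporting trigger at 1e26 integer or floating-point operations.
# The 6 * parameters * tokens estimate below is a common rule of thumb for
# dense-model training compute; it is an assumption, not part of the order.

REPORTING_THRESHOLD_OPS = 1e26

def estimated_training_ops(parameters: float, training_tokens: float) -> float:
    """Approximate total operations needed to train a dense model."""
    return 6 * parameters * training_tokens

def must_report(parameters: float, training_tokens: float) -> bool:
    """Would this training run cross the reporting threshold?"""
    return estimated_training_ops(parameters, training_tokens) > REPORTING_THRESHOLD_OPS

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens
ops = estimated_training_ops(70e9, 15e12)
print(f"estimated ops: {ops:.2e}, report required: {must_report(70e9, 15e12)}")
# ~6.30e+24 ops, below the 1e26 trigger, so no report under this threshold
```

A bright-line trigger of this kind is easy to administer but crude: it regulates inputs (compute) rather than capabilities or uses, which is one reason such thresholds are contested.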

Since his inauguration, Trump’s administration has changed the tenor and approach of the executive branch toward AI regulation. Three days into his administration, President Trump signed his own executive order on AI, titled “Removing Barriers to American Leadership in Artificial Intelligence.”16 In addition to repealing Biden’s Executive Order 14110, Trump ordered the creation of a new AI action plan to “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” Complementing the AI action plan, the Office of Management and Budget has issued two memoranda, one directing federal agencies to accelerate the use of AI throughout the federal government, and the other establishing principles for effective acquisition of AI models for federal use.17

States as Mad Scientists

States’ ability to distinguish themselves from one another through different regulation is why they are often referred to as “laboratories of democracy.” The phrase originates from an opinion written by former Supreme Court Justice Louis Brandeis, who noted how, within the United States’ system of federalism, states’ relative decentralization created individual proving grounds where state governments could experiment with new laws and regulations.18 In the context of AI, this can be both a blessing and a curse.

On the one hand, state and local lawmakers are closer to the needs of their constituents and better understand how a new technology can support those constituents’ existing comparative advantages. This is a benefit of government decentralization in the context of federalism. Just as some states are known for agriculture, biotechnology development, or resource extraction, it would make sense for certain states to take the lead on different applications or approaches to leveraging AI tools.

On the other hand, such differentiation can create uncertainty for AI developers and deployers. For states with larger markets, enacting broad regulatory structures for AI could create a crowding-out effect for smaller states because suppliers would have more incentive to tailor their products to the requirements of more-populous states. This governance effect has been exemplified by California’s approach to data privacy and, more recently, artificial intelligence. California’s large internal market and penchant for expansive regulation have created the “Sacramento Effect,” a play on the “Brussels Effect,” which describes how regulations imposed upon a large consumer market can have extraterritorial impact, particularly on smaller states.19

Beyond regulatory overreach, there is also the question of whether states have the ability to effectively enforce new laws, given limited state capacity. State capacity refers to the government’s ability to effectively administer services and carry out its duties.20 Governments often lack technical talent, which could impede states’ ability to effectively regulate and govern the use of AI tools. There are several areas where states already enjoy significant regulatory autonomy that could impact the development and diffusion of AI.

There are also concerns about states’ tax laws. Progressive corporate and personal income tax rates disincentivize investors and tech workers from locating in some areas. Other taxes related to consumption, energy, and information could impact the emergence of AI, such as link taxes and robot taxes. Link taxes require sites that host information to compensate individuals and businesses when linking to their content.21 Such laws would affect the availability of information online, diminishing the amount of content available for building training datasets and reducing functionality for models interacting with the open web. Lawmakers would levy robot taxes on firms that choose to adopt AI-powered tools, with the legislative intent of discouraging the replacement of labor with AI.22 Contrary to popular narratives surrounding AI and its impact on labor, early evidence suggests that AI can be a complementary technology that improves labor productivity, particularly for lower-skilled workers.23

States are also working to promote the growth of necessary infrastructure to power the AI revolution, deploying subsidies to encourage corporate investment.24 Encouraging such growth is important, but providing subsidies without addressing other regulatory barriers can undermine a policy’s effectiveness. States enforce many land-use regulations and environmental laws that affect the cost of building the physical infrastructure necessary for developing AI models.25 Addressing these regulatory restraints is one way that state governments could act and support AI diffusion.

Another area where regulatory reform can boost AI is licensing for professional services and businesses. Occupational licensing is already heterogeneous across states. States should examine how new technologies such as AI can augment existing jobs, lower barriers to entry for new workers, and create new opportunities and career pathways. Utah’s regulatory sandbox for legal services, a pilot program for firms looking to provide both legal and nonlegal services to consumers through innovative methods, could be a template for other states.26

Finally, state law is influential when it comes to personal liability. In the AI context, it will be important to differentiate and specify where liability lies if a model is misused and causes harm to someone or something.27 Does the liability lie with the individual user, or the model developer, or both? At what point does an application built upon a foundation model become a separate product? Will there be penalties or safe harbors for model developers that pursue a more open approach to accessing and augmenting foundation models? These areas of law and regulation will affect the market and the uses for AI by individuals and businesses, as well as affecting opportunities to integrate or experiment with AI in the public sector.

There has been significant interest in AI regulation in state houses across the nation. According to the consulting firm Multistate.ai, more than 600 AI-related bills were introduced in 2024.28 While that is a large number, fewer than 100 have been signed into law, and the trend is not consistent among states.29

The bills that have been signed into law generally fall into three categories: studies and fact-finding, liability for AI-generated content, and model regulation.

Studies and Fact-Finding

According to Multistate, as of May 2024, 28 states had passed legislation to study AI models in some capacity.30 Some studies focused on how governments can leverage the new technology, others focused on how AI could impact state services such as education or health care, and others examined how existing laws and regulations will impact different uses of AI.

Regulatory auditing helps state legislators understand what laws are already in place and whether a new law is actually necessary. This is especially important given the long-run trend toward adding regulations and the negative effect that overregulation can have on economic growth.31 Gathering such information can identify where the state is fit to regulate and where capacity is lacking; any such gaps should be addressed before creating a new regulatory regime.

Liability for AI-Generated Content

Eighteen states have enacted legislation that makes it a crime to use generative AI tools to create nonconsensual deepfake pornography, and 14 states have enacted legislation governing deepfakes related to elections.32 While many people would agree that preventing harms from synthetic content in these areas is worthwhile, laws that restrict generative AI more broadly could be used to restrict speech and freedom of expression. A Louisiana deepfake bill was vetoed on such grounds.33

In the 118th Congress, legislators introduced more than 20 bills whose intent was to mitigate harms from AI-generated content.34 Having a uniform standard on what constitutes nonconsensual synthetic pornography or election manipulation could make it easier to hold perpetrators accountable.

Model Regulation

These types of regulations pose the greatest potential threat to AI development and diffusion because they place regulatory burdens on the technology itself. Colorado and Utah were the first states to enact such laws, and other states are considering similar action.

Colorado’s law embraces the precautionary principle, which would prohibit innovations whose safety has not been demonstrated beforehand.35 Applied to AI, the precautionary principle requires developers and deployers to identify and mitigate potential harms before they occur.36 Colorado Governor Jared Polis expressed concern that the law will deter “an industry that is fueling critical technological advancements across our state for consumers and enterprises alike.”37 Dozens of Colorado-based small businesses and startups expressed the same concern to Polis and the state legislature.38

Utah’s Artificial Intelligence Policy Act clarifies how existing law covers the use of AI in licensed professions, directs government officials to study how AI is being used in Utah, and defines how AI tools can improve public services and support the citizens of Utah.39 It also creates a path for AI developers and deployers to receive targeted regulatory relief, encouraging public and private entities to learn how AI can solve problems and create new ventures. Rather than imposing new rules on models themselves, the law clarifies how existing law already applies to model developers and users and creates a forum for adjusting laws to support model diffusion.

These three different categories of laws illustrate the wide range of actions that state governments can, and are, taking in response to the proliferation of AI. When looking through the lens of opportunity cost of regulation for AI, more-expansive regulations create a larger opportunity cost because such laws will affect a greater number of use cases and create additional burdens for model developers and deployers.

Thinking About the Opportunity Costs

Overzealous or misplaced regulation of AI model development could threaten the growth and welfare enhancement that AI-enabled technologies could create. The following scenarios illustrate how policymakers eager to mitigate harms could curb the potential benefits of AI.40

Hindered AI Diffusion

The greatest threat posed by regulation is inhibiting the broader diffusion of AI/ML technologies. As a general-purpose technology, AI will hopefully create gains in efficiency and output, improving existing processes while enabling new ventures.41 Regulations that make it harder for people to experiment with AI-enabled technologies are tantamount to slowing down the diffusion of electricity or computers.

Goldman Sachs and other financial institutions have projected that AI could boost global GDP by 2–10 percent in the next few years.42 While incredibly bullish, these analyses correctly foresee AI diffusion improving productivity and efficiency within existing processes while unlocking innovative new processes.43

An industry where AI is, and will continue to be, a critical component is advanced manufacturing. Cutting-edge factories in the United States and around the world are increasingly taking advantage of robots that rely on ML and software-enabled hardware to do jobs faster and more efficiently.44 Look no further than companies such as Hadrian, Tesla, and Anduril, whose business success is underpinned by software solutions.45

Recent research on the steps necessary to accelerate advanced manufacturing points to developing, deploying, and experimenting with AI/ML tools, stressing experimentation as particularly important for supporting this new production process.46 Regulations that require predeployment approval whenever a model is altered or used for a new task would not just slow these industries but would also add back the costs that such innovations were intended to minimize.

Regulations that add additional compliance without some form of deregulatory counterbalance, such as a regulatory sandbox (a program that provides targeted regulatory relief to enable innovative products and services), inhibit both the private sector and the government from enjoying the economic and structural benefits such technologies are expected to create. For firms interested in adopting AI technologies, the cost-benefit tradeoffs will have to include potential regulatory exposure during integration and deployment. Lawmakers taking an adversarial approach to a new technology in its infancy may do well politically in the short run, but this approach could come with steep costs in the long run in regard to building capacity and trust between the public and private sectors.

Overregulation in Infancy and the Precautionary Principle

Overregulation can distort technological development by limiting innovation before it can emerge, leaving the technology constrained by central planners whose expertise and motivations are unknown.

The opportunity cost of an overcautious or deterministic regulatory policy is the loss or delay of breakthroughs and innovations. This is the central danger of governing by the precautionary principle.

The European Union’s tech regulation, at the bloc and member-state level, currently operates under the precautionary principle. A few metrics comparing the United States and EU on technology show its effects. The cumulative market capitalization of the top seven US tech firms is 20 times larger than the top seven EU firms. Worldwide, some 36 of the 50 largest technology firms, by market cap, are American, and only 3 are European. And, in 2023, venture capital investment in startups based in the United States was 165 percent greater than in the EU.47 A key culprit is the EU’s regulatory approach to emerging technologies: imposing compliance regimes and other ex ante regulations to mitigate foreseeable harms from technologies. Such an approach has birthed the General Data Protection Regulation and, more recently, the bloc’s AI Act. The General Data Protection Regulation has been particularly onerous for startups and new entrants while benefiting well-resourced incumbents with expansive compliance and lobbying organizations. The incumbents are well-positioned to influence the creation of such legislation, and they absorb additional compliance costs after the fact.48 The EU AI Act is already dissuading leading AI model developers from launching their products on the continent.49

The enormous gap between American and European tech development should serve as a deterrent against importing a European-style AI regulatory regime. It should also prompt a larger conversation about the misguided thinking that has convinced European regulators and citizens to embrace a highly risk-averse approach to emerging technologies. Europe has ultimately chosen stagnation over innovation. The United States must not make the same mistakes, and it should instead enable progress and innovation powered by AI.

Regulatory Capture and Rent Seeking

Regulatory capture is an ever-present threat when governments craft new laws and regulations. Regulatory capture refers to the behavior of regulators who are supposed to serve the public, but who instead become tools of special interests through lobbying and corporate influence.50 Incumbent firms and special interest groups recognize the value of creating regulatory hurdles that insulate them from competition and disruption. AI is one more industry where this is the case. Regulatory capture in AI is most likely to manifest itself in bureaucratic processes related to auditing and algorithmic impact assessments, or laws that limit or constrain the use of AI to protect specific constituencies from labor disruption.

Such captured processes would benefit large AI companies and existing compliance organizations at the expense of startups, new entrants, and open-source developers. Oversight and transparency can be important tools to ensure that reasonable safeguards and expectations exist for developers and deployers of AI. But as requirements become more cumbersome and complex, larger firms will benefit from access to legal and compliance teams.51 California’s recently failed model regulation, SB 1047, faced significant opposition from a host of voices in the tech community over these specific concerns.52

As for limits on the use of AI, polling indicates that many Americans are concerned about the potential labor disruption associated with automation.53 These concerns should be taken seriously, but policymakers must not lose sight of the potential for new technologies to birth new industries and professions.54

If regulators and government officials would like to act in this area, they should focus on acquiring more information about the effects of new technologies rather than attempting to close off areas of the economy from progress and disruption.55 For example, creating paths for upskilling or studying how AI is supporting various industries can help inform future policies related to regulation and licensing regimes while still allowing market signals to guide resource allocation. These are alternatives to simply insulating humans from competition or creating new rules for AI. Efforts aimed at safety or stability can be well-intentioned, but history has shown that they are more likely to help incumbent industries and interests secure their business models in the short term.

Conclusion

The opportunity cost of AI regulation is all the benefits that are forgone when the technology’s development and use are impeded. In the push for greater safety, equality, or political gain, regulation dooms some outcomes before they can materialize. This does not mean all regulations are inherently bad or should be eschewed. But it does mean that, just as people are asked to weigh the risks and downsides of technological disruption, we must maintain the same skepticism about the risks and downsides of regulations, particularly as their scope broadens.

With this dynamic in mind, state and local legislators considering AI regulation should be clear-eyed about the second- and third-order effects their policies are bound to have. This may seem obvious, but based on the scale and speed with which regulatory proposals related to AI have proliferated, it is worth raising this point as often as possible. State and local governments should prioritize learning about where AI-powered systems can work to address pressing problems that challenge their citizens, where the technology is most likely to be disruptive, and where they, as government officials, have a comparative advantage on regulation. Considering where regulation is truly necessary will first require a look within existing bodies of laws and regulations. Such fact-finding should be the first step any governing body takes before moving to enact AI-specific legislation.

AI-powered technologies are introducing new opportunities, new uncertainties, and the potential for significant progress and positive change for humanity. People are reasonably concerned about what this will mean for their jobs, their families, and their communities. However, these anxieties should not obstruct advances in human welfare. Impulsive AI regulation can have enormous opportunity costs. These regulations may feel reassuring today, but they may obstruct a life-saving therapeutic, an early-warning system for natural disasters, or a new profession that provides fulfillment and livelihoods for other people.

Understanding that actions in the present can—and will—have a profound effect on the choices available to people in the future is a critical aspect of opportunity cost. State and local governments should use these ideas to guide and temper their regulatory impulses. Without such restraint, we, as a society, risk missing out on a once-in-a-generation technology and all the benefits it could bring.
