Artificial intelligence (AI) could be the most promising technological revolution of all time. AI holds the promise of advancing healthcare and medicine, accelerating scientific discoveries, transforming education and learning, and dramatically boosting productivity and wealth.
The benefits delivered by AI aren’t theoretical, only to be realised in some misty, distant future. At my own organisation, most of my colleagues regularly use AI tools to increase their productivity and enhance their creativity. The efficiency gained through AI frees up more of their time for reasoned reflection and deep thinking, allowing our work to generate greater impact and better results.
As but one example from the corporate world, Microsoft has reported that AI-driven productivity and efficiency increases at its call centres have generated savings of $500 million in a single year. And we’ve already seen life-saving applications of the technology in areas ranging from stroke recovery to fighting wildfires.
But the benefits to date could prove trivial compared to what our not-too-distant future may bring. Yes, some experts believe the wealth gains from the application of AI will be modest—on the order of a 1‑to-2-percent increase in US gross domestic product (GDP) over the next decade. And although that’s nothing to sneeze at, more optimistic forecasters see a GDP upside of as much as 8 percent or even 15 percent over the same timeframe.
The regulatory threat to progress
Overreach of government regulation can pose a grave threat to nascent, promising technologies. This is particularly true in the case of AI, with many prognosticators having made the case that AI could pose substantial—even existential—risks to humanity. This has put policymakers and regulators on high alert to discern emerging threats from AI and has left them all too ready to apply heavy-handed regulation to deal with those perceived threats.
Most of us are not technologists or, heaven knows, futurists. But everyone can recognise one of history’s most familiar patterns: The emergence of a promising technology is nearly always accompanied by fears of risk or downside from that technology, often including doomsday scenarios. And in every historical case, the technologies did, indeed, carry risks and downsides, but the downsides were dwarfed by the tremendous benefits to humanity. Arguments against AI are not without plausibility. But even more plausible is the argument that AI could be one of the most beneficial technologies humanity has ever created.
This is why overregulation is particularly concerning in the case of AI.
First, the cost of getting it wrong—of strangling the technology and denying society its upside—is incalculable. And holding back AI innovation can itself cause harm—for example, by slowing life-saving inventions such as self-driving cars and healthcare tools or denying access to AI-driven cybersecurity applications.
Second, the drive for regulation to make AI “safe” pits speculative fears against the tangible and growing benefits of the technology. And, ironically, “regulating AI to safety” is sure to be a fool’s errand: For just as one can’t stop water from flowing downhill, the evolution of advanced AI capabilities—both good and bad—is sure to proceed apace regardless of regulations enacted by the United States or other developed countries in the West. The most likely result will be to place the US at a competitive disadvantage vis-à-vis the other AI superpower, China. And with free, high-quality open-source AI tools readily available, it’s hard to envision how regulation could possibly stop people from using such applications, which they can simply download from the internet.
Adding to this threat is the risk of regulatory capture. Large incumbent players often become cheerleaders for regulation, since it helps entrench their positions: They can lobby for regulations, the cost burden of which they can readily absorb—knowing that smaller and emerging competitors cannot. Innovative startup companies are logically the least able to bear such a burden. In the AI space, regulations based on model size or computational resources inherently favour large players over innovative newcomers who might otherwise develop more efficient approaches.
Learning from history
Today, seven of the world’s ten most valuable companies are US technology behemoths. One reason for this success is the light-touch approach to regulation that American policymakers have wisely taken towards technology generally, and the internet in particular, over the past 25 years. Yet we may end up following a very different path with AI, a technology that is at least as promising.
My colleague, Cato Senior Fellow Jennifer Huddleston, observed, “Much of the conversation around AI policy has been based on a presumption that this technology is inherently dangerous and in need of government intervention and regulation.”
This mindset motivated the administration of US President Joe Biden to issue its “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” in October 2023. The executive order focused significantly on AI’s potential downsides, including safety, threats to individual privacy and rights, labour-market risks related to the dilution or elimination of jobs, and the potential for AI algorithms to exacerbate biases or discrimination.
The executive order’s wide-ranging nature promised to create extensive reporting requirements and a significant regulatory framework. However, much of this framework remained undefined, as numerous government agencies were charged with developing standards, guidelines and regulations to address very general concerns across a wide range of industries and disciplines.
Such an early and comprehensive move towards the regulation of AI raises two yellow flags.
First, while the development and application of artificial intelligence has spanned decades, the rapidity with which the technology is now advancing means we remain in the early stages of its evolution. It is perhaps the height of policymaker hubris to believe that at such a time, one might design and implement a regulatory framework that is at once effective at accomplishing its remarkably ambitious objectives without stifling the technology or disadvantaging the US AI industry. The trajectory and overall impact of AI simply remain too uncertain.
Second, this uncertainty suggests that any move towards comprehensive regulation may be ill-advised. This is surely the case when such a move is made through executive action. If the US is to consider adopting any meaningful AI regulation at the federal level, the stakes and uncertainty make it critical to respect our constitutional architecture. Namely, significant AI policy changes should be made only through the legislative process rather than executive orders, with courts applying existing laws to emerging AI applications without adopting novel legal theories.
The current US administration under President Donald Trump took a step in the right direction by rescinding President Biden’s 2023 AI executive order at its earliest opportunity. And its America’s AI Action Plan—released only recently—articulates some sensible objectives for AI policy. Chief among these are the removal of barriers constraining the development and application of AI, promoting open-source AI and reducing regulatory barriers impeding the development of the critical infrastructure necessary to support the burgeoning utilisation of AI.
The action plan, however, also suggests a federal-government role in AI education, worker training and a variety of potential investments and supports across a wide range of the AI ecosystem. While more details are needed to fully understand the scope of these possible interventions, they likely betray a confidence in the efficacy of such government action that is belied by historical experience.
And perhaps the most glaring omission from the administration’s plan is any mention of attracting and retaining international talent to bolster America’s efforts in artificial intelligence. While vigorously making the case that the US needs to “win the AI race” with China, the plan makes no mention of one of our most important assets in this regard: the desire of so many people around the world—which, no doubt, includes top AI researchers and engineers—to emigrate to the US. In light of the administration’s overall stance on immigration, perhaps this isn’t a surprise. But it is, surely, a mistake.
The path ahead
It could be argued that the technology sector is one of the few success stories of the US regulatory regime in this century: a light-touch, market-oriented approach that took care to preserve the tremendous upside of the industry without a strong presumption of downside risk.
We should allow history to repeat itself and follow a similar course in the prospective regulation of AI technologies. Every innovation carries its own risks, but we tend to forget that a web of existing laws—including those related to fraud, discrimination and consumer protection—is sufficient to handle much of the possible downside. In addition, before reaching for the cudgel of regulation, it’s wise to consider the alternatives. For example, education and digital literacy play important roles in protecting individuals from AI-enabled fraud and other threats, such as deepfakes. These defences empower consumers while preserving the benefits of AI innovation.
Finally, another lesson from the explosion of new technologies over the past 30 years and the US dominance of the sector is the critical role played by the extraordinary talent America is able to attract to its shores. The prevalence of immigrants among the leadership of US technology companies and exciting tech startups should tell us something.
To realise artificial intelligence’s promise, we cannot erect regulatory burdens that threaten to kill what could be a golden goose. And if the US is serious—as it claims to be—about sustaining leadership in AI as it has in so many other areas of technology, government and regulatory barriers to talent migration and innovation must come down and stay down.