Chairman Kolodin, Co-Chairman Gillette, and distinguished members of the committee, thank you for the opportunity to testify today on the urgent need to protect our First Amendment rights from erosion and on the steps we can take to improve AI literacy. My name is David Inserra, and I am a Fellow for Free Expression and Technology at the Cato Institute. My research focuses on the intersection of a culture of free speech and policies that encourage the development of technology and online platforms. I appreciate the chance to share my research and views on this important topic of AI and elections. I am here in my capacity as a Cato scholar, and my views are my own. My testimony will discuss three key points: what AI is, how to think about the concerns surrounding AI deepfakes, and next steps for policymakers regarding AI and election integrity.

What Is AI?

A conversation about AI should first begin by defining it, yet there is no clear agreement on what defines AI. 15 U.S.C. § 9401(3) defines AI as a “machine-based system that can, for a given set of human-defined objectives, perform the following functions:

  • Make predictions
  • Make recommendations
  • Make decisions
  • Influence real or virtual environments”i

Other definitions have focused more on the human nature of the work it can perform, or the human-like intelligence it exhibits. Regardless, it is worth noting that AI has been with us for as long as we have had devices with significant computing capabilities. The autocomplete function, Clippy (the early Microsoft Office digital assistant), and the chess computer Deep Blue that defeated World Chess Champion Garry Kasparov in 1997 are all examples of what might be considered AI but that today we find normal or even outdated.ii We have incorporated and adapted to these systems, and many users no longer think of them as AI at all. But modern AI systems, including generative AI and large language models, have drastically advanced in their ability to learn from and utilize larger amounts of data. The result has been an explosion of new uses of AI in areas previously thought to be impossible or reserved for humans.iii These include groundbreaking new technologies such as self-driving vehicles, the analysis of medical data to spot disease or cancer, and of course the ability to generate art, videos, music, and descriptive reports.

Concerns About Deepfakes and Elections

While there have been many positive uses of AI, there have also been growing fears that AI-generated content that is fake or misleading can deceive voters. Such deepfakes could spread false information about government actions, election times and procedures, or the statements of political figures, all while appearing to be legitimate and authentic. Fooling potential voters into not voting, or into voting based on faulty information, is injurious to our democratic system.

It is also true, however, that such deepfakes have not significantly harmed our elections. Policy discussions of deepfakes predate our recent AI boom. I first heard serious discussion of deepfakes in 2018, and in every election since, various policymakers and thinkers have warned that this election will be the one where deepfakes overwhelm our electorate.iv Such concerns, though, have grown louder since developments in AI. In 2024, for example, the World Economic Forum released its Global Risks Report, which, based on its polling of hundreds of global experts, found that AI-powered misinformation and disinformation (such as deepfakes) was the greatest threat immediately facing our world.v

But these concerns have not materialized. We perhaps came closest in 2024, when a political consultant named Steven Kramer sent an AI-generated message designed to sound like President Biden in the days leading up to the New Hampshire primary. The message attempted to convince voters to “save” their votes for the November election rather than voting in the primary.vi Our political system and government responded quickly, identifying the deepfake as false and blanketing the airwaves with correct messages about voting procedures. Kramer was charged with felony voter suppression, though he was acquitted due to certain specifics of the 2024 Democratic primary in New Hampshire.vii He also faces a $6 million fine from the FCC for caller ID spoofing with the intent to defraud and cause harm.viii

The point is that existing systems and laws largely worked to prevent any significant harm to our elections. This case is also illustrative because Kramer claimed that he sent the message as a warning about deepfakes, not as an attempt to actually meddle in the primary. That points to the reality that seemingly bad or false uses of AI are not all inherently malicious. Someone may use AI to mistakenly or inadvertently create content that is misleading, or to make a joke.

In fact, AI is often used to engage in political parody, criticism, and commentary. Such speech is strongly protected by the First Amendment, even when those in power may not like it or others find it disparaging. Americans of all stripes and many political figures have posted AI-generated content to criticize their political opponents, whether the target be President Donald Trump, Gov. Gavin Newsom, Vice President JD Vance, Sen. Amy Klobuchar, or former Vice President Kamala Harris.ix Such parody or satirical speech is protected by the First Amendment, and a California law that took aim at such deepfakes has already been struck down in the courts, along with other similar state laws.x While the medium of AI is different from a newspaper cartoon or a comedy impression such as those on Saturday Night Live, the nature of the political speech is fundamentally the same.

What is different, however, is that we as a society have yet to adjust to the widespread and impactful nature of AI. And this shock and unpreparedness in response to new technologies is nothing new. The introduction of Adobe Photoshop in 1990 created its own fear that people would no longer be able to trust any photograph or image.xi Experts and media institutions worried that “The potential for abuse is obvious” and “frightening.”xii But even this fear was not itself new. President Taft was frustrated with “fake Taft pictures” in 1911, prompting his attorney general to demand that such practices stop and Senator Henry Cabot Lodge to propose a law to outlaw fake photos.xiii

Whether the technology was the first spread of photography, basic editing tools, Photoshop, or AI-generated content, society has long had to grapple with the provenance of a given piece of imagery. But we now understand photo editing and Photoshop and no longer approach them with panic or worry. As our society becomes more exposed to and educated about AI-generated content, we will develop norms and tools for how to handle it, just as we have with prior technologies.xiv So rather than restricting political speech in the name of stopping deepfakes, we must continue to protect free expression with our laws while also encouraging a culture of free expression that adapts to new technologies.

Indeed, there have been various efforts in many states and at the federal level to prohibit deepfakes or to require some disclosure that they were created with AI tools.xv Several concerns exist with this approach. Prohibition and disclosure may chill speech by forcing a wide range of content to be labeled as AI-generated.xvi As discussed earlier, many tools could fall within the definition of AI, thus lumping malicious deepfakes together with mundane and non-controversial uses of AI. This combination of many different types of AI may cause viewers to treat all content carrying such a label with suspicion or skepticism, even if it was subject only to normal AI uses such as minor editing. Another concern is that such regulations often cover content that is satire, parody, or political criticism, and prohibiting or labeling such content may prevent its creation or undermine its effectiveness, thus chilling important political speech.

So rather than regulating AI in a way that will broadly impact protected speech, we should apply existing laws that already prohibit fraud, harassment, and other relevant harms.xvii If a state believes its laws are unclear or might not apply to AI, then it is reasonable to clarify that relevant existing laws do indeed apply. Here too we see that society can learn and adapt to prevent belief in widespread frauds. Nigerian prince emails in the early days of the World Wide Web posed a far greater threat of fraud than they do today because our society has learned not to fall for that trap. Criminals will always abuse new technologies, but society and our laws can and will adapt to prevent abuse, whether that abuse is a thief looking to steal from others or a criminal impersonating an election official to prevent voters from voting.

Next Steps for Policymakers

Beyond this, how should policymakers proceed in this space to ensure their voters’ electoral and expressive rights are protected in light of new AI tools? The answers may not lie in seeking to regulate the technology, but rather in improving literacy around AI technologies and how they may be used in the media landscape, including in information around elections.

A first step would be to improve education around elections and AI. While the government can attempt to prohibit lies about election locations or times, ultimately the best answer is an informed and empowered electorate. If individuals clearly understand how, where, and when they can cast their vote, it is harder for false or confusing beliefs to take hold. Similarly, while policymakers can try to limit harmful uses of new technologies like AI, the best way to prevent abusive uses of technology is to ensure that our citizens understand and know how to navigate new technologies. This starts with digital literacy in classrooms, teaching children how to engage with new technologies, but it also includes opportunities that state and local officials have to connect citizens with good resources from civil society. These include learning exercises and lessons from educational organizations, such as MIT’s Media Literacy in the Age of Deepfakes, and tools for using and identifying AI from industry, such as Originality.ai or Winston AI.xviii With such tools and resources, society can adapt and learn how to navigate AI-generated content.

Another important thing to remember is that industry is creating standards and tools to tackle problematic AI content. For example, various platforms and companies have developed their own requirements for labeling AI content that their products produce or that appears on their platforms.xix Companies are also working to develop standards to label or establish the provenance of AI-generated content, such as the Coalition for Content Provenance and Authenticity and the Data and Trust Alliance.xx And more generally, AI companies are constantly innovating and trying to provide AI tools that users find useful and accurate. The specific technologies and policies that each company uses differ, providing users with different options that best serve their needs or viewpoints.

And finally, policymakers should remember all the positive ways AI can help citizens and elections. AI tools can be used to stop bad actors and identify deepfakes.xxi AI can provide cheap yet deep campaign analysis of voter data so that local candidates can improve their operations and messaging or connect with typically under-represented and underserved communities. AI can make it easier and cheaper to create effective election materials, advertisements, documentaries, and all sorts of other political content, enabling individuals and candidates to more easily and effectively engage in political dialogue.xxii Candidates have used AI to create content in their own voice but in other languages, helping them connect more directly with different groups of voters.xxiii In Arizona, this might look like allowing candidates, as well as advocacy and educational organizations, to more easily reach Spanish speakers or the Native American community.

Conclusion

So, as we all work to protect our elections, I encourage policymakers to narrowly target only illegal acts while encouraging the use and development of AI tools that enable more speech. By focusing on digital literacy and education rather than mandates and prohibitions, policymakers can help citizens become better consumers of information online and make more informed decisions.