Sen. Amy Klobuchar (D‑Minn.) recently penned an op-ed that begins with an “urgent” call to congressional action lest horrible harms and dangers be allowed to run loose within our society. What could have happened to inspire Klobuchar to such immediate and forceful action? A mass shooting? A natural disaster? International conflicts and wars?
No, the precipitating event is far worse.
It is a satirical deepfake of Klobuchar talking about Sydney Sweeney’s jeans ad.
I wish I were joking, but I’m not. Klobuchar is upset with an AI video that depicts her saying that “Republicans have girls with perfect titties in their ads,” but “You know, just because we’re the party of ugly people doesn’t mean we can’t be featured in ads. Okay?”
Is the AI video deep, insightful, policy-focused criticism of the senator or her party? No. But that doesn’t make it any less protected by the First Amendment. Klobuchar complains that some people “clearly thought it was real.” Even worse, she says that “people who see this type of content develop lasting negative views of the person in the video, even when they know it is fake.”
Our society is full of satire and parody, including Saturday Night Live, South Park, “Weird Al” Yankovic, The Onion, The Babylon Bee and more. Klobuchar is arguing that parody and satire are inherently harmful and thus should be regulated and censored by the government.
But the problem with the government trying to stop misinformation, AI-generated or otherwise, is that people are biased and disagree about what misinformation is. Accusations of misinformation often become cudgels wielded against one’s political opponents. And this is not limited to any one political party. Take the last election, in which Florida’s Department of Health threatened local TV stations for running pro-choice ads, claiming such “false advertisements… would likely have a detrimental effect on the lives and health of pregnant women in Florida.”
And in California, Gov. Gavin Newsom signed AB 2839, which attempted to prohibit deceptive or manipulated media. Newsom was upset that a satirical deepfake of Kamala Harris, similar to the Klobuchar video, used Harris’s voice to mock herself as the “ultimate diversity hire” and a “deep state puppet” hiding her “total incompetence.”
The courts stopped the leaders of Florida and California from silencing their political opponents because these efforts clearly violated the First Amendment. And it’s almost certain that Klobuchar’s proposed solution to unwanted deepfakes, her NO FAKES Act, would also be quickly struck down.
Thanks to recent research on the NO FAKES Act by technology and legal expert Daphne Keller at Stanford University, it is easy to see why the bill is a deeply flawed and harmful piece of legislation. As Keller concludes: “But don’t let the tedium or intricacy fool you. NO FAKES is a seriously bad law for free expression online.”
The bill directly outlaws multiple categories of currently legal speech, including AI-generated content; legally and financially encourages companies to remove even more legal speech; introduces a vast notice-and-takedown regime that favors trolls and the overly litigious; creates requirements to broadly filter speech regardless of intent or context; restricts the development of AI products; and forces companies to identify anonymous users who break these new requirements.
While the bill technically contains carveouts to protect free expression, Klobuchar is arguing we need NO FAKES to take down the deepfake mocking her. That should make it pretty clear just how little protection she wants to offer to political satire and parody. And this attack on political speech and satire is bipartisan, as other prominent lawmakers, including Sens. Chris Coons (D‑Del.), Thom Tillis (R‑N.C.), and Marsha Blackburn (R‑Tenn.), are co-sponsors of this censorial bill.
A better approach is to first remember that existing laws still apply to AI. Laws against illegal speech and actions, whether financial fraud, incitement to violence or election interference, still apply to AI-generated content. The fears Klobuchar mentions in her op-ed are the same fears that existed, and were managed, in the world before AI.
Just as we have learned to avoid Nigerian prince email scams, we now need to learn how to be critical consumers of AI-generated content. This doesn’t happen overnight, and yes, there will be challenges as we adjust to new, transformative technologies.
But rather than resorting to censorship and getting angry about online memes, politicians should grow some thicker skin.
And maybe some new jeans.