Explicitly opening platforms to lawsuits for algorithmically curated content would compel them to remove potentially extreme speech from algorithmically curated feeds. Algorithmic feeds are given center stage on most contemporary platforms – Twitter’s Home timeline, the Facebook Newsfeed, and TikTok’s “For You” page are all algorithmically curated. If social media platforms are exposed to liability for harms stemming from activity inspired by speech in these prominent spaces, they will cleanse them of potentially extreme, though First Amendment-protected, speech. This amounts to legislative censorship by fiat.
Exposing platforms to a general liability for speech implicated in terrorism and civil rights deprivation claims is more insidiously restrictive than specifically prohibiting certain content. In the face of a nebulous liability, contingent on the future actions of readers and listeners, platforms will tend to restrict speech on the margins.
The aspect of the bill specifically intended to address Force v. Facebook’s “Suggested Friends” claims – the imposition of liability for automated procedures that “recommend” any “group, account, or affiliation” – will be even more difficult to implement without inhibiting speech and political organizing in an opaque, idiosyncratic, and ultimately content-based manner.
After I attended a Second Amendment rally in Richmond early this year, Facebook suggested follow-up rallies, local town hall meetings, and militia musters. However, in order to avoid liability under the PADAA, Facebook wouldn’t simply have to refrain from serving me militia events. Instead, it would have to determine whether the Second Amendment rally itself was likely to foster radicalism. In light of its newfound liability for users’ subsequent actions, would it be wise to connect participants or suggest similar events? Would all political rallies receive the same level of scrutiny? Some conservatives claim Facebook “incites violent war on ICE” by hosting event pages for anti-ICE protests. Should Facebook be held liable for Willem van Spronsen’s firebombing of ICE vehicles? Rep. Malinowski’s bill would require private firms to make far-reaching determinations about diverse political movements under a legal Sword of Damocles.
Spurring platforms to exclude organizations and interests tangentially linked to political violence or terrorism from their recommendation algorithms would have grave consequences for legitimate speech and organizing. Extremists have found community and inspiration in everything from pro-life groups to collegiate Islamic societies. Budding eco-terrorists might be connected by shared interests in hiking, veganism, and conservation. Should Facebook avoid introducing people with such shared interests to one another?
The bill also fails to address real centers of radicalization. As I noted in my recent testimony to the House Commerce Committee, most radicalization occurs in small, private forums, such as private Discord and Telegram channels, or the white nationalist bulletin board Iron March. The risk of radicalization – like that of rumor, disinformation, or emotional abuse – is inherent to private conversation. We accept these risks because the alternative – an omnipresent corrective authority – would foreclose the sense of privileged access necessary to the development of a self. However, this bill does not address private spaces. It only imposes liability on algorithmic content provision and matching, and wouldn’t apply to sites with fewer than 50 million monthly users. Imageboards such as 4chan and 8chan are too small to be covered, and users join private Discord groups via invite links, not algorithmic suggestion.
The Protecting Americans from Dangerous Algorithms Act attempts to prevent extremism by imposing liability on social media firms for algorithmically curated speech or social connections later implicated in extremist violence. Expecting platforms to predictively police algorithmically selected speech à la Minority Report is fanciful. In practice, this liability will compel platforms to set broad, stringent rules for speech in algorithmically arranged forums. Legislation that would push radical, popularly disfavored, or simply illegible speech out of the public eye via private proxies raises unavoidable First Amendment concerns.