Another Section 230 Reform Bill: Dangerous Algorithms Bill Threatens Speech

Legislation that would push radical, popularly disfavored, or simply illegible speech out of the public eye via private proxies raises unavoidable First Amendment concerns.

October 28, 2020 • Commentary
This article appeared on Techdirt on October 28, 2020.

Representatives Malinowski and Eshoo have introduced a Section 230 amendment called the “Protecting Americans from Dangerous Algorithms Act” (PADAA). The title is something of a misnomer. The bill does not address any danger inherent to algorithms but instead seeks to prevent them from being used to share extreme speech.

Section 230 of the Communications Act prevents providers of an interactive computer service, such as social media platforms, from being treated as the publisher or speaker of user‐​submitted content, while leaving them free to govern their services as they see fit.

The PADAA would modify Section 230 to treat platforms as the speakers of algorithmically selected user speech in suits brought under 42 U.S.C. 1985 and the Anti‐​Terrorism Act. If platforms use an “algorithm, model, or computational process to rank, order, promote, recommend, [or] amplify” user‐​provided content, the bill would remove 230’s protection in suits seeking to hold platforms responsible for acts of terrorism or failures to prevent violations of civil rights.

These are not minor exceptions. A press release published by Rep. Malinowski’s office presents the bill as intended to reverse the US Court of Appeals for the 2nd Circuit’s ruling in Force v. Facebook, and endorses the recently filed McNeal v. Facebook, which seeks to hold Facebook liable for recent shootings in Kenosha, WI. These suits embrace a sweeping theory of liability that treats platforms’ provision of neutral tools as negligent.

Force v. Facebook concerned Facebook’s algorithmic “Suggested Friends” feature and its prioritization of content based on users’ past likes and interests. Victims of a Hamas terror attack sued Facebook under the Anti‐​Terrorism Act for allegedly providing material support to Hamas by connecting Hamas sympathizers to one another based on their shared interests and surfacing pro‐​Hamas content in its Newsfeed.

The 2nd Circuit found that Section 230 protected Facebook’s neutral processing of the likes and interests shared by its users. Plaintiffs appealed the ruling to the US Supreme Court, which declined to hear the case. The 2nd Circuit held that, although:

plaintiffs argue, in effect, that Facebook’s use of algorithms is outside the scope of publishing because the algorithms automate Facebook’s editorial decision‐​making.

Facebook is nevertheless protected by Section 230 because its content‐​neutral processing of user information doesn’t render it the developer or author of user submissions.

The algorithms take the information provided by Facebook users and “match” it to other users—again, materially unaltered—based on objective factors applicable to any content, whether it concerns soccer, Picasso, or plumbers.

The court concludes by noting the radical break from precedent that plaintiffs’ claims demand. The PADAA would establish this sweeping shift as law.

Plaintiffs’ “matchmaking” argument would also deny immunity for the editorial decisions regarding third‐​party content that interactive computer services have made since the early days of the Internet. The services have always decided, for example, where on their sites (or other digital property) particular third‐​party content should reside and to whom it should be shown.

Explicitly opening platforms to lawsuits for algorithmically curated content would compel them to remove potentially extreme speech from algorithmically curated feeds. Algorithmic feeds are given center stage on most contemporary platforms – Twitter’s Home timeline, the Facebook Newsfeed, and TikTok’s “For You” page are all algorithmically curated. If social media platforms are exposed to liability for harms stemming from activity potentially inspired by speech in these prominent spaces, they will cleanse them of potentially extreme, though First Amendment protected, speech. This amounts to legislative censorship by fiat.

Exposing platforms to a general liability for speech implicated in terrorism and civil rights deprivation claims is more insidiously restrictive than specifically prohibiting certain content. In the face of a nebulous liability, contingent on the future actions of readers and listeners, platforms will tend to restrict speech on the margins.

The aspects of the bill specifically intended to address Force v. Facebook’s “Suggested Friends” claims, namely the imposition of liability for automated procedures that “recommend” any “group, account, or affiliation,” will be even more difficult to implement without inhibiting speech and political organizing in an opaque, idiosyncratic, and ultimately content‐​based manner.

After I attended a Second Amendment rally in Richmond earlier this year, Facebook suggested follow‐​up rallies, local town hall meetings, and militia musters. However, in order to avoid liability under the PADAA, Facebook wouldn’t have to simply refrain from serving me militia events. Instead, it would have to determine if the Second Amendment rally itself was likely to foster radicalism. In light of its newfound liability for users’ subsequent actions, would it be wise to connect participants or suggest similar events? Would all political rallies receive the same level of scrutiny? Some conservatives claim Facebook “incites violent war on ICE” by hosting event pages for anti‐​ICE protests. Should Facebook be held liable for Willem van Spronsen’s firebombing of ICE vehicles? Rep. Malinowski’s bill would require private firms to make far‐​reaching determinations about diverse political movements under a legal Sword of Damocles.

Spurring platforms to exclude organizations and interests tangentially linked to political violence or terrorism from their recommendation algorithm would have grave consequences for legitimate speech and organization. Extremists have found community and inspiration in everything from pro‐​life groups to collegiate Islamic societies. Budding eco‐​terrorists might be connected by shared interests in hiking, veganism, and conservation. Should Facebook avoid introducing people with such shared interests to one another?

The bill also fails to address real centers of radicalization. As I noted in my recent testimony to the House Commerce Committee, most radicalization occurs in small, private forums, such as private Discord and Telegram channels, or the White Nationalist bulletin board Iron March. The risk of radicalization, like that of rumor, disinformation, or emotional abuse, is inherent to private conversation. We accept these risks because the alternative, an omnipresent corrective authority, would foreclose the sense of privileged access necessary to the development of a self. However, this bill does not address private spaces. It only imposes liability on algorithmic content provision and matching, and wouldn’t apply to sites with fewer than 50 million monthly users. Imageboards such as 4chan and 8chan are too small to be covered, and users join private Discord groups via invite links, not algorithmic suggestion.

The Protecting Americans from Dangerous Algorithms Act attempts to prevent extremism by imposing liability on social media firms for algorithmically curated speech or social connections later implicated in extremist violence. Expecting platforms to predictively police algorithmically selected speech a la Minority Report is fantastical. In practice, this liability will compel platforms to set broad, stringent rules for speech in algorithmically arranged forums. Legislation that would push radical, popularly disfavored, or simply illegible speech out of the public eye via private proxies raises unavoidable First Amendment concerns.
