Today I want to make two broad points. First, internet‐borne extremism is not primarily the fault of algorithms, but the natural result of open communicative tools that allow users to create nongeographic communities of interest. As a result, efforts to quash extreme speech will necessarily suppress innocent speech alongside it. Second, platform counter‐extremism efforts should be cognizant of the off‐platform effects of their decisions. Deplatforming extremists doesn’t make them disappear and may drive them towards more extreme communities.
Algorithms Aside
Internet extremism is not primarily an algorithmic phenomenon. Instead, it is a function of the internet's capacity to host nongeographic communities of interest. The internet's connective power has long been celebrated, from the early days of Usenet to Tahrir Square. But as this capacity for interconnection has increasingly been put to use by domestic extremists, Americans have begun to see cracks in the promise of an open internet. The problem is often understood as primarily algorithmic – social media feeds users increasingly extreme content in an effort to keep them engaged, slowly inducing a radical worldview. This tidy account ignores both individual user agency and the internet's broader structural effects.
Prior to the internet, and social media in particular, it was difficult to form communities around niche interests. If you were a trainspotter, a fan of a foreign sports team, or obsessed with Nazi occultism, apart from a few periodicals and the occasional distant convention, you had little ability to join a community of the like-minded. Mainstream media gatekeepers screened out radical thought on both the right and the left – Huey Newton or David Duke might occasionally be interviewed by the mainstream press, but their adherents couldn't imbibe a 24/7 diet of Black Panther or KKK videos. On the internet, however, there is more content catering to every subcultural niche than anyone could hope to consume in a lifetime.