Telecom, Internet & Information Policy

October 20, 2020 9:55AM

A New Threat to Online Speech?

There seems to be a growing consensus among legal experts and free speech activists that international human rights law (IHR) should provide the framework for social media’s content moderation. In a new essay titled “But Facebook’s Not a Country,” Dangerous Speech Project founder Susan Benesch joins this chorus.

What is IHR? It is a modern phenomenon. The Universal Declaration of Human Rights (UDHR), adopted by the UN General Assembly in 1948, is considered its foundational document. The UDHR has provided the basis for a set of treaties, covenants, and conventions signed by UN member states, though governments often exempt themselves from specific obligations. These instruments are intended to promote human rights across borders and to create a framework of rules, norms, and standards accepted in relations between sovereign states, free and unfree, democratic and authoritarian.

In our context, the relevant international treaties are the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) and the International Covenant on Civil and Political Rights (ICCPR), though I will focus mainly on the ICERD. Benesch acknowledges that the ICERD’s proscription “seem[s] considerably broader than the ICCPR’s hate speech provisions” because it requires “the prohibition of different, and likely much more, speech.”

Indeed, the ICERD’s Article 4 calls for the criminalization of “dissemination of ideas based on racial superiority or hatred, incitement to racial discrimination (…) including the financing thereof…” The article also requires the prohibition of “organizations, and also organized and all other propaganda activities, which promote and incite racial discrimination, and (…) participation in such organizations or activities.” Such a ban would probably cover the sale of Hitler’s Mein Kampf and outlaw organizations like the Nation of Islam or the Islamist Hizb-ut-Tahrir.

Benesch is not overly concerned about this threat to speech. She concludes that the ICCPR and the ICERD have been reconciled because the UN body charged with interpreting the ICERD “seems to have deferred to the ICCPR by accepting the principles of legality, proportionality, and necessity.” This three-prong test is embedded in Article 19 of the ICCPR, which says that legitimate restrictions on speech must be “provided by law and necessary.” Benesch posits that this three-prong test will doom most speech restrictions required by the ICERD.

Benesch doesn’t explain her reasons for claiming that the ICCPR and the ICERD have been reconciled, but the claim is probably based on General Recommendation No. 35, titled “Combating Racist Hate Speech” and adopted by the UN Committee on the Elimination of Racial Discrimination in September 2013. The Committee is charged with issuing authoritative interpretations of the ICERD. The recommendation says that “the application of criminal sanctions should be governed by the principles of legality, proportionality, and necessity.”

However, the Committee in the same decision also recommends that states criminalize:

Dissemination of ideas based on racial or ethnic superiority or hatred; expressions of insults, ridicule or slander of persons or groups or justification of hatred or contempt when it clearly amounts to incitement to hatred or discrimination; participation in organizations and activities which promote and incite racial discrimination.


The Committee even recommends that “denials or attempts to justify crimes of genocide and crimes against humanity,” if they constitute incitement to racial hatred, should be criminalized. At the same time, the Committee stresses that the expression of opinions about historical facts should not be prohibited.

The propositions stated by the Committee in its General Recommendation No. 35 seem irreconcilable. On the one hand, they call for applying the test of legality, proportionality, and necessity in a way supporters of IHR would endorse as protecting free speech. On the other, they call for speech criminalization that would often fail that very three-prong test.

UN case law sets out authoritative interpretations of the ICERD. That case law, contrary to Benesch’s claim, indicates that the principles of legality, proportionality, and necessity do not prevent the body charged with interpreting the ICERD from issuing decisions that go further than the hate speech provisions of most liberal democracies.

Take the case against German politician and author Thilo Sarrazin. In 2009, Sarrazin, a former Social Democratic finance senator for the city‐​state of Berlin, lashed out at Muslim immigrants in an interview with the magazine Lettre International.

Sarrazin said the majority of Muslim immigrants were living off social benefits and did not contribute to the economy beyond the fruit and vegetable trade. He complained about high birth rates among Muslim immigrants and called for a general ban on immigration “except for highly qualified individuals.” Sarrazin’s interview was reported to the police, but the prosecutor refused to charge him with violating the country’s law against “hate speech.” The petitioner then complained to the U.N. Committee on the Elimination of Racial Discrimination.

In 2013, the committee reprimanded Germany for not effectively investigating Sarrazin. It concluded that Sarrazin’s statements “amounted to dissemination of ideas based upon racial superiority or hatred and contained elements of incitement to racial discrimination in accordance with article 4 (a) of the Convention.” By not punishing Sarrazin, the German state had violated international human rights law. Even though Germany has the toughest laws against “hate speech” in Western Europe, the committee called on Berlin to impose even stricter limits on speech in order to fulfill Germany’s obligations under article 4 of the Convention.

In this case, the principles of legality, proportionality, and necessity did not constrain an authoritative UN interpretation of the ICERD.

Of course, none of this means I endorse Thilo Sarrazin’s opinions, but that’s not the point. The point is how we define the limits of legitimate public debate, especially on contested political issues like immigration. In a liberal democracy, speech should be free up to incitement to violence or other criminal activity, i.e., the famous “emergency principle.” IHR justifies many more restrictions on speech.

Another example involves the Danish politician Pia Kjaersgaard, a member of parliament and former leader of the Danish People’s Party. In 2003, in a letter to the editor of the newspaper Kristeligt Dagblad, Kjaersgaard called on the government to ban female circumcision. She complained that the Danish-Somali Association had been consulted about the forthcoming law. She wrote: “To me this corresponds to asking the association of pedophiles whether they have any objections against child sex or asking rapists whether they have any objections to a tougher sentence for rape.” A petitioner reported Kjaersgaard to the police for having compared individuals of Somali origin to pedophiles and rapists. The Danish public prosecutor refused to press charges.

The petitioner complained to the U.N. Committee on the Elimination of Racial Discrimination. In 2006, the Committee concluded that Denmark had violated Article 4 of the ICERD by not prosecuting Kjaersgaard for hate speech. Once again, the three‐​prong test embedded in international human rights law didn’t protect speech that would be considered legal in democracies with hate speech laws.

If one adds the UN Human Rights Council’s Universal Periodic Reviews (UPRs) of member states to the authoritative interpretations of the ICERD, it becomes clear that support for further restrictions on hate speech is widespread within the UN system.

The UPRs were introduced in 2006. They are conducted by a working group consisting of the 47 member states of the Council. Information for the reviews is provided by the states themselves, human rights experts and groups, NGOs, and U.N. treaty bodies like the Committee on the Elimination of Racial Discrimination. The reviews assess the extent to which states respect the human rights obligations set out in treaties like the ICERD, and they contain recommendations for improvement. Usually, the UPRs express little or no concern for freedom of expression. Recommendations from the Committee on the Elimination of Racial Discrimination, for example, recently called for tougher and broader hate speech laws in Denmark and the U.K.

During Denmark’s Universal Periodic Review in 2016, the U.N. Committee on the Elimination of Racial Discrimination made specific recommendations to criminalize more speech. For example:

“The Committee on the Elimination of Racial Discrimination encouraged Denmark to amend its Criminal Code to bring it fully into line with the provisions of ICERD.” And the U.N. committee said, “(i)t was concerned about the low number of court cases on hate crimes and the lack of an explicit prohibition in the Criminal Code of organizations that promoted racial discrimination.”

And the UPR for the United Kingdom of Great Britain and Northern Ireland (2017) recommends, among other things, that the U.K. “incorporate the Convention on the Elimination of All Forms of Racial Discrimination into the domestic law to ensure direct and full application of the principles and provisions of the Convention.” This is a call for broader controls on speech.

Benesch and other supporters of IHR point to the so-called Rabat Plan of Action as an insurance policy against overly broad interpretations of the hate speech provisions of IHR. The Rabat Plan of Action was adopted by the U.N. High Commissioner for Human Rights in 2013. Its goal is to draw the proper line between freedom of expression and illegal “hate speech.” To narrow the scope of legitimate bans on “hate speech,” the plan defines a six-part threshold test for speech that may be prohibited under criminal law. The test takes into consideration the context of the incitement to hatred, the speaker, intent, content, the extent of the speech, and the likelihood of causing harm. According to the plan, any limitations on freedom of speech “must remain within strictly defined parameters flowing from the international human rights instruments, in particular the International Covenant on Civil and Political Rights and the International Convention on the Elimination of Racial Discrimination.” The plan reiterates that restrictions must be assessed by the test of legality, proportionality, and necessity laid down in Article 19 of the ICCPR.

I agree with Benesch and others that this is a welcome development, but I am less sure that the consequences are as positive for free speech as the supporters of IHR claim. The Sarrazin case described above supports my doubts: it was decided after the adoption of the Rabat Plan of Action. In any case, the Rabat test does not meet a First Amendment standard of emergency and viewpoint neutrality. For a proper evaluation of the three-prong test of legality, proportionality, and necessity, it would be important to find out whether the Committee on the Elimination of Racial Discrimination is ignoring the test in its decisions or applying it in a way that differs from the understanding of those proponents of IHR who favor free speech.

In spite of the progress the Rabat Plan of Action has made in defining and narrowing the concept of hate speech, we should not ignore the inherent instability, vagueness, and arbitrariness of the hate speech provisions of IHR. Having gained new ways of speaking on social media, we should think hard before adopting IHR as a limit on our newly won powers.

October 15, 2020 3:09PM

Accusations of Social Media “Election Interference” Put Online Speech at Risk

Earlier this week, the New York Post published articles containing information about alleged emails between Hunter Biden, the son of Democratic presidential nominee Joe Biden, and employees at Chinese and Ukrainian energy firms. Twitter and Facebook both took steps to limit the spread of the articles, prompting accusations of “election interference.” Prominent Republican lawmakers took to social media to condemn Twitter’s and Facebook’s decisions. These accusations and condemnations reveal a misunderstanding of the relevant law and policy that could result in dramatic changes to online speech.

According to Twitter, the company restricted access to the New York Post’s articles because they violated the company’s policies against spreading personal and private information (such as email addresses and phone numbers) and hacked materials; the articles include such information in images of the leaked emails. Twitter cited the same policy when it prohibited users from sharing 269GB of leaked police files. Twitter users who click on links to the two Post articles face a click-through “this link may be unsafe” warning. Those accusing Twitter of a double standard because the company allows users to share the recent New York Times article based on the president’s leaked tax documents neglect the fact that the New York Times did not publish images of the documents. Although the decision to block the spread of the Post’s articles was consistent with Twitter’s policies, Twitter CEO Jack Dorsey criticized making that decision without an explanation or context.

According to a Facebook spokesperson, Facebook’s decision to restrict the spread of the Post’s Hunter Biden articles is “part of [Facebook’s] standard process to reduce the spread of misinformation.” Compared to Twitter’s response, Facebook’s was less clear.

Whatever one thinks about Twitter’s and Facebook’s decisions in this case, the decisions were legal and consistent with Section 230 of the Communications Decency Act. Much of the online commentary surrounding the restrictions on the New York Post (head over to #Section230 on Twitter to take a look for yourself) makes reference to a non-existent “publisher” v. “platform” distinction in the law.

In brief, Section 230 states that interactive computer services (such as Twitter, the New York Times’ comments section, Amazon, etc.) cannot, with some very limited exceptions, be considered the publisher of the vast majority of third-party content. Twitter is not the publisher of your tweets, but it is the publisher of its own content, such as the warning that appears when users click on the two New York Post article links. Section 230 applies to “platforms” and “publishers” alike, and it does not prevent social media sites from fact-checking, removing, or limiting access to links.

Some “Big Tech” critics decided not to focus on Section 230 and focused instead on election interference. The conservative outlet The Federalist issued a statement making this claim, as did many others. According to those making the “election interference” claim, the New York Post articles are embarrassing to Joe Biden, so Twitter’s and Facebook’s actions constitute pro-Biden interference in the 2020 presidential election. Conservative pundits are not the only ones making this kind of claim. Senator Joshua Hawley (R-MO) wrote to Dorsey asking him to appear at a hearing titled “Digital Platforms and Election Interference.” Sen. Ted Cruz (R-TX) wrote to Dorsey accusing Twitter of trying to influence the upcoming election. Later, he accused Twitter of election interference and supported the Senate Judiciary Committee issuing a subpoena to Dorsey, which is expected to happen this coming Tuesday.

It is one thing for conservative pundits to accuse a private company of interfering in an election. In today’s political climate it is expected. What should send chills down the spine of everyone who values the freedom of speech and the freedom of association is the sight of two of the most powerful politicians in the country making the same accusation and insisting that Twitter’s CEO appear before a hearing and hand over documents related to how Twitter conducts its business.

Portraying Twitter’s and Facebook’s handling of the New York Post articles as “election interference” has significant implications. Twitter and Facebook limited access to articles that are potentially embarrassing to a political candidate. If such actions can be considered “election interference,” should every content moderation action taken by a private company against any politician or candidate be considered interference? If The Wall Street Journal rejects an op-ed written by the Green Party’s presidential candidate, is that not also “election interference”? When a music hall owner decides to allow the Trump campaign, but not the Biden campaign, to host a rally, is that not “election interference”?

“Election interference” is a term that ought to mean something useful. Unfortunately, conservative commentators seem intent on warping the term so that it means little more than, “moderating content.”

So-called “Big Tech” and content moderation will continue to make headlines next year regardless of who wins the presidential election next month. While conservative commentators and activists are convinced that “Big Tech” is engaged in an anti-conservative crusade, they should consider that the political left has its own complaints. Bipartisan anger toward Big Tech could result in Section 230 reform or other legislation that puts the freedom of speech and freedom of association at risk. As lawmakers continue to criticize the most prominent social media companies, we should remember that attempts to regulate online speech could have disastrous consequences.

October 6, 2020 1:16PM

Rights against Speech

Why do social media companies have the right to suppress speech on their platforms? In the United States, they may do so because the U.S. Supreme Court has said the First Amendment does not apply to private companies. But the companies want more than sheer discretion, and they seem unwilling to say, “we’re maximizing shareholder value which requires suppressing speech.” Indeed, they seem to want an answer to the question: why should we suppress speech?

This desire for a broader foundation for content moderation has led Facebook to the door of the United Nations and international law. Need to ban “hate speech”? Article 20 of the International Covenant on Civil and Political Rights requires it. And not just of governments. Facebook has signed the Guiding Principles on Business and Human Rights, which require businesses to “respect” human rights.

Susan Benesch treats the issues implicit in mixing content moderation and international law in her essay “But Facebook’s Not a Country: How to Interpret Human Rights Law for Social Media Companies.” The “human rights law” she would have platforms adopt may be found in Articles 19 and 20 of the International Covenant on Civil and Political Rights. Benesch argues that “human rights law” (hereafter IHRL), suitably modified, can improve and legitimate content moderation. I have my doubts.

International human rights law does have a First Amendment of sorts. Article 19 of the ICCPR states:

Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice.

Benesch adds that the ICCPR explicitly prohibits only two kinds of speech: “propaganda for war” and what we have come to call “hate speech” (both in its Article 20). In aggregate, these two banned forms of expression “represent only a tiny proportion of all the content that companies restrict under their own rules.” That’s correct. “Hate speech” receives a lot of attention, but it is actually a small part of all speech on social media and of all speech restricted by platform moderators.

In any case, Benesch believes bans on propaganda for war and “hate speech” will be quite limited because any restrictions must be “provided by law [and]…necessary.” Like others, she believes the terms “by law” and “necessary” support a tripartite test for any restrictions on speech under ICCPR. As I noted in my earlier post, this test demands that a restriction on expression must be clear enough to follow, must serve a legitimate state purpose, and must be the least intrusive means to that end.

Benesch argues that IHRL is likely to improve social media speech regulation. As noted, if only two kinds of speech may be banned, many social media speech restrictions must fall. And the remaining restrictions, strictly grounded in the ICCPR, might become more legitimate and acceptable to users. Moreover, IHRL would give the companies “a stronger basis to resist improper pressure from states to suppress speech.” Benesch may not be correct, however, that platform adoption of international law would prohibit only two kinds of speech.

Article 19 of the ICCPR also states that free expression “carries with it special duties and responsibilities.” It may therefore be subject to certain restrictions. But the grounds for restriction seem limited: “respect of the rights or reputations of others” and “the protection of national security or of public order (ordre public), or of public health or morals.” Such are the legitimate purposes demanded by the tripartite test noted earlier.

Propaganda for war and “hate speech” are fairly concrete terms, however controversial. “Rights” is a pure abstraction. How can we attach some concrete meaning to this term? Benesch lists the sources of rights under international law:

the International Bill of Rights and the International Labor Organization’s Declaration on Fundamental Principles and Rights at Work. For speech regulation the relevant documents are in the Bill of Rights, which includes the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil and Political Rights (ICCPR), and the International Covenant on Economic, Social, and Cultural Rights (ICESCR).

That’s a lot of documents, presumably a lot of rights, and thus a lot of reasons to restrict speech. And we are still quite abstract.

Fortunately, the United Nations has provided us with a brief compendium of human rights. By my count, there are 21 human rights, including “freedom of opinion and expression,” and another dozen “Human Rights Protections of Specific Groups.” Among the rights discussed are “the right to an adequate standard of living,” “the right to social security,” “the right to health,” and “the rights to work and to just and favorable conditions of work.”

In other words, governments or social media companies do not lack for justifications for restricting speech, all legitimated by international law which the companies themselves have endorsed. The “legitimate purpose” part of the tripartite test may be satisfied in many ways. Indeed, the right to free expression itself seems to be just one right among many, any one of which in some circumstances might trump “voice.”

Yet some experts might reply that free expression is different: restrictions on voice must be legal and proportionate. Perhaps when put in the balance against speech all thirty of those rights recognized by IHRL will turn out to be too vague and too intrusive to justify limits on voice. But the rights against speech are many, and time is long. I assume speech will give way later if not sooner.

Whether international law turns out to be an improvement on social media community standards depends less on the content of human rights law and more on how those rights are weighed against free speech. IHRL may turn out to be the root cause of illiberalism without borders, but it will require assistance from social media companies and their helpers, the putative proximate causes of a decline of free speech. On the other hand, the tripartite test may ultimately vindicate a broad right to free expression online if the interpreters of IHRL care more about speech than they do about the 30-odd rights that might justify limitations on it. But giving primacy to free expression among our rights would be a more secure path forward, and that is something international law does not do. The tech companies and their content moderators may recognize such primacy. Will they?

September 25, 2020 1:45PM

A Good Bad Idea from TikTok

The interim leader of TikTok, Vanessa Pappas, has just proposed that social media companies agree to warn one another about violent, graphic content on their platforms. Specifically, TikTok proposes a “hashbank for violent and graphic content” with a special concern about suicide videos. The company believes the hashbank and subsequent cross-platform suppression of the objectionable content would “significantly reduce the chances of people encountering it and enduring the emotional harm that viewing such content can bring.”
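
TikTok’s announcement does not spell out the machinery, but hash-sharing schemes of this kind generally work by exchanging fingerprints of removed content rather than the content itself. Below is a minimal sketch of that idea; the function names are hypothetical, and a cryptographic hash stands in for the perceptual hashes real systems use so that re-encoded copies still match.

```python
import hashlib

# A shared "hashbank": fingerprints of content one platform has removed,
# consulted by other platforms before serving an upload. Real deployments
# would use perceptual hashes that survive re-encoding and cropping;
# SHA-256 is used here only to keep the sketch self-contained.
shared_hashbank: set[str] = set()

def fingerprint(media: bytes) -> str:
    """Fingerprint a piece of media (a stand-in for a perceptual hash)."""
    return hashlib.sha256(media).hexdigest()

def flag_content(media: bytes) -> None:
    """Platform A removes a graphic video and contributes its fingerprint."""
    shared_hashbank.add(fingerprint(media))

def is_flagged(media: bytes) -> bool:
    """Platform B checks an upload against the shared bank before showing it."""
    return fingerprint(media) in shared_hashbank

# Platform A flags a video; platform B can now suppress identical re-uploads.
video = b"...raw video bytes..."
flag_content(video)
assert is_flagged(video)
```

Note what the sketch makes plain: membership in the bank is a blunt yes-or-no signal, with no room for context about who is viewing the content or why. That is exactly the problem the BBC example below raises.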

As it happens, I came across some violent and graphic content this morning. A friend sent me a link to a BBC News story about “Cameroon soldiers jailed for killing women and children.” The embedded video, whose label says it “contains disturbing scenes,” apparently depicts the murders of two women and children. The video might be a candidate for TikTok’s proposed hashbank. Using the BBC video to fix ideas, let’s examine the costs and benefits of TikTok’s proposal.

Let’s begin with where TikTok is on solid ground. Some of its users are younger people and thus may be protected from extreme speech in ways that adults should not be. Preventing adults from seeing content they wish to see, however, is paternalistic. But there’s another wrinkle here. TikTok mentions people “encountering” objectionable content. The dictionary tells us to encounter is “to come upon or experience especially unexpectedly.” If I choose to see risky content, it may be surprising, but the risk is part of the choice, even if it turns out to be more than I would have wanted to see ex ante. Encountering content seems more like being algorithmically shown content than personally choosing to see it.

September 17, 2020 9:49AM

WeChat or We Don’t Chat? A Total Ban on WeChat Goes Too Far

WeChat is an internet application owned by the Chinese tech company Tencent. It may not be familiar to most Americans, but it is a tool widely used in China and among Chinese communities worldwide. It is a one-stop shop that combines payment services, social media, messaging platforms, and news outlets. This app is now facing a ban in the United States.

Last month, President Trump issued an executive order related to WeChat, under which “any transaction that is related to WeChat by any person, or with respect to any property, subject to the jurisdiction of the United States” would be banned. The rationale was that WeChat “continues to threaten the national security, foreign policy, and economy of the United States.” The order itself lacks details on how exactly a ban would work, and the Department of Commerce is expected to issue implementing rules by September 20. The scope of the regulation could range from banning the app from smartphone app stores, to banning all U.S. firms, even ones located outside of the U.S., from dealing with WeChat. The ban would affect both individual users of the app and businesses who rely on it.

WeChat does present some risks, but banning it nationwide is overkill. As I explain below, a more targeted approach, such as banning the app on the work phones of defense department employees, as the Australian government did, or even on those of all government workers, is more reasonable.

As noted, there is a lot of uncertainty surrounding the executive order. With regard to individuals in particular, it is unclear what “transactions” would be covered. If the ban is intended to take the application off the various mobile marketplaces, it will seriously disrupt person-to-person communications that have been crucial for the 19 million active WeChat users who live in the United States. For some of these users, WeChat is their main channel of keeping in touch and a primary source of information; for others, the app is one of the tools they use to make money and earn a living.

The inability to use WeChat in the United States could spook many potential Chinese students and visitors, resulting in a likely drop in school enrollment and tourism revenues. Chinese students contribute approximately $15 billion to the U.S. economy every year, and Chinese tourists contribute $36 billion. The number of Chinese students and tourists was already declining due to deteriorating U.S.-China relations and the COVID-19 pandemic. The WeChat ban, which makes the United States less attractive to Chinese visitors than other countries that allow the app, can only exacerbate this trend.

In addition, the ban could go well beyond personal access to the app and extend to associated business activities. The President of the American Chamber of Commerce in Shanghai has noted that all American companies, regardless of their location, could be barred from any transactions related to the WeChat application. Because WeChat is so deeply embedded in Chinese people’s daily lives and has become a platform for a wide range of activities, a ban on transactions related to WeChat would cover sales, promotions, payments, and other business activities. For instance, Starbucks and McDonald’s use the application as a marketing and sales platform. Filmmakers use WeChat to promote their movies and sell tickets. Walmart and many other vendors use the app to sell products. In fact, the app accounts for as much as 30% of all Walmart sales in China. A WeChat ban could cost corporate users significant revenues.

A ban could also result in a 30% decline in iPhone sales. Many Chinese netizens have said that if they have to choose between the iPhone and WeChat, they will ditch their iPhones in a heartbeat.

According to a survey conducted by the American Chamber of Commerce in Shanghai, nine out of ten companies forecast that the ban would have a negative impact on their business in China, with nearly half of them predicting a loss of revenue because of the ban. Furthermore, a WeChat ban could add more fuel to the already tense relations between the United States and China, making it more difficult for businesses to operate. So far, 86% of companies have reported a negative impact on their business with China as a result of the growing tensions. The situation will only get worse with a WeChat ban.

Clearly, the WeChat ban comes with huge costs, but are there offsetting benefits? How big a security risk is WeChat, and would a ban mitigate this risk? With an increasingly hostile geopolitical rivalry brewing, the U.S. government needs to look carefully at the cyber practices coming out of China. The Executive Order indicates that the ban will protect user data from the Chinese government and fight against China’s “disinformation campaigns.” Will it do so?

Despite a lack of direct evidence on this point, there have been concerns that WeChat would share data with the Chinese government. If the administration is concerned about this with average citizens (as opposed to military personnel or other government employees, where special considerations apply), instead of banning the app, it could simply warn consumers of such risks and let them choose whether to use the application. Individuals should have the freedom to decide the level of risk they are willing to take with regard to their personal information. It’s worth noting that the issues related to consumer data are much broader than WeChat. The Chinese government has other ways, such as through web tracking and datasets available on the Dark Web, to obtain users’ information. A better way to protect consumer data, from not only the Chinese government but all governments, is to establish more general domestic and international rules on data protection and hold countries and companies accountable for violating these rules.

The Executive Order also highlighted that a ban of the WeChat application will help to mitigate “disinformation campaigns.” As with user data, if the administration is concerned about campaign meddling, it is a much larger problem than WeChat and is happening on other platforms such as Facebook and Twitter and through other means such as hacking. Focusing on only one application is insufficient and could distract from other campaigns coming from other countries.

When foreign governments try to collect data on American citizens and non-citizens living in America in order to manipulate them for political or other reasons, it is a serious issue. The U.S. government needs to get a handle on this problem immediately. In the case of WeChat, however, any national security benefits of a total ban are outweighed by the disruptions to people’s personal lives and the potential economic losses for individuals and businesses. There are real concerns related to WeChat, but more targeted actions like those the Australians have taken make more sense. A total ban on WeChat simply goes too far.

September 9, 2020 11:42AM

A History of Crowdfunding in the Wake of Violence

In the days following a vigilante shooting in Kenosha, WI, activists and supporters launched GoFundMe pages soliciting donations for the families of the dead, the medical expenses of the injured, and the costs of the shooter’s legal defense. GoFundMe has since removed the legal defense fundraiser for Kyle Rittenhouse. For those who see the shooting as a case of self-defense, GoFundMe is attempting to deny a young man legal assistance for partisan reasons. Others, proceeding from the belief that Rittenhouse is a mass shooter, looked askance at Facebook’s initial failure to remove statements supporting him, likening #FreeKyle to genocide advocacy in Myanmar.

It has become an all-too-familiar script. Some confusing, symbolically charged event occurs and partisans assemble disparate sets of often correct, albeit incomplete, facts. They then interpret them according to their tribe's values. It is not simply a matter of different camera angles (though there is plenty of jockeying for those) but of the partisan frameworks used to interpret what everyone sees. This produces incompatible narratives that frustrate nuanced deliberation, lasting grievances that can be drawn upon in future conflicts, and a no-win situation for social media platforms.

Expectations of platform policy are inevitably informed by partisans’ incompatible narratives; moderation on the basis of any one perspective will alienate users with conflicting views. Because most public commentary and media coverage proceeds from one narrative or the other, when platforms fail to reflect partisans’ preferred account, their decisions are taken as evidence of irresponsibility or bias. Platforms’ attempts to maintain neutrality are treated as a failure to face facts.

Platforms don’t do enough to maintain consistency across high-profile divisive incidents, and their learning process is often an attempt to discover which stance will invite the least pushback. Services such as Gab and Mastodon offer alternatives to mainstream platforms that increasingly embrace popular narratives. This paradigm offers platforms of last resort to disfavored speech, but directs partisan demands toward the internet infrastructure that supports these alternatives.

September 8, 2020 12:37PM

Independence in Content Moderation

Social media managers cannot tolerate all speech on their platforms. They are obligated to maximize value for their shareholders. Leaving some kinds of speech on a platform would drive some users away. So social media managers establish rules intended to maximize the number of users. Managers also hire content moderators to enforce the rules. Users that get thrown off have no complaint. When they joined the platform, they agreed to the rules and to how they are enforced. End of story.

Except it’s not the end of the story. Social media managers do not seem to believe that mutual consent to rules and their application is enough to make content moderation legitimate. For present purposes, I simply accept this belief; we need not inquire into its validity. If consent is not enough, social media need other justifications for legitimacy. Some social media managers embraced a judicial model: due process would foster legitimacy. For example, Facebook instituted written rules whose enforcement could be ultimately appealed to an Oversight Board (OSB).

How might an appeals process be legitimate? The Charter, Bylaws, and Code of Conduct for OSB members mention the words “independent” or “independence” 28 times. Here are a few examples. The OSB is established by “an independent, irrevocable trust” which oversees administrative matters. The purpose of the OSB “is to protect free expression by making principled, independent decisions about important pieces of content…” OSB members are required to “exercise neutral, independent judgment and render decisions impartially.” Moreover, OSB members “must not have actual or perceived conflicts of interest that could compromise their independent judgment and decision-making.” The Bylaws say members “will exercise neutral, independent judgment and render decisions impartially.”

The adjective “independent” has many meanings. The most relevant here is “not subject to control by others.” Many fear social media content moderation will be dependent on tech companies’ financial priorities. The more successful social media are (and will be) owned by their shareholders. Their managers will have a duty to maximize value for those shareholders. What’s wrong with that? Critics say profit maximizing leads social media to tolerate speech that harms others. Social media seek to engage users and keep them on a platform, thereby maximizing revenue. Critics assert, however, that some speech that harms others also engages users. Social media can protect users only by failing to maximize revenue. In this way, privately-owned social media are thought to face a potential conflict between their obligations to their shareholders and the independence of their content moderation (including an appeals process).
