Topic: Telecom, Internet & Information Policy

Can Pluralism Work Online?

The Wall Street Journal reports that Facebook has consulted with conservative individuals and groups about its content moderation. Recently I suggested that social media managers would be inclined to give stakeholders a voice (though not a veto) on content moderation policies. Some on the left were well ahead in this game, proposing that the tech companies essentially turn over content moderation of “hate speech” to them. Giving voice to the right represents a kind of rebalancing of the play of political forces. 

I argued earlier that looking to stakeholders had a flaw. These groups would be highly organized representatives of their members but not of most users of a platform. The infamous “special interests” of regular politics would thus come to dominate social media content moderation, which in turn would have trouble generating legitimacy with users and the larger world outside the internet.

But another possibility exists, which might be called “pluralism.” Both left and right are organized and thus are stakeholders. Social media managers recognize and seek advice from both sides about content moderation. But the managers retain the right to decide the “content” part of content moderation. The groups are not happy, but we settle into a stable equilibrium that over time becomes a de facto speech regime for social media.

A successful pluralism is possible. Much will depend on the managers rapidly developing the political skills the task requires, and they may be honing such skills. Facebook’s efforts with conservatives go well beyond hiring the usual suspects to get out of a jam. Twitter apparently followed conservative advice and verified a pro-gun Parkland survivor, an issue of considerable importance to conservative web pundits given the extent of institutional support for the March for Our Lives movement. Note that I am not saying the Right will win out, but rather that the companies may be able to manage a balanced system of oversight.

But there will be challenges for this model.  

Spending decisions by Congress are often seen as a case of pluralist bargaining. Better organized or more skillful groups get more from the appropriations process; those who lose out can be placated with “side payments” to make legislation possible. Overall you get spending bills that no one completely likes, but everyone can live with until the next appropriations cycle. (I know that libertarians reject this sort of pluralism, but I am not discussing what should be but rather what is, as a way of understanding private content moderation.)

Here’s the challenge. The groups trying to affect social media content moderation are not bargaining over money. The left believes much of the rhetoric of the right has no place on any platform. The right notes that most social media employees lean left and wonders whether the effort to cleanse the platforms begins with Alex Jones and ends with Charles Murray (i.e., everyone on the right). The right is thus tempted to call in a fourth player in the pluralist game of content moderation: the federal government. Managing pluralist competition and bargaining is a lot harder in a time of culture wars, as Facebook and Google have discovered.

Transparency will not help matters. The Journal article mentioned earlier states: 

For users frustrated by the lack of clarity around how these companies make decisions, the added voices have made matters even murkier. Meetings between companies and their unofficial advisers are rarely publicized, and some outside groups and individuals have to sign nondisclosure agreements. 

Murkiness has its value! In this case, it allows candid discussions between the tech companies and various representatives of the left and the right. Those conversations might build trust between the companies and the groups from the left and the right, and maybe even among the groups themselves. The left might stop thinking democracy is threatened online, and the right might conclude it is not eventually going to be pushed off the platforms. We might end up with rules for online speech that no one completely likes and yet are better than all realistic alternatives.

Now imagine that everything about private content moderation is made public. For some, allowing speech on a platform will amount to compromising with “hate.” (Even if a group’s leaders don’t actually believe that, they would be required to say it for political reasons.) Suppressing harassment or threats will frighten others and foster calls for government intervention to protect speech online. Our culture wars will endlessly inform the politics of content moderation. That outcome is unlikely to be the best we can hope for in an era when most speech will be online.


The Politics of Public-Private Censorship

A month ago the novelist Jay Seliger asked, “Is there an actual Facebook crisis, or media narrative about Facebook crisis?” After two years of criticism of the company, he noted, its users are still on board. Indeed, you might have to pay them $1,000 to give up Facebook for one year. Seliger remarks that an earlier New York Times story “reads a lot like a media narrative that has very little to do with users’ actual lives.”

Seliger asserts that Facebook is “a Girardian scapegoat for a media ecosystem that is unable or unwilling to consider its own role” in the election of Donald Trump. (On Rene Girard see this). I don’t know about the culpability of the “media ecosystem,” but the ferocity of the campaign against Facebook suggests something more at work than a concern about privacy and the use of online data.

Many people were horrified and surprised by Trump’s election. But Trump himself, his campaign, and those who voted for him bear responsibility for his election; to be more accurate, those who voted for him in a small number of states like Michigan and Wisconsin put him in the White House.

It is difficult to believe that Facebook’s managers were dumb enough to take sides in a presidential campaign, least of all the side of Donald Trump. Brad Parscale, the Trump campaign’s digital director in 2016, says plausibly that Facebook gave the campaign as much assistance as it would give any multi-million-dollar advertising customer. The company sent a person to be a “living manual” to the platform and to fix it quickly when it did not work.

Who Should Moderate Content at Facebook?

Facebook has promised to establish an independent body to handle appeals of its content moderation decisions by the end of 2019. That intention follows an earlier suggestion by Mark Zuckerberg that Facebook might establish a “Supreme Court” of content moderation. Like the real Supreme Court, Facebook’s board will presumably review the meaning and application of its Community Standards, which might be considered the basic law of the platform.

There are many questions about this new institution. This post looks at how its members might be selected.

Keep Government Away From Twitter

Twitter recently re-activated Jesse Kelly’s account after telling him that he was permanently banned from the platform. The social media giant had informed Kelly, a conservative commentator, that his account was permanently suspended “due to multiple or repeat violations of the Twitter rules.” Conservative pundits, journalists, and politicians criticized Twitter’s decision to ban Kelly, with some alleging that the ban was the latest example of anti-conservative bias in Silicon Valley. While some might be infuriated by what happened to Kelly’s Twitter account, we should be wary of calls for government regulation of social media, and for related investigations, in the name of free speech or the First Amendment. Companies such as Twitter and Facebook will sometimes make content moderation decisions that seem hypocritical, inconsistent, and confusing. But private failure is better than government failure, not least because, unlike government agencies, Twitter has to worry about competition and profits.

It’s not immediately clear why Twitter banned Kelly. A quick glance at Kelly’s Twitter feed reveals plenty of eye-roll-worthy content, including his calls for the peaceful breakup of the United States and his assertion that only an existential threat to the United States can save the country. His writings at the conservative website The Federalist include bizarre and unfounded declarations such as, “barring some unforeseen awakening, America is heading for an eventual socialist abyss.” In the same article he called on his readers to “Be the Lakota” after a brief discussion of how Sitting Bull and his warriors took scalps at the Battle of Little Bighorn. In another article Kelly argued that a belief in limited government is a necessary condition for being a patriot.

I must confess that I didn’t know Kelly existed until I learned the news of his Twitter ban, so it’s possible that those backing his removal from Twitter might be able to point to other content they consider more offensive than what I just highlighted. But, from what I can tell, Kelly’s content hardly qualifies as suspension-worthy.

Some opponents of Kelly’s ban (and indeed Kelly himself) were quick to point out that Nation of Islam leader Louis Farrakhan still has a Twitter account despite making anti-Semitic remarks. Richard Spencer, the white supremacist president of the innocuously named National Policy Institute who pondered taking my boss’ office, remains on Twitter, although his account is no longer verified.

All of the debates about social media content moderation have produced some strange proposals. Earlier this year I attended the Lincoln Network’s Reboot conference and heard Dr. Jerry A. Johnson, the President and Chief Executive Officer of the National Religious Broadcasters, propose that social media companies embrace the First Amendment as a standard. Needless to say, I was surprised to hear a conservative Christian urge private companies to embrace a content moderation standard that would require them to allow animal abuse videos, footage of beheadings, and pornography on their platforms. Facebook, Twitter, and other social media companies have sensible reasons for not using the First Amendment as their content moderation lodestar.

Rather than turning to First Amendment law for guidance, social media companies have developed their own standards for speech. These standards are enforced by human beings (and the algorithms human beings create) who make mistakes and can unintentionally or intentionally import their biases into content moderation decisions. Another Twitter controversy from earlier this year illustrates how difficult it can be to develop content moderation policies.

Shortly after Sen. John McCain’s death, a Twitter user posted a tweet that included a doctored photo of Sen. McCain’s daughter, Meghan McCain, crying over her father’s casket. The tweet included the words “America, this ones (sic) for you” and the doctored photo, which showed a handgun being aimed at the grieving McCain. Meghan McCain’s husband, Federalist publisher Ben Domenech, criticized Twitter CEO Jack Dorsey for keeping the tweet on the platform. Twitter later took the offensive tweet down, and Dorsey apologized for not taking action sooner.

The tweet aimed at Meghan McCain clearly violated Twitter’s rules, which state: “You may not make specific threats of violence or wish for the serious physical harm, death, or disease of an individual or group of people.”

Twitter’s rules also prohibit hateful conduct or imagery, as outlined in its “Hateful Conduct Policy.” The policy seems clear enough, but a look at Kelly’s tweets reveals content that someone could interpret as hateful, even if some of the tweets are attempts at humor. Is portraying Confederate soldiers as “poor Southerners defending their land from an invading Northern army” hateful? What about a tweet bemoaning women’s right to vote? Or tweets that describe our ham-loving neighbors to the North as “garbage people” and violence as “underrated”? None of these tweets seems to violate Twitter’s current content policy, but someone could write a content policy that would prohibit such content.

Imagine that you are developing a content policy for a social media site and your job is to decide whether content identical to the tweet targeting McCain and content identical to Kelly’s tweet concerning violence should be allowed or deleted. You have four policy options:

                       Delete Tweet Targeting McCain   Allow Tweet Targeting McCain
Delete Kelly’s Tweet                 1                              2
Allow Kelly’s Tweet                  3                              4

Many commentators seem to back option 3, believing that the tweet targeting McCain should’ve been deleted while Kelly’s tweet should be allowed. That’s a reasonable position. But it’s not hard to see how someone could conclude that options 1 and 4 are also acceptable. Of all four options, only option 2, which would delete Kelly’s tweet while allowing the tweet targeting McCain, seems incoherent on its face.

Social media companies can come up with sensible-sounding policies, but there will always be tough calls. Having a policy that prohibits images of nude children sounds sensible, but there was an outcry after Facebook removed an Anne Frank Center article that had as its featured image a photo of nude children who were victims of the Holocaust. Facebook didn’t disclose whether an algorithm or a human being had flagged the post for deletion.

In a similar case, Facebook initially defended its decision to remove Nick Ut’s Pulitzer Prize-winning photo “The Terror of War,” which shows a burned, naked nine-year-old Vietnamese girl fleeing the aftermath of a South Vietnamese napalm attack in 1972. Despite the photo’s fame and historical significance, Facebook told The Guardian, “While we recognize that this photo is iconic, it’s difficult to create a distinction between allowing a photograph of a nude child in one instance and not others.” Facebook eventually changed course, allowing users to post the photo and citing its historical significance:

Because of its status as an iconic image of historical importance, the value of permitting sharing outweighs the value of protecting the community by removal, so we have decided to reinstate the image on Facebook where we are aware it has been removed.

What about graphic images of contemporary and past battles? There is clear historic value to images from the American Civil War, the Second World War, and the Vietnam War, some of which include graphic violent content. A policy prohibiting graphic depictions of violence sounds sensible, but, like a policy banning images of nude children, it will not eliminate difficult choices or the possibility of results that many users will find inconsistent and confusing.

Given that whoever develops content moderation policies will be put in the position of making tough choices, it’s far better to leave these choices in the hands of private actors than government regulators. Unlike the government, Twitter has a profit motive and faces competition. As such, it is subject to far more accountability than the government. We may not always like the decisions social media companies make, but private failure is better than government failure. An America where unnamed bureaucrats, not private employees, determine what can be posted on social media is one where free speech is stifled.

To be clear, calls for increased government intervention and regulation of social media platforms are a bipartisan phenomenon. Sen. Mark Warner (D-VA) has discussed a range of possible social media policies, including a crackdown on anonymous accounts and regulations modeled on the European so-called “right to be forgotten.” If such policies were implemented (First Amendment issues notwithstanding), they would inevitably lead to valuable speech being stifled. Sen. Ron Wyden (D-OR) has said that he’s open to carve-outs of Section 230 of the Communications Decency Act, which protects online intermediaries such as Facebook and Twitter from liability for what users post on their platforms.

When it comes to possibly amending Section 230, Sen. Wyden has some Republican allies. Never mind that some of these Republicans don’t seem to fully understand the relevant parts of Section 230.

That social media giants are under attack from both the left and the right is not an argument for government intervention. Calls to amend Section 230 or to pass “anti-censorship” legislation pose a serious risk to free speech. If Section 230 is amended to increase social media companies’ exposure to liability suits, we should expect these companies to suppress more speech. Twitter users may not always like what Twitter does, but government intervention is not the remedy.

Alex Jones and the Bigger Questions of Internet Governance

Last week Facebook, Google, and Apple removed videos and podcasts by the prominent conspiracy theorist Alex Jones from their platforms (Twitter did not). Their actions may have prompted increased downloads of Jones’ Infowars app. Many people are debating these actions, and rightly so. But I want to look at the governance issues related to the Alex Jones imbroglio.

The tech companies have the right to govern speech on their platforms; Facebook has practiced such “content moderation” for at least a decade. The question remains: how should they govern the speech of their users?

The question has a simple, plausible answer. Tech companies are businesses. They should maximize value for their shareholders. The managers of the platform are agents of the shareholders; they have the power to act on their behalf in this and other matters. (On the other hand, if their decision to ban Jones was driven by political animus, they would be shirking their duties and imposing agency costs on shareholders.) As private actors, the managers are not constrained by the First Amendment. They could and should remove Alex Jones because they reasonably believe he drives users off the platform and thereby harms shareholders. End of story.

For many libertarians, this story will be convincing. But others, not so inclined to respect private economic judgments, may not be convinced. I see two limits on business logic as a way of governing social media: free speech and fear.

The Facebook Takedown

Since the 2016 election Facebook has faced several problems, some related to the election, some not. In 2016 Russian agents bought ads on Facebook and posted messages related to the election. Facebook has been blamed for not preventing the Russians from doing this. Many people may believe the Russian efforts led to Donald Trump’s election. That view remains unproven and highly implausible.

Beset by other problems, Facebook seeks to avoid a replay of 2016 after the 2018 elections. Yesterday Facebook tried to take the offensive by removing 32 false pages and profiles from its platform; the pages had 16,000 to 18,000 followers, all connected to an upcoming event, “No Unite the Right 2 – DC.”

Facebook stated that the pages engaged in “coordinated inauthentic behavior [which] is not allowed on Facebook because we don’t want people or organizations creating networks of accounts to mislead others about who they are, or what they’re doing.” Facebook does not allow anonymity on its platform, at least in the United States. The company appears to be enforcing its community standards.

Most people might not worry too much about what Facebook did. The speech at issue was said to be divisive disinformation supported by a traditional adversary of the United States. Who worries about the speech of hostile foreigners? Still, a reasonable person might be concerned for other reasons.

The source of the Facebook pages, not the company’s policies, seemed of most interest in Washington. Sen. Mark Warner said that “the Kremlin” had exploited Facebook “to sow division and spread disinformation.” Warner’s confidence seems unwarranted. The Washington Post reported that Facebook “couldn’t tie the activity to Russia.” Facebook’s chief security officer called the Russian link “interesting but not determinant.” The company did say “the profiles shared a pattern of behavior with the [2016] Russian disinformation campaign.”

The takedown also affected some Americans. Ars Technica said the event on the removed page “attracted a lot of organic support, including the recruitment of legitimate Page admins to join and advertise the effort.” Perhaps Russian operatives have no protections for their speech. But the Americans affected by the takedown do, or at least they would have had such protections if the government had ordered Facebook to take down the page in question.

But the source of the speech was not the only problem. As noted earlier, Sen. Warner thought two kinds of speech deserved suppression: divisive speech and disinformation. But, as a member of Congress, he cannot act on that belief. Courts almost always prevent public officials from discriminating against speech based on its content. For example, the First Amendment protects “abusive invective” related to “race, color, creed, religion or gender.” The Supreme Court has also said false statements are not an exception to the First Amendment.

In contrast, Facebook can remove speech from its private forum. The First Amendment does not govern its actions. But Facebook’s freedom in this regard might one day threaten everyone else’s.

Here’s how. Facebook might have removed the page for purely business reasons. Or it might have acted more or less as an agent of the federal government. The New York Times reported that Sen. Warner “has exerted intense pressure on the social media companies.” His colleague Sen. Dianne Feinstein told social media companies last year: “You’ve created these platforms, and now they are being misused, and you have to be the ones to do something about it. Or we will.” Free speech would fare poorly if social media were both free of constitutional constraints and effectively under the thumb of public officials.

Facebook officials may see business reasons to resist Russian efforts on their platform, a goal served by enforcing existing rules. At the same time Facebook wishes to be seen by Congress as responsive to congressional bullying. But being too responsive would only encourage more threats later, and in general, giving elected officials even partial control over your business is not a good idea. So Facebook is both careful about Russian influence and responsive to congressional concerns, a good citizen rather than an enthusiastic conscript in defense of the nation.

Facebook’s efforts may yet keep Congress at a safe distance. But members of Congress may be learning that they can get what they want from the tech companies. In the future, federal officials free of constitutional constraints may indirectly but effectively decide the meaning of “divisive speech” and “disinformation” on Facebook and elsewhere. Their definitions would be unlikely to affect only the speech of America’s adversaries.

Some Reasons to Trust Mark Zuckerberg with Freedom of Speech

Last week Mark Zuckerberg gave an interview to Recode. He talked about many topics including Holocaust denial. His remarks on that topic fostered much commentary and not a little criticism. Zuckerberg appeared to say that some people did not intentionally deny the Holocaust. Later, he clarified his views: “I personally find Holocaust denial deeply offensive, and I absolutely didn’t intend to defend the intent of people who deny that.” This post will not be about that aspect of the interview.

Let’s recall why Mark Zuckerberg’s views about politics and other things matter more than the views of the average highly successful businessman. Zuckerberg is the CEO of Facebook, which is the largest private forum for speech. Because Facebook is private property, Facebook’s managers and their ultimate boss, Mark Zuckerberg, are not bound by the restrictions of the First Amendment. Facebook may and does engage in “content moderation,” which involves removing speech from the platform (among other actions).

Facebook F8 2017 San Jose Mark Zuckerberg by Anthony Quintano is licensed under CC BY 2.0

What might be loosely called the political right is worried that Facebook and Google will use this power to exclude them. While their anxieties may be overblown, they are not groundless. Zuckerberg himself has said that Silicon Valley is a “pretty liberal place.” It would not be surprising if content moderation reflected the dominant outlook of Google and Facebook employees, among others. Mark Zuckerberg presumably sets the standards for how Facebook exercises this power to exclude. How might he exercise that oversight?
