Topic: Telecom, Internet & Information Policy

Alex Jones and the Bigger Questions of Internet Governance

Last week Facebook, Google, and Apple removed videos and podcasts by the prominent conspiracy theorist Alex Jones from their platforms (Twitter did not). Their actions may have prompted increased downloads of Jones’ Infowars app. Many people are debating these actions, and rightly so. But I want to look at the governance issues related to the Alex Jones imbroglio.

The tech companies have the right to govern speech on their platforms; Facebook has practiced such “content moderation” for at least a decade. The question remains: how should they govern the speech of their users?

The question has a simple, plausible answer. Tech companies are businesses. They should maximize value for their shareholders. The managers of the platform are agents of the shareholders; they have the power to act on their behalf in this and other matters. (On the other hand, if their decision to ban Jones was driven by political animus, they would be shirking their duties and imposing agency costs on shareholders.) As private actors, the managers are not constrained by the First Amendment. They could and should remove Alex Jones if they reasonably believe he drives users off the platform and thereby harms shareholders. End of story.

For many libertarians, this story will be convincing. But others, not so inclined to respect private economic judgments, may not be convinced. I see two limits on business logic as a way of governing social media: free speech and fear.

The Facebook Takedown

Since the 2016 election, Facebook has faced several problems, some related to the election, some not. In 2016 Russian agents bought ads on Facebook and posted messages related to the election. Facebook has been blamed for not preventing the Russians from doing this. Many people may believe the Russian efforts led to Donald Trump’s election. That view remains unproven and highly implausible.

Beset by other problems, Facebook seeks to avoid a replay of 2016 in the 2018 elections. Yesterday Facebook tried to take the offensive by removing 32 false pages and profiles from its platform; the pages, which had 16,000 to 18,000 followers, were all connected to an upcoming event, “No Unite the Right 2 – DC”.

Facebook stated the pages engaged in “coordinated inauthentic behavior [which] is not allowed on Facebook because we don’t want people or organizations creating networks of accounts to mislead others about who they are, or what they’re doing.” Facebook does not allow anonymity on its platform, at least in the United States. They appear to be enforcing their community standards.

Most people might not worry too much about what Facebook did. The speech at issue was said to be divisive disinformation supported by a traditional adversary of the United States. Who worries about the speech of hostile foreigners? Still, a reasonable person might be concerned for other reasons.

The source of the Facebook pages, not the company’s policies, seemed of most interest in Washington. Sen. Mark Warner said that “the Kremlin” had exploited Facebook “to sow division and spread disinformation.” Warner’s confidence seems unwarranted. The Washington Post reported that Facebook “couldn’t tie the activity to Russia.” Facebook’s chief security officer called the Russian link “interesting but not determinant.” The company did say “the profiles shared a pattern of behavior with the [2016] Russian disinformation campaign.”

The takedown also affected some Americans. Ars Technica said the event on the removed page “attracted a lot of organic support, including the recruitment of legitimate Page admins to join and advertise the effort.” Perhaps Russian operatives have no protections for their speech. But the Americans affected by the takedown do have such protections, or at least would have, had the government ordered Facebook to take down the page in question.

But the source of the speech was not the only problem. As noted earlier, Sen. Warner thought two kinds of speech deserved suppression: divisive speech and disinformation. But, as a member of Congress, he cannot act on that belief. Courts almost always prevent public officials from discriminating against speech based on its content. For example, the First Amendment protects “abusive invective” related to “race, color, creed, religion or gender.” The Supreme Court has also said false statements are not an exception to the First Amendment.

In contrast, Facebook can remove speech from their private forum. The First Amendment does not govern their actions. But Facebook’s freedom in this regard might one day threaten everyone else’s.

Here’s how. Facebook might have removed the page for purely business reasons. Or they may have acted more or less as agents of the federal government. The New York Times reported that Sen. Warner “has exerted intense pressure on the social media companies.” His colleague Sen. Dianne Feinstein told social media companies last year: “You’ve created these platforms, and now they are being misused, and you have to be the ones to do something about it. Or we will.” Free speech would fare poorly if social media were both free of constitutional constraints and effectively under the thumb of public officials.

Facebook officials may see business reasons to resist Russian efforts on their platform, a goal served by enforcing existing rules. At the same time, Facebook wishes to be seen by Congress as responsive to congressional bullying. But being too responsive would only encourage more threats later, and in general, giving elected officials even partial control over your business is not a good idea. So Facebook is both careful about Russian influence and responsive to congressional concerns, a good citizen rather than an enthusiastic conscript in defense of the nation.

Facebook’s efforts may yet keep Congress at a safe distance. But members of Congress may be learning they can get what they want from the tech companies. In the future, federal officials free of constitutional constraints may indirectly but effectively decide the meaning of “divisive speech” and “disinformation” on Facebook and elsewhere. Their definitions would be unlikely to affect only the speech of America’s adversaries.

Some Reasons to Trust Mark Zuckerberg with Freedom of Speech

Last week Mark Zuckerberg gave an interview to Recode. He talked about many topics including Holocaust denial. His remarks on that topic fostered much commentary and not a little criticism. Zuckerberg appeared to say that some people did not intentionally deny the Holocaust. Later, he clarified his views: “I personally find Holocaust denial deeply offensive, and I absolutely didn’t intend to defend the intent of people who deny that.” This post will not be about that aspect of the interview.

Let’s recall why Mark Zuckerberg’s views about politics and other things matter more than the views of the average highly successful businessman. Zuckerberg is the CEO of Facebook, which comprises the largest private forum for speech. Because Facebook is private property, Facebook’s managers and their ultimate boss, Mark Zuckerberg, are not bound by the restrictions of the First Amendment. Facebook may and does engage in “content moderation” which involves removing speech from that platform (among other actions).


What might be loosely called the political right is worried that Facebook and Google will use this power to exclude them. While their anxieties may be overblown, they are not groundless. Zuckerberg himself has said that Silicon Valley is a “pretty liberal place.” It would not be surprising if content moderation reflected the dominant outlook of Google and Facebook employees, among others. Mark Zuckerberg presumably sets the standards for Facebook’s exercise of this power to exclude. How might he exercise that oversight?

Streaming Music and Copyright

The Senate Judiciary Committee recently voted in favor of a bill that would update copyright law and apply new regulations to interactive streaming services, such as Spotify. The Music Modernization Act (MMA) addresses the issues of non-payment to copyright holders—the basis of a $1.6 billion lawsuit against Spotify—and of undefined, unenforceable music property rights stemming from the lack of a comprehensive database recording the ownership of copyrights. In the current issue of Regulation, Thomas Lenard and Lawrence White recount the history of music copyright law and discuss some of the shortcomings of the MMA.

The New York Times quotes one supporter of the Act as stating, “This is going to revolutionize the way songwriters get paid in America.” But the MMA primarily incorporates streaming services into the existing framework through which distributors of music obtain permission from and provide compensation to music copyright holders.

A key provision of the MMA is that the Register of Copyrights would designate a Musical Licensing Collective (MLC) with two primary functions. The first is to serve as a collective rights organization that grants licenses for interactive streaming, receives royalties from streaming services, and remits the royalties to copyright holders. The second function is to create and manage a database of rights holders.

The revolutionary aspect of the MMA is the creation of such a database. Currently, the music industry lacks a comprehensive database that keeps track of copyrights, a gap that has created the problems of nonpayment and limited music distributors’ ability to negotiate with individual copyright holders. Lenard and White contend that the database-building function of the MLC may be necessary because the economies of scale in managing such a database might be large enough to create a natural monopoly (though nongovernmental groups are already developing open source and blockchain initiatives to solve these problems).
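The matching problem such a database would solve can be sketched in a few lines. This is a toy illustration, not the MLC’s actual design; all song IDs, owner names, and royalty splits below are hypothetical. When a streaming service cannot match a recording to registered owners, the royalty simply goes unpaid.

```python
# Toy sketch of royalty matching against an ownership database.
# All song IDs, owners, and splits are hypothetical.

OWNERSHIP_DB = {
    "song-001": {"Writer A": 0.50, "Publisher B": 0.50},
    "song-002": {"Writer C": 1.00},
}

def distribute_royalties(song_id, amount):
    """Split a royalty payment among registered owners.

    Returns a dict of payments, or None if the work is unmatched,
    which is the nonpayment problem the MMA's database targets.
    """
    owners = OWNERSHIP_DB.get(song_id)
    if owners is None:
        return None  # unmatched work: the royalty goes undistributed
    return {owner: round(amount * share, 2) for owner, share in owners.items()}

print(distribute_royalties("song-001", 100.0))  # {'Writer A': 50.0, 'Publisher B': 50.0}
print(distribute_royalties("song-999", 100.0))  # None: no database entry
```

The second call is the whole story of the Spotify lawsuit in miniature: without a comprehensive registry, the lookup fails and the money has nowhere to go.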

However, by linking the database function of the MLC with its role as a collective rights organization, Lenard and White argue that the MMA simply extends a regulatory regime that limits competition. As it stands, the music copyright system largely consists of compulsory licenses and rates set by administrative or judicial proceedings. The MLC as outlined in the MMA would be a government enforced monopoly with the same anticompetitive practices.

As Lenard and White state,

Whenever an opportunity for pro-competitive reform of music licensing arises, policymakers seem to revert to the traditional regulatory model that discourages competition. They never miss an opportunity…to miss an opportunity. The MMA—with its reliance on compulsory licensing, blanket licensing by a marketing collective, and regulated rates—is the latest of several recent examples.

Instead of extending the current anticompetitive regulations to streaming services, policymakers should update the music copyright registration system and allow a competitive market to develop through which copyrights are traded. Those changes would provide greater benefits for music creators, distributors, and consumers.

Written with research assistance from David Kemp.

A New Podcast on Free Speech: Many Victories, Many Struggles

In 1996 John Perry Barlow penned A Declaration of the Independence of Cyberspace, a radical call for complete online freedom. The document begins with an optimistic word of caution for states around the world: “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us.”

The internet did not develop as Barlow had hoped, as Jacob Mchangama illustrates in the latest episode of his podcast, Clear and Present Danger: A History of Free Speech. He notes that the “digital promised land turned into a dystopia of surveillance, disinformation, trolling and hate, to which governments responded with increasingly draconian measures.” China has simply banned foreign social media platforms like Facebook and Twitter, while inside the “great firewall,” the government manipulates its population’s infoscape through a combination of flooding popular sites with positive comments and the prohibition of specific characters. Elsewhere, states pressure social media companies to establish sophisticated censorship mechanisms with threats of regulation and liability imposition.

“The Great Disruption: Part I” interrogates the claim that the disruptive effects of the internet and social media on the spread of information are historically unprecedented. In some ways, of course, the internet’s effects are unparalleled. But throughout the podcast, Mchangama demonstrates that they are less novel than they might appear. The human rights lawyer turns to the early 16th century and the Great Disruption, a period of social and religious upheaval sparked by the invention of the Gutenberg printing press in 1447.

As with the internet, the impact of the printing press cannot be overstated. The social effects of the printing press are mirrored by the consequences of social media today. The new technology allowed a single document to be cheaply, reliably copied (making it hard for authorities to get rid of material they deemed problematic), allowed authors to write effectively anonymously (letting them put pen to page with less fear of consequence), and provided individuals with the ability to bypass traditional authorities and gatekeepers when creating content.

Scholars disagree on why the printing press failed to take hold in the Ottoman Empire, but many think religion played a role, as some were fearful the printing press could contribute to the distribution of erroneous copies of the Qur’an. After all, the printing press had fueled religious schism in Europe. Their disruptive potentials aside, both the internet and the printing press have driven tremendous growth and economic development. The Ottomans increasingly lagged behind Western Europe in literacy, science, and the spread of new ideas – “disciplines in which Muslims had previously been civilizational frontrunners.” The cost of being outside the information loop was high then, and is high now: it would be nearly impossible today to build a successful “offline” business or university. Societies that spurn technological adoption may be more stable in the short run, but they miss out on opportunities for future benefits, productivity, and progress.

Mchangama recounts how Martin Luther used the printing press. In October 1517, Luther sent his list of 95 theses to the local printing press, sparking the Reformation that disrupted the religious unity of Europe. Mchangama compares Luther’s work to a Twitter post (95 tweets?), as his arguments were often short and simplified so they could be spread widely and easily digested. They found an audience: by 1523, one third of Germany’s book titles came from Luther alone.

New Bill Would Ban Internet Bots (and Speech)

Sen. Dianne Feinstein has introduced the Bot Disclosure and Accountability Act, a proposal to regulate social media bots in a roundabout fashion. The bill has several shortcomings.

Automation of social media use exists on a continuum, from simple software that allows users to schedule posts throughout the day, to programs that scrape and share information about concert ticket availability, or automatically respond to climate change skeptics. Bots may provide useful services, or flood popular topics with nonsense statements in an effort to derail debate. They often behave differently across different social media platforms; Reddit bots serve different functions than Twitter bots.  
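The benign end of that continuum, software that simply queues posts for later, takes only a few lines to sketch. This is a hypothetical illustration for this post, not any real platform’s API or scheduling tool:

```python
from datetime import datetime, timedelta

# Minimal sketch of the benign end of the automation continuum:
# a scheduler that queues posts and releases the ones that are due.
# Hypothetical illustration; no real platform API is involved.

class PostScheduler:
    def __init__(self):
        self.queue = []  # list of (scheduled_time, text) pairs

    def schedule(self, text, when):
        self.queue.append((when, text))

    def due(self, now):
        """Return posts whose scheduled time has arrived, removing them from the queue."""
        ready = [text for when, text in self.queue if when <= now]
        self.queue = [(when, text) for when, text in self.queue if when > now]
        return ready

sched = PostScheduler()
start = datetime(2018, 8, 1, 9, 0)
sched.schedule("Morning update", start)
sched.schedule("Evening update", start + timedelta(hours=10))
print(sched.due(start + timedelta(hours=1)))  # ['Morning update']
```

A bill that defines “bot” broadly enough risks sweeping in exactly this kind of tool, which automates delivery but not authorship.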

What level of automation renders a social media account a bot? Sen. Feinstein isn’t sure, so she’s relinquishing that responsibility to the Federal Trade Commission:

The term “automated software program or process intended to impersonate or replicate human activity online” has the meaning given the term by the [Federal Trade] Commission

If Congress wants to attempt to regulate Americans’ use of social media management software, they should do so themselves. Instead, they would hand the hard and controversial work of defining a bot to the FTC, dodging democratic accountability in the process. Moreover, the bill demands that the FTC define bots “broadly enough so that the definition is not limited to current technology”, virtually guaranteeing initial overbreadth.

While the responsibility of defining bots is improperly passed to the FTC, the enforcement of Feinstein’s proposed bot disclosure regulations is accomplished through a further, even less desirable delegation. The Bot Disclosure and Accountability Act compels social media firms to adopt policies requiring the operators of automated accounts to “provide clear and conspicuous notice of the automated program.” Platforms would need to continually “identify, assess, and verify whether the activity of any user of the social media website is conducted by an automated software program”, and “remove posts, images, or any other online activity” of users that fail to disclose their use of automated account management software. Failure to reasonably follow this rubric is to be considered an unfair or deceptive trade practice.

This grossly infringes on the ability of private firms, from social media giants like Facebook to local newspapers that solicit readers’ comments, to manage their digital real estate as they see fit, while tipping the balance of private content moderation against free expression. Social media firms already work to limit the malicious use of bots on their platforms, but no method of bot identification is foolproof. If failure to flag or remove automated accounts is met with FTC censure, social media firms will be artificially incentivized to remove more than necessary.
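To see why bot identification is not foolproof, consider the simplest possible detector, a posting-rate threshold. The threshold value and the example accounts below are hypothetical, but the failure modes are the real ones platforms face: prolific humans get flagged while throttled bots slip through.

```python
# Naive bot-detection heuristic: flag any account that posts faster
# than a human plausibly could. A hypothetical sketch showing why
# rate-based rules produce both false positives and false negatives.

POSTS_PER_HOUR_LIMIT = 30  # arbitrary threshold, chosen for illustration

def looks_automated(post_timestamps_in_hour):
    """Flag an account as automated if its hourly post count exceeds the limit."""
    return len(post_timestamps_in_hour) > POSTS_PER_HOUR_LIMIT

prolific_human = list(range(35))  # e.g. a journalist live-tweeting a hearing
stealthy_bot = list(range(5))     # a bot throttled to 5 posts per hour

print(looks_automated(prolific_human))  # True: a human, wrongly flagged
print(looks_automated(stealthy_bot))    # False: a bot, missed entirely
```

Under the bill, the wrongly flagged human risks removal, and the missed bot exposes the platform to FTC censure, which is precisely the asymmetry that pushes firms toward over-removal.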

The bill also separately, and more stringently, regulates automation in social media use by political campaigns, PACs, and labor unions. No candidate or political party may make any use of bots, however the FTC defines the term, while political action committees and labor unions are prohibited from using or purchasing automated posting software to disseminate messages advocating for the election of any specific candidate. It is as if Congress banned parties and groups from using megaphones at rallies. Would that prohibition reduce political speech? No doubt it would. How then can the prohibitions in this bill comport with the constitutional demand to make no law abridging the freedom of speech? They cannot.

Feinstein’s bill attempts to automate the process of regulating social media bots. In doing so, it dodges the difficult questions that attend regulation, like what, exactly, should be regulated, and foists the burden of enforcement on a collection of private firms ill-equipped to integrate congressional mandates into their content moderation processes. Automation may provide for the efficient delivery of many services, but regulation is not among them. Most importantly, the bill does not simply limit spending on bots. It prohibits political (and only political) speech by banning the use of an instrument for speaking to the public. Online bots may worry Americans, but this blanket prohibition of speech should worry us more.

Surveillance Tech Still a Concern After Carpenter

Last week the Supreme Court issued its ruling in Carpenter v. United States, with a five-member majority holding that the government’s collection of at least seven days’ worth of cell site location information (CSLI) is a Fourth Amendment search. The American Civil Liberties Union’s Nathan Wessler and the rest of Carpenter’s team deserve congratulations; the ruling is a win for privacy advocates and reins in a widely used surveillance method. But while the ruling is welcome it remains narrow, leaving law enforcement with many tools that can be used to uncover intimate details about people’s private lives without a warrant, including persistent aerial surveillance, license plate readers, and facial recognition.


Timothy Carpenter and others were involved in a string of armed robberies of cell phone stores in Michigan and Ohio in 2010 and 2011. Police arrested four suspects in 2011. One of these suspects identified 15 accomplices and handed over some of their cell phone numbers to the Federal Bureau of Investigation. Carpenter was one of these accomplices.

Prosecutors sought Carpenter’s cell phone records pursuant to the Stored Communications Act. They did not need to demonstrate probable cause (the standard required for a search warrant). Rather, they merely had to demonstrate to judges that they had “specific and articulable facts showing that there are reasonable grounds to believe” that the data they sought were “relevant and material to an ongoing criminal investigation.”

Carpenter’s two wireless carriers, MetroPCS and Sprint, complied with the judges’ orders, producing 12,898 location points over 127 days. Using this information prosecutors were able to charge Carpenter with a number of federal offenses related to the armed robberies.