Telecom, Internet & Information Policy

July 27, 2020 10:12AM

PACT Act Does More Harm than Good

The bipartisan, process-oriented “Platform Accountability and Consumer Transparency Act” joins a recent parade of Section 230 reform proposals. Sponsored by Sens. Brian Schatz (D-HI) and John Thune (R-SD), the PACT Act proposes a collection of new requirements intended to optimize social media platforms’ governance of user speech. These government-mandated practices for handling both illegal speech and speech that merely violates platform community standards would upset delicate, platform-specific balances between free expression and safety. While more carefully constructed than competing proposals, with provisions actually tailored to the ends of accountability and transparency, the bill threatens to encumber platforms’ moderation efforts while encouraging them to remove more lawful speech.

The PACT Act would establish a process for removing illegal speech, giving platforms 24 hours to remove content deemed illegal by a court. A company that fails to act would lose Section 230’s protections against liability. Such protections are generally thought essential to these companies. Leaving decisions of legality to the courts is important: it preserves democratic accountability and prevents platforms from laundering takedown demands that wouldn’t otherwise pass legal muster. Under Germany’s NetzDG law, platforms must remove “manifestly unlawful” content within 24 hours or risk steep fines, a set of incentives that has encouraged the removal of legal speech on the margins.

The bill’s proposed process for removal would be improved by the addition of a counter‐​notice system, more specific illegal content identification requirements, and a longer takedown window to allow for either user or platform appeal. Still, it is a broadly reasonable approach to handling speech unprotected by the First Amendment.

The breadth of covered illegal content is somewhat limited, including only speech “determined by a Federal or State court to violate Federal criminal or civil law or State defamation law.” This would exclude, for instance, New Jersey’s constitutionally dubious prohibition on the publication of printable firearms schematics.

While the legal takedown mechanism requires a court order, the bill’s requirement that platforms investigate all reports of community standards violations is ripe for abuse. Upon receiving notice of “potentially policy-violating content,” platforms would be required to review the reported speech within 14 days. Like law enforcement, content moderators have limited resources to police the endless flow of user speech and must prioritize particularly egregious or time-sensitive policy violations. Platform-provided user reporting mechanisms are already abused in attempts to vindictively direct moderators’ focus. Requiring review (with a deadline) upon receipt of a complaint would make abusive flagging more effective by limiting moderators’ ability to ignore bad-faith reports. Compulsory review will be weaponized by political adversaries to dedicate limited platform enforcement capacity to the investigation of their rivals. Community standards can often be interpreted broadly; under sustained and critically directed scrutiny even broadly compliant speakers may be found in breach of platform rules. Moderators, not the loudest or most frequent complainants, should determine platform enforcement priorities. While the bill also mandates an appeals process, this amounts to a simple re-review rather than an escalation and will at best invite an ongoing tug of war over contentious content.

Some of the bill’s components are constructive. Its transparency reporting requirements would bring standardization and specificity to platform enforcement reports, particularly around the use of moderation tools like demonetization and algorithmic deprioritization. This measure would formalize platforms’ hitherto voluntary enforcement reporting, allowing for better cross‐​platform comparisons and evaluations of disparate impact claims. Beneficent intentions and effects aside, as requirements these reporting standards would likely raise compelled speech concerns.

However, other aspects of the bill are sheer fantasy in the face of platform scale. PACT would require platforms to maintain “a live company representative to take user complaints through a toll-free telephone number” during regular business hours. If, on a given day, even a hundredth of a percent of Facebook’s 2.3 billion users decided to make use of such an option, they would generate hundreds of thousands of calls. In the early days of Xbox Live, Microsoft maintained a forum to answer user moderation complaints. The forum was so inundated with unreasonable and inane questions that the project was later abandoned. While Microsoft may have incidentally provided some valuable civic education, other platforms should not be required to replicate its Sisyphean efforts.
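
To make the scale concrete, here is a quick back-of-the-envelope check, purely illustrative, using the 2.3 billion user figure cited above:

```python
# Back-of-the-envelope check of the call volume described above (illustrative only).
users = 2_300_000_000        # Facebook user base cited in the paragraph above
share_calling = 0.0001       # a hundredth of a percent, i.e. 0.01%

calls_per_day = users * share_calling
print(f"{calls_per_day:,.0f} calls per day")  # 230,000
```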

[Image: Xbox screenshot]

Provisions drafted without regard for the demands of scale will fall hardest on small firms trying to grow, hindering competition. Parler is a conservative alternative to Twitter with 30 employees and 1.5 million users as of June. It’s large enough to lose the bill’s small business exemption (which applies to services with fewer than a million monthly users or visitors) but not large enough to dedicate employees to call-center duty or to review all reports of policy violations within 14 days.

There is a real danger that the bill may be treated as a solution to perceived problems with Section 230 simply because it is less radical or more thoughtfully drafted than competing proposals for reform. The PACT Act may be better than other proposed modifications, but that doesn’t make it, on net, an improvement on the status quo. While the bill would increase the transparency of moderation, its impositions on policy enforcement and appeals processes will create more problems than they solve. In heaping new demands on complex moderation systems without regard for platform constraints, the PACT Act places a thumb on the scale in favor of removal while creating new avenues for abuse of the moderation process.

July 23, 2020 2:40PM

International Law and “Hate Speech” Online

From the beginning social media managers have excluded content from their platforms. At first they did so intuitively. Only a few people moderated content; like Justice Potter Stewart’s view of obscenity, “the new governors” of speech knew what to exclude when they saw it. As the platforms grew, such judgments seemed too subjective and opaque. Content moderation teams sought to formulate general rules, published as Community Standards, that could be consistently applied. Some also spoke of their company’s values, an effort to go beyond the judgments of this or that employee. Perhaps values were needed to turn the cold text of Community Standards into living guidelines accepted by all. The move from individual intuition to “accepted by all” bespoke a need for legitimacy. The rules and their application needed support from users and others outside the platform.

Economic success deepened the legitimacy problem and threatened the companies’ commitment to “voice” or free expression. The most successful companies, like Google or Facebook, had grown far beyond the United States. Their content moderators wondered whether their desire to protect speech reflected their cultural backgrounds as Americans. This belief ran counter to growing cultural relativism in the developed world: if all cultures were equal, why should U.S. free speech ideals be applied outside its borders? Maybe social media’s global expansion required less weight for voice and more for other values.

In any case, appreciation of “voice” was waning in American culture, if not in its legal system. Organized interests demanded social media remove speech that would be protected under the First Amendment. Social media were not obligated to observe the First Amendment; they were not the government. The demands for suppressing speech fostered continually evolving codes of prohibited speech for various platforms. The leaders of social media companies had long professed support for speech. If they meant those professions, social media leaders needed a globally acceptable foundation for Community Standards that protected speech. 

International law (and its subset, international human rights rules) offers a plausible answer to social media’s quandary. In 1948, the United Nations adopted the Universal Declaration of Human Rights. Most of that Declaration became legally binding when the UN General Assembly adopted two international human rights treaties in 1966: the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social, and Cultural Rights. The U.S. ratified the ICCPR in 1992. I focus here on the ICCPR, which purports to be law beyond borders, thus addressing one challenge for social media.

What about protections for speech? ICCPR’s Article 19 states “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.” Insofar as social media looks to international law to inform their Community Standards, Article 19 commits them to strong protections for expression. But that’s not the whole story.

The ICCPR also imposes on governments positive obligations to limit speech. Article 20(2) of the ICCPR states “Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.” In other words, Article 20(2) requires governments that adopt the ICCPR to prohibit “hate speech.” Social media are not governments, do not make law, and did not ratify the ICCPR. But Article 20(2) might be taken to legitimate social media “hate speech” rules. After all, Article 20(2) has been ratified by many nations.

But not all. For example, when the United States ratified the ICCPR, the Senate’s “advice and consent” was subject to a reservation: “That Article 20 does not authorize or require legislation or other action by the United States that would restrict the right of free speech and association protected by the Constitution and laws of the United States.” Belgium and the United Kingdom also limited the scope of Article 20(2) in defense of free expression. Other nations, like Australia, reserved a right not to introduce new legislation on “hate speech” and other issues. Indeed, many nations objected to many aspects of the ICCPR (here’s a list of nations and their objections).

Article 20(2) faces another problem in the United States. One scholar has noted, “Where U.S. duties under a treaty conflict with rights protected in the U.S. Constitution, rights in the Constitution must prevail.” In Reid v. Covert (1957), the Supreme Court said it “would be manifestly contrary to the objectives of those who created the Constitution… to construe Article VI as permitting the United States to exercise power under an international agreement without observing constitutional prohibitions.” The same Court “has consistently ruled that [hate] speech enjoys First Amendment protection unless it is directed to causing imminent violence or involves true threats against individuals.”

Where does all this analysis leave social media content moderators? Article 20(2) instructs governments to ban “hate speech”; U.S. courts say the government may not ban “hate speech” in the United States. Perhaps neither instruction matters; social media are not governments, properly understood, so they are not strictly obligated by either Article 20(2) or the U.S. Constitution. But both ICCPR and U.S. law suggest more broadly that “hate speech” bans are both legitimate and illegitimate, beyond borders and within one border, respectively. The implications for the legitimacy of content moderation are unclear.

Yet we are not done with international rules. In June 2011, the U.N. Human Rights Council endorsed “Guiding Principles on Business and Human Rights: Implementing the United Nations ‘Protect, Respect and Remedy’ Framework.” These principles stipulate: “Business enterprises should respect human rights. This means that they should avoid infringing on the human rights of others and should address adverse human rights impacts with which they are involved.” Their “responsibility to respect” human rights “exists over and above compliance with national laws and regulations protecting human rights.” The rights in question may be found in several places, including the ICCPR.

Article 20(2) requires action (not inaction) by governments. However, the provision could be read as creating a positive right to live under a government that has criminalized “any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence.” It requires no great leap of imagination to conclude that social media could respect that “right” by banning “hate speech” from their platforms.

At this point, international law seems like a dead end for free speech. Governments are required to prohibit a broad and ambiguous category of speech while businesses are instructed to “respect” a putative right to be free of “hate speech,” a demand that could support banning a wide range of speech. The U.S. legislature and courts do not recognize such required prohibitions, but their reticence does not bind social media companies incorporated in the United States. The ICCPR does include Article 19, which offers strong words favoring free expression. (It also provides other reasons to restrict speech that I have not mentioned, like reputation and national security, but these limits are less controversial than “hate speech.”) But in practice the liberal words of Article 19 seem undermined by the illiberal demands of Article 20(2).

And yet we have not examined all of Article 19. Article 19(3) states that free expression may “be subject to certain restrictions, but these shall only be such as are provided by law and are necessary.” The U.N. Special Rapporteur on the promotion and protection of the right to freedom of expression and opinion has fashioned a tripartite test to apply the words “as are provided by law and are necessary” in Article 19(3). This test subjects a restriction on speech to three conditions: legitimacy, legality and necessity/​proportionality. Legitimacy means the restriction may only pursue a limited set of public interests specified in ICCPR. Legality means a restriction “must be provided by laws that are precise, public and transparent; it must avoid providing authorities with unbounded discretion, and appropriate notice must be given to those whose speech is being regulated.” Finally, necessity and proportionality means a limitation on speech must be “the least restrictive means” to achieve the aforementioned public interest. This prong of the test means a regulator must pursue its goals at the least possible cost to speech. Often other policies that take a smaller toll on speech are available to governments and perhaps to social media. Presumably, since no government indicated a reservation to Article 19(3), this tripartite test applies to all government restrictions on speech and should be respected by private businesses including social media.

Where does all this leave online speech? The ICCPR supports free expression, requires a ban on “hate speech,” and permits restrictions on speech in pursuit of a limited number of important public interests. All such restrictions on speech, public or private, must be legitimate, legal, and the least restrictive means to a legitimate end. But does the tripartite test matter?

Maybe. For example, the Special Rapporteur mentioned above noted that “‘Hate speech’, a shorthand phrase that conventional international law does not define, has a double ambiguity. Its vagueness and the lack of consensus around its meaning can be abused to enable infringements on a wide range of lawful expression.” (For this reason, I have put the term “hate speech” in quotation marks throughout this essay.) The legality prong of the tripartite test condemns vague restrictions on free expression. What Article 20(2) of the ICCPR giveth, the tripartite test almost always should take away. Or so free speech advocates may hope, not least if they have anything to do with social media content moderation.

Such is the case that international law might protect speech online in ways that legitimate social media content moderation. I leave for another day (and another post) the validity of this case.

Thanks to Evelyn Aswad for comments on an earlier draft of this post. Professor Aswad’s scholarship on these topics may be found here. This paper will be especially interesting for readers thinking through the topics covered in this post.

July 22, 2020 10:01AM

New Poll: 62% Say the Political Climate Prevents Them from Sharing Political Views

50% of strong liberals support firing Trump donors, 36% of strong conservatives support firing Biden donors; 32% are worried about missing out on job opportunities because of their political opinions

62% agree "the political climate these days prevents me from saying things I believe because others might find them offensive"

A new Cato Institute/​YouGov national survey of 2,000 Americans finds that 62% of Americans say the political climate these days prevents them from saying things they believe because others might find them offensive. This is up from 2017 when 58% agreed with this statement. Majorities of Democrats (52%), independents (59%) and Republicans (77%) all agree they have political opinions they are afraid to share.

Strong liberals stand out, however, as the only political group who feel they can express themselves: 58% of staunch liberals feel they can say what they believe.

Centrist liberals feel differently: 52% feel they have to self-censor, as do 64% of moderates and 77% of conservatives. This demonstrates that political expression is an issue that divides the Democratic coalition between centrist Democrats and its left flank.

Read the full survey report and results here.

Staunch Liberals Stand Out as Only Group Who Feels They Can Share their Political Opinions

What’s changed? In 2017 most centrist liberals felt confident (54%) they could express their views. However today, slightly less than half (48%) feel the same. The share who feel they cannot be open increased 7 points from 45% in 2017 to 52% today. In fact, there have been shifts across the board, where more people among all political groups feel they are walking on eggshells.

More Americans Have Opinions They’re Afraid to Share in 2020 than in 2017

Although strong liberals are the only group who feel they can say what they believe, the share who feel pressured to self-censor rose 12 points from 30% in 2017 to 42% in 2020. The share of moderates who self-censor increased 7 points from 57% to 64%, and the share of conservatives rose from 70% to 77%, also a 7-point increase. Strong conservatives are the only group with little change. They are about as likely now (77%) to say they hold back their views as in 2017 (76%).

Self‐​censorship is widespread across demographic groups as well. Nearly two‐​thirds of Latino Americans (65%) and White Americans (64%) and nearly half of African Americans (49%) have political views they are afraid to share. Majorities of men (65%) and women (59%), people with incomes over $100,000 (60%) and people with incomes less than $20,000 (58%), people under 35 (55%) and over 65 (66%), religious (71%) and non‐​religious (56%) all agree that the political climate prevents them from expressing their true beliefs.

50% of Staunch Liberals Support the Firing of Trump Donors

Nearly a third (31%) of Americans say they’d support firing a business executive who personally donated to Donald Trump’s re-election campaign for president. This share rises to 50% among strong liberals.

36% of Staunch Conservatives Support Firing Biden Donors

The survey finds that “cancel culture” goes both ways. Nearly a quarter (22%) of Americans support firing a business executive who personally donates to Democratic presidential candidate Joe Biden’s campaign. This share rises to 36% among strong conservatives. These results are particularly notable given that most personal campaign contributions to political candidates are public knowledge and can easily be found online.

32% Worry Their Political Views Could Harm Their Employment

32% worry they could miss out on job opportunities or get fired if their political views became known

Nearly a third (32%) of employed Americans say they are worried about missing out on career opportunities or losing their job if their political opinions became known. Americans across the political spectrum share these concerns: 31% of liberals, 30% of moderates, and 34% of conservatives are worried their political views could get them fired or harm their career trajectory. This suggests that it’s not necessarily just one particular set of views that has moved outside of acceptable public discourse. Instead these results are more consistent with a “walking on eggshells” thesis that people increasingly fear a wide range of political views could offend others or negatively impact themselves.

These concerns cut across demographics and partisan lines: 28% of Democrats, 31% of independents, 38% of Republicans, 38% of Hispanic Americans, 22% of African Americans, 31% of White Americans, 35% of men, 27% of women, 36% of households earning less than $20,000 a year, and 33% of households earning more than $100,000 a year fear their political opinions could impact their career trajectories.

Read the full survey report and results here.

The topline questionnaire, crosstabs, full methodology, and analysis of the survey findings can be found here.

Methodology:

The Cato Institute Summer 2020 National Survey was designed and conducted by the Cato Institute in collaboration with YouGov. YouGov collected responses online during July 1–6, 2020 from a national sample of 2,000 Americans 18 years of age and older. Restrictions are put in place to ensure that only the people selected and contacted by YouGov are allowed to participate. The margin of error for the survey is +/- 2.36 percentage points at the 95% level of confidence.
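
As a rough check on the reported figure, the conventional margin-of-error formula for a simple random sample of 2,000 gives roughly +/- 2.2 percentage points; the slightly larger reported number presumably reflects survey weighting. A minimal sketch of that standard formula (not YouGov’s actual methodology) is below:

```python
import math

# Conventional 95% margin of error for a simple random sample, worst case p = 0.5.
# Illustrative only: the survey's reported +/- 2.36 points presumably also reflects
# a design effect from weighting, which this plain formula omits.
n = 2000
z = 1.96   # z-score for 95% confidence
p = 0.5    # worst-case proportion

moe = z * math.sqrt(p * (1 - p) / n)
print(f"+/- {moe * 100:.2f} percentage points")  # ~2.19
```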

July 7, 2020 4:26PM

A Twitter Alternative, If They Can Keep It

After an influx of new users, Parler, a social media platform offered as a liberally governed alternative to Twitter, implemented a few basic rules. These rules prompted frustration and gloating from the platform’s fans and skeptics, respectively, but Parler is simply advancing along a seemingly inevitable content moderation curve. Parler’s attempt to create a more liberal social media platform is commendable, but the natural demands of moderation at scale will make this commitment difficult to keep.

Parler was initially advertised as moderated in line with First Amendment precedent and Federal Communications Commission broadcast guidelines, which differ considerably from each other. However, the platform’s terms of service reserve the right to “remove any content and terminate your access to the Services at any time and for any reason or no reason,” and include a provision requiring users to cover Parler’s legal fees in suits over user speech. While the latter section is unlikely to hold up in court, the entirely standard reservation of the right to exclude was seen as a betrayal of Parler’s promise of an unobstructed platform for free speech.

Parler’s decision to ban trolls who had begun impersonating MAGA celebrities, and to implement a few rules that, while quite limited, went beyond what might be expected of a First Amendment standard, contributed to the perception of hypocrisy.

[Image: Rules posted by Parler's John Matze]

Although these rules may prohibit what the First Amendment would protect, they are largely in keeping with Parler’s more implicit founding promise: not to be a place for all speech, but to offer a home for conservatives put off by Twitter’s ostensibly overbearing moderation. Parler’s experience illustrates the impossibility and undesirability of preventing alleged bias by governing differing social media platforms under a one-size-fits-all First Amendment standard. For those who imagined that Parler would strictly apply the First Amendment, or that such a platform would be practicable or manageable, the developments were disappointing. However, for those who simply want an alternative to Twitter that governs with conservative values in mind, Parler’s growing pains present a difficult but viable path forward. Parler’s incentives to moderate will only increase as it grows, and scale will make transparent, context-aware moderation all the more difficult.

Platforms usually develop new policies in response to crises or unforeseen, unwanted platform uses. One of the first recorded user bans, within a text-based multiplayer game called LambdaMOO, came in response to the unanticipated use of a voodoo doll item to narrate obscene actions by other players. After the livestreamed Christchurch shooting, Facebook placed new restrictions on Facebook Live use. These changes can be substantial: Reddit once committed to refrain from removing “distasteful” subreddits but changed its stance in response to user communities dedicated to violence against women and upskirt photos.

Because Parler is a small, young platform, the sorts of outrageous, low-probability events that have driven rule changes and moderation elsewhere just haven’t happened there yet. This week saw the first of these rule-building incidents. Trolls hoping to test the boundaries of Parler’s commitment to free speech and get a rise out of its users began, often vulgarly, impersonating prominent Trump supporters and conservative publications. Whether the recent banning of the Chapo Trap House subreddit contributed to the number of bored leftists trolling Parler is an interesting, but probably unanswerable, question. Drawing a line between parody and misleading impersonation is a problem that has long bedeviled content moderators. Twitter has developed policies governing impersonation, as well as rules for parody and fan accounts. Twitter may have drawn its lines imperfectly, but it drew them in light of a decade of experience with novel forms of impersonation.

Parler responded by banning the impersonating accounts, including some inoffensive parodies. Devin Nunes’ Cow, a long-running Twitter parody account and the subject of a lawsuit by Rep. Nunes (R-CA) against Twitter, followed the Congressman to Parler and was subsequently banned. The fact that Parler has responded in such a blunt fashion is unsurprising. As a small platform, it has limited moderation resources to address impersonation claims, and the vitality of its network relies in part upon the presence of prominent Trump supporters like Rep. Nunes. While the platform has called for volunteer moderators, likening the work to service on a jury, this may introduce biases of its own.

Banning these sorts of impersonators is unlikely to upset Parler’s core userbase of Twitter-skeptical conservatives. A seeming blanket ban of Antifa supporters from the platform, announced by founder John Matze, might be similarly understood as an antiharassment measure. However, if the platform’s governance is beholden to the tastes of a specific community or the whims of its founders, it may have difficulty attracting a wider audience and maintaining some sort of First Amendment standard. These are not unique problems: Pinterest has long labored to shed the perception that it’s just for women, while content delivery network Cloudflare struggled with the ramifications of its founder’s unilateral ability to remove hateful content such as the neo-Nazi site the Daily Stormer. Parler may simply aspire to be a social media platform for conservatives. Niche platforms like JDate, a Jewish dating site, and Ravelry, a social network for knitters, have undoubtedly been successful. If Parler pursues this path, it will provide a safe space for Trump supporters, but it won’t have the level of cultural impact enjoyed by Twitter. It may also simply be boring if it becomes an ingroup echo chamber without opportunities for engagement, conversation, or conflict with ideological others.

Parler is working to move beyond some of its small platform problems. Matze’s profile now includes the line “Official Parler statements come from @Parler,” and the platform has issued its first set of community guidelines. The guidelines illustrate the unavoidable tension between Parler’s desire to apply a First Amendment standard and its desire to create a social media platform that is both governable and enjoyable. In many cases, the guidelines reference specific Supreme Court precedent, clearly trying to mirror the strictures of the First Amendment. However, they also include a blanket prohibition on spam, which, while a bedrock norm of the internet, has no support in First Amendment caselaw. In practice, reliance on areas of law that have essentially been allowed to lie fallow, often to the frustration of conservatives, produces vague guidance such as: “Do not use language/visuals that are offensive and offer no literary, artistic, political, or scientific value.” In time, Parler will need to offer more specific rules for its platform, rules applicable at scale while still reflective of its values. As a model for social media, a First Amendment standard with additions driven by contingency may work well. But in time, it may look more like the rules of other platforms than intended and will ultimately reflect the fact that the First Amendment is made practicable by the varied private rules layered on top of it.

As Parler grows, it will face the big-platform problem of enforcing its rules at scale. Parler may be able to derive a standard of obscenity that works for its current userbase, but new users and communities will discover new ways to contest it. Millions of users will find more ways to use and misuse a platform than hundreds of thousands of users, and network growth usually outstrips increases in moderation capacity. Moderators will have to make more moderation decisions in less time, and often with less context. If Parler hosts multiple communities of interest, it will need to understand their distinct cultures and idiosyncrasies of language and mediate between their different norms.

While a larger Parler may hope to maintain a liberal or constitutional approach to moderation, this will become difficult when, at any moment, thousands of people are using the platform to sell stolen antiquities, violate copyright, and discuss violence in languages its moderators don’t understand. Add the pressure of the press and politicians demanding that the platform do something or face nebulous regulation, and liberalism becomes harder to maintain. In the face of these novel problems of scale, platforms with different purposes and structures all seem to suffer from opaque decision making and overreliance on external expertise. Even if they wish to govern differently, they don’t have time to explain every moderation decision or the in‐​house expertise to understand every new unanticipated misuse. Individual users also matter less when platforms have millions or billions of them. While this frees platforms from reliance on particular superusers, it erodes the feedback loops that help to rein in overbroad moderation.

Parler’s early commitment to free speech, and the charges of hypocrisy which attended its initial rulemaking, may help it to resist later demands for increased moderation. However, platforms have long gone back on earlier assurances of liberal governance: over the past decade, Reddit went from maintaining a commitment to refrain from removing subreddits, to removing hundreds of subreddits in a single day. While Parler hopes to resist the novel demands of moderation at scale, other platforms have been unable to do so. Though they may yet succeed, it is likely that these constraints are structural, and will only be surmounted by a commitment to decentralization in the architecture and code of the platform itself. Human commitments, even commitments made with the best of intentions, are simply too vulnerable to the unanticipated pressures of governance at scale.

June 25, 2020 10:25AM

Too Many Cooks Spoil the Internet

My colleague Matthew Feeney and I have previously written about the EARN IT Act, noting that it could be used as a vehicle to fulfill Department of Justice demands for encryption backdoors. That concern has only increased in the wake of the DOJ’s publication of a list of proposed changes to Section 230, which include evidence preservation requirements that would effectively prohibit encryption. However, the EARN IT Act relies on rulemaking by committee to derive the best practices on which it would condition Section 230’s protections. While a charge to establish sweeping best practices may threaten encryption, the proposed committee’s membership selection and voting structure have problems of their own; they seem designed to encourage gridlock and the erosion of civil liberties.

The committee is composed of 19 members selected for five-year terms; 14 of them must approve a set of best practices to send to Congress for a fast-track vote without amendments. Three are federal officers: the attorney general, the secretary of homeland security, and the chairman of the Federal Trade Commission. The rest are selected in a bipartisan fashion: the majority and minority leaders in the House and Senate each get four picks. However, each quartet must include one member with law enforcement or prosecutorial experience addressing Child Sexual Abuse Material (CSAM), one who is either a survivor of sexual exploitation or works with survivors, one expert in either constitutional law or computer science, and one member who works at either a large or small tech firm covered by the bill.

Yes, it sounds a bit like how Rings of Power are distributed, but it creates a scenario in which the path to an approving majority of 14 runs roughshod over Americans’ privacy. The committee is stacked in favor of intrusive action. A party that controls the executive branch need only win over three appointees of the other side to win approval. It seems likely that limits on encryption will receive some cross-party support from either law enforcement or representatives of anti-abuse non-governmental organizations. If even a few of the minority party’s law enforcement and NGO representatives join the proposal, the majority needs at most one technical expert vote to ignore all of the tech firm representatives altogether.
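
The underlying arithmetic can be sketched as follows, assuming (my assumption, not a provision of the bill) that the three federal officers vote with the president’s party and that each congressional leader’s picks side with that leader’s party:

```python
# Vote-counting sketch for the proposed 19-member commission described above.
# Alignment assumptions (mine, not the bill's): the three federal officers side with
# the president's party, and each leader's four picks side with that leader's party.
total_members = 19
votes_needed = 14

federal_officers = 3       # attorney general, DHS secretary, FTC chairman
picks_per_leader = 4       # four congressional leaders pick four members each
leaders_same_party = 2     # the president's party holds one leadership post per chamber

aligned = federal_officers + leaders_same_party * picks_per_leader
crossover_needed = votes_needed - aligned
print(f"{aligned} of {total_members} members aligned; "
      f"{crossover_needed} crossover votes needed to reach {votes_needed}")  # 11 aligned; 3 needed
```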

Making this more likely, the technical experts need only “experience in matters related to constitutional law, consumer protection, or privacy,” or “experience in computer science or software engineering related to matters of cryptography, data security, or artificial intelligence.” These stipulations do not ensure that the selected experts are friendly to Americans’ civil liberties, merely that they have some experience with cryptography or the law. Work experience at the DOJ or a facial recognition firm such as Clearview would fit the bill, but it wouldn’t ensure that the committee appreciates the effect of its best practices on Americans’ privacy.

While the inclusion of representatives from specific, separate firms may be intended to make the committee more representative, in practice it creates dangerous opportunities for anticompetitive “best practices.” Because the bill includes two spots for large firms and two spots for small firms, selected by majority and minority leaders, it creates opportunities for politicians to reward favored firms at the expense of others. Because not all business models will be represented, those with a seat on the committee will find it easier to comply with the resultant rules. Imagine Facebook handicapping the competing Snapchat by proposing prohibitions on disappearing video as an evidence retention mechanism. Not all ill effects need be intentional: firms on the committee are simply more likely to have their unique concerns heard and reflected in the approved practices.

Although the committee could be expanded to include more perspectives, this may make it unwieldy, and cannot escape the problems inherent to replacing a general rule, equally applicable to all, with a set of specific, evolving prohibitions. The latter model, particularly with an explicitly partisan firm selection process, will always create more opportunities for corruption and capture than the simple, universally applicable regime of Section 230.

June 18, 2020 3:27PM

Coronavirus Misinformation Hearing Cannot Ignore Government’s Role

Next week, on June 24th, the House Energy and Commerce Committee will hold a joint subcommittee hearing about coronavirus misinformation. The hearing is seemingly intended to highlight social media platforms’ role in hosting false user speech about the ongoing pandemic. However, unless the hearing also addresses the impact of misinformation from official sources, Congress will achieve only a limited understanding of how misinformation hindered our nation’s response to COVID-19.

At the onset of the pandemic, prominent platforms provided millions of dollars in free ad credits to the World Health Organization (WHO) and government health agencies. They also gave these organizations top billing within their products, adding prominent links to Centers for Disease Control (CDC) resources. While this civic‐​minded act was intended to increase the reach of trustworthy health advice, early official advice regarding mask use was dangerously wrong.

On February 29th, U.S. Surgeon General Jerome Adams tweeted that Americans should stop buying masks, insisting they were not effective in preventing the general public from catching the coronavirus.

The CDC also discouraged mask use, writing on Facebook that “CDC does not recommend that people who are well wear facemasks to protect themselves from COVID-19 while traveling.” They also tweeted, “CDC does not currently recommend the use of facemasks to help prevent novel #coronavirus.”

On March 8th, in an interview with 60 Minutes’ Jonathan LaPook, Director of the National Institute of Allergy and Infectious Diseases Dr. Anthony Fauci discouraged the general use of face masks, saying: “The masks are important for someone who is infected to prevent them from infecting someone else, now when you see people, and look at the films in China and South Korea or whatever where everyone is wearing a mask, right now in the United States, people should not be walking around with masks.”

Given that the CDC justified a quarantine of Wuhan evacuees in late January by citing the risk of asymptomatic transmission, the suggestion that only those known to be ill should wear masks made little sense. Nevertheless, YouTube users have viewed the section of the interview addressing mask use more than a million times. Although the page now includes a link to current CDC guidance supportive of mask use, the video’s content doubtlessly misled many.

Whether the result of expert incompetence or a misguided “noble lie” intended to preserve masks for others, the effect was the same: Americans were told by their government not only that they should not buy masks, but that masks didn’t work. Individuals responsibly following the official advice refrained from wearing masks, greatly increasing their chances of transmitting COVID-19 to others. In concert with poor advice from other government officials, such as New York City Mayor Bill de Blasio’s tweet on March 2nd encouraging New Yorkers to attend the cinema, the results were lethal.

Unlike conspiratorial publications like Infowars or Zero Hedge or the musings of armchair epidemiologists on Facebook, official misinformation is likely to be widely believed, making it more dangerous. Its endorsement by government gives it a perceived legitimacy that most sources of misinformation lack. When revealed as erroneous, official misinformation gives cover to conspiracy by eroding public trust in expert advice.

Democrats have criticized platform efforts to limit the spread of misinformation as anemic, condemning firms’ failure to fully enforce their community standards. This push poses two dangers. Firstly, platforms cannot simply choose to enforce their standards more efficiently; they review millions of pieces of content per day, often with the aid of sorting algorithms. Mistakes are unavoidable. As such, they must make trade‐​offs between false positives and false negatives. If platforms attempt to catch more violative speech, they will inevitably sweep up more innocent speech along with it. Politicians risk silencing their constituents when they put a thumb on this scale.
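
The trade-off is easiest to see with a toy example. The sketch below uses invented numbers purely to illustrate the dynamic: as a hypothetical moderation model’s removal threshold is lowered to catch more violating posts, the number of innocent posts swept up rises with it.

```python
import random

# Toy illustration (all numbers invented): the false positive / false negative
# trade-off in threshold-based moderation at scale.
random.seed(0)

# Hypothetical "violation scores" assigned by a moderation model.
violating = [random.gauss(0.7, 0.15) for _ in range(1_000)]    # truly violating posts
innocent = [random.gauss(0.3, 0.15) for _ in range(99_000)]    # innocent posts (the vast majority)

for threshold in (0.8, 0.6, 0.4):
    caught = sum(score >= threshold for score in violating)
    swept = sum(score >= threshold for score in innocent)
    print(f"threshold {threshold}: {caught} violating posts removed, {swept} innocent posts removed")
```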

Secondly, and perhaps more concerningly, most platforms have ostensibly committed to remove coronavirus‐​related speech at odds with official health guidance. If the official guidance is wrong, the effective removal of conflicting information leaves little opportunity for the eventual correction of official mistakes.

At a George Washington University Conference on Tuesday, Consumer Protection and Commerce Subcommittee Chair Rep. Jan Schakowsky (D-IL) suggested that our national conversation would improve if platforms were “forced to follow their community standards, and if the enforcement agencies would get to work,” portending a future in which platforms are required to inflexibly apply their stated rules, regardless of the costs. Imagine if, rather than simply paying lip service to the official guidance, platforms had diligently removed calls for general mask wearing because they conflicted with erroneous government advice discouraging mask use. How much longer might it have taken to correct the official line? If #MasksForAll had been removed for contravening official health advice, it would have been harder for tireless advocates of general mask use to share their vital message. We should all appreciate platforms’ restraint in this area.

While the presence of health misinformation on platforms is concerning, bottom-up misinformation is often unbelievable and frequently removed; more effective removals are not without trade-offs. Official misinformation presents a greater threat, both because of its perceived legitimacy and because of the extent to which platforms take their cues from public health officials. If Congress wishes to examine the spread of coronavirus misinformation, before blaming platforms, it must first grapple with the outsize role of official misinformation in delaying the widespread use of face masks.

May 28, 2020 12:51PM

Trump’s Social Media Order Rewrites Internet Law by Decree

Author’s Note: This post originally concerned a draft executive order. What follows is a discussion of the final order. The original analysis can be found below that.

Yesterday, I wrote about a draft of the President’s executive order, which he went on to sign in the afternoon. The White House released a final version of the order last night. It differs significantly from the draft in verbiage, though not in effect.

In some instances, the language has been watered down. However, crucially, the final order contains the same unsupported contention that the protections offered by Section 230(c)(1) are contingent upon platforms moderating in accordance with some stricter understanding of (c)(2)(A).

It is the policy of the United States to ensure that, to the maximum extent permissible under the law, this provision is not distorted to provide liability protection for online platforms that — far from acting in “good faith” to remove objectionable content — instead engage in deceptive or pretextual actions (often contrary to their stated terms of service) to stifle viewpoints with which they disagree.

It is the policy of the United States that the scope of that immunity should be clarified: the immunity should not extend beyond its text and purpose to provide protection for those who purport to provide users a forum for free and open speech, but in reality use their power over a vital means of communication to engage in deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints.

In some ways, these claims are more limited than those in the draft. However, the “distortion” and “extension” of Section 230 described in the final order is, in fact, the longstanding, textually supported reading of the law. As I outlined yesterday, (c)(1) and (c)(2)(A) protections are separate. It is not an extension of the law to apply them separately, and any “clarification” otherwise would amount to an amendment.

Confusingly, the final order contains a paragraph that might more strongly assert a connection between the first and second subsections; however, the second time it refers to (c)(2)(A), it does so in a context in which only (c)(1) would make sense:

When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct. It is the policy of the United States that such a provider should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider.

The liability faced by traditional publishers, which prescreen material rather than moderating ex‐​post, is foreclosed by (c)(1), not (c)(2)(A). If, as is likely, this line was meant to reference (c)(1), the order more stridently misinterprets Section 230. The protections offered by the first and second subsections are entirely separate, making the President’s directive to NTIA, instructing them to petition the FCC to examine connections between (c)(1) and (c)(2)(A), facially absurd.

… requesting that the FCC expeditiously propose regulations to clarify: (i) the interaction between subparagraphs (c)(1) and (c)(2) of section 230, in particular to clarify and determine the circumstances under which a provider of an interactive computer service that restricts access to content in a manner not specifically protected by subparagraph (c)(2)(A) may also not be able to claim protection under subparagraph (c)(1), which merely states that a provider shall not be treated as a publisher

There are no circumstances under which a provider that restricts access in a manner unprotected by (c)(2)(A) loses (c)(1) protections. (c)(1) protections are lost when a platform authors content, making it the platform’s content rather than that of a third party. (c)(1) is not in any way contingent on (c)(2)(A). The order invites the FCC to make a miraculous discovery completely at odds with settled law or return a pointless null result.

Finally, the order directs the FTC to investigate platforms for moderating in a manner inconsistent with their stated terms of service:

The FTC shall consider taking action, as appropriate and consistent with applicable law, to prohibit unfair or deceptive acts or practices in or affecting commerce … Such unfair or deceptive acts or practice may include practices by entities covered by section 230 that restrict speech in ways that do not align with those entities’ public representations about those practices.

Platform terms of service or community standards are not binding contracts. They lay out how a platform intends to govern user speech but often change in response to new controversies, and automated moderation at scale is frequently imprecise. In light of Trump’s recent personal spats with social media firms, any subsequent FTC action may appear politically motivated.

In sum, the order makes a number of sweeping, unfounded claims about the breadth and intent of Section 230. The declarations of government policy are concerning: “all executive departments and agencies should ensure that their application of section 230(c) properly reflects the narrow purpose of the section.” However, the administration’s proposed interpretation is so at odds with a plain reading of the statute and controlling precedent that courts are unlikely to uphold decisions based on this official misinterpretation.

The order’s substantive elements require action on the part of the FCC and FTC. Their response will largely determine the order’s scope and effect. The FCC could nonsensically determine that (c)(1) had been contingent on (c)(2)(A) all along and the FTC could aggressively pursue tech firms for moderation inconsistent with their terms of service but, given the likelihood of judicial resistance, a hard‐​charging response is improbable. Like so much else from the Trump administration, it may turn out to be another order full of sound and fury that ultimately delivers nothing in the way of substantive change. Nevertheless, even if the order is ineffective, it represents a worrying belief that the President can twist and reinterpret even long‐​settled law to fit his political agenda.


President Trump has escalated his war of words with America’s leading technology firms. After threatening to “close down” social media platforms, he announced that he would issue an executive order concerning Section 230 of the Communications Decency Act, a bedrock intermediary liability protection for internet platforms. However, a draft of the forthcoming executive order seems to slyly misunderstand Section 230, reading contingency into its protections. Let’s take a look at the statute and the relevant sections of the proposed executive order to see how its interpretation errs.

(c) Protection for “Good Samaritan” blocking and screening of offensive material

(1) Treatment of publisher or speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

The statute contains two parts, (c)(1) and (c)(2). Subsection (c)(1) prevents providers of an “interactive computer service,” be they Twitter or a blog with a comments section, from being treated as the publisher of their users’ speech. Subsection (c)(2) separately addresses providers’ civil liability for actions taken to moderate or remove content.

The executive order obfuscates this distinction, presenting (c)(1) as contingent on (c)(2). The EO contends that “subsection (c)(2) qualifies that principle when the provider edits content provided by others.” This is simply incorrect. Subsection (c)(2) protects platforms from a different source of liability entirely. While the first subsection stops platforms from being treated as the publishers of user speech, (c)(2) prevents platforms from being sued for filtering or removal. Its protections are entirely separate from those of (c)(1); dozens of lawsuits have attempted to treat platforms as the publishers of user speech, and none has first asked whether the platform’s moderation was unbiased or conducted in good faith. Even if a provider’s moderation were found to breach the statute’s “good faith” element, that finding would merely render the provider liable for its moderation of the content in question; it wouldn’t make the provider a publisher writ large.

The executive order makes its misunderstanding even more explicit as it orders the various organs of the federal government to similarly misinterpret Section 230.

When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct. By making itself an editor of content outside the protections of subparagraph (c)(2)(A), such a provider forfeits any protection from being deemed a “publisher or speaker” under subsection 230(c)(1), which properly applies only to a provider that merely provides a platform for content supplied by others. It is the policy of the United Sates that all departments and agencies should apply section 230(c) according to the interpretation set out in this section.

The order goes on to direct the National Telecommunications and Information Administration to petition the FCC, technically an independent agency, to promulgate regulations determining what sort of moderation breaches the good faith aspect of (c)(2), and, according to the administration’s erroneous reading of the statute, triggers the forfeiture of (c)(1) protections against being treated as a publisher.

Clearly, none of this is actually in Section 230. Far from expecting websites to “merely provide a platform,” (c)(2)(A) explicitly empowers them to remove anything they find “otherwise objectionable.” Our president seems to have decided that Section 230(c)(1) only “properly applies” to social media platforms that refrain from responding to his outlandish claims. Republicans might want to amend Section 230 so that it only applies to conduit-like services; however, any attempt to do so would face stiff opposition from Democrats who want platforms to moderate more strictly. Like Obama before him, President Trump may have a pen, but he cannot rewrite statutes at will. As drafted, his order’s reasoning is at odds with congressional intent, a quarter century of judicial interpretation, and any reasonable reading of the statute’s plain language.