
Keep Government Away From Twitter

Twitter recently reactivated Jesse Kelly’s account after telling him that he was permanently banned from the platform. The social media giant informed Kelly, a conservative commentator, that his account was permanently suspended “due to multiple or repeat violations of the Twitter rules.” Conservative pundits, journalists, and politicians criticized Twitter’s decision to ban Kelly, with some alleging that Kelly’s ban was the latest example of perceived anti-conservative bias in Silicon Valley. While some might be infuriated by what happened to Kelly’s Twitter account, we should be wary of calls for government regulation of social media and related investigations in the name of free speech or the First Amendment. Companies such as Twitter and Facebook will sometimes make content moderation decisions that seem hypocritical, inconsistent, and confusing. But private failure is better than government failure, not least because, unlike government agencies, Twitter has to worry about competition and profits.

It’s not immediately clear why Twitter banned Kelly. A fleeting glance at Kelly’s Twitter feed reveals plenty of eye-roll-worthy content, including his calls for the peaceful breakup of the United States and his assertion that only an existential threat to the United States can save the country. His writings at the conservative website The Federalist include bizarre and unfounded declarations such as, “barring some unforeseen awakening, America is heading for an eventual socialist abyss.” In the same article he called for his readers to “Be the Lakota” after a brief discussion of how Sitting Bull and his warriors took scalps at the Battle of Little Bighorn. In another article Kelly argued that a belief in limited government is a necessary condition for being a patriot.

I must confess that I didn’t know Kelly existed until I learned the news of his Twitter ban, so it’s possible that those backing his ban from Twitter might be able to point to other content that they consider more offensive than what I just highlighted. But, from what I can tell, Kelly’s content hardly qualifies as suspension-worthy.

Some opponents of Kelly’s ban (and indeed Kelly himself) were quick to point out that Nation of Islam leader Louis Farrakhan still has a Twitter account despite making anti-Semitic remarks. Richard Spencer, the white supremacist president of the innocuously named National Policy Institute who pondered taking my boss’ office, remains on Twitter, although his account is no longer verified.

All of the debates about social media content moderation have produced some strange proposals. Earlier this year I attended the Lincoln Network’s Reboot conference and heard Dr. Jerry A. Johnson, the President and Chief Executive Officer of the National Religious Broadcasters, propose that social media companies embrace the First Amendment as a standard. Needless to say, I was surprised to hear a conservative Christian urge private companies to embrace a content moderation standard that would require them to allow animal abuse videos, footage of beheadings, and pornography on their platforms. Facebook, Twitter, and other social media companies have sensible reasons for not using the First Amendment as their content moderation lodestar.

Rather than turning to First Amendment law for guidance, social media companies have developed their own standards for speech. These standards are enforced by human beings (and the algorithms human beings create) who make mistakes and can unintentionally or intentionally import their biases into content moderation decisions. Another Twitter controversy from earlier this year illustrates how difficult it can be to develop content moderation policies.

Shortly after Sen. John McCain’s death, a Twitter user posted a tweet that included a doctored photo of Sen. McCain’s daughter, Meghan McCain, crying over her father’s casket. The tweet included the words “America, this ones (sic) for you” and the doctored photo, which showed a handgun being aimed at the grieving McCain. Meghan McCain’s husband, Federalist publisher Ben Domenech, criticized Twitter CEO Jack Dorsey for keeping the tweet on the platform. Twitter later took the offensive tweet down, and Dorsey apologized for not taking action sooner.

The tweet aimed at Meghan McCain clearly violated Twitter’s rules, which state: “You may not make specific threats of violence or wish for the serious physical harm, death, or disease of an individual or group of people.”

Twitter’s rules also prohibit hateful conduct or imagery, as outlined in its “Hateful Conduct Policy.” The policy seems clear enough, but a look at Kelly’s tweets reveals content that someone could interpret as hateful, even if some of the tweets are attempts at humor. Is portraying Confederate soldiers as “poor Southerners defending their land from an invading Northern army” hateful? What about a tweet bemoaning women’s right to vote? Or tweets that describe our ham-loving neighbors to the North as “garbage people” and violence as “underrated”? None of these tweets seems to violate Twitter’s current content policy, but someone could write a content policy that would prohibit such content.

Imagine you are developing a content policy for a social media site, and your job is to consider whether content identical to the tweet targeting McCain and content identical to Kelly’s tweet concerning violence should be allowed or deleted. You have four policy options:

     
                       Delete Tweet Targeting McCain   Allow Tweet Targeting McCain
Delete Kelly’s Tweet                 1                               2
Allow Kelly’s Tweet                  3                               4

Many commentators seem to back option 3, believing that the tweet targeting McCain should’ve been deleted while Kelly’s tweet should be allowed. That’s a reasonable position. But it’s not hard to see how someone could come to the conclusion that options 1 and 4 are also acceptable. Of all four options, only option 2, which would lead to the deletion of Kelly’s tweet while allowing the tweet targeting McCain, seems incoherent on its face.

Social media companies can come up with sensible-sounding policies, but there will always be tough calls. Having a policy that prohibits images of nude children sounds sensible, but there was an outcry after Facebook removed an Anne Frank Center article whose featured image was a photo of nude children who were victims of the Holocaust. Facebook didn’t disclose whether an algorithm or a human being had flagged the post for deletion.

In a similar case, Facebook initially defended its decision to remove Nick Ut’s Pulitzer Prize-winning photo “The Terror of War,” which shows a burned, naked nine-year-old Vietnamese girl fleeing the aftermath of a South Vietnamese napalm attack in 1972. Despite the photo’s fame and historical significance, Facebook told The Guardian, “While we recognize that this photo is iconic, it’s difficult to create a distinction between allowing a photograph of a nude child in one instance and not others.” Facebook eventually changed course, allowing users to post the photo, citing the photo’s historical significance:

Because of its status as an iconic image of historical importance, the value of permitting sharing outweighs the value of protecting the community by removal, so we have decided to reinstate the image on Facebook where we are aware it has been removed.

What about graphic images of contemporary and past battles? There is clear historical value to images from the American Civil War, the Second World War, and the Vietnam War, some of which include graphic violent content. Implementing a policy prohibiting graphic depictions of violence sounds sensible, but, like a policy banning images of nude children, it will not eliminate difficult choices or the possibility that the policy will yield results many users find inconsistent and confusing.

Given that whoever develops content moderation policies will be put in the position of making tough choices, it’s far better to leave these choices in the hands of private actors rather than government regulators. Unlike the government, Twitter has a profit motive and competition. As such, it is subject to far more accountability than the government. We may not always like the decisions social media companies make, but private failure is better than government failure. An America where unnamed bureaucrats, not private employees, determine what can be posted on social media is one where free speech is stifled.

To be clear, calls for increased government intervention and regulation of social media platforms are a bipartisan phenomenon. Sen. Mark Warner (D-VA) has discussed a range of possible social media policies, including a crackdown on anonymous accounts and regulations modeled on the European so-called “right to be forgotten.” If such policies were implemented (the First Amendment issues notwithstanding), they would inevitably lead to valuable speech being stifled. Sen. Ron Wyden (D-OR) has said that he’s open to carve-outs of Section 230 of the Communications Decency Act, which protects online intermediaries such as Facebook and Twitter from liability for what users post on their platforms.

When it comes to possibly amending Section 230, Sen. Wyden has some Republican allies. Never mind that some of these Republicans don’t seem to fully understand the relevant parts of Section 230.

That social media giants are under attack from the left and the right is not an argument for government intervention. Calls to amend Section 230 or to pass “anti-censorship” legislation pose a serious risk to free speech. If Section 230 is amended to increase social media companies’ exposure to liability suits, we should expect these companies to suppress more speech. Twitter users may not always like what Twitter does, but government intervention is not the remedy.

Facebook and the Future of Free Speech

Britain First is a far-right ultranationalist group hostile to Muslim immigrants in the United Kingdom. The group is active online, with significant consequences for its leaders if not for British elections. The leaders of Britain First, Paul Golding and Jayda Fransen, were recently incarcerated for distributing leaflets and posting online videos that reflected their extreme antipathy to Muslims. Fransen received a 36-week sentence, Golding an 18-week one. Britain First was banned from Twitter in late 2017. Now Facebook has taken down both the official Facebook page of the group and those of its two leaders.

Like many European nations, Great Britain has much narrower protections for freedom of speech than the United States. The United States does not recognize a “hate speech” exception to the First Amendment. Great Britain criminalizes and sanctions such speech. This case is much more interesting, however, than this familiar distinction. The Britain First takedown offers a glimpse of the future of speech everywhere.

The leaders of Facebook did not just wake up on the wrong side of the bed and decide to take down Britain First’s page. Its official statement about the ban says from the start: “we are very careful not to remove posts or Pages just because some people don’t like them.” In this case, the page violated Facebook’s Community Standards against speech “designed to stir up hatred against groups in our society.” The statement does not say which posts led to the ban but The Guardian reports they “included one comparing Muslim immigrants to animals, another labelling the group’s leaders ‘Islamophobic and proud,’ and videos created to incite hateful comments against Muslims.” I understand also that Facebook gave due notice to the group of their infractions. That seems plausible. Almost three months have passed since Twitter banned Britain First. Perhaps Facebook eventually concluded Britain First had no intention of complying with their rules.

You might think Facebook has violated the freedom of speech. But that’s not the case. The First Amendment states that Congress (and by extension, government at all levels) “…shall make no law abridging the freedom of speech.” If the United States government had banned an America First! website, the First Amendment would be relevant. But Facebook is not the government, even though it must govern a platform for free speech. That platform is owned by Facebook, and the company can govern it as it wishes. Most likely it will govern it to maximize shareholder value.

New York Attorney General Schneiderman Goes After Citizens United’s Donors

New York Attorney General Eric Schneiderman demands that out-of-state charities disclose all donors for his inspection. He does not demand this of all charities, only those he decides warrant his special scrutiny. Schneiderman garnered national attention for his campaign to use the powers of his office to harass companies and organizations that do not endorse his preferred policies regarding climate change. Now, it seems, he seeks to do the same to right-of-center organizations that might displease him. Our colleague Walter Olson has cataloged Schneiderman’s many misbehaviors.

He’s currently set his sights on Citizens United, a Virginia non-profit that produces conservative documentaries. While Citizens United has solicited donations in New York for decades without any problem, Schneiderman now demands that they name names, telling him who has chosen to support the group. Citizens United challenged this demand in court, arguing that to disclose this information would risk subjecting their supporters to harassment and intimidation.

These fears are not mere hyperbole. If the name Citizens United rings a bell, it’s because the organization, and the Supreme Court case of the same name, has become the Emmanuel Goldstein of the American left, complete with Democratic senators leading a ritualistic Two Minutes Hate on the Senate floor. In 2010, the Supreme Court upheld its right to distribute Hillary: The Movie, and ever since “Citizens United” has been a synecdoche for what Democrats consider to be the corporate control of America. Is it unwarranted to think that their donors might be subjected to the sort of targeted harassment suffered by lawful gun owners, or that Schneiderman might “accidentally” release the full donor list to the public, as Obama’s IRS did with the confidential filings of gay marriage opponents?

The Supreme Court has long recognized the dangers inherent in applying the power of the state against the right of private association. The cornerstone here is 1958’s NAACP v. Alabama. For reasons that hardly need be pointed out, the NAACP did not trust the state of Alabama, in the 1950s, to be a good steward of its membership lists. “Inviolability of privacy in group association may in many circumstances be indispensable to preservation of freedom of association, particularly where a group espouses dissident beliefs,” wrote Justice John Marshall Harlan II, who went so far as to compare such demands to a “requirement that adherents of particular religious faiths or political parties wear identifying arm-bands.” More recently, Justice Alito pointed out in a similar context that while there are undoubted purposes served by reasonable, limited disclosure requirements, the First Amendment requires that “speakers must be able to obtain an as-applied exemption without clearing a high evidentiary hurdle” regarding the potential harms of disclosure.

But the Second Circuit Court of Appeals has decided it knows better than the Supremes. On Thursday, it ruled that Citizens United’s challenge should be thrown out without even an opportunity to prove its case. In the process, it effectively turned NAACP into a “Jim Crow” exception to a general rule of unlimited government prerogative to panoptic intrusion into citizens’ political associations. While there can be no doubt that the struggle for civil rights presented a unique danger for its supporters, this should not mean that only such perils warrant First Amendment protection.

82% Say It’s Hard to Ban Hate Speech Because People Can’t Agree What Speech Is Hateful

An overwhelming majority (82%) of Americans agree that “it would be hard to ban hate speech because people can’t agree what speech is hateful,” the Cato 2017 Free Speech and Tolerance Survey finds. Seventeen percent (17%) disagree. Majorities across partisan and demographic groups alike agree that hate speech is hard to define and thus may be hard to regulate.

Full survey results and report found here.

How Do Americans Define Hate Speech?

When presented with specific statements and ideas, Americans can’t agree on what speech is hateful, offensive, or simply a political opinion

Besides slurs and biological racism, Americans are strikingly at odds over what speech and ideas constitute hate.[1] For instance, a majority of Democrats (52%) believe saying that transgender people have a mental disorder is hate speech. Only 17% of Republicans agree. On the other hand, 42% of Republicans believe it’s hateful to say that the police are racist, while only 19% of Democrats agree.

Among all Americans, majorities agree that calling a racial minority a racial slur (61%), saying one race is genetically superior to another (57%), or calling gays and lesbians vulgar names (56%) is not just offensive, but is hate speech. Interestingly, fewer than half (43%) think calling a woman a vulgar name is hateful, though a slim majority (51%) say it’s offensive. Less than half believe it’s hateful to say that all white people are racist (40%), transgender people have a mental disorder (35%), America is an evil country (34%), homosexuality is a sin (28%), the police are racist (27%), or illegal immigrants should be deported (24%). Less than a fifth believe it’s hateful to say Islam is taking over Europe (18%) or that women should not fight in military combat roles (15%).

20% of College Students Say College Faculty Has Balanced Mix of Political Views

The Cato 2017 Free Speech and Tolerance Survey finds only 20% of current college and graduate students believe their college or university faculty has a balanced mix of political views. A plurality (39%) say most college and university professors are liberal, 27% believe most are politically moderate, and 12% believe most are conservative.

College Democrats Less Likely Than Republicans to Think Faculty Is Liberal

Democratic and Republican students see their college campuses very differently. A majority (59%) of Republican college students believe that most faculty members are liberal. In contrast, only 35% of Democratic college students agree most professors are liberal. Democratic students are also about twice as likely as Republican students to think their professors are moderate (32% vs. 16%) or conservative (14% vs. 9%).

Full survey results and report found here.

College Students Agree Student Body is Liberal

Current students believe that most of their campus’ student body is liberal. Fifty percent (50%) believe that most students at their college or university are liberal, 21% believe most are moderate, 8% believe most are conservative, and 19% believe there is a balanced mix of political views.

Democratic and Republican students largely agree on the ideological composition of their campus student body.

Consequences of Campus Political Climate

These perceptions of ideological homogeneity on college campuses may explain why 72% of Republican college students say the political climate prevents them from saying things they believe because others might find them offensive. About a quarter (26%) of Republican college students feel they can share their political views.

51% of Strong Liberals Say It’s Morally Acceptable to Punch Nazis

Is violence an appropriate response to hate speech? The Cato 2017 Free Speech and Tolerance Survey finds most Americans say no. More than two-thirds (68%) of Americans say it is not morally acceptable to punch a Nazi in the face. About a third (32%), however, say it is morally acceptable.[1]

Strong liberals stand out with a slim majority (51%) who say it’s moral to punch Nazis. Only 21% of strong conservatives agree.

Full survey results and report found here.

Strong liberals’ approval of Nazi-punching is not representative of Democrats as a whole. A majority (56%) of Democrats believe it is not morally acceptable to punch a Nazi. Thus, tolerance of violence as a response to offensive speech and ideas is found primarily on the far Left.

The survey found liberals were more likely to consider upsetting and controversial ideas “hateful” rather than simply “offensive.” This may help partially explain why staunch liberals are more comfortable than the average American with using violence against Nazis.

Is Supporting Racists’ Free Speech Rights the Same as Being a Racist?

Student protesters at the College of William and Mary recently shut down a campus speaker from the ACLU who had been invited (ironically) to speak about “Students and the First Amendment.” The students explained that the shutdown was in retaliation for the ACLU’s defense of white nationalists’ free speech rights in Charlottesville, Virginia, where a white nationalist rally recently took place. What motivated the students?

The Black Lives Matter of William and Mary student group wrote on their Facebook page, where they live-streamed their shutdown of the event: “We want to reaffirm our position of zero tolerance for white supremacy no matter what form it decides to masquerade in.” From these students’ perspective, the ACLU supporting someone’s right to say racist things was as bad as being a racist organization.

The Cato 2017 Free Speech and Tolerance Survey helps shed light on these students’ reasoning. First, nearly half (49%) of current college and graduate students believe that “supporting someone’s right to say racist things is as bad as holding racist views yourself.” This share rises to nearly two-thirds among African Americans (65%) and Latinos (61%). Far fewer white Americans (34%) share this view.
