Tag: Twitter

Keep Government Away From Twitter

Twitter recently re-activated Jesse Kelly’s account after telling him that he was permanently banned from the platform. The social media giant informed Kelly, a conservative commentator, that his account was permanently suspended “due to multiple or repeat violations of the Twitter rules.” Conservative pundits, journalists, and politicians criticized Twitter’s decision to ban Kelly, with some alleging that Kelly’s ban was the latest example of perceived anti-conservative bias in Silicon Valley. While some might be infuriated with what happened to Kelly’s Twitter account, we should be wary of calls for government regulation of social media and related investigations in the name of free speech or the First Amendment. Companies such as Twitter and Facebook will sometimes make content moderation decisions that seem hypocritical, inconsistent, and confusing. But private failure is better than government failure, not least because, unlike government agencies, Twitter has to worry about competition and profits.

It’s not immediately clear why Twitter banned Kelly. A fleeting glance at Kelly’s Twitter feed reveals plenty of eye-roll-worthy content, including his calls for the peaceful breakup of the United States and his assertion that only an existential threat to the United States can save the country. His writings at the conservative website The Federalist include bizarre and unfounded declarations such as, “barring some unforeseen awakening, America is heading for an eventual socialist abyss.” In the same article he called on his readers to “Be the Lakota” after a brief discussion of how Sitting Bull and his warriors took scalps at the Battle of Little Bighorn. In another article, Kelly argued that a belief in limited government is a necessary condition for being a patriot.

I must confess that I didn’t know Kelly existed until I learned the news of his Twitter ban, so it’s possible that those backing his ban might be able to point to other content that they consider more offensive than what I just highlighted. But from what I can tell, Kelly’s content hardly qualifies as suspension-worthy.

Some opponents of Kelly’s ban (and indeed Kelly himself) were quick to point out that Nation of Islam leader Louis Farrakhan still has a Twitter account despite making anti-Semitic remarks. Richard Spencer, the white supremacist president of the innocuously named National Policy Institute who pondered taking my boss’ office, remains on Twitter, although his account is no longer verified.

All of these debates about social media content moderation have produced some strange proposals. Earlier this year I attended the Lincoln Network’s Reboot conference and heard Dr. Jerry A. Johnson, the President and Chief Executive Officer of the National Religious Broadcasters, propose that social media companies embrace the First Amendment as their standard. Needless to say, I was surprised to hear a conservative Christian urge private companies to embrace a content moderation standard that would require them to allow animal abuse videos, footage of beheadings, and pornography on their platforms. Facebook, Twitter, and other social media companies have sensible reasons for not using the First Amendment as their content moderation lodestar.

Rather than turning to First Amendment law for guidance, social media companies have developed their own standards for speech. These standards are enforced by human beings (and the algorithms human beings create) who make mistakes and can unintentionally or intentionally import their biases into content moderation decisions. Another Twitter controversy from earlier this year illustrates how difficult it can be to develop content moderation policies.

Shortly after Sen. John McCain’s death, a Twitter user posted a tweet that included a doctored photo of the senator’s daughter, Meghan McCain, crying over her father’s casket. The tweet included the words “America, this ones (sic) for you” and the doctored photo, which showed a handgun being aimed at the grieving McCain. Meghan McCain’s husband, Federalist publisher Ben Domenech, criticized Twitter CEO Jack Dorsey for keeping the tweet on the platform. Twitter later took the offensive tweet down, and Dorsey apologized for not taking action sooner.

The tweet aimed at Meghan McCain clearly violated Twitter’s rules, which state: “You may not make specific threats of violence or wish for the serious physical harm, death, or disease of an individual or group of people.”

Twitter’s rules also prohibit hateful conduct or imagery, as outlined in its “Hateful Conduct Policy.” The policy seems clear enough, but a look at Kelly’s tweets reveals content that someone could interpret as hateful, even if some of the tweets are attempts at humor. Is portraying Confederate soldiers as “poor Southerners defending their land from an invading Northern army” hateful? What about a tweet bemoaning women’s right to vote? Or tweets that describe our ham-loving neighbors to the North as “garbage people” and violence as “underrated”? None of these tweets seem to violate Twitter’s current content policy, but someone could write a content policy that would prohibit such content.

Imagine you are developing a content policy for a social media site, and your job is to decide whether content identical to the tweet targeting McCain and content identical to Kelly’s tweet concerning violence should be allowed or deleted. You have four policy options:

     
                        Delete Tweet Targeting McCain   Allow Tweet Targeting McCain
Delete Kelly’s Tweet              Option 1                        Option 2
Allow Kelly’s Tweet               Option 3                        Option 4

Many commentators seem to back option 3, believing that the tweet targeting McCain should’ve been deleted while Kelly’s tweet should be allowed. That’s a reasonable position. But it’s not hard to see how someone could conclude that options 1 and 4 are also acceptable. Of the four options, only option 2, which would delete Kelly’s tweet while allowing the tweet targeting McCain, seems incoherent on its face.

Social media companies can come up with sensible-sounding policies, but there will always be tough calls. Having a policy that prohibits images of nude children sounds sensible, but there was an outcry after Facebook removed an Anne Frank Center article whose featured image was a photo of nude children who were victims of the Holocaust. Facebook didn’t disclose whether an algorithm or a human being had flagged the post for deletion.

In a similar case, Facebook initially defended its decision to remove Nick Ut’s Pulitzer Prize-winning photo “The Terror of War,” which shows a burned, naked nine-year-old Vietnamese girl fleeing the aftermath of a South Vietnamese napalm attack in 1972. Despite the photo’s fame and historical significance, Facebook told The Guardian, “While we recognize that this photo is iconic, it’s difficult to create a distinction between allowing a photograph of a nude child in one instance and not others.” Facebook eventually changed course, allowing users to post the photo and citing its historical significance:

Because of its status as an iconic image of historical importance, the value of permitting sharing outweighs the value of protecting the community by removal, so we have decided to reinstate the image on Facebook where we are aware it has been removed.

What about graphic images of contemporary and past battles? Images from the American Civil War, the Second World War, and the Vietnam War have clear historic value, yet some include graphic violent content. A social media company implementing a policy prohibiting graphic depictions of violence sounds sensible, but like a policy banning images of nude children, it will not eliminate difficult choices or the possibility that such a policy will yield results many users find inconsistent and confusing.

Given that whoever develops content moderation policies will be put in the position of making tough choices, it’s far better to leave these choices in the hands of private actors rather than government regulators. Unlike the government, Twitter has a profit motive and faces competition. As such, it is subject to far more accountability than the government. We may not always like the decisions social media companies make, but private failure is better than government failure. An America where unnamed bureaucrats, not private employees, determine what can be posted on social media is one where free speech is stifled.

To be clear, calls for increased government intervention and regulation of social media platforms are a bipartisan phenomenon. Sen. Mark Warner (D-VA) has discussed a range of possible social media policies, including a crackdown on anonymous accounts and regulations modeled on the European so-called “right to be forgotten.” If such policies were implemented (the First Amendment issues notwithstanding), they would inevitably lead to valuable speech being stifled. Sen. Ron Wyden (D-OR) has said that he’s open to carve-outs of Section 230 of the Communications Decency Act, which protects online intermediaries such as Facebook and Twitter from liability for what users post on their platforms.

When it comes to possibly amending Section 230, Sen. Wyden has some Republican allies. Never mind that some of these Republicans don’t seem to fully understand the relevant parts of Section 230.

That social media giants are under attack from the left and the right is not an argument for government intervention. Calls for Section 230 amendment or “anti-censorship” legislation are a serious risk to free speech. If Section 230 is amended to increase social media companies’ risk of liability suits we should expect these companies to suppress more speech. Twitter users may not always like what Twitter does, but calls for government intervention are not the remedy.

Senate Intelligence Committee Ends Efforts To Turn Social Media Companies Into Government Spies

Earlier this year, Senator Dianne Feinstein (D-CA) inserted language into the annual Intelligence Authorization bill that would have forced social media companies like Twitter to act as de facto law enforcement agents and censors of the users of their service. The language in question read as follows:

SEC. 603. Requirement to report terrorist activities and the unlawful distribution of information relating to explosives.

(a) Duty To report.—Whoever, while engaged in providing an electronic communication service or a remote computing service to the public through a facility or means of interstate or foreign commerce, obtains actual knowledge of any terrorist activity, including the facts or circumstances described in subsection (c) shall, as soon as reasonably possible, provide to the appropriate authorities the facts or circumstances of the alleged terrorist activities.

(b) Attorney General determination.—The Attorney General shall determine the appropriate authorities under subsection (a).

(c) Facts or circumstances.—The facts or circumstances described in this subsection, include any facts or circumstances from which there is an apparent violation of section 842(p) of title 18, United States Code, that involves distribution of information relating to explosives, destructive devices, and weapons of mass destruction.

(d) Protection of privacy.—Nothing in this section may be construed to require an electronic communication service provider or a remote computing service provider—

(1) to monitor any user, subscriber, or customer of that provider; or

(2) to monitor the content of any communication of any person described in paragraph (1).

In a social media context, what constitutes “terrorist activity”? And how would a social media company obtain “actual knowledge” of undefined “terrorist activity” absent active monitoring of all of its users?

Feinstein’s proposal was constitutionally dubious and wildly impractical. It also generated strong opposition from social media and tech companies, the privacy and civil liberties community, and some of her own Senate colleagues.

Big Governments Are Vastly More Dangerous to the Citizenry than Big Corporations

It’s hard to prove or disprove statements of broad social sweep, but we do know one thing: Nassim Nicholas Taleb will not defend his assertion that big corporations are “vastly more dangerous” than big governments.

With notable frequency, people assume that I’m a reader of Taleb’s books. Evidently my thinking and his align in important ways. It’s made me mildly interested in reading him, though time constraints (or time mismanagement) have not yet allowed it.

My minor affinity with Taleb caused me to focus just a little more than I otherwise would have on a tweet of his the other day.

That premise really caught my eye. What is the relative danger posed by governments and corporations? Are corporations “vastly more dangerous”?

I’d thought that the jury was pretty much in on that question. With hundreds of millions killed outright by government action in the 20th century alone, the quantum of death and destruction wrought by governments is almost certainly greater than corporations’ destructive work.

Like any tool, corporations are dangerous. Death and injury are byproducts of their delivery of food, shelter, transportation, entertainment, and every other want and need of consumers, because they often miscalculate risk or just make stupid mistakes.

#CatoSOTU: A Libertarian Take on the State of the Union Address

On Tuesday night, President Obama delivered his sixth annual State of the Union address. Cato scholars took to Twitter to live-tweet not only the President’s address, but also the Republican and Tea Party responses—delivered by Sen. Joni Ernst and Rep. Curt Clawson respectively—focusing, as always, on what the policies being discussed would mean for the future of liberty. 

Many on Twitter joined the discussion, which was billed as a chance to ask experts what to expect from the policy world in 2015; the hashtag #CatoSOTU has been used over 4,400 times since Tuesday, a number which will likely continue to grow as Cato scholars and members of the public continue the online conversation.

Over the years, the State of the Union has become an annual spectacle much larger than the founding fathers would ever have expected, and Cato scholars were quick to put it in context.

TONIGHT: Cato Scholars Live-Tweet the State of the Union

#CatoSOTU

Tonight at 9 p.m. EST, President Obama will lay out his plans for the upcoming year in his sixth annual State of the Union (SOTU) address. What will the President’s words mean for liberty? 

Find out tonight: Cato scholars will be live-tweeting their reactions to what the president says—and what he leaves out. Following the President’s address, stay tuned for commentary on the Republican and Tea Party responses. Featured scholars will include everyone from David Boaz to Mark Calabria, Walter Olson to Alex Nowrasteh….and many, many more.

This is your chance to ask the experts what to expect from the policy world in 2015—and to add your two cents to the discussion. Follow @CatoInstitute on Twitter and join the conversation using #CatoSOTU.

Transparency Is Breaking Out All Over!

On Monday, Cato is hosting a briefing on Capitol Hill about congressional Wikipedia editing. Over a recent 90-day period, there were over 400,000 hits on Wikipedia articles about bills pending in Congress. If congressional staff were to contribute more to those articles, the amount of information available to interested members of the public would soar. Data that we produce at Cato go into the “infoboxes” on dozens and dozens of Wikipedia articles about bills in Congress.

A popular Twitter ‘bot called @congressedits recently created a spike in interest about congressional Wikipedia editing. It puts a slight negative spin on the practice because it tracks anonymous edits coming from Hill IP addresses, which are more likely to be inappropriate. But Congress can do a lot of good in this area, so Cato intern Zach Williams built a Twitter ‘bot that shows all edits to articles about pending federal legislation. This should draw attention to the beneficial practice of informing the public before bills become law. Meet @Wikibills!
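The filtering idea behind bots like @congressedits can be illustrated with a short sketch: given a feed of edit records, keep only the anonymous edits whose source IP falls within watched address ranges. The address ranges and edit records below are hypothetical placeholders, not the actual congressional IP allocations, and the record format is invented for illustration rather than taken from any real Wikipedia feed.

```python
import ipaddress

# Hypothetical address ranges for illustration only; a real bot would use
# the published congressional IP allocations.
HILL_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_hill_edit(edit):
    """Return True if an anonymous edit's source IP falls in a watched range."""
    try:
        ip = ipaddress.ip_address(edit["ip"])
    except (KeyError, ValueError):
        # Edits by registered accounts carry no IP address in the feed.
        return False
    return any(ip in net for net in HILL_RANGES)

# Example edit records, as a bot might receive from a recent-changes feed.
edits = [
    {"title": "Example Bill (113th Congress)", "ip": "203.0.113.45"},
    {"title": "Unrelated article", "ip": "192.0.2.9"},
    {"title": "Another bill", "user": "RegisteredEditor"},
]

flagged = [e["title"] for e in edits if is_hill_edit(e)]
print(flagged)  # ['Example Bill (113th Congress)']
```

A bot tracking all edits to legislation articles, as @Wikibills does, would simply filter on article title rather than on source IP.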

Also, as of this week, Cato data are helping to inform some 26 million visitors per year to Cornell Law’s Legal Information Institute about what Congress is doing. Thanks to Tom Bruce and Sara Frug for adding some great content to the LII site.

Let’s say you’re interested in 18 U.S. Code § 2516, the part of the U.S. code that authorizes interception of wire, oral, or electronic communications. Searching for it online, you’ll probably reach the Cornell page for that section of the code. In the right column, a box displays “Related bills now in Congress,” linking to relevant bills in Congress.

Those hyperlinks are democratic links, letting people know what Congress is doing so they can look into it and have their say. Does liberty automatically break out thanks to these developments? No. But public demands of all types—including for liberty and limited government—are frustrated now by the utter obscurity in which Congress acts. We’re lifting the curtain, providing data that translate into a better-informed public, a public better equipped to get what it wants.

The path to liberty goes through transparency, and transparency is breaking out all over!

#Escape2010

In response to my “Twitter fight!” blog post from Wednesday, Harvard Law Professor Lawrence Lessig charges me (in a post entitled “#Escapethe1990s”) with living in the campaign finance debates of the 1990s. There’s a better knock on me: I live in the 1790s, when the Bill of Rights was adopted, like some kinda freak!

Lessig really wants me to rely on modern Supreme Court precedents to argue that public funding of electioneering is unconstitutional: “And I challenge Harper to offer one bit of actual authority to counter that statement beyond his ‘this is the way I wish the Constitution were interpreted’ mode of argument,” he says, in “I-really-mean-it” bold.

I’ve had similar challenges to my starry-eyed and—I’ll confess—ideologically driven view of the Constitution. (I’m biased in favor of liberty.) For about a year, supporters of NSA spying bandied about Smith v. Maryland as “Supreme Court law,” saying that a person has no Fourth Amendment interest in phone calling data—until Judge Leon undercut them. Needless to say, the Court got its rationale wrong in Smith. Applying Smith to NSA spying is wrong. To the extent precedents might allow public funding of electioneering, they are wrong, too.

Professor Lessig devotes a good deal of time to the compromises he and others have made with conservative opponents since the ’90s. Perhaps because I’m not a conservative, but a libertarian, I don’t feel as though I owe it to them to come their way. To Lessig’s credit, he is not doubling down on a bad idea, as others are, by seeking a constitutional amendment to allow government regulation of political speech. (The bill at the link was introduced Tuesday.)

What is most interesting is his utter certainty that an intricate scheme to mask government subsidy for political speech is good enough to slide over the First Amendment’s bar on “abridging the freedom of speech.” I thought I did a pretty good job on the subsidy question the first time, but I’ll do it again: Under Lessig’s plan, if you give money to a politician, you pay less in taxes. If you don’t give money to a politician, you pay more in taxes. Government tax policy would funnel money to politicians for their campaigns. That’s subsidy.
