July 23, 2019 12:22PM

False Assumptions Behind the Current Drive to Regulate Social Media

In the early days of the Internet, parents worried about their children’s engagement on unfamiliar platforms, citing concerns about pedophiles and hackers. Now, those same parents have Facebook accounts and get their news from Twitter. However, one look at a newspaper shows op-eds aplenty castigating the platforms that host an ever-growing share of our social lives. Even after more than a decade of social media use, prominent politicians and individuals who lack critical awareness of the realities and limitations of social media platforms choose to scapegoat platforms—rather than people—for a litany of social problems. Hate speech on Facebook? Well, it’s obviously Facebook’s fault. Fake news? Obviously Twitter’s doing.

But, what if these political concerns are misplaced? In a new Cato Policy Analysis, Georgia Tech’s Milton Mueller argues that the moral panic attending social media is misplaced. Mueller contends that social media “makes human interactions hypertransparent,” rendering hitherto unseen social and commercial interactions visible and public. This newfound transparency “displace[s] the responsibility for societal acts from the perpetrators to the platform that makes them visible.” Individuals do wrong; platforms are condemned. This makes neither political nor moral sense. Social media platforms are blamed for a diverse array of social ills, ranging from hate speech and addiction to mob violence and terrorism. In the wake of the 2016 U.S. presidential election, foreign electoral interference and the spread of misinformation were also laid at their feet. However, these woes are not new, and the form and tenor of concerns about social media misuse increasingly resemble a classic moral panic. Instead of appreciating that social media has revealed misconduct previously ignored, tolerated, or swept under the rug, social media is too often understood as the root cause of these perennial problems.

People behaved immorally long before the advent of social media platforms and will continue to do so long after the current platforms are replaced by something else. Mueller argues that today’s misplaced moral panic is “based on a false premise and a false promise.” The false premise? Social media controls human behavior. The false promise? Imposing new rules on intermediaries will solve what are essentially human problems.

Mueller’s examination of Facebook’s role in hosting genocidal Burmese propaganda drives this point home. When Burmese authorities began using Facebook to encourage violence against the country’s Muslim Rohingya minority, Facebook was slow to act; it had few employees who could read Burmese, rendering the identification of offending messages difficult. However, Facebook has since been blamed for massacres of Rohingya carried out by Myanmar’s military and Buddhist extremists. While Facebook provided a forum for this propaganda, it cannot be seen as having caused violence that was prompted and supported by state authorities. We should be glad that Facebook could subsequently prevent the use of its platform by Myanmar’s generals, but we cannot expect Facebook to singlehandedly stop a sovereign state from pursuing a policy of mass murder. Myanmar’s government, not Facebook, is responsible for its messaging and the conduct of its armed forces.

Mueller shows that technologies enhancing transparency get blamed for the problems they reveal. The printing press, radio, television, and even inexpensive comic books were all followed by moral panics and calls for regulation. The ensuing regulation caused unforeseen harm. Mueller finds that “the federal takeover of the airwaves led to a systemic exclusion of diverse voices,” while recent German social media regulation “immediately resulted in suppression of various forms of politically controversial online speech.” Acting on the false premise that social media is responsible for grievances expressed through it, regulation intended to stamp out hate merely addresses its visible symptoms.

Contra these traditional, counterproductive responses, Mueller advocates greater personal responsibility; if we do not like what we see on social media we should remember that it is the speech of our fellow users, not that of the platforms themselves. He also urges resistance to government attempts to regulate social media, either by directly regulating speech, or by altering intermediary liability regimes to encourage more restrictive private governance. Proceeding from the false premise that a “broken” social media is responsible for the ills it reveals, regulation will simply suppress speech. Little will be gained, and much may be lost.

June 17, 2019 11:40AM

Will “Internet Addiction” Be Our Next “Crisis”?

A report on National Public Radio’s Morning Edition today discusses growing concerns about “internet addiction,” especially among adolescents. The reporter mentions that “internet addiction,” sometimes called “social media addiction,” is not recognized as a mental health disorder in the US, where the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) published by the American Psychiatric Association categorizes it as a “condition for further study.” This is not insignificant in light of the strong economic incentives for the psychiatric profession to medicalize behavioral problems.

The World Health Organization recently recognized “gaming disorder” as an addictive behavior disorder in the eleventh revision of the International Classification of Diseases (ICD-11), and China, South Korea, Japan, and other countries now consider “internet addiction” a mental health disorder.

The NPR report interviews a psychiatrist who believes that internet addiction is indeed a mental health disorder and laments the paucity of programs available to treat afflicted adolescents. Because internet addiction is not recognized as a disease in the US, treatment is not usually covered by health insurance. The psychiatrist tells the reporter that some clinicians, in order to get insurance to pay for treatment, creatively assign as a diagnosis one of the psychiatric co-morbidities that accompany almost all of their patients with internet addiction. The fact that almost all cases come with attached co-morbidities creates a “chicken or egg” situation, and it is one of the reasons many researchers are reluctant to conclude that internet addiction is a distinct disorder.

Therapists usually employ treatment techniques similar to those used for other addictive disorders. The report highlights a 12-step program (similar to Alcoholics Anonymous) recently begun in Minnesota, whose director encourages the recognition of internet or social media addiction as a disease, in order to promote the proliferation across the country of affordable rehab and other treatment programs that would be covered by health insurance.

I have written about the dangerous tendency to medicalize behavioral patterns, so-called “social media addiction” in particular, and to overuse the label of addiction. This report from a respected media source can be expected to fuel more animated discussions about internet or social media addiction in the public square.

As I pointed out in a recent article at Reason.com, a meticulous examination of the evidence is crucial before concluding internet/social media addiction is an actual disorder. Such a determination may not just impact the fiscal stability of the health care system but, more importantly, may pose a potential threat to freedom of speech.

May 15, 2019 10:40AM

The Federal Election Commission Is Bad Enough

Chris Hughes, a founder of Facebook, has proposed that Congress create a new agency to “create guidelines for acceptable speech on social media.”

As Hughes notes, this proposal “may seem un-American.” That’s because it is. At the very least, Hughes’ plan contravenes the past fifty years of American constitutional jurisprudence, and the deeply held values that undergird it. Let’s examine his case for such a momentous change.

He notes that the First Amendment does not protect all speech. Child pornography and stock fraud are not protected. True threats and harassment are also illegal. Incitement to violence, as understood by the courts, can also be criminalized. All true, though more complex than he admits.

The fact that the courts have exempted some speech from First Amendment protection does not mean judges should create new categories of unprotected speech. Hughes needs to make a case for new exemptions from the First Amendment. He does not do so. Instead he calls for an agency to regulate online speech. But, barring drastic changes to First Amendment jurisprudence, his imagined agency would not have the authority to broadly regulate Americans’ speech online.

However, Hughes’ old firm, Facebook, can and does regulate speech that is protected from government restrictions. In particular, Facebook suppresses or marginalizes extreme speech (sometimes called “hate speech”), “fake news” about political questions, and the speech of foreign nationals.

Facebook is not covered by the First Amendment. You can support or decry their decisions about such speech, but it would be “un-American” to say Facebook and other private companies do not have the power to regulate such speech on their platforms. And I might add that you can exit Facebook and speak in other forums, online and off. A federal speech agency would not be so easily avoided.

Hughes may think that Facebook is doing a poor job of regulation and that its efforts require the help of a new government agency (which would be subject to the First Amendment). But if we ended up with government speech codes enforced by private companies, the courts might well swing into action on the side of free speech. In that sense, the new agency would actually vitiate private efforts at content moderation. We might well end up with more of the online speech Hughes and other critics want to restrict.

In sum, Hughes’ agency is a bad idea in itself. It is unlikely to accomplish his goals. The agency might even weaken private efforts to limit some extreme speech. Of course, if judicial doctrines changed to accommodate new speech restrictions imposed by this new agency, America really would change for the worse. It is encouraging, however, how little support and how much criticism Hughes’ proposal has received from his fellow Progressives. (See the critiques by Ari Roberts and Daphne Keller linked here.) Conservatives should feel free to chime in.

November 29, 2018 10:23AM

Keep Government Away From Twitter

Twitter recently re-activated Jesse Kelly’s account after telling him that he was permanently banned from the platform. The social media giant informed Kelly, a conservative commentator, that his account was permanently suspended “due to multiple or repeat violations of the Twitter rules.” Conservative pundits, journalists, and politicians criticized Twitter’s decision to ban Kelly, with some alleging that Kelly’s ban was the latest example of perceived anti-conservative bias in Silicon Valley. While some might be infuriated with what happened to Kelly’s Twitter account, we should be wary of calls for government regulation of social media and related investigations in the name of free speech or the First Amendment. Companies such as Twitter and Facebook will sometimes make content moderation decisions that seem hypocritical, inconsistent, and confusing. But private failure is better than government failure, not least because unlike government agencies, Twitter has to worry about competition and profits.

It’s not immediately clear why Twitter banned Kelly. A fleeting glance at Kelly’s Twitter feed reveals plenty of eye-roll-worthy content, including his calls for the peaceful breakup of the United States and his assertion that only an existential threat to the United States can save the country. His writings at the conservative website The Federalist include bizarre and unfounded declarations such as, “barring some unforeseen awakening, America is heading for an eventual socialist abyss.” In the same article he called for his readers to “Be the Lakota” after a brief discussion of how Sitting Bull and his warriors took scalps at the Battle of Little Bighorn. In another article Kelly argued that a belief in limited government is a necessary condition for being a patriot.

I must confess that I didn’t know Kelly existed until I learned the news of his Twitter ban, so it’s possible that those backing his ban might be able to point to other content that they consider more offensive than what I just highlighted. But, from what I can tell, Kelly’s content hardly qualifies as suspension-worthy.

Some opponents of Kelly’s ban (and indeed Kelly himself) were quick to point out that Nation of Islam leader Louis Farrakhan still has a Twitter account despite making anti-Semitic remarks. Richard Spencer, the white supremacist president of the innocuously named National Policy Institute who pondered taking my boss’ office, remains on Twitter, although his account is no longer verified.

All of the debates about social media content moderation have produced some strange proposals. Earlier this year I attended the Lincoln Network’s Reboot conference and heard Dr. Jerry A. Johnson, the President and Chief Executive Officer of the National Religious Broadcasters, propose that social media companies embrace the First Amendment as a standard. Needless to say, I was surprised to hear a conservative Christian urge private companies to embrace a content moderation standard that would require them to allow animal abuse videos, footage of beheadings, and pornography on their platforms. Facebook, Twitter, and other social media companies have sensible reasons for not using the First Amendment as their content moderation lodestar.

Rather than turning to First Amendment law for guidance, social media companies have developed their own standards for speech. These standards are enforced by human beings (and the algorithms human beings create) who make mistakes and can unintentionally or intentionally import their biases into content moderation decisions. Another Twitter controversy from earlier this year illustrates how difficult it can be to develop content moderation policies.

Shortly after Sen. John McCain’s death a Twitter user posted a tweet that included a doctored photo of Sen. McCain’s daughter, Meghan McCain, crying over her father’s casket. The tweet included the words “America, this ones (sic) for you” and the doctored photo, which showed a handgun being aimed at the grieving McCain. Meghan McCain’s husband, Federalist publisher Ben Domenech, criticized Twitter CEO Jack Dorsey for keeping the tweet on the platform. Twitter later took the offensive tweet down, and Dorsey apologized for not taking action sooner.

The tweet aimed at Meghan McCain clearly violated Twitter’s rules, which state: “You may not make specific threats of violence or wish for the serious physical harm, death, or disease of an individual or group of people.”

Twitter’s rules also prohibit hateful conduct or imagery, as outlined in its “Hateful Conduct Policy.” The policy seems clear enough, but a look at Kelly’s tweets reveals content that someone could interpret as hateful, even if some of the tweets are attempts at humor. Is portraying Confederate soldiers as “poor Southerners defending their land from an invading Northern army” hateful? What about a tweet bemoaning women’s right to vote? Or tweets that describe our ham-loving neighbors to the North as “garbage people” and violence as “underrated”? None of these tweets seem to violate Twitter’s current content policy, but someone could write a content policy that would prohibit such content.

Imagine that you are developing a content policy for a social media site, and your job is to decide whether content identical to the tweet targeting McCain and content identical to Kelly’s tweet concerning violence should be allowed or deleted. You have four policy options:

                          Delete Tweet Targeting McCain    Allow Tweet Targeting McCain
Delete Kelly's Tweet                    1                                 2
Allow Kelly's Tweet                     3                                 4

Many commentators seem to back option 3, believing that the tweet targeting McCain should’ve been deleted while Kelly’s tweet should be allowed. That’s a reasonable position. But it’s not hard to see how someone could conclude that options 1 and 4 are also acceptable. Of the four options only option 2, which would delete Kelly’s tweet while allowing the tweet targeting McCain, seems incoherent on its face.

Social media companies can come up with sensible-sounding policies, but there will always be tough calls. Having a policy that prohibits images of nude children sounds sensible, but there was an outcry after Facebook removed an Anne Frank Center article whose featured image was a photo of nude children who were victims of the Holocaust. Facebook didn’t disclose whether an algorithm or a human being had flagged the post for deletion.

In a similar case, Facebook initially defended its decision to remove Nick Ut’s Pulitzer Prize-winning photo “The Terror of War,” which shows a burned, naked nine-year-old Vietnamese girl fleeing the aftermath of a South Vietnamese napalm attack in 1972. Despite the photo’s fame and historical significance, Facebook told The Guardian, “While we recognize that this photo is iconic, it’s difficult to create a distinction between allowing a photograph of a nude child in one instance and not others.” Facebook eventually changed course and allowed users to post the photo, citing its historical significance:

Because of its status as an iconic image of historical importance, the value of permitting sharing outweighs the value of protecting the community by removal, so we have decided to reinstate the image on Facebook where we are aware it has been removed.

What about graphic images of contemporary and past battles? There is clear historic value to images from the American Civil War, the Second World War, and the Vietnam War, some of which include graphic violent content. A policy prohibiting graphic depictions of violence sounds sensible, but like a policy banning images of nude children, it will not eliminate difficult choices or the possibility that its enforcement will yield results many users find inconsistent and confusing.

Given that whoever develops content moderation policies will be forced to make tough choices, it’s far better to leave those choices in the hands of private actors rather than government regulators. Unlike the government, Twitter has a profit motive and faces competition. As such, it is subject to far more accountability than the government. We may not always like the decisions social media companies make, but private failure is better than government failure. An America where unnamed bureaucrats, not private employees, determine what can be posted on social media is one where free speech is stifled.

To be clear, calls for increased government intervention and regulation of social media platforms are a bipartisan phenomenon. Sen. Mark Warner (D-VA) has discussed a range of possible social media policies, including a crackdown on anonymous accounts and regulations modeled on the European so-called “right to be forgotten.” If such policies were implemented (the First Amendment issues notwithstanding), they would inevitably lead to valuable speech being stifled. Sen. Ron Wyden (D-OR) has said that he’s open to carve-outs of Section 230 of the Communications Decency Act, which protects online intermediaries such as Facebook and Twitter from liability for what users post on their platforms.

When it comes to possibly amending Section 230, Sen. Wyden has some Republican allies. Never mind that some of these Republicans don’t seem to fully understand the relevant parts of Section 230.

That social media giants are under attack from the left and the right is not an argument for government intervention. Calls for Section 230 amendments or “anti-censorship” legislation pose a serious risk to free speech. If Section 230 is amended to increase social media companies’ exposure to liability suits, we should expect these companies to suppress more speech. Twitter users may not always like what Twitter does, but calls for government intervention are not the remedy.

September 13, 2018 9:49AM

Addiction Abuse

Hardly a day goes by without a report in the press about some new addiction. There are warnings about addiction to coffee. Popular psychology publications talk of “extreme sports addiction.” Some news reports even alert us to the perils of chocolate addiction. One gets the impression that life is awash in threats of addiction. People tend to equate the word “addiction” with “abuse.” Ironically, “addiction” is a subject of abuse.

The American Society of Addiction Medicine defines addiction as a “chronic disease of brain reward, motivation, memory and related circuitry…characterized by the inability to consistently abstain, impairment in behavioral control, craving” that continues despite resulting destruction of relationships, economic conditions, and health. A major feature is compulsiveness. Addiction has a biopsychosocial basis with a genetic predisposition and involves neurotransmitters and interactions within reward centers of the brain. This compulsiveness is why alcoholics or other drug addicts will return to their substance of abuse even after they have been “detoxed” and despite the fact that they know it will further damage their lives.

Addiction is not the same as dependence. Yet politicians and many in the media use the two words interchangeably. Physical dependence represents an adaptation to the drug such that abrupt cessation or too-rapid tapering can precipitate a withdrawal syndrome, which in some cases can be life-threatening. Physical dependence is seen with many categories of drugs besides commonly abused ones. It is seen, for example, with many antidepressants, such as fluoxetine (Prozac) and sertraline (Zoloft), and with beta blockers like atenolol and propranolol, used to treat a variety of conditions including hypertension and migraines. Once a patient is properly tapered off the drug on which they have become physically dependent, they do not feel a craving or compulsion to return to it.

Some also confuse tolerance with addiction. Like dependence, tolerance is a form of physical adaptation. It refers to the decrease in one or more of a drug’s effects after repeated exposure, requiring increases in dose to achieve the same effect.

Science journalist Maia Szalavitz, writing in the Columbia Journalism Review, ably details how journalists perpetuate this lack of understanding and fuel misguided opioid policies.

Many in the media share responsibility for the mistaken belief that prescription opioids rapidly and readily addict patients—despite the fact that Drs. Nora Volkow and Thomas McLellan of the National Institute on Drug Abuse point out that addiction is very uncommon, “even among those with preexisting vulnerabilities.” Cochrane systematic reviews of chronic pain patients in 2010 and 2012 found addiction rates in the 1 percent range, and a report on over 568,000 patients in the Aetna database who were prescribed opioids for acute postoperative pain between 2008 and 2016 found a total “misuse” rate of 0.6 percent.

Equating dependence with addiction has led lawmakers to impose opioid prescription limits that are not evidence-based, and it is making patients suffer needlessly after being tapered too abruptly or cut off entirely from their pain medicine. Many, in desperation, seek relief in the black market, where they are exposed to heroin and fentanyl. Some resort to suicide. There have been enough reports of suicides that the US Senate is poised to vote on opioid legislation that “would require HHS and the Department of Justice to conduct a study on the effect that federal and state opioid prescribing limits have had on patients — and specifically whether such limits are associated with higher suicide rate.” And complaints about the lack of evidence behind present prescribing policy led Food and Drug Administration Commissioner Scott Gottlieb to announce plans last month for the FDA to develop its own set of evidence-based guidelines.

Now there is talk in media and political circles about the threats of “social media addiction.” But there is not enough evidence to conclude that spending extreme amounts of time on the internet and with social media is an addictive disorder. One of the leading researchers on the subject stresses that most reports on the phenomenon are anecdotal and peer-reviewed scientific research is scarce. A recent Pew study found the majority of social media users would not find it difficult to give it up. The American Psychiatric Association does not consider social media addiction or “internet addiction” a disorder and does not include it in its Diagnostic and Statistical Manual of Mental Disorders (DSM), considering it an area that requires further research.

This doesn’t stop pundits from warning us about the dangers of social media addiction. Some warnings might be politically motivated. Recent reports suggest Congress might soon get into the act. If that happens, it could threaten freedom of speech and freedom of the press. It could also generate billions of dollars in government spending on social media addiction treatment.

Before people see more of their rights infringed or are otherwise harmed by unintended consequences, it would do us all a great deal of good to be more accurate and precise in our terminology. It would also help if lawmakers learned more about the matters on which they create policy.

December 28, 2016 10:42AM

Thrown in Jail for Surfing the Web

Lester Packingham beat a parking ticket and celebrated on his Facebook page by proclaiming, “God is good! . . . Praise be to GOD, WOW! Thanks JESUS!” For this post, he was sentenced to prison—because he was a registered sex offender and a North Carolina statute bans such people from accessing a wide variety of websites. (Packingham took “indecent liberties with a minor” when he was 21, receiving a suspended sentence and probation, which he had completed.)

The law is meant to prevent communications between sex offenders and minors, but it sweeps so broadly that it conflicts with basic First Amendment principles. It doesn’t even require the state to prove that the accused had contact with (or gathered information about) a minor, or intended to do so, or accessed a website for any other illicit purpose.

After the state court of appeals overturned Packingham’s conviction—finding the criminal “access” provision unconstitutional—the North Carolina Supreme Court, over vigorous dissent, reversed and reinstated the conviction and sentence. The U.S. Supreme Court took the case and now Cato, joined by the ACLU, has filed an amicus brief supporting Packingham’s position.

The North Carolina law bans access not just to what people consider to be social-media sites, but also any sites that enable some form of connection between visitors, which would include YouTube, Wikipedia, and even the New York Times. The statute is also vague, in that it covers websites that “permit” minor children to create profiles or pages—and you can’t even find out what a website “permits” without first looking at its terms of service—itself a violation of the statute. Even if the site purports to stop minors from accessing its content, it’s impossible for someone to know whether and how that contractual provision is enforced in practice. Someone subject to this law literally can’t know what he can’t do or say; the police themselves aren’t sure!

The statute also fails constitutional scrutiny because it criminalizes speech based on the identity of the speaker. It’s well established that a state may not burden “a narrow class of disfavored speaker,” but that’s exactly what happens here. The very purpose of the First Amendment is to protect the speech of disfavored minorities—which sex offenders certainly are. Singling out this speech for prosecution—without any allegation that it relates to conduct or motive—should earn the Tar Heel State a big “dislike” from the Supreme Court.

The Court hears argument in Packingham v. North Carolina on February 27.

December 15, 2015 3:30PM

Secret Policy to Ignore Social Media? Not So Fast

There is a supposed “secret policy” that prevented consular and immigration officers from checking Tashfeen Malik’s social media accounts, where she wrote about jihad (possibly under a pseudonym or in personal messages). Had her statements been discovered, she would have been denied a visa, preventing the atrocity. The claim is getting a lot of attention on blogs, and Homeland Security Secretary Jeh Johnson has responded by saying that certain limits probably apply to personal messages, although his statement was unclear.

After following this controversy, I heard from six different immigration attorneys that there is no such secret policy and that their clients routinely have their social media accounts checked by immigration officials – or at least that they have heard of it happening.
