Tag: Facebook

The Federal Election Commission Is Bad Enough

Chris Hughes, a founder of Facebook, has proposed that Congress create a new agency to “create guidelines for acceptable speech on social media.”

As Hughes notes, this proposal “may seem un-American.” That’s because it is. At the very least, Hughes’ plan contravenes the past fifty years of American constitutional jurisprudence, and the deeply held values that undergird it. Let’s examine his case for such a momentous change.

He notes that the First Amendment does not protect all speech. Child pornography and stock fraud are not protected. True threats and harassment are also illegal. Incitement to violence, as understood by the courts, can also be criminalized. All true, though more complex than he admits.

The fact that the courts have exempted some speech from First Amendment protection does not mean judges should create new categories of unprotected speech. Hughes needs to make a case for new exemptions from the First Amendment. He does not do so. Instead he calls for an agency to regulate online speech. But, barring drastic changes to First Amendment jurisprudence, his imagined agency would not have the authority to broadly regulate Americans’ speech online.

However, Hughes’ old firm, Facebook, can and does regulate speech that is protected from government restrictions. In particular, Facebook suppresses or marginalizes extreme speech (sometimes called “hate speech”), “fake news” about political questions, and the speech of foreign nationals.

Facebook is not bound by the First Amendment, which restrains only the government. You can support or decry its decisions about such speech, but it would be “un-American” to say that Facebook and other private companies do not have the power to regulate such speech on their platforms. And I might add that you can exit Facebook and speak in other forums, online and off. A federal speech agency would not be so easily avoided.

Hughes may think that Facebook is doing a poor job of regulation and that its efforts require the help of a new government agency (which would be subject to the First Amendment). But if we ended up with government speech codes enforced by private companies, the courts might well swing into action on the side of free speech. In that sense, the new agency would actually vitiate private efforts at content moderation. We might well end up with more of the online speech Hughes and other critics want to restrict.

In sum, Hughes’ agency is a bad idea in itself. It is unlikely to accomplish his goals. The agency might even weaken private efforts to limit some extreme speech. Of course, if judicial doctrines changed to accommodate new speech restrictions imposed by this new agency, America really would change for the worse. It is encouraging, however, how little support and how much criticism Hughes’ proposal has received from his fellow Progressives. (See the critiques by Ari Roberts and Daphne Keller linked here.) Conservatives should feel free to chime in.

Roger McNamee’s Facebook Critique

In a recent Time magazine article, Roger McNamee offers an agitated criticism of Facebook, adapted from his book Zucked: Waking Up to the Facebook Catastrophe. Facebook “has a huge impact on politics and social welfare,” he claims, and “has done things that are truly horrible.” Facebook, he says, is “terrible for America.”

McNamee suggests his “history with the company made me a credible voice.” From 2005 to 2015, McNamee was one of a half dozen managing directors of Elevation Partners, a $1.9 billion private equity firm that bought and sold shares in eight companies, including such oldies as Forbes and Palm. U2 singer Bono was a co-founder. Other partners included two former executives from Apple and one from Yahoo. Another is married to the sister of Facebook’s COO. Such investors are not necessarily disinterested observers, much less policy experts.

Between November 2009 and June 2010, Elevation Partners invested $210 million for 1% of Facebook. That was early, though it came two years after Microsoft made a larger investment. Back then, McNamee and other investors had face time with Zuckerberg.

McNamee supposedly became alarmed while perusing “Bay Area for Bernie” on Facebook and finding suspicious memes critical of Hillary. Later, he imagined the Brexit vote must have been due to misleading Facebook posts (as if British tabloids and TV were silent). “Brexit happens in June,” he says, “and then I think, Oh my god, what if it’s possible that in a campaign setting, the candidate that has the more inflammatory message gets a structural advantage from Facebook? And then in August, we hear about Manafort, so we need to introduce the Russians into the equation.”

He suggests that goofy Facebook ads by Russian trolls stole the U.S. election from Clinton. Actually, the Mueller indictment said the Internet Research Agency “allegedly used social media and other internet platforms to address a wide variety of topics to inflame political debates, frequently taking both sides of divisive issues.” Such political trolling for fun and profit (clicks generate advertising money) is commonplace in Russia, and also at home in the USA.

What’s Missing from Facebook’s Oversight Board

Facebook has set out a draft charter for an “Oversight Board for Content Decisions.” This document represents the first concrete step toward the “Supreme Court” for content moderation suggested by Mark Zuckerberg. The draft charter outlines the board itself and poses several related questions for interested parties. I will offer thoughts on those questions in upcoming blog posts. I begin here not with a question posed by Facebook, but with two values I think get too little attention in the charter: legitimacy and deliberation.

The draft charter mentions “legitimacy” once: “The public legitimacy of the board will grow from the transparent, independent decisions that the board makes.” Legitimacy is commonly defined as conforming to law or existing rules (see, for example, the American Heritage Dictionary). But Facebook is clearly thinking more broadly, and they are wise to do so. Those who remove content from Facebook (and the board that judges the propriety of those removals) have considerable power. The authors of banned content acquire at least a certain stigma and may incur broader social censure. Facebook has every legal right to remove the content, but they also need public acceptance of this power to impose costs on others. Absent that acceptance, the oversight board might become just another site of irreconcilable political differences, or, worse, “the removed” will call in the government to make things right. The oversight board should achieve many goals, but its architects might think first about its legitimacy.

The term “deliberation” also gets one mention in the draft charter: “Members will commit themselves not to reveal private deliberations except as expressed in official board explanations and decisions.” So there will be deliberations, and they will not be public (more on this in later posts about transparency). The case for deliberation is strengthened by considering its absence.

The draft could have said “members will commit themselves not to reveal private voting….” In a pure stakeholder model of the board, members would accurately represent the Facebook community (that is, they would be diverse). Members would consider the case before them and vote to advance the interests of those they represent. No deliberation would be necessary, though talk among members might be permitted. And, of course, such voting could be both transparent and independent. But the decision would be a mere weighing of interests rather than a consideration of reasons.

Why would those disappointed by the decision nonetheless consider it legitimate? Facebook could say to the disappointed: The board has final say on appeals of content moderation (after all, it’s in the terms of service you signed), and this is their decision. Logically that deduction might do the trick, but I think a somewhat different process might increase the legitimacy of the content moderation in the eyes of the disappointed. 

Consider a deliberative model for the board. A subset of the board meets and discusses the case before them. Arguments are offered, values probed, and conclusions reached. The votes on the case would then be informed by the prior deliberation. Members would represent the larger community in its many facets, but the path from representation to voting would include a collective giving and taking of reasons. That difference, I think, makes the deliberative model more likely to gain legitimacy. Simply losing a vote can seem like an expression of power. Losing an argument is more acceptable, and later the argument might be renewed with a different outcome.

The importance of deliberation implicates other values in the charter, especially independence. The draft places great weight on the independence of the board from Facebook. That emphasis is understandable. Critics have said Facebook will turn a blind eye to dangerous speech because it attracts attention and thereby advertising dollars (Mark Zuckerberg has rebutted this criticism). The emphasis on independence from the business contains a truth: a board dedicated to maximizing Facebook’s quarterly returns might have a hard time gaining legitimacy. But the board’s deliberations should not be completely independent of Facebook. Facebook needs to make money to exist. Doing great harm to Facebook as a business cannot be part of the board’s remit.

Here, as so often in life, James Madison has something valuable to add. In Federalist 10, Madison argues that political institutions should be designed to protect the rights of citizens and to advance “the permanent and aggregate interests of the community.” Facebook is a community. The Community Standards (and the board’s interpretation of them) should serve the permanent and aggregate interests of that community. The prosperity of the company (though perhaps not at every margin) is surely in the interest of the community. The interests represented on the board are a starting point for understanding the interests of that community, but in themselves they are not enough. Deliberation might be the bridge between those interests and the “permanent and aggregate interests of the community.” Looked at that way, most users would have a reason to believe in the legitimacy of a deliberative board as opposed to a board of stakeholders.

Facebook’s draft charter evinces hard work and thought. But it could benefit from more focus on the conditions for the legitimacy of the oversight board. Deliberation (rather than simple interest representation) is part of the answer to the legitimacy question. As deliberations go forward, perhaps the charter’s framers might give more attention to how institutional design can foster deliberation.

Keep Government Away From Twitter

Twitter recently re-activated Jesse Kelly’s account after telling him that he was permanently banned from the platform. The social media giant had informed Kelly, a conservative commentator, that his account was permanently suspended “due to multiple or repeat violations of the Twitter rules.” Conservative pundits, journalists, and politicians criticized Twitter’s decision to ban Kelly, with some alleging that the ban was the latest example of anti-conservative bias in Silicon Valley. While some might be infuriated by what happened to Kelly’s Twitter account, we should be wary of calls for government regulation of social media, and for related investigations, in the name of free speech or the First Amendment. Companies such as Twitter and Facebook will sometimes make content moderation decisions that seem hypocritical, inconsistent, and confusing. But private failure is better than government failure, not least because, unlike government agencies, Twitter has to worry about competition and profits.

It’s not immediately clear why Twitter banned Kelly. A fleeting glance at Kelly’s Twitter feed reveals plenty of eye-roll-worthy content, including his calls for the peaceful breakup of the United States and his assertion that only an existential threat to the United States can save the country. His writings at the conservative website The Federalist include bizarre and unfounded declarations such as, “barring some unforeseen awakening, America is heading for an eventual socialist abyss.” In the same article he called on his readers to “Be the Lakota” after a brief discussion of how Sitting Bull and his warriors took scalps at the Battle of Little Bighorn. In another article Kelly argued that a belief in limited government is a necessary condition for being a patriot.

I must confess that I didn’t know Kelly existed until I learned of his Twitter ban, so it’s possible that those backing his ban might be able to point to other content that they consider more offensive than what I just highlighted. But, from what I can tell, Kelly’s content hardly qualifies as suspension-worthy.

Some opponents of Kelly’s ban (and indeed Kelly himself) were quick to point out that Nation of Islam leader Louis Farrakhan still has a Twitter account despite making anti-Semitic remarks. Richard Spencer, the white supremacist president of the innocuously named National Policy Institute who pondered taking my boss’ office, remains on Twitter, although his account is no longer verified.

All of the debates about social media content moderation have produced some strange proposals. Earlier this year I attended the Lincoln Network’s Reboot conference and heard Dr. Jerry A. Johnson, the President and Chief Executive Officer of the National Religious Broadcasters, propose that social media companies embrace the First Amendment as a standard. Needless to say, I was surprised to hear a conservative Christian urge private companies to embrace a content moderation standard that would require them to allow animal abuse videos, footage of beheadings, and pornography on their platforms. Facebook, Twitter, and other social media companies have sensible reasons for not using the First Amendment as their content moderation lodestar.

Rather than turning to First Amendment law for guidance, social media companies have developed their own standards for speech. These standards are enforced by human beings (and the algorithms human beings create) who make mistakes and can unintentionally or intentionally import their biases into content moderation decisions. Another Twitter controversy from earlier this year illustrates how difficult it can be to develop content moderation policies.

Shortly after Sen. John McCain’s death, a Twitter user posted a tweet that included a doctored photo of Sen. McCain’s daughter, Meghan McCain, crying over her father’s casket. The tweet included the words “America, this ones (sic) for you” and the doctored photo, which showed a handgun being aimed at the grieving McCain. Meghan McCain’s husband, Federalist publisher Ben Domenech, criticized Twitter CEO Jack Dorsey for keeping the tweet on the platform. Twitter later took the offensive tweet down, and Dorsey apologized for not acting sooner.

The tweet aimed at Meghan McCain clearly violated Twitter’s rules, which state: “You may not make specific threats of violence or wish for the serious physical harm, death, or disease of an individual or group of people.”

Twitter’s rules also prohibit hateful conduct or imagery, as outlined in its “Hateful Conduct Policy.” The policy seems clear enough, but a look at Kelly’s tweets reveals content that someone could interpret as hateful, even if some of the tweets are attempts at humor. Is portraying Confederate soldiers as “poor Southerners defending their land from an invading Northern army” hateful? What about a tweet bemoaning women’s right to vote? Or tweets that describe our ham-loving neighbors to the North as “garbage people” and violence as “underrated”? None of these tweets seem to violate Twitter’s current content policy, but someone could write a content policy that would prohibit such content.

Imagine that you are developing a content policy for a social media site and must decide whether content identical to the tweet targeting McCain and content identical to Kelly’s tweet concerning violence should be allowed or deleted. You have four policy options:

     
                         Delete Tweet Targeting McCain   Allow Tweet Targeting McCain
Delete Kelly’s Tweet                Option 1                       Option 2
Allow Kelly’s Tweet                 Option 3                       Option 4

Many commentators seem to back option 3, believing that the tweet targeting McCain should’ve been deleted while Kelly’s tweet should be allowed. That’s a reasonable position. But it’s not hard to see how someone could conclude that options 1 and 4 are also acceptable. Of the four, only option 2, which would delete Kelly’s tweet while allowing the tweet targeting McCain, seems incoherent on its face.

Social media companies can come up with sensible-sounding policies, but there will always be tough calls. Having a policy that prohibits images of nude children sounds sensible, but there was an outcry after Facebook removed an Anne Frank Center article whose featured image was a photo of nude children who were victims of the Holocaust. Facebook didn’t disclose whether an algorithm or a human being had flagged the post for deletion.

In a similar case, Facebook initially defended its decision to remove Nick Ut’s Pulitzer Prize-winning photo “The Terror of War,” which shows a burned, naked nine-year-old Vietnamese girl fleeing the aftermath of a South Vietnamese napalm attack in 1972. Despite the photo’s fame and historical significance, Facebook told The Guardian, “While we recognize that this photo is iconic, it’s difficult to create a distinction between allowing a photograph of a nude child in one instance and not others.” Facebook eventually changed course and allowed users to post the photo, citing its historical significance:

Because of its status as an iconic image of historical importance, the value of permitting sharing outweighs the value of protecting the community by removal, so we have decided to reinstate the image on Facebook where we are aware it has been removed.

What about graphic images of contemporary and past battles? There is clear historic value in images from the American Civil War, the Second World War, and the Vietnam War, some of which include graphic violent content. A policy prohibiting graphic depictions of violence sounds sensible, but, like a policy banning images of nude children, it will not eliminate difficult choices or the possibility that the policy will yield results many users find inconsistent and confusing.

Given that whoever develops content moderation policies will have to make tough choices, it’s far better to leave those choices in the hands of private actors than government regulators. Unlike the government, Twitter has a profit motive and competition. As such, it is subject to far more accountability than the government. We may not always like the decisions social media companies make, but private failure is better than government failure. An America where unnamed bureaucrats, not private employees, determine what can be posted on social media is one where free speech is stifled.

To be clear, calls for increased government intervention and regulation of social media platforms are a bipartisan phenomenon. Sen. Mark Warner (D-VA) has discussed a range of possible social media policies, including a crackdown on anonymous accounts and regulations modeled on the European so-called “right to be forgotten.” If such policies were implemented (the First Amendment issues notwithstanding), they would inevitably lead to valuable speech being stifled. Sen. Ron Wyden (D-OR) has said that he’s open to carve-outs of Section 230 of the Communications Decency Act, which protects online intermediaries such as Facebook and Twitter from liability for what users post on their platforms.

When it comes to amending Section 230, Sen. Wyden has some Republican allies. Never mind that some of these Republicans don’t seem to fully understand the relevant parts of Section 230.

That social media giants are under attack from both the left and the right is not an argument for government intervention. Calls to amend Section 230 or to pass “anti-censorship” legislation pose a serious risk to free speech. If Section 230 were amended to increase social media companies’ exposure to liability suits, we should expect these companies to suppress more speech. Twitter users may not always like what Twitter does, but government intervention is not the remedy.

Will Regulations Create Big Marijuana?

I wrote last month that new regulations and taxes in California’s legalized marijuana regime are likely to result in a situation in which

a few people are going to get rich in the California marijuana industry, and fewer small growers are going to earn a modest but comfortable income. Just one of the many ways that regulation contributes to inequality.

Now the East Bay Express in Oakland offers a further look at the problem:

Ask the people who grow, manufacture, and sell cannabis about the end of prohibition and you’ll hear two stories. One is that legalization is ushering a multibillion-dollar industry into the light. Opportunities are boundless and green-friendly cities like Oakland are going to benefit enormously. There will be thousands of new jobs, millions in new tax revenue, and a drop in crime and incarceration.

But increasingly you’ll hear another story. The state of California and the city of Oakland blew it. The new state and city cannabis regulations are too complicated, permits are too difficult and time consuming to obtain, taxes are too high, and commercial real estate is scarce and expensive. As a result, many longtime cannabis entrepreneurs are either giving up or they’re burrowing back into the underground economy, out of the taxman’s reach, and unfortunately, further away from the social benefits legal pot was supposed to deliver….

Some longtime farmers, daunted by the regulated market’s heavy expenses, taxes, and low-profit predictions, have shrugged and gone back to the black market where they can continue to grow as they always have: illegally but free of hassle from the state’s new pot bureaucrats armed with pocket protectors and clipboards.

Not all the complaints in the two-part investigation are about taxes and overregulation. Some, especially in part 1, are about “loopholes” in the regulations that allow large corporations to get into the marijuana business and about “dramatic changes to Humboldt County’s cannabis culture, which had an almost pagan worship of a plant that created an alternative lifestyle in the misty hills north of the ‘Redwood Curtain.’”

But there’s plenty of evidence that regulations are more burdensome on newer and smaller companies than on large, established companies. Indeed, regulatory processes are often “captured” by the affected interest groups. The Wall Street Journal confirmed this just yesterday, reporting that “some of the restrictions [in Europe’s GDPR online privacy regulations] are having an unintended consequence: reinforcing the duopoly of Facebook Inc. and Alphabet Inc.’s Google.”

The Foot in the Door on Internet Speech Regulation

Campaign finance has captured Congress’s attention once again, which rarely bodes well for democracy. Senators Amy Klobuchar, Mark Warner, and (of course) John McCain have introduced the Honest Ads Act. The bill requires “those who purchase and publish [online political advertisements] to disclose information about the advertisements to the public…”

Specifically, the bill requires those who paid for an online ad to disclose their name and additional information in the ad itself or in another easily accessible fashion. It takes several pages to specify exactly how these disclosures should look or sound. The bill also requires those who purchase $500 or more of ads to disclose substantial information about themselves; what must be disclosed takes up a page and a half of the bill.

The Federal Election Commission makes disclosed campaign contributions public. With this bill, large Internet companies (that is, platforms with 50 million unique monthly visitors from the United States) are given that task. They are supposed to maintain records about ads that concern “any political matter of national importance.” This category goes well beyond speech seeking to elect or defeat a candidate for office.

Why does the nation need this new law? The bill discusses Russian efforts to affect the 2016 election. It mentions the $100,000 spent by “Russian entities” to purchase 3,000 ads. The bill does not mention that Mark Penn, a former campaign advisor to Bill and Hillary Clinton, has estimated that only $6,500 of the $100,000 actually sought to elect or defeat a candidate for office. It also omits Penn’s sense of perspective:

Hillary Clinton’s total campaign budget, including associated committees, was $1.4 billion. Mr. Trump and his allies had about $1 billion. Even a full $100,000 of Russian ads would have erased just 0.025% of Hillary’s financial advantage. In the last week of the campaign alone, Mrs. Clinton’s super PAC dumped $6 million in ads into Florida, Pennsylvania and Wisconsin.
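Penn’s arithmetic is worth spelling out: $1.4 billion minus roughly $1 billion leaves Clinton a financial advantage of about $400 million, and $100,000 ÷ $400,000,000 = 0.00025, or 0.025%. Even the full Russian ad buy, in other words, amounts to a rounding error in presidential campaign finance.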

California’s ObamaCare Exchange Costs 56 Times More to Launch than Facebook

Robert Laszewski notes that launching California’s ObamaCare “Exchange” is so far costing taxpayers 56 times as much as it cost to launch Facebook, while its marketing budget is 8 times what Sen. Barbara Boxer (D-CA) spent on her reelection bid (adjusted for inflation):

So far California has received $910 million in federal grants to launch its new health insurance exchange under the Affordable Care Act (“Obamacare”).

The California exchange, “Covered California,” has so far awarded a $183 million contract to Accenture to build the website, enrollment, and eligibility system and another $174 million to operate the exchange for four years.

The state will also spend $250 million on a two-year marketing campaign. By comparison California Senator Barbara Boxer spent $28 million on her 2010 statewide reelection campaign while her challenger spent another $22 million…

Privately funded Esurance began its multi-product national web business in 1998 with an initial $5.5 million round of venture fund investment in 1999 and a second round of $34 million a few months later.

The start-up experience of other major web companies is also instructive. Facebook received $13.7 million to launch in 2005. eBay was founded in 1995 and received its first venture money––$6.7 million––in 1997.

Even doubling these investments for inflation still leaves quite a gap.

The California Exchange officials also say they need 20,000 part-time enrollers to get everybody signed up––paying them $58 for each application. Having that many people out in the market creates quality control issues, particularly when these people will be handling personal information like address, birth date, and Social Security number. California Blue Shield, by comparison, has 5,000 employees serving 3.5 million members.

New York is off to a similar start. New York has received two grants totaling $340 million––again, just to set up an enrollment and eligibility process.

I thought it was notable that the Obama Administration has issued grants totaling $174 million to a non-profit group––Freelancers––for the purpose of setting up a new full service health plan in New York under the Affordable Care Act’s health insurance co-op program.

So, the Obama administration thinks it costs $174 million to set up a full service health insurance company in New York (including the significant cost of premium reserves) compared to $340 million to set up just a statewide insurance exchange to do eligibility and enrollment?

As many as 17 states are going to be setting up their own health insurance exchanges under the new law and the feds have so far released $3.4 billion to the states to build them. Little Vermont has received $124 million so far, Kentucky $253 million, and Oregon $242 million, for example. I wonder what the per-person cost of exchange enrollment in Vermont will be?

Read the whole thing.
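For readers who want to check the headline arithmetic against Laszewski’s figures: Facebook’s $13.7 million launch in 2005 comes to roughly $16 million in 2013 dollars, and $910 million ÷ $16.3 million ≈ 56. The marketing comparison works the same way: the $250 million budget is roughly 8 times Boxer’s $28 million 2010 campaign (about $30 million after inflation). Even simply doubling Facebook’s launch money, as Laszewski suggests, still leaves a gap of $910 million ÷ $27.4 million ≈ 33 to 1.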
