Telecom, Internet & Information Policy

August 25, 2019 3:30PM

FamilyTreeDNA, Government Overreach, and Unethical Nudges

A disturbing story about FamilyTreeDNA highlights issues of consumer privacy, government collaboration, and poor stewardship by a private company. Digging deeper, the story also shows how behavioral economics can go awry through self-serving choices by a moralistic CEO that violate basic ethical principles of choice architecture design. Bad nudges are an issue I have highlighted before in the context of Kentucky Medicaid plan choice and state-run auto-IRAs.

The short version: FamilyTreeDNA’s database holds records from more than 1.5 million customers, and the FBI approached company president Bennett Greenspan in late 2017 and early 2018 to access those records in hopes of finding genetic links to some unsolved crimes. As the Wall Street Journal notes:

He didn’t tell the FBI attorney to come back with a court order. He didn’t stop to ponder the moral quandaries. He said yes on the spot. “I have been a CEO for a long time,” said Mr. Greenspan, 67 years old, who founded the Houston-based company in 1999. “I have made decisions on my own for a long time. In this case, it was easy. We were talking about horrendous crimes. So I made the decision.”

Any libertarian would certainly agree that consumers and companies should be free to come to any agreement they want on sacrificing personal privacy for other product characteristics (including lower prices). Still, even with an open-ended user agreement, it is hard to fathom that the most imaginative users from 15 years ago would have envisioned the sort of law enforcement overreach we see today. Some subset of customers, if informed, would likely have supported FamilyTreeDNA’s collaboration with the FBI. But the user agreement did not require the company to tell customers that the FBI was searching their records, and the company did not do so until BuzzFeed revealed the collaboration in January 2019.

Although the CEO appears to be an enthusiastic participant in the FBI’s dragnet, this may be the exception rather than the rule. Other DNA testing companies – such as 23andMe, Ancestry, and MyHeritage – do not collaborate with law enforcement unless legally required to do so. One must wonder how much extra legal cost private companies bear from law enforcement overreach like this, and how much it would cost a company to fight back vigorously against such fishing expeditions. Surely the cost of law enforcement overreach is passed on to customers, who pay more in submission fees in order to have their privacy invaded.

It is important to view the company’s subsequent response to the fallout through a behavioral economics lens.

In March (2019), FamilyTreeDNA said it figured out a way to allow customers to opt out of law-enforcement matching but still see if they matched with regular customers. … (Mr. Greenspan) said less than 2% of customers have requested opting out of law-enforcement searches.

In his pioneering work, Prof. Cass Sunstein lays out ethical considerations for choice architecture. He argues that the objective of nudging is to “influence choices in a way that will make the choosers better off, as judged by themselves.” In this context, when confronted with obvious outrage and bad publicity, FamilyTreeDNA had important decisions to make. Sunstein’s “as judged by themselves” principle would suggest the opposite choice architecture: the company should have made automatic exclusion from law enforcement matching the default and allowed users to opt in to such matching if they so decided. Many of the 1.5 million customers are likely infrequent, inactive users of the website, and many were likely unaware of the FBI collaboration even after the news broke; many of them would be appalled by it. Mr. Greenspan’s opt-out figure of 2% strikes me as a very large response, given that FamilyTreeDNA has customers going back 20 years and many likely ignored the emails and news stories about this scandal.

The criticism of this company’s choice architecture – and the feature stories in prominent newspapers – of course would not exist without unabated government overreach. What began as a handful of inquiries in early 2018 has grown to 50 law-enforcement agencies requesting matching from FamilyTreeDNA. Buyer beware.

August 8, 2019 3:53PM

Dennis Prager, Big Government Conservative

Dennis Prager recently made a case for government management of social media in the Wall Street Journal. Prager is a conservative, so it might seem odd to find him plumping for government control of private businesses. But he is part of a new conservatism that rejects the older tradition of laissez-faire that informed the right. What could justify Big Government regulation of tech companies?

Prager argues that the companies have a legal obligation to moderate their platforms without political bias. He thinks they are biased and thus fail to meet that obligation. But the companies have no such obligation, and, to be charitable, it is far from clear that they are biased against conservative content.

Let’s look at the law first. Section 230(c) of the Communications Decency Act of 1996 says:

(1) …No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) …No provider or user of an interactive computer service shall be held liable on account of…any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected;

Section 230 did seek to promote free speech and to empower companies to moderate content on their platforms. Prager doesn’t see that promoting free speech requires exempting the companies from the legal obligations of publishers and speakers. A newspaper can review and edit material it publishes and thus should take responsibility for the harms it might do. Social media giants like Facebook have billions of users producing content. They can hardly review all of it before it appears. If they were held liable for the content, the companies would likely take no chances and suppress all content that might cause harm and legal liability. In other words, absent immunity from liability, the companies would sharply restrict speech on their platforms; the liability exemption thereby promotes free speech.

Of course, if they did nothing at all, the platforms would become much less valuable to users. The law also empowers the platforms to restrict content that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Prager notices the obscenity part, but somehow misses the words “otherwise objectionable.” If YouTube decided Prager’s videos were neither violent nor obscene but were “otherwise objectionable,” the company could restrict access to them. In other words, the law empowers YouTube to be biased against Prager if it wishes. And Prager thinks they do have it in for him and other conservatives. As you might have guessed by now, there is a lot less to this claim than meets the eye.

Consider what Prager himself tells us: YouTube now hosts 320 Prager University videos that get a billion views a year. Indeed, a new video goes up every week. Not exactly the Gulag, is it? He complains that 56 of those 320 videos are on YouTube’s “restricted list,” which means (according to Prager) “any home, institution or individual using a filter to block pornography and violence cannot see those videos. Nor can any school or library.” In other words, YouTube has “restricted access” to materials on its site that its managers consider “otherwise objectionable.” Was YouTube biased against Prager and other conservatives? Prager himself notes that left-wing sites also ended up on the restricted list. But that’s different, he says, because their videos are violent or obscene while his are not.

Prager fails to mention that videos from The History Channel are restricted at twice the rate of his films. Hardly a bastion of left-wing vulgarity, The History Channel often discusses historical atrocities and totalitarian regimes in its videos. While these clips may be educational, Google seems to believe that the 1.5% of YouTube users who voluntarily opt in to restricted mode wish to avoid even educational discussions of atrocity. Dennis Prager’s video about the Ten Commandments is restricted for similar discussions of the Nazis’ godless regime. It is far from unreasonable to allow parents to decide how their children are taught about such horrors. A reasonable conservative might even applaud such support for the family.

Who gets to decide whether left-wing videos or historical documentaries are different from Prager’s videos? The law says YouTube gets to decide. Imagine you took the words “otherwise objectionable” out of Section 230. Who would then decide about restricting material? In the past, conservatives would have said that the owners and managers of private property decide how best to use their assets. But the times seem to be changing.

Prager cites a study by a Columbia University researcher that purports to prove online bias against conservatives. The study examines 22 cases covered by the press in which Twitter suspended the accounts of individuals, almost all of whom were Trump supporters. Prager is arguing that a sample of just 22 cases from Twitter alone proves systemic bias against…whom? Conservatives? The list of 22 suspended accounts is “a who’s who of outspoken or accused white nationalists, neo-Confederates, holocaust deniers, conspiracy peddlers, professional trolls, and other alt-right or fringe personalities...It does not include any mainstream conservatives, unless, I suppose, you count recently-indicted Trump campaign advisor and ‘dirty trickster’ Roger Stone.” Is Prager broadening the big tent of conservatism here? If not, what does this study, as limited as it is, prove about bias against conservatives?

Prager’s other argument about bias comes in the form of a question: “Do they [conservatives skeptical of his views] think Google, Facebook and Twitter—the conduits of a vast proportion of the free world’s public information—don’t act on their loathing of conservatives?” This question nicely combines two kinds of logical fallacies: it assumes the truth of what is to be proved (the fallacy of petitio principii) while appealing to confirmation bias among Prager’s readers. Sadly, this may be the most effective rhetoric in the essay, notwithstanding its logical faults. But it does nothing for his case against Google.

The final paragraph of the essay is perhaps the most revealing about the new conservatism. Prager draws an analogy to the airlines, which are treated as common carriers and expected to provide service to all. Of course, the airlines were also heavily regulated for many decades, to the detriment of consumers. Prager gestures in this way toward a future of heavy regulation for social media, a future that will be novel if not conservative. “Not conservative,” that is, if what conservatives said in my lifetime about the proper limits of government ever had any meaning beyond the moment it was spoken.

 

August 7, 2019 3:24PM

WSJ, WaPo, NYT Spread False Internet Law Claims

Section 230 of the Communications Decency Act is much debated and under bipartisan attack. The legislation, which includes the “26 words that created the Internet,” is attacked from the right by those who complain about alleged “Big Tech” anti-conservative bias and from the left by those bemoaning the spread of extremist content. Accordingly, Section 230 has been the topic of much discussion in newspaper pages. Unfortunately, the Fourth Estate has recently allowed misinformation about Section 230 to spread, which is especially regrettable given that falsehoods about Section 230 are already ubiquitous.

The most recent example of such misinformation is an op-ed in The Wall Street Journal by the conservative commentator and Prager University founder Dennis Prager. The first falsehood appears in the subhed: “Big tech companies enjoy legal immunity premised on the assumption they’ll respect free speech.”

This is not true. Congress did not pass Section 230 on the understanding that Internet companies would engage in minimal moderation and “respect free speech.” In fact, §230(c)(2)(A) of the CDA states the following:

No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected. (emphasis mine)

This portion of Section 230 explicitly states that companies as large as Facebook and those as small as a local bakery that includes a comments section on its WordPress site can take “any action” to remove content they consider objectionable. I am at a loss trying to figure out where Prager got the idea that Section 230 was premised on Internet companies respecting “free speech.” It’s possible that he’s considering one of Section 230’s findings:

The Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.

While this finding emphasizes the value of the Internet as an ecosystem capable of hosting  a diverse range of political views, it does not encourage specific sites to adopt politically neutral content moderation policies. Nor does it make such neutrality a necessary condition for Section 230 immunity.

We should keep in mind that oped page contributors rarely write their own subheds. But Prager makes the point explicitly in his oped’s text:

The clear intent of Section 230—the bargain Congress made with the tech companies—was to promote free speech while allowing companies to moderate indecent content without being classified as publishers.

But Google and the others have violated this agreement. They want to operate under a double standard: censoring material that has no indecent content—that is, acting like publishers—while retaining the immunity of nonpublishers.

There was no such agreement or bargain. Section 230 was passed in an explicit attempt to encourage moderation of speech. This portion of Prager’s oped also raises another myth that abounds in Section 230 debates: the “platform” vs. “publisher” distinction.

There is no legal difference between a “platform” and a “publisher.” Indeed, publishers enjoy Section 230 protection. The Wall Street Journal is a publisher and can be held liable for defamatory content oped contributors write. But The Wall Street Journal also hosts moderated comment sections, which do enjoy Section 230 protection. The comments section below Prager’s oped says as much.

[Screenshot: the moderated comments section below Prager’s Wall Street Journal oped]

When Prager writes, “But Google, YouTube and Facebook choose not to be regarded as ‘publishers’ because publishers are liable for what they publish and can be sued for libel,” he is making a significant error. Google, YouTube, and Facebook did not “choose” not to be regarded as publishers; §230(c)(1) of the CDA makes that decision for them:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

These “26 words that created the Internet” include no provision that allows social media sites to “choose” whether they’re “publishers” or “platforms.”

Section 230 was passed to encourage Internet companies to moderate user content and does not classify such companies as “publishers” or “platforms.”

Prager’s last important claim is that YouTube users who have opted into “restricted mode” can’t access dozens of Prager U videos because of an anti-conservative bias at YouTube. This claim doesn’t withstand scrutiny. Research by NetChoice demonstrates that only 12 percent of Prager U’s videos are in “restricted mode,” compared to 54 percent of Daily Show videos and 71 percent of Young Turks videos. Prager may be correct to point out that those videos are restricted because of expletives, but expletives aren’t the only kind of content that can result in videos being restricted. “Mature subjects,” “violence,” and “sexual situations” can also result in videos being unavailable to users in restricted mode. Anyone who takes a look at Prager U’s YouTube channel can see content that would understandably put it out of reach of users who have opted into “restricted mode.”

It’s bad enough to get the facts of Section 230 wrong. Spreading falsehoods in pursuit of an agenda that isn’t supported by what the facts reveal is worse.

Dennis Prager is the most recent purveyor of Section 230 misinformation, but he’s hardly the only one.

Yesterday, The New York Times published a Section 230-heavy article that appeared on the front page of its business section. The headline read: “Why Hate Speech on the Internet Is a Never-Ending Problem.” To its credit, The New York Times has since issued a correction and fixed the headline. As I discussed above, Section 230 doesn’t protect hate speech per se; you can thank the First Amendment for that. Rather, it allows companies to remove speech that violates their content moderation rules without becoming liable for everything posted by users. Footage of a white nationalist murdering someone would run afoul of Facebook’s content moderation policy, but it might be left up by the moderators of a neo-Nazi forum. Section 230 allows Facebook to remove the footage even though it is legal under the First Amendment.

Last month, The Washington Post published an oped by Charlie Kirk, founder of Turning Point USA. The Kirk oped repeats the same “publisher” v. “platform” error often seen in Section 230 debates. But it also includes the following claim:

Social media companies have leveraged Section 230 to great effect, and astounding profits, by claiming they are platforms — not publishers — thereby avoiding under the law billions of dollars in potential copyright infringement and libel lawsuits.

Kirk’s comment on  Section 230’s interaction with copyright law is the opposite of the truth. Section 230(e)(2) reads:

Nothing in this section shall be construed to limit or expand any law pertaining to intellectual property.

At the time of publication, neither The Washington Post nor The Wall Street Journal has issued corrections, editorial notes, or retractions related to the Prager or Kirk opeds.

Useful policy debates only happen when participants can agree on facts. Sadly, some of the most reputable newspapers in the country have allowed for misinformation about an important piece of legislation to spread without correction. Anyone hoping for the quality of Section 230 debates to improve any time soon will be disappointed.

July 31, 2019 4:07PM

Some Proposals to Represent Users on Facebook’s Board

Recently I argued that Facebook needed to focus more on representation in creating its oversight board for appeals from its content moderation.  If the Board is to be legitimate, it must be representative as well as law-like and independent. How might it be representative?

First: who should be represented on the Board? Facebook users create much of the content that generates the data used to improve both advertising and the company’s bottom line. They are the community often mentioned by the leaders of Facebook. The Board should, among other values, represent users.

Initially, Facebook officials will select Board members in part because they represent users. This means Board members will vary along familiar dimensions: race, gender, culture, and so on. Of course, it also means the Board will be multinational, since Facebook has over two billion users in many countries. Such representation is to be expected, and given the consultation process surrounding the creation of the Board, we might expect good results.

Other factors are encouraging. Facebook is a business. That means the company must make as large a profit as possible. For that reason, Facebook wants as many users as possible; that desire means the company must satisfy as many people as possible regarding its Board. All things being equal, Facebook’s Board membership will look like its users for business reasons.

But there are risks ahead. The easiest way to make the Board appear representative would be to appoint well-known members. Both the members and what they “stand for” would be quickly recognized by users. But well-known people have many demands on their time. Making the Facebook Board a success will demand time and effort from its founding members. Will the work done by the Board take priority for the famous? If not, many of the actual decisions by the Board will be profoundly influenced by someone other than Board members. Who?

The staff of any organization tends to run it; hence the famous principal-agent problem, wherein the staff (the agent) acts in its own interests rather than the interests of Facebook users (the principal here). Of course, the Board’s staff could also be selected to be representative of Facebook users. But that selection will be much less public than the selection of Board members. Organized groups with an interest in what the Board does are likely to seek influence over staff selections. The risk is that the Board might end up appearing to represent almost all users through its well-known members while actually representing organized interests concerned about Facebook policies.

I see two ways for the concerns of the broader Facebook user community to inform Board deliberations. First, Facebook could survey users about their views on content moderation. For example, a survey could ask users to strike a balance between voice and safety in concrete cases similar to the problems confronted by Facebook’s content moderators. Survey findings would not decide questions of content moderation. Rather, they could inform decisions about the Community Standards and their application, if the Board wished. In other words, surveys would represent user opinions about the questions posed to the Board. They would make the information available to the Board more representative even if the final decision contravened the preferences of most users.

Instead of surveying users, why not just allow users to elect Board members? Facebook has tried voting by users in the past, but poor turnout compromised the legitimacy of those efforts. Why risk another failure with turnout that might raise questions about the legitimacy of the Board? The Board itself is a remarkable innovation; trying to do too much at once, even in the name of representation, may limit its chances of success. If the Oversight Board instead surveyed samples of users for feedback, the views of users would be known rather precisely, given the likely size of the samples.

But Facebook might consider other limited but intriguing innovations to foster representation. Selection of members and surveys are both forms of passive representation: Facebook officials choose representatives for the larger group of users, and Board members select user opinions to inform their decisions. Might Facebook users be more active in these matters?

As mentioned in my previous post, juries (including grand juries) represent the broader public in the legal system. In fact, the Fifth, Sixth, and Seventh Amendments to the republican Constitution of the United States require juries for decisions about criminal and common law. It might be too much to expect user juries to decide whether some future Alex Jones has violated Facebook’s rules. But juries might decide some cases where decisions have been appealed to the Board. I am thinking here of juries, not voters. We should not dismiss the possibility that jurors could deliberate about questions of content moderation. Deliberation is a social act, whether in the jury room or online. No doubt the structure and rules for such deliberation require much attention. Facebook has enormous technical capacity and much experience with such deliberation within the company. It should experiment with a jury model, either privately or publicly.

Gaining legitimacy for Facebook’s Oversight Board will be hard. The Board needs to be both judicial and representative, two values that can conflict. In the United States such conflicts are fought out between the branches of government. At Facebook, the struggle will take place within this new Board. Yet the Board must be both judicial and representative if its judgments are to be accepted by its users and by larger publics.

July 24, 2019 11:05AM

The Limits of Law for Facebook’s Legitimacy

Facebook expects to set up its Oversight Board for content moderation by the end of the year. Soon the company will fill in the final details for the Board. One big question: who should serve on the Board? Facebook will initially select specific individuals for the Board; it’s likely that the initial choices will then select their successors. At this point, we might ask a more general question: what kind of individuals should be selected? By that I mean: what should be their background and qualifications for service?

Law professor Noah Feldman has advised Facebook about its Board, and he has some definite ideas about who should serve. Writing in 2018, Feldman outlined a system of appellate review for Facebook that looks a lot like the U.S. courts system. He foresaw a penultimate level of review (Tier 4) and something like a Supreme Court (Tier 5). He wrote, “Tier 4 judges...should be paid, independent contractors with careers of their own, generally in law and (perhaps) legal scholarship or (sometimes) other analogous disciplines.” Feldman says Tier 5 judges should

enjoy national or international public reputations within their professions. They could be of various ages and career stages, but they should bring prestige to the Facebook Supreme Court at least as much as they derive prestige from it. In some cases, they may be retired judges of constitutional or other courts or high-profile former government lawyers. They should reflect a broad political range and should come from a range of geographic and cultural backgrounds. (emphasis added)

Feldman assumes that most members of the Facebook Board will be lawyers. He is, after all, outlining a judicial model of content moderation review.

A judicial model staffed mostly by lawyers has some appeal. Most people recognize the importance of the rule of law and, in this instance, contrast it favorably with “the rule of Mark.” The rule of law might extend from Facebook’s basic law (its Community Standards) down to its content moderation. Appeals from content decisions would then go to a Board staffed by experts trained in understanding and applying law. This “legal” process might appropriate the legitimacy accorded to law and the courts in the offline world. And legal expertise might foster acceptance by “the governed.” Absent user acceptance, Facebook’s Board will likely fail.

Is process enough for legitimacy? Surely who is on the Board matters. Whom do they represent? U.S. courts can answer that question. The U.S. Supreme Court answers to “We, the People” and their text, the Constitution. True, it does not represent the will of the current electorate, nationally or in the states. To make the courts responsive to current majorities would make them less independent, a dangerous turn since courts are called upon to constrain current majorities speaking through representative institutions. But the Court can claim to represent the enduring commitments of American citizens.

The United States also has robust representative institutions. U.S. courts are independent of the legislative and executive branches of government, both of which are elected and have strong claims to represent various majorities. And those institutions (along with the courts) were created by a Constitution that grew out of a deliberative constitutional convention, ratification debates and votes, and an amendment process, all of which represent the considered will of “We, the People.”

Facebook lacks such robust representative institutions. Facebook policymakers created the Community Standards and the amendments thereto. Facebook’s users did not ratify either the basic rules or later amendments to them. No doubt the Community Standards and the amendments did take the concerns of users into account. Facebook policymakers did not simply impose their preferences on the users through the Community Standards. They sought to attract users to a multi-sided platform; the Community Standards were and are guesses about what users might want in the way of rules. Users who do not like the rules can refuse to consent to the rules. But then, of course, they cannot join Facebook.

The quality of consent matters here. At the proposal stage, Facebook’s Community Standards are much like the U.S. Constitution: an elite deliberated and proposed the document. But the U.S. basic law also involved deliberative state-by-state voting by the broadest franchise then on offer. Deliberating about whether to join Facebook and accept its rules is rather different. Do many people consider the obligation to obey the Community Standards when they join Facebook? Consent to the rules is not absent, but it is not deliberative, and it amounts, ultimately, to an ambiguous ratification of the rules.

None of this means Facebook should throw out its Community Standards, call a “constitutional convention” of users, and thereafter hold ratification votes. The more pragmatic question is how Facebook’s rules and their application might evolve to gain legitimacy. How might Facebook users come to think of the rules as their own? The contrast with the U.S. Constitution indicates that representation warrants Facebook’s attention. The Board’s search for legitimacy requires balancing representation, the rule of law, and independence.

Up to this point, I have contrasted the rule of law and representation. But that contrast misleads. The rule of law itself is not just about judges and courts. Juries are drawn from the community at large. Why? A body of experts might well more accurately determine the guilt or innocence of the indicted. But juries (and grand juries) are supposed to represent the wider community, both in determining guilt and innocence and in ensuring laws accord with the community’s norms and conception of justice. Such broader participation fosters wider legitimacy for the rule of law. It also fully accords with Facebook’s business model. Facebook users are active; they engage with others and thereby provide valuable information to the company. Given this “spirit of participation,” shouldn’t users also shape the rules of their community in some way? If not, can those rules and their application attain legitimacy?

Finally, a practical consideration favors attention to the value of representation. Many on the left and the right will fault Facebook’s content moderation and the decisions of its Board. Facebook might reply that those decisions reflect the company’s rules as applied by experts. In our world, Facebook’s critics would respond by condemning the “rule of elites” bent on suppressing the voice of the people. If the Board gives the correct weight to representation, Facebook can instead reply: “Our process is not driven by elites. Our rules and their application represent our users.”

In sum, Professor Feldman overstates his case for lawyers on Facebook’s Board. Expertise in law, and the judicial model more generally, will increase the likelihood the Board will be accepted. But representation matters too. The risk now is not that the judicial model will be slighted in favor of pure democracy. Rather, the undoubted appeal of the rule of law may obscure the importance of representation of the users who, after all, will be governed by the Facebook Board. A subsequent post will take up ways the Board can honor the value of representation.

July 23, 2019 12:22PM

False Assumptions Behind the Current Drive to Regulate Social Media

In the early days of the Internet, citing concerns about pedophiles and hackers, parents would worry about their children’s engagement on unfamiliar platforms. Now, those same parents have Facebook accounts and get their news from Twitter. However, one look at a newspaper shows op-eds aplenty castigating the platforms that host an ever-growing share of our social lives. Even after more than a decade of social media use, prominent politicians and individuals who lack critical awareness of the realities and limitations of social media platforms choose to scapegoat platforms—rather than people—for a litany of social problems. Hate speech on Facebook? Well, it’s obviously Facebook’s fault. Fake news? Obviously created by Twitter.

But what if these political concerns are misplaced? In a new Cato Policy Analysis, Georgia Tech’s Milton Mueller argues that the moral panic attending social media is misplaced. Mueller contends that social media “makes human interactions hypertransparent,” rendering hitherto unseen social and commercial interactions visible and public. This newfound transparency “displace[s] the responsibility for societal acts from the perpetrators to the platform that makes them visible.” Individuals do wrong; platforms are condemned. This makes neither political nor moral sense. Social media platforms are blamed for a diverse array of social ills, ranging from hate speech and addiction to mob violence and terrorism. In the wake of the 2016 U.S. presidential election, foreign electoral interference and the spread of misinformation were also laid at their feet. However, these woes are not new, and the form and tenor of concerns about social media misuse increasingly resemble a classic moral panic. Instead of appreciating that social media has revealed misconduct previously ignored, tolerated, or swept under the rug, critics too often treat social media as the root cause of these perennial problems.

People behaved immorally long before the advent of social media platforms and will continue to do so long after the current platforms are replaced by something else. Mueller argues that today’s misplaced moral panic is “based on a false premise and a false promise.” The false premise? Social media controls human behavior. The false promise? Imposing new rules on intermediaries will solve what are essentially human problems.

Mueller’s examination of Facebook’s role in hosting genocidal Burmese propaganda drives this point home. When Burmese authorities began using Facebook to encourage violence against the country’s Muslim Rohingya minority, Facebook was slow to act: it had few employees who could read Burmese, rendering the identification of offending messages difficult. However, Facebook has since been blamed for massacres of Rohingya carried out by Myanmar’s military and Buddhist extremists. While Facebook provided a forum for this propaganda, it cannot be seen as having caused violence that was prompted and supported by state authorities. We should be glad that Facebook could subsequently prevent the use of its platform by Myanmar’s generals, but we cannot expect Facebook to singlehandedly stop a sovereign state from pursuing a policy of mass murder. Myanmar’s government, not Facebook, is responsible for its messaging and the conduct of its armed forces.

Mueller shows that technologies enhancing transparency get blamed for the problems they reveal. The printing press, radio, television, and even inexpensive comic books were all followed by moral panics and calls for regulation. The ensuing regulation caused unforeseen harm. Mueller finds that “the federal takeover of the airwaves led to a systemic exclusion of diverse voices,” while recent German social media regulation “immediately resulted in suppression of various forms of politically controversial online speech.” Acting on the false premise that social media is responsible for grievances expressed through it, regulation intended to stamp out hate merely addresses its visible symptoms.

Contra these traditional, counterproductive responses, Mueller advocates greater personal responsibility; if we do not like what we see on social media we should remember that it is the speech of our fellow users, not that of the platforms themselves. He also urges resistance to government attempts to regulate social media, either by directly regulating speech, or by altering intermediary liability regimes to encourage more restrictive private governance. Proceeding from the false premise that a “broken” social media is responsible for the ills it reveals, regulation will simply suppress speech. Little will be gained, and much may be lost.

July 15, 2019 5:04PM

Misleading Project Veritas Accusations of Google “Bias” Could Prompt Bad Law

Tomorrow, the Senate Judiciary Committee’s Subcommittee on the Constitution will hold a hearing on Google’s alleged anti-conservative bias and “censorship.” In a video released last month, James O’Keefe, a conservative activist, interviews an unnamed Google insider. The film, which has been widely shared by conservative outlets and cited by Sen. Ted Cruz (R-TX) and President Donald Trump, stitches a narrative of Orwellian, politically motivated algorithmic bias out of contextless hidden camera footage, anodyne efforts to improve search results, and presumed links between unrelated products. Although the film’s claims are misleading and its findings unconvincing, they are taken seriously by lawmakers who risk using them to justify needless legislation and regulation. As such, they are worth engaging (the time stamps throughout this post refer to the Project Veritas video that can be viewed here).

Search algorithms use predefined processes to sift through the universe of available data to locate specific pieces of information. Simply put, they sort information in response to queries, surfacing whatever seems most relevant according to their preset rules. Algorithms that make use of artificial intelligence and machine learning draw upon past inputs to increase the accuracy of their results over time. These technologies have been adopted to improve the efficacy of search, particularly to bridge the gulf between how users are expected to input search queries and the language they actually use to do so. They are only likely to be adopted to the extent that they improve the user’s search experience. When someone searches for something on Google, it is in the interest of both Google and the user for Google to return the most pertinent and useful results.

Board game enthusiasts, economics students, and those taking part in furious public policy debates over dinner all may have reasons to search for “Monopoly.” A company that makes it easiest for such a diverse group of people to find what they’re looking for will enjoy more traffic and profit than its competitors. Search histories, location, trends, and additional search terms (e.g., “board game,” “antitrust”) help yield more tailored, helpful results.
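
To make the disambiguation point concrete, here is a minimal sketch in Python of how extra context signals can reorder results. It is a toy keyword scorer, not Google’s algorithm; the document titles, term sets, and context signals are invented purely for illustration.

# A toy scorer (illustrative only): rank "documents" by how many query and
# context terms they contain. Real search engines use far richer signals.

def score(query_terms, doc_terms):
    """Count how many query/context terms appear in a document's term set."""
    return sum(1 for term in query_terms if term in doc_terms)

# Invented documents standing in for pages about the two senses of "monopoly."
documents = {
    "Monopoly (board game)": {"monopoly", "board", "game", "dice", "hasbro"},
    "Monopoly (economics)": {"monopoly", "market", "antitrust", "economics"},
}

def search(query, context=frozenset()):
    """Rank documents by overlap with the query plus any extra context signals."""
    terms = set(query.lower().split()) | set(context)
    ranked = sorted(documents.items(), key=lambda item: score(terms, item[1]), reverse=True)
    return [title for title, _ in ranked]

print(search("monopoly"))                             # bare query: an unresolved tie
print(search("monopoly", context={"board", "game"}))  # board game ranks first
print(search("monopoly", context={"antitrust"}))      # economics ranks first

The point is simply that signals such as prior searches or location can break ties that the bare query leaves unresolved, which is why more tailored results tend to serve users better.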

Project Veritas’ film is intended to give credence to the conservative concern that culturally liberal tech firms develop their products to exclude and suppress the political right. While largely anecdotal, this concern has spurred hearings and regulatory proposals. Sen. Josh Hawley (R-MO) recently introduced legislation that would require social media companies to prove their political neutrality in order to receive immunity from liability for their users’ speech. Last week, President Trump hosted a social media summit featuring prominent conservative activists and conspiracy theorists who claim to have run afoul of politically biased platform rules.

The film begins by focusing on Google’s efforts to promote fairer algorithms, which are treated as attempts to introduce political bias into search results. The insider claims that while working at Google, he found “a machine learning algorithm called ML fairness, ML standing for machine learning, and fairness meaning whatever they want to define as fair.” (6:34) The implication is that Google employees actively take steps to ensure that search results disfavor conservative content rather than returning what a neutral search algorithm would. Unfortunately, what a “neutral” algorithm would look like is not discussed.

Although we’re living in the midst of a new tech panic, we should remember that questions about bias in machine learning, and attempts to answer them, are not new, nor are they merely a concern of the right. Rep. Alexandria Ocasio-Cortez (D-NY) and the International Committee of the Fourth International have both expressed concerns about algorithmic bias. Adequate or correct representation is subjective, and increasingly a political subject. In 2017, the World Socialist Web Site sent a letter to Google bemoaning the tech giant’s “anti-left bias” and claiming that Google is “‘disappearing’ the WSWS from the results of search requests.”

However, despite the breathlessness with which O’Keefe “exposes” Google’s efforts to reduce bias in its algorithms, he doesn’t bring us much new information. The documents he presents alongside contextless hidden camera clips of Google employees fail to paint a picture of fairness in machine learning run amok.

One of the key problems with O’Keefe’s video is that he creates a false dichotomy between pure, user-created signals and machine learning inputs that have been curated to eliminate eventual output bias. The unnamed insider claims that attempts to rectify algorithmic bias are equivalent to vandalism: “because that source of truth (organic user input) has been vandalized, the output of the algorithm is also reflecting that vandalism” (8:14).

But there is little reason to presumptively expect organic data to generate more “truthful” or “correct” outputs than training data that has been curated in some fashion. Algorithms sort and classify data, rendering raw input useful. Part of tuning any given machine learning algorithm is providing it with training data, looking at its output, and then comparing that output to what we already know to be true.
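
For readers unfamiliar with that workflow, the following is a minimal sketch of the tune-and-compare loop described above, written with scikit-learn on synthetic data (both are my choices for illustration, not anything drawn from the video or from Google): fit a model on labeled training examples, then compare its output on held-out examples to labels we already know to be true.

# Illustrative sketch of "train, look at the output, compare to known truth."
# The data here is synthetic; no real search or user signals are involved.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# 1,000 made-up examples with 5 numeric features, labeled by a simple rule plus noise.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Hold out examples the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Compare the model's output on held-out data to labels we already know to be true.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

Whether the training data is “organic” or curated, the evaluation step is the same: the algorithm’s output is judged against some reference the developers treat as correct, which is why curation by itself is not “vandalism.”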
