Latest Cato Research on Telecom Regulation https://www.cato.org/ en Nevada’s T‑Mobile Deal and the State of Antitrust Law https://www.cato.org/blog/nevadas-t-mobile-deal-state-antitrust-law Walter Olson <p>T‑Mobile and Sprint, the #3 and #4 wireless carriers, would like to combine so as to more effectively compete with Verizon and AT&amp;T, the two dominant players in the cellular service market. Various states went to court against the merger,&nbsp;arguing (dubiously) that the combination would harm consumers and drive up prices. And now, via <a href="https://www.reuters.com/article/us-sprint-m-a-tmobile/texas-settles-with-t-mobile-sprint-over-merger-statement-idUSKBN1XZ1VG">Reuters</a>, this:</p> <blockquote><p>Also on Monday, Nevada said it would withdraw from the lawsuit in exchange for early deployment of the next generation of wireless in the state, creation of 450 jobs for six years and a $30 million donation to be distributed by Nevada Attorney General Aaron Ford and aimed at helping women and minorities, Ford’s office said.</p> </blockquote> <p>How blatant can you get? The best touch, of course, is the $30 million fund with which to ingratiate lucky beneficiaries around the state. (“The recipients of these grants for the use of the charitable contribution <a href="http://ag.nv.gov/News/PR/2019/Attorney_General_Ford_Negotiates_Settlement_for_T-Mobile-Sprint_Merger_Prioritizing_Nevada_Jobs/">will be at the discretion of Nevada’s attorney general</a>” — that is, the same AG Ford who filed and settled the state’s case, and from whose press release is excerpted that sentence.) It looks a&nbsp;lot like the familiar cozytown set‐​up in many cities in which permission to build a&nbsp;large development or win a&nbsp;public contract just might call for a&nbsp;hefty donation to a&nbsp;local nonprofit with ties to the mayor and council.</p> <p>Notwithstanding the best efforts from some quarters to develop <em>per se</em> rules in hopes of generating clear and predictable legal outcomes, antitrust law remains a&nbsp;world of subjective interpretation in which government office‐​holders are left with great discretion regarding how and against whom to wield enforcement power. Whether you want to call it logrolling or use a&nbsp;less flattering term like “extortion,” the temptation is to trade off antitrust leniency for some of the other sorts of favors business might be able to render government actors.</p> <p>All of which&nbsp;brings us to presidential candidate Elizabeth Warren’s and other candidates’ new proposals for antitrust, which a&nbsp;CNBC headline <a href="https://www.cnbc.com/2019/12/07/warrens-antitrust-bill-would-boost-government-control-over-biggest-companies.html">accurately</a> reports (as to Warren’s) “would dramatically enhance government control over the biggest U.S. companies.” In particular, the proposals would invite the government far more deeply into oversight of business practices, including refusal to share “essential” facilities with competitors, pricing goods below the cost of production, and much more, as well as mergers and acquisitions.</p> <p>It’s hard to know whether Sen. Warren sees all this new arbitrary discretion as a&nbsp;bug, or a&nbsp;feature, in her enormous plan. Either way, an accumulation&nbsp;of power that tempting will sooner or later attract appointees seeking either a&nbsp;political whip hand over the U.S. 
corporate sector, a&nbsp;source of payouts like that in Nevada, or both.</p> Mon, 09 Dec 2019 14:48:53 -0500 Walter Olson https://www.cato.org/blog/nevadas-t-mobile-deal-state-antitrust-law John Samples participates in the event, “The Tech Giants, Monopoly Power, and Public Discourse: Panel 3,” hosted by the Knight First Amendment Institute https://www.cato.org/multimedia/media-highlights-tv/john-samples-participates-event-tech-giants-monopoly-power-public Mon, 18 Nov 2019 10:42:08 -0500 John Samples https://www.cato.org/multimedia/media-highlights-tv/john-samples-participates-event-tech-giants-monopoly-power-public What’s the Frequency, Berkeley? https://www.cato.org/blog/whats-frequency-berkeley Ilya Shapiro, Michael Collins <p>Did you know that cell phone radio frequency (RF) exposure causes cancer? It doesn’t. The Federal Communications Commission has concluded there’s “no scientific evidence” linking “wireless device uses and cancer or other illnesses.” Despite the FCC’s scientific findings, however, the city of Berkeley, California, requires that every cell phone retailer provide a&nbsp;notice informing customers that, if they carry cell phones in a “pants or shirt pocket or tucked into a&nbsp;bra” when the phone is on and connected, they “<em>may </em>exceed the federal guidelines for exposure to RF radiation.”</p> <p>That statement is technically true; “may” just means something is possible, not necessarily likely. Phones <em>may</em> exceed federal guidelines; likewise, phones <em>may</em> spontaneously combust. What Berkeley says is technically correct, just misleading (the unexplained acronym also sounds scary). CTIA, the wireless industry’s trade group, sued Berkeley for compelling speech in violation of the First Amendment.</p> <p>The right to speak necessarily entails the right to remain silent. This principle ensures the freedom of conscience and prevents citizens from being conscripted to serve as unwilling bullhorns for government communications. Likewise, it is a&nbsp;foundational principle of the First Amendment that content‐​based restrictions of speech must survive the strictest scrutiny—meaning the government needs a&nbsp;really good reason and can’t achieve its goal any other way.</p> <p>But the Supreme Court has ruled that regulations of “commercial speech” need not meet the same rigorous standards of review as other types of speech. The Court created this narrow exception in <em>Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio</em> (1985). The <em>Zauderer</em> test applies lesser standards to mandatory disclosures when the speech is “purely factual and uncontroversial information,” when the disclosure is not “unduly burdensome,” and when it is “reasonably related to the State’s interest in preventing deception of consumers.”</p> <p>The Supreme Court previously remanded <em>CTIA v. Berkeley</em> in light of <em>NIFLA v. Becerra</em> (2018), where the Court clarified that the ambit of speech covered by <em>Zauderer</em>’s exception is narrow, and that governments did not have free rein to provide scripts for commercial businesses. On remand, however, the U.S. Court of Appeals for the Ninth Circuit eroded <em>Zauderer</em>. The Ninth Circuit found that compelling speech content posed no constitutional issues because mandated disclosures need only be reasonably related to “non‐​trivial” government interests.
This decision is another in a&nbsp;line of confused applications of <em>Zauderer</em> by the lower courts.</p> <p>CTIA, represented by former Solicitor General Ted Olson, is again petitioning the Supreme Court to review that flawed decision. As we have at <a href="https://www.cato.org/sites/cato.org/files/pubs/pdf/ctia-cert-stage.pdf">previous stages</a> of litigation, Cato has filed an <a href="https://www.cato.org/sites/cato.org/files/2019-11/CTIA%20cert-stage%20again.pdf">amicus brief</a> supporting that petition. We argue that this important area of law desperately needs clarification, particularly at a&nbsp;time when compelled‐​disclosure regimes have proliferated and some courts have distorted the already insufficient <em>Zauderer</em> standard beyond recognition. To that end, the Court should apply strict scrutiny to review laws that force market participants to disparage their own products and participate in policy debates they wish to avoid.</p> <p>The Supreme Court will decide whether to take up&nbsp;<em>CTIA v. Berkeley</em>&nbsp;later this fall.</p> Fri, 01 Nov 2019 16:51:40 -0400 Ilya Shapiro, Michael Collins https://www.cato.org/blog/whats-frequency-berkeley Facebook Deserves More Credit… Our Data Is Not “the Product” https://www.cato.org/publications/commentary/facebook-deserves-more-credit-our-data-not-product Ryan Bourne <div class="lead text-default"> <p>It’s the must‐​say cliché about Big Tech. In a&nbsp;new Netflix documentary, The Great Hack, digital media professor David Carroll repeated that, when it comes to social media platforms, “we are the commodity.”</p> </div> , <div class="text-default"> <p>Such thinking is common in writing about Google and Facebook. Tropes such as “you’re not the consumer, you’re the product” are repeated ad nauseam. The idea is the same. Since Facebook and Google do not charge us for services, it’s said <a href="https://www.telegraph.co.uk/technology/2019/02/20/google-facebooks-grip-digital-advertising-shows-signs-slipping/" rel="noopener noreferrer" target="_blank" data-auth="NotApplicable">we pay through giving up our information, in turn sold to third parties for well‐​targeted advertising</a>.</p> <p>So entrenched is this belief that a&nbsp;Financial Times editorial this week advocated overhauling competition laws to acknowledge that “digital services increasingly cost long‐​term privacy rather than money”.</p> <p>Whoa, there. Before good policy is sacrificed to this meme, let us pause and reflect. Yes, Facebook and Google make money through information‐​infused advertising targeted at granular user populations. But there’s nothing new about this practice. Nor does it follow that users are “the product” or that privacy is “the cost” of digital services.</p> <p>Journalists, of all people, should understand this. Free newspapers and free‐​to‐​air broadcasters, such as ITV and Channel 4, have similar business models. All seek to capture readers or viewers by providing good quality content.</p> <p>Generating these large audiences is necessary for their advertising space to generate revenue. Media companies must profile their readers or viewers, pitching this demographic information to would‐​be advertising buyers. Yet, strangely, nobody says ITV viewers or Metro readers are “the product” or a “commodity” of the companies.</p> <p>True, TV networks and papers don’t collect much individual‐​level data or enjoy the scale of information of Google or Facebook. Yet this is a&nbsp;matter of degree, not principle.
One senses that the teeth‐​gnashing comes precisely because tech has disrupted the traditional media’s approach.</p> <p>Advertisers on Facebook can now target ads at 35 to 40‐​year‐​olds, living in the Medway towns, who are interested in water polo, and obtain real‐​time feedback on the ad’s success. That obviously helps maximize the effectiveness of ad spend. None of that makes our data “the product” or privacy “the cost” of Google or Facebook, though. In fact, there are three clear reasons why such claims are misguided.</p> <p>First, and most obviously, the value of <a href="https://www.telegraph.co.uk/technology/2018/10/18/dcms-chair-calls-probe-facebook-advertising-business/" rel="noopener noreferrer" target="_blank" data-auth="NotApplicable">these firms’ advertising space is dependent on strong user numbers</a>. Google must deliver an accurate search engine, and Facebook high‐​quality networking and applications, to keep us using their websites or apps. That provides an incentive to respond to our wants and needs, including on privacy. We might not be paying customers, but we are much‐​needed consumers.</p> <p>For now, these firms are successful. Alternatives are just a&nbsp;click away with Yahoo, DuckDuckGo and Bing, but Google still has an 88pc UK market share in internet search. Facebook is currently the largest UK social media site in terms of reach, consumption and revenue. The point, though, is we aren’t powerless pawns here; we willingly use both because we like their products. If that changes and user numbers collapse, so will their business models.</p> <p>Second, perhaps less obviously, much “data” commonly described as “ours” is actually the creation of Facebook’s own structure and processing. Writing out your personal details, friends list, and upcoming social events on a&nbsp;sheet of paper might have some inherent value to certain companies, but it will be relatively small.</p> </div> , <aside class="aside--right aside pb-lg-0 pt-lg-2"> <div class="pullquote pullquote--default"> <div class="pullquote__content h2"> <p>No, we aren’t “the product” of Facebook and Google, and privacy is not “the cost” of using them</p> </div> </div> </aside> , <div class="text-default"> <p>No, what Facebook does to add value is process this raw data, linking it with others’, in turn making predictive profiles across larger populations. It’s this new information that’s valuable for improving its consumer service and being attractive to advertisers.</p> <p><a href="https://www.telegraph.co.uk/technology/2019/03/14/facebook-new-criminal-investigation-data-sharing-deals/" rel="noopener noreferrer" target="_blank" data-auth="NotApplicable">Our data might be an input to a&nbsp;product</a>, in other words, but “we” aren’t. As George Mason University economist Alex Tabarrok has explained, those who demand “access” to “our data” or demand it be made portable for other sites miss the point. There’d be no value to a&nbsp;list of photos we’d “liked” or bars we’d “checked in” to using Facebook’s platform. Information like that becomes valuable because of how Facebook processes or aggregates it.</p> <p>Finally, the <em>Financial Times</em>’ assertion that “digital services increasingly cost long‐​term privacy” is incredibly misleading. Some people may well be “privacy fundamentalists” who see divulging any personal information as a&nbsp;cost. But 74pc of UK internet users feel confident about managing their data online.
We’d be troubled if our health or financial information were exposed, obviously, but we are relaxed about tech firms knowing our click habits or on‐​site browsing history.</p> <p>Indeed, one study of nearly 1,600 internet users in the US found that “85pc are unwilling to pay anything for privacy on Google” and, of the remaining 15pc, the median amount they would give up was “a paltry $20 per year”. This suggests that for most people the privacy given up on Google is no cost at all.</p> <p>By and large, we seem satisfied with the current grand bargain of cheap or zero‐​priced content financed by targeted ads. Is this surprising? Targeted advertising can be highly beneficial for us as customers too, alerting us to things we want to buy and helping us avoid wasting time scrolling elsewhere.</p> <p>None of this is to deny that <a href="https://www.telegraph.co.uk/technology/2019/04/05/year-cambridge-analytica-facebook-still-struggling-contain-problem/" rel="noopener noreferrer" target="_blank" data-auth="NotApplicable">genuine privacy breaches that break terms and conditions should have consequences</a>. It’s little surprise that Facebook usage fell following scandals associated with third parties’ wrongful access to data. More transparency on who has access might help inform users.</p> <p>But the zeitgeist here is troubling. In a&nbsp;trivial sense, Facebook and Google’s profitability of course relies on their users and their information. To declare from this that “we are the product” is to spread a&nbsp;misleading cliché that ignores the obvious value‐​added activities these companies deliver to both users and advertisers.</p> </div> Fri, 02 Aug 2019 11:04:00 -0400 Ryan Bourne https://www.cato.org/publications/commentary/facebook-deserves-more-credit-our-data-not-product Challenging the Social Media Moral Panic: Preserving Free Expression under Hypertransparency https://www.cato.org/publications/policy-analysis/challenging-social-media-moral-panic-preserving-free-expression-under Milton Mueller <div class="lead text-default"> <p>Social media are now widely criticized after enjoying a&nbsp;long period of public approbation. The kinds of human activities that are coordinated through social media, good as well as bad, have always existed. However, these activities were not visible or accessible to the whole of society. As conversation, socialization, and commerce are aggregated into large‐​scale, public commercial platforms, they become highly visible to the public and generate storable, searchable records. Social media make human interactions hypertransparent and displace the responsibility for societal acts from the perpetrators to the platform that makes them visible.</p> </div> , <div class="text-default"> <p>This hypertransparency is fostering a&nbsp;moral panic around social media. Internet platforms, like earlier new media technologies such as TV and radio, now stand accused of a&nbsp;stunning array of evils: addiction, fostering terrorism and extremism, facilitating ethnic cleansing, and even the destruction of democracy. The social‐​psychological dynamics of hypertransparency lend themselves to the conclusion that social media cause the problems they reveal and that society would be improved by regulating the intermediaries that facilitate unwanted activities.</p> <p>This moral panic should give way to calmer reflection. There needs to be a&nbsp;clear articulation of the tremendous value of social media platforms based on their ability to match seekers and providers of information in huge quantities.
We should also recognize that calls for government‐​induced content moderation will make these platforms battlegrounds for a&nbsp;perpetual intensifying conflict over who gets to silence whom. Finally, we need a&nbsp;renewed affirmation of Section 230 of the 1996 Telecommunications Act, which shields internet intermediaries from liability for users’ speech. Contrary to Facebook’s call for government‐​supervised content regulation, we need to keep platforms, not the state, responsible for finding the optimal balance between content moderation, freedom of expression, and economic value. The alternative of greater government regulation would absolve social media companies of market responsibility for their decisions and would probably lead them to exclude and suppress even more legal speech than they do now. It is the moral panic and proposals for regulation that threaten freedom and democracy.</p> </div> , <div class="text-default"> <h2>Introduction</h2> <p>In a&nbsp;few short years, social media platforms have gone from being shiny new paragons of the internet’s virtue to globally despised scourges. Once credited with fostering a&nbsp;global civil society and bringing down tyrannical governments, they are now blamed for an incredible assortment of social ills. In addition to legitimate concerns about data breaches and privacy, other ills — hate speech, addiction, mob violence, and the destruction of democracy itself — are all being laid at the doorstep of social media platforms.</p> <p>Why are social media blamed for these ills? The human activities that are coordinated through social media, including negative things such as bullying, gossiping, rioting, and illicit liaisons, have always existed. In the past, these interactions were not as visible or accessible to society as a&nbsp;whole. As these activities are aggregated into large‐​scale, public commercial platforms, however, they become highly visible to the public and generate storable, searchable records. In other words, social media make human interactions hypertransparent.<sup><a id="endnote-001-backlink" href="#endnote-001">1</a></sup></p> <p>This new hypertransparency of social interaction has powerful effects on the dialogue about regulation of communications. It lends itself to the idea that social media causes the problems that it reveals and that society can be altered or engineered by meddling with the intermediaries who facilitate the targeted activities. Hypertransparency generates what I&nbsp;call the fallacy of displaced control. Society responds to aberrant behavior that is revealed through social media by demanding regulation of the intermediaries instead of identifying and punishing the individuals responsible for the bad acts. There is a&nbsp;tendency to go after the public manifestation of the problem on the internet, rather than punishing the undesired behavior itself. At its worst, this focus on the platform rather than the actor promotes the dangerous idea that government should regulate generic technological capabilities rather than bad behavior.</p> <p>Concerns about foreign interference and behavioral advertising brought a&nbsp;slowly simmering social media backlash to a&nbsp;boil after the 2016 election. As this reaction enters its third year, it is time to step back and offer some critical perspective and an assessment of where free expression fits into this picture. 
As hypertransparency brings to public attention disturbing, and sometimes offensive, content, a&nbsp;moral panic has ensued — one that could lead to damaging regulation and government oversight of private judgment and expression. Perhaps policy changes are warranted, but the regulations being fostered by the current social climate are unlikely to serve our deepest public values.</p> <h2>Moral Panic</h2> <p>The assault on social media constitutes a&nbsp;textbook case of moral panic. Moral panics are defined by sociologists as “the outbreak of moral concern over a&nbsp;supposed threat from an agent of corruption that is out of proportion to its actual danger or potential harm.”<sup><a id="endnote-002-backlink" href="#endnote-002">2</a></sup> While the problems noted may be real, the claims “exaggerate the seriousness, extent, typicality and/​or inevitability of harm.” In a&nbsp;moral panic, sociologist Stanley Cohen says, “the untypical is made typical.”<sup><a id="endnote-003-backlink" href="#endnote-003">3</a></sup> The exaggerations build upon themselves, amplifying the fears in a&nbsp;positive feedback loop. Purveyors of the panic distort factual evidence or even fabricate it to justify (over)reactions to the perceived threat. One of the most destructive aspects of moral panics is that they frequently direct outrage at a&nbsp;single easily identified target when the real problems have more complex roots. A&nbsp;sober review of the claims currently being advanced about social media finds that they tick off all these boxes.</p> <h3>Fake News!</h3> <p>Social media platforms are accused of generating a&nbsp;cacophony of opinions and information that is degrading public discourse. A&nbsp;quote from a&nbsp;respected media scholar summarizes the oft‐​repeated view that social media platforms have an intrinsically negative impact on our information environment:</p> </div> , <blockquote class="blockquote"> <div> <p>An always‐​on, real‐​time information tsunami creates the perfect environment for the spread of falsehoods, conspiracy theories, rumors, and “leaks.” Unsubstantiated claims and narratives go viral while fact checking efforts struggle to keep up. Members of the public, including researchers and investigative journalists, may not have the expertise, tools, or time to verify claims. By the time they do, the falsehoods may have already embedded themselves in the collective consciousness. Meanwhile, fresh scandals or outlandish claims are continuously raining down on users, mixing fact with fiction.<sup><a id="endnote-004-backlink" href="#endnote-004">4</a></sup></p> </div> </blockquote> <cite> </cite> , <div class="text-default"> <p>In this view, the serpent of social media has driven us out of an Eden of rationality and moderation. In response, one might ask: in human history, what public medium has <em>not</em> mixed fact with fiction, has <em>not</em> created new opportunities to spread falsehoods, or has <em>not</em> created new challenges for verification of fact? 
Similar accusations were levelled against the printing press, the daily newspaper, radio, and television; the claim that social media are degrading public discourse exaggerates both the uniqueness and the scope of the threat.</p> <h3>Addiction and Extremism</h3> <p>A variant on this theme links the ad‐​driven business model of social media platforms to an inherently pathological distortion of the information environment: as one pundit wrote, “YouTube leads viewers down a&nbsp;rabbit hole of extremism, while Google racks up the ad sales.”<sup><a id="endnote-005-backlink" href="#endnote-005">5</a></sup> A&nbsp;facile blend of pop psychology and pop economics equates social media engagement to a&nbsp;dopamine shot for the user and increasing ad revenue for the platform. The way to prolong and promote such engagement, we are told, is to steer the user to increasingly extreme content. Any foray into the land of YouTube videos is a&nbsp;one‐​way ticket to beheadings, Alex Jones, flat‐​earthism, school‐​shooting denial, Pepe the Frog, and radical vegans. No more kittens, dog tricks, or baby pictures: for some unspecified reason, those nice things are no longer what the platform delivers.</p> <p>In the quote below, an academic evokes all the classical themes of media moral panics — addiction, threats to public health, and a&nbsp;lack of confidence in the agency of common people — into a&nbsp;single indictment of YouTube algorithmic recommendations:</p> </div> , <blockquote class="blockquote"> <div> <p>Human beings have many natural tendencies that need to be vigilantly monitored in the context of modern life. For example, our craving for fat, salt and sugar, which served us well when food was scarce, can lead us astray in an environment in which fat, salt and sugar are all too plentiful and heavily marketed to us. So too our natural curiosity about the unknown can lead us astray on a&nbsp;website that leads us too much in the direction of lies, hoaxes and misinformation. In effect, YouTube has created a&nbsp;restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal.<sup><a id="endnote-006-backlink" href="#endnote-006">6</a></sup></p> </div> </blockquote> <cite> </cite> , <div class="text-default"> <p>Another social media critic echoed similar claims:</p> </div> , <blockquote class="blockquote"> <div> <p>Every pixel on every screen of every Internet app has been tuned to influence users’ behavior. Not every user can be influenced all the time, but nearly all users can be influenced some of the time. In the most extreme cases, users develop behavioral addictions that can lower their quality of life and that of family members, co‐​workers and close friends.<sup><a id="endnote-007-backlink" href="#endnote-007">7</a></sup>&nbsp;</p> </div> </blockquote> <cite> </cite> , <div class="text-default"> <p>If one investigates the “science” behind these claims, however, one finds little to differentiate social media addiction from earlier panics about internet addiction, television addiction, video game addiction, and the like. The evidence for the algorithmic slide toward media fat, salt, and sugar traces back to one man, Jonathan Albright of Columbia University’s Tow Center, and it is very difficult to find any published, peer‐​reviewed academic research from Albright. 
All one can find is a&nbsp;blog post on <em>Medium</em>, describing “the network of YouTube videos users are exposed to after searching for ‘crisis actor’ following the Parkland event.”<sup><a id="endnote-008-backlink" href="#endnote-008">8</a></sup> In other words, the blog reports the results of one search and one selected search phrase; there is no description of a&nbsp;methodology, nor is there any systematic conceptualization or argumentation about the causal linkage between YouTube’s business model and the elevation of extreme and conspiratorial content. Yet Albright’s claims echoed through the <em>New York Times</em> and dozens of other online media outlets.</p> <p>The psychological claims also seem to suffer from a&nbsp;moral panic bias. According to Courtney Seiter, a&nbsp;psychologist cited by some of the critics, the oxytocin and dopamine levels generated by social media use generate a&nbsp;positive “hormonal spike equivalent to [what] some people [get] on their wedding day.” She goes on to say that “all the goodwill that comes with oxytocin — lowered stress levels, feelings of love, trust, empathy, generosity — comes with social media, too … between dopamine and oxytocin, social networking not only comes with a&nbsp;lot of great feelings, it’s also really hard to stop wanting more of it.”<sup><a id="endnote-009-backlink" href="#endnote-009">9</a></sup> The methodological rigor and experimental evidence behind these claims seem to be thin, but even so, wasn’t social media supposed to be a&nbsp;tinderbox for hate speech? Somehow, citations of Seiter in attacks on social media seem to have left the trust, empathy, and generosity out of the picture.</p> <p>The panic about elevating conspiratorial and marginalized content is especially fascinating. We are told in terms reminiscent of the censorship rationalizations of authoritarian governments that social media empowers the fringes and so threatens social stability. Yet for decades, mass media have been accused of appealing to the mainstream taste and of marginalizing anything outside of it. Indeed, in the 1970s, progressives tried to force media outlets to include marginalized voices in their channel lineup through public access channels. Nowadays, apparently, the media system is dangerous because it does precisely the opposite.</p> <p>But the overstatement of this claim should be evident. Major advertisers come down hard on the social platforms very quickly when their pitches are associated with crazies, haters, and blowhards, leading to algorithmic adjustments that suppress marginal voices. Users’ ability to “report” offensive content is another important form of feedback. But this has proven to cut both ways: lots of interesting but racy or challenging content gets suppressed. Some governments have learned how to game organized content moderation to yank messages exposing their evil deeds. (See the discussion of Facebook and Myanmar in the next section.) In the ultramoderated world that many of the social media critics seem to be advocating, important minority‐​viewpoint content is as likely to be targeted as terrorist propaganda and personal harassment.</p> <p><strong>Murder, hate speech, and ethnic cleansing.</strong> Another key exhibit in the case against social media pins the responsibility for ethnic cleansing in Myanmar, and similar incitement tragedies in the developing world, on Facebook.
In this case, as in most of the other concerns, there is substance to the claim, but its use and framing in the public discourse seem both biased and exaggerated. In Myanmar, the Facebook platform seems to have been systematically utilized as part of a&nbsp;state‐​sponsored campaign to target the Rohingya Muslim minority.<sup><a id="endnote-010-backlink" href="#endnote-010">10</a></sup> The government and its allies incited hatred against them, while censoring activists and journalists documenting state violence, by reporting their work as offensive content or in violation of community standards. At the same time, the government‐​sponsored misinformation and propaganda against the Rohingya managed to avoid the scrutiny applied to the expression of human‐​rights activists. Social media critics also charged that the Facebook News Feed’s tendency to promote already popular content allowed posts inciting violence against the minority to go viral. As a&nbsp;result, Facebook is blamed for the tragedies in Myanmar. I&nbsp;have encountered people in the legal profession who would like to bring a&nbsp;human‐​rights lawsuit against Facebook.<sup><a id="endnote-011-backlink" href="#endnote-011">11</a></sup> If any criticism can be leveled at Facebook’s handling of genocidal propaganda in Myanmar, it is that Facebook’s moderation process is too deferential to governments. This, however, militates against greater state regulation.</p> <p>But these claims show just how displaced the moral panic is. Why is so much attention being focused on Facebook and not on the crimes of a&nbsp;state actor? Yes, Myanmar military officers used Facebook (and other media) as part of an anti‐​Rohingya propaganda campaign. But if the Burmese generals had used telephones or text messages to spread their poison, would critics blame those service providers or technologies? How about roads, which were undoubtedly used by the military to oppress the Rohingya? In fact, violent conflict between Rohingya Muslims and Myanmar’s majority population goes back to 1948, when the country achieved independence from the British and the new government denied citizenship to the Rohingya. A&nbsp;nationalist military coup in 1962 targeted them as a&nbsp;threat to the new government’s concept of national identity; the army closed Rohingya social and political organizations, expropriated Rohingya businesses, and detained dissenters. It went on to regularly kill, torture, and rape Rohingya people.</p> <p>Facebook disabled the accounts of the military propagandists once it understood the consequences of their misuse, although this happened much more slowly than critics would have liked. What’s remarkable about the discussion of Facebook, however, is the way attention and responsibility for the oppression have been diverted away from a&nbsp;military dictatorship engaged in a&nbsp;state‐​sponsored campaign of ethnic cleansing, propaganda, and terror to a&nbsp;private foreign social media platform. In some cases, the discussion seems to imply that the absence of Facebook from Myanmar would solve, or even improve, the conflict that has been going on for 70&nbsp;years. It is worth remembering that Facebook’s status as an <em>external </em>platform not under the control of the local government was the only thing that made it possible to intervene at all.
Interestingly, the <em>New York Times</em> article that broke this story notes that pro‐​democracy officials in Myanmar say Facebook was essential for the democratic transition that brought them into office in 2015.<sup><a id="endnote-012-backlink" href="#endnote-012">12</a></sup> This claim is as important (and as unverified and possibly untestable) as the claim that it is responsible for ethnic cleansing. But it hasn’t gotten any play lately.</p> <p><strong>Reviving the Russian menace.</strong> Russia‐​sponsored social media use during the 2016 election provides yet another example of the moral panic around social media and the avalanche of bitter exaggeration that goes with it. Indeed, the 2016 election marks the undisputed turning point in public attitudes toward social media. For many Americans, the election of Donald Trump came as a&nbsp;shocking and unpleasant surprise. As Americans searched for an explanation of what initially seemed inexplicable, however, the nexus between the election results, Russian influence operations, and social media became massively inflated. It has become too convenient to overlook Trump’s complete capture of the Republican Party and his ability to capitalize on nationalistic and hateful themes that conservative Republicans had been cultivating for decades. The focus on social media continues to divert our attention from the well‐​understood negatives of Hillary Clinton as well as the documented impact of James Comey’s decision to reopen the FBI investigation of Clinton’s emails at a&nbsp;critical period in the presidential campaign. It overlooks, too, the strength of the Bernie Sanders challenge and the way the Clinton‐​controlled Democratic National Committee alienated his supporters. It also tends to downplay the linkages that existed between Trump’s campaign staff, advisers, and Russia that had nothing to do with social media influence.</p> <p>How much more comforting it was to focus on a&nbsp;foreign power and its use of social media than to face up to the realities of a&nbsp;politically polarized America and the way politicians and their crews peddle influence to a&nbsp;variety of foreign states and interests.<sup><a id="endnote-013-backlink" href="#endnote-013">13</a></sup> As this displacement of blame developed, references to Russian <em>information operations</em> uniformly became references to Russian <em>interference</em> in the elections.<sup><a id="endnote-014-backlink" href="#endnote-014">14</a></sup> Interference is a&nbsp;strong word — it makes it seem as if leaks of real emails and a&nbsp;disinformation campaign of Twitter bots and Facebook accounts were the equivalent of stuffing ballot boxes, erasing votes, hacking election machines, or forcibly blocking people from the polls. As references to foreign election interference became deeply embedded in the public discourse, the threat could be further inflated to one of national security.
And so suddenly, the regulation of political speech got on the agenda of Congress, and millions of liberals and progressives became born‐​again Cold Warriors, all too willing to embrace nationalistic controls on information flows.</p> <p>In April 2016, hackers employed by the Russian government compromised several servers belonging to the Democratic National Committee, exfiltrated a&nbsp;trove of internal communications, and published them via WikiLeaks using a “Guccifer 2.0” alias.<sup><a id="endnote-015-backlink" href="#endnote-015">15</a></sup> The emails leaked by the Russians were not made up by the Russians; they were real. What if they had been leaked by a&nbsp;21st‐​century Daniel Ellsberg instead of the Russians? Would that also be considered election interference? Disclosures of compromising information (e.g., Trump’s <em>Access Hollywood</em> tape) have a&nbsp;long history in American politics. Is that election interference? How much of the cut‐​and‐​thrust of an open society’s media system, and how many whistleblowers, are we willing to muzzle in this moral panic?</p> <p><strong>The Death of Democracy.</strong> Some critics go so far as to claim that democracy itself is threatened by the existence of open social media platforms. “[Facebook] has swallowed up the free press, become an unstoppable private spying operation and undermined democracy. Is it too late to stop it?” asks the subtitle of one typical article.<sup><a id="endnote-016-backlink" href="#endnote-016">16</a></sup> This critique is as common as it is inchoate. In its worst and most simple‐​minded form, the mere ability of foreign governments to put messages on social media platforms is taken as proof that the entire country is being controlled by them. These messages are attributed enormous power, as if they are the only ones anyone sees; as if foreign governments don’t routinely buy newspaper ads, hire Washington lobbyists, or fund nonprofits and university programs. Worse still, those of this mindset equate messages with weapons in ceaseless “information warfare.” It is claimed that social media are being, or have been, “weaponized” — a&nbsp;transitive verb that was popularized after being applied to the 9/11 attackers’ use of civilian aircraft to murder thousands of people.<sup><a id="endnote-017-backlink" href="#endnote-017">17</a></sup> Users of this term show not the slightest embarrassment at a&nbsp;possible overstatement implicit in the comparison.</p> <p>Cybersecurity writer Thomas Rid made the astounding assertion that the most “open and liberal social media platform” (Twitter) is “a threat to open and liberal democracy” precisely because it is open and liberal, thus implying that free expression is a&nbsp;national security threat.<sup><a id="endnote-018-backlink" href="#endnote-018">18</a></sup> In a&nbsp;<em>Time Magazine</em> cover story, a&nbsp;former Facebook executive complained that Facebook has “aggravated the flaws in our democracy while leaving citizens ever less capable of thinking for themselves.”<sup><a id="endnote-019-backlink" href="#endnote-019">19</a></sup> The nature of this threat is never scientifically documented in terms of its actual effect on voting patterns or political institutions. The only evidence offered is simple counts of the number of Russian trolls and bots and their impressions — numbers that look unimpressive compared to the spread of a&nbsp;single Donald Trump tweet.
What we don’t often hear is that social media is the most important source of news for only 14 percent of the population. Research by two economists concluded that “… social media have become an important but not dominant source of political news and information. Television remains more important by a&nbsp;large margin.” They also conclude that there is no statistically significant correlation between social media use and those who draw ideologically aligned conclusions from their exposure to news.<sup><a id="endnote-020-backlink" href="#endnote-020">20</a></sup></p> <p>The most disturbing element of the “threat to democracy” argument is the way it militarizes public discourse. The view of social media as information warfare seems to go hand‐​in‐​hand with the contradictory idea that imposing more regulation by the nation‐​state will “disarm” information and parry this threat to democracy. In advancing what they think of as sophisticated claims that social media are being weaponized, the joke is on our putative cybersecurity experts: it is Russian and Chinese doctrine that the free flow of information across borders is a&nbsp;subversive force that challenges their national sovereignty. This doctrine, articulated in a&nbsp;code of conduct by the Shanghai Cooperation Organization, was designed to rationalize national blocking and filtering of internet content.<sup><a id="endnote-021-backlink" href="#endnote-021">21</a></sup> By equating the influence that occurs via exchanges of ideas, information, and propaganda with war and violence, these pundits pose a&nbsp;more salient danger to democracy and free speech than any social media platform.</p> <p>Any one of these accusations — the destruction of public discourse, responsibility for ethnic cleansing and hate speech, abetting a&nbsp;Russian national security threat, and the destruction of democracy — would be serious enough. Their combination in a&nbsp;regularly repeated catechism constitutes a&nbsp;moral panic. Moral panics should inspire caution because they produce policy reactions that overshoot the mark. A&nbsp;fearful public can be stampeded into legal or regulatory measures that serve a&nbsp;hidden agenda. Targeted actors can be scapegoated and their rights and interests discounted. Freedom‐​enhancing policies and proportionate responses to problems never emerge from moral panics.</p> <h3>Media Panics in the Past</h3> <p>One antidote to moral panic is historical perspective. Media studies professor Kirsten Drotner wrote, “[E]very time a&nbsp;new mass medium has entered the social scene, it has spurred public debates on social and cultural norms, debates that serve to reflect, negotiate and possibly revise these very norms … In some cases, debate of a&nbsp;new medium brings about — indeed changes into — heated, emotional reactions … what may be defined as a&nbsp;media panic.”<sup><a id="endnote-022-backlink" href="#endnote-022">22</a></sup> We need to understand that we are in the midst of one of these renegotiations of the norms of public discourse and that the process has tipped over into media panic — one that demonizes social media generically.</p> <p>We can all agree that literacy is a&nbsp;good thing. In the 17th and 18th centuries, however, some people considered literacy’s spread subversive or corrupting. The expansion of literacy from a&nbsp;tiny elite to the general population scared a&nbsp;lot of conservatives. 
It meant not only that more people could read the Bible, but also that they could read radical liberal tracts such as Thomas Paine’s <em>Rights of Man</em>. Those who feared wider literacy believed that it generated conflict and disruption. In fact, it already had. The disintermediation of authority over the interpretation of the written word by the printing press and by wider literacy created centrifugal forces. Protestants had split with Catholics, and later, different Protestant sects formed around different interpretations of scripture. Later in those centuries, the upper class and the religious also complained about sensationalistic printed broadsheets and printed ballads that appealed to the “baser instincts” of the public. Commercial media that responded to what the people wanted were not perceived kindly by those who thought they knew best. Yet are these observations an argument for keeping people illiterate? If not, then what, exactly, do these concerns militate for? A&nbsp;controlled, censored press? A&nbsp;press licensed in “the public interest”? Who in those days would have been made the arbiter of public interest? The Pope? Absolutist kings?</p> <p>Radio broadcasting was an important revolution in mass media technology. It seems to have escaped the intense, concentrated panic we are seeing around contemporary social media, but in the United States, where broadcasting had relatively free and commercial origins, those in power felt threatened by its potential to evolve into an independent medium. Thomas Hazlett has documented the way the 1927 Federal Radio Act and the regulatory commission it created (later to become the Federal Communications Commission) nationalized the airwaves in order to keep the new medium licensed and under the thumb of Congress.<sup><a id="endnote-023-backlink" href="#endnote-023">23</a></sup> Numerous scholarly accounts have shown how the public‐​interest licensing regime erected after the federal takeover of the airwaves led to a&nbsp;systematic exclusion of diverse voices, from socialists to African Americans to labor unions.<sup><a id="endnote-024-backlink" href="#endnote-024">24</a></sup></p> <p>There is another relevant parallel between radio and social media. Totalitarian dictatorships, particularly Nazi Germany, employed radio broadcasting extensively in the 1930s. Those uses, some of which sparked the birth of modern communications effects research, were much scarier than the uses of social media by today’s dictatorships and illiberal democracies. But oddly, our current panic tends to promote and support precisely the types of regulation and control favored by those very same modern dictatorships and illiberal democracies: centralized content moderation and blocking by the state and holding social media platforms responsible for the postings of their users.</p> <p>Comic books generated a&nbsp;media panic in the 1940s and ’50s.<sup><a id="endnote-025-backlink" href="#endnote-025">25</a></sup> A&nbsp;critic of American commercial culture, Fredric Wertham, believed that comic books encouraged juvenile delinquency and subverted the morality of children for the sake of profit. The presence of weirdness, violence, horror, and sexually tinged images led to charges that the comics were dangerous, addictive, and catered to baser instincts. A&nbsp;comic‐​book scare ensued, complete with a&nbsp;flood of newspaper stories, Congressional hearings, and a&nbsp;transformation of the comic book industry.
The comic‐​book scare seems to have pioneered the three themes that characterize so much public discourse around new media in the 20th century: anti‐​commercialism, protecting children, and addiction. All are echoed in the current fight over social media. The same themes sounded in policy battles over television. Television’s status as a&nbsp;cause of violence was debated and researched endlessly. Its pollution of public discourse, the way it “cultivated” inaccurate and harmful stereotypes, and its addictive qualities were constant sources of discussion.<sup><a id="endnote-026-backlink" href="#endnote-026">26</a></sup> Again the similarity to current debates about social media is apparent.</p> <p>In examining historical cases, it becomes apparent that it is the retailers and instigators of media panic who generally pose the biggest threat to free expression and democracy. For at their root, attacks on new media, past and present, are expressions of fear: fear of empowering diverse and dissonant voices, the elites’ fears over losing hegemony over public discourse, and a&nbsp;lack of confidence in the ability of ordinary people to control their “baser instincts” or make sense of competing claims. The more sophisticated variants of these critiques are rationalizations of paternalism and authoritarianism. In the social media panic, we have both conservative <em>and </em>liberal elites recoiling from the prospect of a&nbsp;public sphere over which they have lost control, and both are preparing the way for regulatory mechanisms that can tame diversity, homogenize output, and maintain their established place in society.</p> <h3>What’s Broken?</h3> <p>A recent exchange on Twitter exposed the policy vacuity of those leading the social media moral panic. Kara Swisher, a&nbsp;well‐​known tech journalist with more than a&nbsp;million followers, tweeted to Jack Dorsey, the CEO of Twitter:</p> </div> , <blockquote class="blockquote"> <div> <p>Overall here is my mood and I&nbsp;think a&nbsp;lot of people when it comes to fixing what is broke about social media and tech: Why aren’t you moving faster? Why aren’t you moving faster? Why aren’t you moving faster?<sup><a id="endnote-027-backlink" href="#endnote-027">27</a></sup></p> </div> </blockquote> <cite> </cite> , <div class="text-default"> <p>Swisher’s impatient demand for fast action seemed to assume that the solutions to social media’s ills were obvious. I&nbsp;tweeted in reply, asking what “fix” she wanted to implement so quickly. There was no answer.</p> <p>Here is the diagnosis I&nbsp;would offer. What is “broken” about social media is exactly the same thing that makes it useful, attractive, and commercially successful: it is incredibly effective at facilitating discoveries and exchanges of information among interested parties at unprecedented scale. As a&nbsp;direct result of that, there are more informational interactions than ever before and more mutual exchanges between people. This human activity, in all its glory, gore, and squalor, generates storable, searchable records, and its users leave attributable tracks everywhere. As noted before, the emerging new world of social media is marked by hypertransparency.</p> <p>From the standpoint of free expression and free markets there is nothing inherently broken about this; on the contrary, most of the critics are unhappy precisely <em>because</em> the model is working: it is unleashing all kinds of expression and exchanges, and making tons of money at it to boot. 
But two distinct sociopolitical pathologies are generated by this. The first is that, by exposing all kinds of deplorable uses and users, it tends to funnel outrage at these manifestations of social deviance toward the platform providers. A&nbsp;man discovers pedophiles commenting on YouTube videos of children and is sputtering with rage at … YouTube.<sup><a id="endnote-028-backlink" href="#endnote-028">28</a></sup> The second pathology is the idea that the objectionable behaviors can be engineered out of existence or that society as a&nbsp;whole can be engineered into a&nbsp;state of virtue by encouraging intermediaries to adopt stricter surveillance and regulation. Instead of trying to stop or control the objectionable behavior, we strive to control the communications intermediary that was used by the bad actor. Instead of eliminating the crime, we propose to deputize the intermediary to recognize symbols of the crime and erase them from view. It’s as though we assume that life is a&nbsp;screen, and if we remove unwanted things from our screens by controlling internet intermediaries, then we have solved life’s problems. (And even as we do this, we hypocritically complain about China and its alleged development of an all‐​embracing social credit system based on online interactions.)</p> <p>The reaction against social media is thus based on a&nbsp;false premise and a&nbsp;false promise. The false premise is that the creators of tools that enable public interaction at scale are primarily <em>responsible</em> for the existence of the behaviors and messages so revealed. The false promise is that by pushing the platform providers to block content, eliminate accounts, or otherwise attack manifestations of social problems on their platforms, we are solving or reducing those problems. Combining these misapprehensions, we’ve tried to curb “new” problems by hiding them from public view.</p> <p>The major platforms have contributed to this pathology by taking on ever‐​more‐​extensive content‐​moderation duties. Because of the intense political pressure they are under, the dominant platforms are rapidly accepting the idea that they have overarching social responsibilities to shape user morals and shape public discourse in politically acceptable ways. Inevitably, due to the scale of social media interactions, this means increasingly automated or algorithmic forms of regulation, with all of its rigidities, stupidities, and errors. But it also means massive investments in labor‐​intensive manual forms of moderation.<sup><a id="endnote-029-backlink" href="#endnote-029">29</a></sup></p> <p>The policy debate on this topic is complicated by the fact that internet intermediaries cannot really avoid taking on some optional content regulation responsibilities beyond complying with various laws. Their status as multisided markets that match providers and seekers of information requires it.<sup><a id="endnote-030-backlink" href="#endnote-030">30</a></sup> Recommendations based on machine learning guide users through the vast, otherwise intractable amount of material available. These filters vastly improve the value of a&nbsp;platform to a&nbsp;user, but they also indirectly shape what people see, read, and hear. They can also, as part of their attempts to attract users and enhance the platforms’ value to advertisers, discourage or suppress messages and forms of behavior that make their platforms unpleasant or harmful places.
This form of content moderation is outside the scope of the First Amendment’s legal protections because it is executed by a&nbsp;private actor and falls within the scope of editorial discretion.</p> <h3>What’s the Fix?</h3> <p>Section 230 of the Communications Decency Act squared this circle by immunizing information service providers who did nothing to restrict or censor the communications of the parties using their platforms (the classical “neutral conduit” or common‐​carrier concept), while also immunizing information service providers who assumed some editorial responsibilities (e.g., to restrict pornography and other forms of undesirable content). Intermediaries who did nothing were (supposed to be) immunized in ways that promoted freedom of expression and diversity online; intermediaries who were more active in managing user‐​generated content were immunized to enhance their ability to delete or otherwise monitor “bad” content without being classified as publishers and thus assuming responsibility for the content they did not restrict.<sup><a id="endnote-031-backlink" href="#endnote-031">31</a></sup></p> <p>It is clear that this legal balancing act, which worked so well to make the modern social media platform successful, is breaking down. Section 230 is a&nbsp;victim of its own success. Platforms have become big and successful in part because of their Section 230 freedoms, but as a&nbsp;result they are subject to political and normative pressures that confer upon them de facto responsibility for what their users read, see, and do. The threat of government intervention is either lurking in the background or being realized in certain jurisdictions. Fueled by hypertransparency, political and normative pressures are making the pure, neutral, nondiscriminatory platform a&nbsp;thing of the past.</p> <p>The most common proposals for fixing social media platforms all seem to ask the platforms to engage in <em>more</em> content moderation and to ferret out unacceptable forms of expression or behavior. The political demand for more‐​aggressive content moderation comes primarily from a&nbsp;wide variety of groups seeking to suppress specific kinds of content that is objectionable to them. Those who want less control or more toleration suffer from the diffuse costs/​concentrated benefit problem familiar to us from the economic analysis of special interest groups: that is, toleration benefits everyone a&nbsp;little and its presence is barely noticeable until it is lost; suppression, on the other hand, offers powerful and immediate satisfaction to a&nbsp;few highly motivated actors.<sup><a id="endnote-032-backlink" href="#endnote-032">32</a></sup></p> <p>At best, reformers propose to rationalize content moderation in ways designed to make its standards clearer, make their application more consistent, and make an appeals process possible.<sup><a id="endnote-033-backlink" href="#endnote-033">33</a></sup> Yet this is unlikely to work unless platforms get the backbone to strongly assert their rights to set the criteria, stick to them, and stop constantly adjusting them based on the vagaries of daily political pressures. At worst, advocates of more content moderation are motivated by a&nbsp;belief that greater content control will reflect their own personal values and priorities. But since calls for tougher or more extensive content moderation come from all ideological and cultural directions, this expectation is unrealistic. 
It will only lead to a&nbsp;distributed form of the heckler’s veto, and a&nbsp;complete absence of predictable, relatively objective standards. It is not uncommon for outrage at social media to lead in contradictory directions. A&nbsp;reporter for <em>The Guardian</em>, for example, is outraged that Facebook has an ad‐​targeting category for “vaccine controversies” and flogs the company for allowing anti‐​vaccination advocates to form closed groups that can reinforce those members’ resistance to mainstream medical care.<sup><a id="endnote-034-backlink" href="#endnote-034">34</a></sup> However, there is no way for Facebook to intervene without profiling its users as part of a&nbsp;specific political movement deemed to be wrong, and then suppressing their communications and their ability to associate based on that data. So, at the same time Facebook is widely attacked for privacy violations, it is also being asked to leverage its private user data to flag political and social beliefs that are deemed aberrant and to suppress users’ ability to associate, connect with advertisers, or communicate among themselves. In this combination of surveillance and suppression, what could possibly go wrong?</p> <p>What stance should advocates of both free expression and free markets take with respect to social media?</p> <p>First, there needs to be a&nbsp;clearer articulation of the tremendous value of platforms based on their ability to match seekers and providers of information. There also needs to be explicit advocacy for greater tolerance of the jarring diversity revealed by these processes. True liberals need to make it clear that social media platforms cannot be expected to bear the main responsibility for sheltering us from ideas, people, messages, and cultures that we consider wrong or that offend us. Most of the responsibility for what we see and what we avoid should lie with us. If we are outraged by seeing things we don’t like in online communities composed of billions of people, we need to stop misdirecting that outrage against the platforms that happen to expose us to them. Likewise, if the exposed behavior is illegal, we need to focus on identifying the perpetrators and holding them accountable. As a&nbsp;corollary of this attitudinal change, we also need to show that the hypertransparency fostered by social media can have great social value. As a&nbsp;simple example of this, research has shown that the much‐​maligned rise of platforms matching female sex workers with clients is statistically correlated with a&nbsp;decrease in violence against women — precisely because it took sex work off the street and made transactions more visible and controllable.<sup><a id="endnote-035-backlink" href="#endnote-035">35</a></sup></p> <p>Second, free‐​expression supporters need to actively challenge those who want content moderation to go further. We need to expose the fact that they are using social media as a&nbsp;means of reforming and reshaping society, wielding it like a&nbsp;hammer against norms and values they want eradicated from the world. These viewpoints are leading us down an authoritarian blind alley. They may very well succeed in suppressing and crippling the freedom of digital media, but they will not, and cannot, succeed in improving society. Instead, they will make social media platforms battlegrounds for a&nbsp;perpetually intensifying conflict over who gets to silence whom. 
This is already abundantly clear from the cries of discrimination and bias as the platforms ratchet up content moderation: the cries come from both the left and the right in response to moderation that is often experienced as arbitrary.</p> <p>Finally, we need to mount a&nbsp;renewed and reinvigorated defense of Section 230. The case for Section 230 is simple: no alternative promises to be intrinsically better than what we have now, and most alternatives are likely to be worse. The exaggerations generated by the moral panic have obscured the simple fact that moderating content on a&nbsp;global platform with billions of users is an extraordinarily difficult and demanding task. Users, not platforms, are the source of messages, videos, and images that people find objectionable, so calls for regulation ignore the fact that regulations don’t govern a&nbsp;single supplier, but must govern millions, and maybe billions, of users. The task of flagging user‐​generated content, considering it, and deciding what to do about it is difficult and expensive, and it is best left to the platforms.</p> <p>However, regulation seems to be coming. Facebook CEO Mark Zuckerberg has published a&nbsp;blog post calling for regulating the internet, and the UK government has released a&nbsp;white paper, “Online Harms,” that proposes the imposition of systematic liability for user‐​generated content on all internet intermediaries (including hosting companies and internet service providers).<sup><a id="endnote-036-backlink" href="#endnote-036">36</a></sup></p> <p>At best, a&nbsp;system of content regulation influenced by government is going to look very much like what is happening now. Government‐​mandated standards for content moderation would inevitably put most of the responsibility for censorship on the platforms themselves. Even in China, with its army of censors, the operationalization of censorship relies heavily on the platform operators. In the tsunami of content unleashed by social media, prior restraint by the state is not really an option. Germany followed the same pattern with the 2017 Netzwerkdurchsetzungsgesetz, or Network Enforcement Act (popularly known as NetzDG or the Facebook Act), a&nbsp;law aimed at combating agitation, hate speech, and fake news in social networks.</p> <p>The NetzDG law immediately resulted in suppression of various forms of politically controversial online speech. Joachim Steinhöfel, a&nbsp;German lawyer concerned by Facebook’s essentially jurisprudential role under NetzDG, created a “wall of shame” containing legal content suppressed under NetzDG.<sup><a id="endnote-037-backlink" href="#endnote-037">37</a></sup> Ironically, German right‐​wing nationalists who suffered takedowns under the new law turned the law to their advantage by using it to suppress critical or demeaning comments about themselves. “Germany’s attempt to regulate speech online has seemingly amplified the voices it was trying to diminish,” claims an article in <em>The Atlantic</em>.<sup><a id="endnote-038-backlink" href="#endnote-038">38</a></sup> As a&nbsp;result of one right‐​wing politician’s petition, Facebook must ensure that individuals in Germany cannot use a&nbsp;VPN to access illegal content. Still, a&nbsp;report by an anti‐​hate‐​speech group that supports the law argues that it has been ineffective. 
“There have been no fines imposed on companies and little change in overall takedown rates.”<sup><a id="endnote-039-backlink" href="#endnote-039">39</a></sup></p> <p>Abandoning intermediary immunities would make the platforms even more conservative and more prone to disable accounts or take down content than they are now. In terms of costs and legal risks, it would make sense for them to err on the safe side. When intermediaries are given legal responsibility, conflicts about arbitrariness and false positives don’t go away; they intensify. In authoritarian countries, platforms would merely be indirect implementers of national censorship standards and laws.</p> <p>On the other hand, U.S. politicians face a&nbsp;unique and interesting dilemma. If they think they can capitalize on social media’s travails with calls for regulation, they must understand that governmental involvement in content regulation would have to conform to the First Amendment. This would mean that all kinds of content that many users don’t want to see, ranging from hate speech to various levels of nudity, could no longer be restricted because they are <em>not</em> strictly illegal. Any government interventions that took down postings or deleted accounts could be litigated based on a&nbsp;First Amendment standard. Ironically, then, a&nbsp;governmental takeover of content regulation responsibilities in the United States would have to be far more liberal than the status quo. Avoidance of this outcome was precisely why Section 230 was passed in the first place.</p> <p>From a&nbsp;pure free‐​expression standpoint, a&nbsp;First Amendment approach would be a&nbsp;good thing. But from a&nbsp;free‐​association and free‐​market standpoint, it would not. Such a&nbsp;policy would force all social media users to be exposed to things they didn’t want to be exposed to. It would undermine the economic value of platforms by gutting their ability to manage their matching algorithms, shape their environment, and optimize the tradeoffs of a&nbsp;multisided market. Given the current hue and cry about all the bad things people are seeing and doing on social media, a&nbsp;legally driven, permissive First Amendment standard does not seem like it would make anyone happy.</p> <p>Advocates of expressive freedom, therefore, need to reassert the importance of Section 230. Platforms, not the state, should be responsible for finding the optimal balance between content moderation, freedom of expression, and the economic value of platforms. The alternative of greater government regulation would absolve the platforms of market responsibility for their decisions. It would eliminate competition among platforms for appropriate moderation standards and practices and would probably lead them to exclude and suppress even more legal speech than they do now.</p> <h2>Conclusion</h2> <p>Content regulation is only the most prominent of the issues faced by social media platforms today; they are also implicated in privacy and competition‐​policy controversies. But social media content regulation has been the exclusive focus of this analysis. Hypertransparency and the subsequent demand for content control it creates are the key drivers of the new media moral panic. The panic is feeding upon itself, creating conditions for policy reactions that overlook or openly challenge values regarding free expression and free enterprise. 
While there is a&nbsp;lot to dislike about Facebook and other social media platforms, it’s time we realized that a&nbsp;great deal of that negative reaction stems from an information society contemplating manifestations of itself. It is not an exaggeration to say that we are blaming the mirror for what we see in it. Section 230 is still surprisingly relevant to this dilemma. As a&nbsp;policy, Section 230 was not a&nbsp;form of infant industry protection that we can dispense with now, nor was it a&nbsp;product of a&nbsp;utopian inebriation with the potential of the internet. It was a&nbsp;very clever way of distributing responsibility for content governance in social media. If we stick with this arrangement, learn more tolerance, and take more responsibility for what we see and do on social media, we can respond to the problems while retaining the benefits.</p> <h2>Notes</h2> <p><sup><a id="endnote-001" href="#endnote-001-backlink">1</a></sup> Milton L. Mueller, “Hyper‐​transparency and Social Control: Social Media as Magnets for Regulation,” <em>Telecommunications Policy</em> 39, no. 9 (2015): 804–10.</p> <p><sup><a id="endnote-002" href="#endnote-002-backlink">2</a></sup> Erich Goode and Nachman Ben‐​Yehuda, “Grounding and Defending the Sociology of Moral Panic,” chap. 2&nbsp;in <em>Moral Panic and the Politics of Anxiety</em>, ed. Sean Patrick Hier (Abingdon: Routledge, 2011).</p> <p><sup><a id="endnote-003" href="#endnote-003-backlink">3</a></sup> Stanley Cohen, <em>Folk Devils and Moral Panics </em>(Abingdon: Routledge, 2011).</p> <p><sup><a id="endnote-004" href="#endnote-004-backlink">4</a></sup> Ronald J. Deibert, “The Road to Digital Unfreedom: Three Painful Truths about Social Media,”<em> Journal of Democracy</em> 30, no. 1 (2019): 25–39.</p> <p><sup><a id="endnote-005" href="#endnote-005-backlink">5</a></sup> Zeynep Tufekci, “YouTube, the Great Radicalizer,” <em>New York Times</em>, March 10, 2018.</p> <p><sup><a id="endnote-006" href="#endnote-006-backlink">6</a></sup> Tufekci, “YouTube, the Great Radicalizer.”</p> <p><sup><a id="endnote-007" href="#endnote-007-backlink">7</a></sup> Roger McNamee, “I Mentored Mark Zuckerberg. I&nbsp;Loved Facebook. 
But I&nbsp;Can’t Stay Silent about What’s Happening,” <em>Time Magazine</em>, January 17, 2019.</p> <p><sup><a id="endnote-008" href="#endnote-008-backlink">8</a></sup> Jonathan Albright, “Untrue‐​Tube: Monetizing Misery and Disinformation,” <em>Medium</em>, February 25, 2018.</p> <p><sup><a id="endnote-009" href="#endnote-009-backlink">9</a></sup> Courtney Seiter, “The Psychology of Social Media: Why We Like, Comment, and Share Online,” <em>Buffer</em>, August 20, 2017.</p> <p><sup><a id="endnote-010" href="#endnote-010-backlink">10</a></sup> Paul Mozur, “A Genocide Incited on Facebook, With Posts from Myanmar’s Military,” <em>New York Times</em>, October 15, 2018.</p> <p><sup><a id="endnote-011" href="#endnote-011-backlink">11</a></sup> Ingrid Burrington, “Could Facebook Be Tried for Human‐​Rights Abuses?,” <em>The Atlantic</em>, December 20, 2017.</p> <p><sup><a id="endnote-012" href="#endnote-012-backlink">12</a></sup> Burrington, “Could Facebook Be Tried for Human‐​Rights Abuses?”</p> <p><sup><a id="endnote-013" href="#endnote-013-backlink">13</a></sup> For a&nbsp;discussion of Michael Flynn’s lobbying campaign for the Turkish government and Paul Manafort’s business in Ukraine and Russia, see Rebecca Kheel, “Turkey and Michael Flynn: Five Things to Know,” <em>The Hill</em>, December 17, 2018; and Franklin Foer, “Paul Manafort, American Hustler,” <em>The Atlantic</em>, March 2018.</p> <p><sup><a id="endnote-014" href="#endnote-014-backlink">14</a></sup> See, for example, “Minority Views to the Majority‐​produced ‘Report on Russian Active Measures, March 22, 2018’” of the Democratic representatives from the United States House Permanent Select Committee on Intelligence (USHPSCI), March 26, 2018.</p> <p><sup><a id="endnote-015" href="#endnote-015-backlink">15</a></sup> Indictment at 11, <em>U.S. v. Viktor Borisovich Netyksho et al</em>., Case 1:18-cr-00032-DLF (D.D.C. filed Feb. 16, 2018).</p> <p><sup><a id="endnote-016" href="#endnote-016-backlink">16</a></sup> Matt Taibbi, “Can We Be Saved from Facebook?,” <em>Rolling Stone</em>, April 3, 2018.</p> <p><sup><a id="endnote-017" href="#endnote-017-backlink">17</a></sup> Peter W. Singer and Emerson T. Brooking, <em>LikeWar: The Weaponization of Social Media</em> (New York: Houghton Mifflin Harcourt, 2018).</p> <p><sup><a id="endnote-018" href="#endnote-018-backlink">18</a></sup> Thomas Rid, “Why Twitter Is the Best Social Media Platform for Disinformation,” <em>Motherboard</em>, November 1, 2017.</p> <p><sup><a id="endnote-019" href="#endnote-019-backlink">19</a></sup> McNamee, “I Mentored Mark Zuckerberg. I&nbsp;Loved Facebook. But I&nbsp;Can’t Stay Silent about What’s Happening.”</p> <p><sup><a id="endnote-020" href="#endnote-020-backlink">20</a></sup> Hunt Allcott and Matthew Gentzkow, “Social Media and Fake News in the 2016 Election,” <em>Journal of Economic Perspectives</em> 31, no. 2 (2017): 211–36.</p> <p><sup><a id="endnote-021" href="#endnote-021-backlink">21</a></sup> Sarah McKune, “An Analysis of the International Code of Conduct for Information Security,” CitizenLab, September 28, 2015.</p> <p><sup><a id="endnote-022" href="#endnote-022-backlink">22</a></sup> Kirsten Drotner, “Dangerous Media? Panic Discourses and Dilemmas of Modernity,” <em>Paedagogica Historica</em> 35, no. 3 (1999): 593–619.</p> <p><sup><a id="endnote-023" href="#endnote-023-backlink">23</a></sup> Thomas W. Hazlett, “The Rationality of US Regulation of the Broadcast Spectrum,”<em> Journal of Law and Economics</em> 33, no. 
1 (1990): 133–75.</p> <p><sup><a id="endnote-024" href="#endnote-024-backlink">24</a></sup> Robert McChesney, <em>Telecommunications, Mass Media and Democracy: The Battle for Control of U.S. Broadcasting, 1928–1935</em> (New York: Oxford University Press, 1995).</p> <p><sup><a id="endnote-025" href="#endnote-025-backlink">25</a></sup> Fredric Wertham, <em>Seduction of the Innocent</em> (New York: Rinehart, 1954); and David Hajdu, <em>The Ten‐​Cent Plague: The Great Comic‐​Book Scare and How It Changed America</em> (New York: Picador, 2009), <a href="https://us.macmillan.com/books/9780312428235">https://​us​.macmil​lan​.com/​b​o​o​k​s​/​9​7​8​0​3​1​2​4​28235</a>.</p> <p><sup><a id="endnote-026" href="#endnote-026-backlink">26</a></sup> “Like drug dealers on the corner, [TV broadcasters] control the life of the neighborhood, the home and, increasingly, the lives of children in their custody,” claimed a&nbsp;former FCC commissioner. Newton N. Minow and Craig L. LaMay, <em>Abandoned in the Wasteland</em> (New York: Hill and Wang, 1995), <a href="http://www.washingtonpost.com/wp-srv/style/longterm/books/chap1/abandonedinthewasteland.htm">http://​www​.wash​ing​ton​post​.com/​w​p​-​s​r​v​/​s​t​y​l​e​/​l​o​n​g​t​e​r​m​/​b​o​o​k​s​/​c​h​a​p​1​/​a​b​a​n​d​o​n​e​d​i​n​t​h​e​w​a​s​t​e​l​a​n​d.htm</a>.</p> <p><sup><a id="endnote-027" href="#endnote-027-backlink">27</a></sup> Kara Swisher (@karaswisher), “Overall here is my mood and I&nbsp;think a&nbsp;lot of people when it comes to fixing what is broke about social media and tech: Why aren’t you moving faster? Why aren’t you moving faster? Why aren’t you moving faster?” Twitter post, February 12, 2019, 2:03 p.m., <a href="https://twitter.com/karaswisher/status/1095443416148787202">https://​twit​ter​.com/​k​a​r​a​s​w​i​s​h​e​r​/​s​t​a​t​u​s​/​1​0​9​5​4​4​3​4​1​6​1​4​8​7​87202</a>.</p> <p><sup><a id="endnote-028" href="#endnote-028-backlink">28</a></sup> Matt Watson, “YouTube Is Facilitating the Sexual Exploitation of Children, and It’s Being Monetized,” YouTube video, 20:47, “MattsWhatItIs,” February 27, 2019, <a href="https://www.youtube.com/watch?v=O13G5A5w5P0">https://​www​.youtube​.com/​w​a​t​c​h​?​v​=​O​1​3​G​5​A​5w5P0</a>.</p> <p><sup><a id="endnote-029" href="#endnote-029-backlink">29</a></sup> Casey Newton, “The Trauma Floor: The Secret Lives of Facebook Moderators in America,” <em>The Verge</em>, February 25, 2019.</p> <p><sup><a id="endnote-030" href="#endnote-030-backlink">30</a></sup> Geoff Parker, Marshall van Alstyne, and Sangeet Choudhary, <em>Platform Revolution</em> (New York: W. W. Norton, 2016).</p> <p><sup><a id="endnote-031" href="#endnote-031-backlink">31</a></sup> The Court in <em>Zeran v. America Online, Inc.</em>, 129&nbsp;F.3d 327 (4th Cir. 1997), said Sec. 230 was passed to “remove the disincentives to self‐​regulation created by the <em>Stratton Oakmont</em> decision.” In <em>Stratton Oakmont, Inc. v. Prodigy Services Co.</em> (N.Y. Sup. Ct. 1995), a&nbsp;bulletin‐​board provider was held responsible for defamatory remarks by one of its customers because it made efforts to edit some of the posted content.</p> <p><sup><a id="endnote-032" href="#endnote-032-backlink">32</a></sup> Robert D. Tollison, “Rent Seeking: A&nbsp;Survey,” <em>Kyklos</em> 35, no. 
4 (1982): 575–602.</p> <p><sup><a id="endnote-033" href="#endnote-033-backlink">33</a></sup> See, for example, the “Santa Clara Principles on Transparency and Accountability in Content Moderation,” May 8, 2018, <a href="https://santaclaraprinciples.org/">https://​san​taclara​prin​ci​ples​.org/</a>.</p> <p><sup><a id="endnote-034" href="#endnote-034-backlink">34</a></sup> Julia Carrie Wong, “Revealed: Facebook Enables Ads to Target Users Interested in ‘Vaccine Controversies’,” <em>The Guardian </em>(London), February 15, 2019.</p> <p><sup><a id="endnote-035" href="#endnote-035-backlink">35</a></sup> See Scott Cunningham, Gregory DeAngelo, and John Tripp, “Craigslist’s Effect on Violence against Women,” <a href="http://scunning.com/craigslist110.pdf">http://​scun​ning​.com/​c​r​a​i​g​s​l​i​s​t​1​1​0.pdf</a> (2017). See also Emily Witt, “After the Closure of Backpage, Increasingly Vulnerable Sex Workers Are Demanding Their Rights,”<em> New Yorker</em>, June 8, 2018.</p> <p><sup><a id="endnote-036" href="#endnote-036-backlink">36</a></sup> Mark Zuckerberg, “Four Ideas to Regulate the Internet,” March 30, 2019; and UK Home Office, Department for Digital, Culture, Media &amp;&nbsp;Sport, <em>Online Harms White Paper</em>, The Rt Hon. Sajid Javid MP, The Rt Hon. Jeremy Wright MP, April 8, 2019.</p> <p><sup><a id="endnote-037" href="#endnote-037-backlink">37</a></sup> Joachim Nikolaus Steinhöfel, “Blocks &amp;&nbsp;Hate Speech–Insane Censorship &amp;&nbsp;Arbitrariness from FB,” Facebook Block — Wall of Shame, <a href="https://facebook-sperre.steinhoefel.de/">https://​face​book​-sperre​.stein​hoe​fel​.de/</a>.</p> <p><sup><a id="endnote-038" href="#endnote-038-backlink">38</a></sup> Linda Kinstler, “Germany’s Attempt to Fix Facebook Is Backfiring,” <em>The Atlantic</em>, May 18, 2018.</p> <p><sup><a id="endnote-039" href="#endnote-039-backlink">39</a></sup> William Echikson and Olivia Knodt, “Germany’s NetzDG: A&nbsp;Key Test for Combatting Online Hate,” CEPS Research Report no. 2018/09, November 2018.</p> </div> Tue, 23 Jul 2019 03:00:00 -0400 Milton Mueller https://www.cato.org/publications/policy-analysis/challenging-social-media-moral-panic-preserving-free-expression-under Artificial Intelligence and Counterterrorism: Possibilities and Limitations https://www.cato.org/publications/testimony/artificial-intelligence-counterterrorism-possibilities-limitations Julian Sanchez <div class="lead text-default"> <p>My thanks to the chair, ranking member, and all members of this subcommittee for the opportunity to speak to you today.</p> </div> , <div class="text-default"> <p>As a&nbsp;firm believer in the principle of comparative advantage, I&nbsp;don’t intend to delve too deeply into the technical details of automated content filtering, which my copanelists are far better suited than I&nbsp;to address. Instead I&nbsp;want to focus on legal and policy considerations, and above all to urge Congress to resist the temptation to intervene in the highly complex — and admittedly highly imperfect — processes by which private online platforms seek to moderate both content related to terrorism and “hateful” or otherwise objectionable speech more broadly. 
(My colleague at the Cato Institute, John Samples, recently published a&nbsp;policy paper dealing still more broadly with issues surrounding regulation of content moderation policies, which I&nbsp;can enthusiastically recommend to the committee’s attention.<a id="#refer-1" href="#endnote-1"><sup>1</sup></a>)</p> <p>The major social media platforms all engage, to varying degrees, in extensive monitoring of user‐​posted content via a&nbsp;combination of human and automated review, with the aim of restricting a&nbsp;wide array of speech those platforms deem objectionable, typically including nudity, individual harassment, and — more germane to our subject today — the promotion of extremist violence and, more broadly, hateful speech directed at specific groups on the basis of race, gender, religion, or sexuality. In response to public criticism, these platforms have in recent years taken steps to crack down more aggressively on hateful and extremist speech, investing in larger teams of human moderators and more sophisticated algorithmic tools designed to automatically flag such content.<a id="#refer-2" href="#endnote-2"><sup>2</sup></a></p> <p>Elected officials and users of these platforms are often dissatisfied with these efforts — both with the speed and efficacy of content removal and the scope of individual platforms’ policies. Yet it is clear that all the major platforms’ policies go far further in restricting speech than would be permissible under our Constitution via state action. The First Amendment protects hate speech. The Supreme Court has ruled in favor of the constitutional right of American neo‐​Nazis to march in public brandishing swastikas<a id="#refer-3" href="#endnote-3"><sup>3</sup></a>, and of a&nbsp;hate group to picket outside the funerals of veterans while displaying incredibly vile homophobic and anti‐​military slogans.<a id="#refer-4" href="#endnote-4"><sup>4</sup></a></p> <p>While direct threats and speech that is both intended and likely to incite “imminent” violence fall outside the ambit of the First Amendment, Supreme Court precedent distinguishes such speech from “the mere abstract teaching … of the moral propriety or even moral necessity for a&nbsp;resort to force and violence,”<a id="#refer-5" href="#endnote-5"><sup>5</sup></a> which remains protected. Unsurprisingly, in light of this case law, a&nbsp;recent Congressional Research Service report found that “laws that criminalize the dissemination of the pure advocacy of terrorism, without more, would likely be deemed unconstitutional.”<a id="#refer-6" href="#endnote-6"><sup>6</sup></a></p> <p>Happily — at least, as far as most users of social media are concerned — the First Amendment does not bind private firms like YouTube, Twitter, or Facebook, leaving them with a&nbsp;much freer hand to restrict offensive content that our Constitution forbids the law from reaching. The Supreme Court reaffirmed that principle just this month, in a&nbsp;case involving a&nbsp;public access cable channel in New York. Yet as the Court noted in that decision, this applies only when private determinations to restrict content are truly private. 
They may be subject to First Amendment challenge if the private entity in question is functioning as a “state actor” — which can occur “when the government compels the private entity to take a&nbsp;particular action” or “when the government acts jointly with the private entity.”<a id="#refer-7" href="#endnote-7"><sup>7</sup></a></p> <p>Perversely, then, legislative efforts to compel more aggressive removal of hateful or extremist content risk producing the opposite of the intended result. Content moderation decisions that are clearly lawful as an exercise of purely private discretion could be recast as government censorship, opening the door to legal challenge. Should the courts determine that legislative mandates had rendered First Amendment standards applicable to online platforms, the ultimate result would almost certainly be more hateful and extremist speech on those platforms.</p> <p>Bracketing legal considerations for the moment, it is also important to recognize that the ability of algorithmic tools to accurately identify hateful or extremist content is not as great as is commonly supposed. Last year, Facebook boasted that its automated filter detected 99.5 percent of the terrorist‐​related content the company removed before it was posted, with the remainder flagged by users.<a id="#refer-8" href="#endnote-8"><sup>8</sup></a> Many press reports subtly misconstrued this claim. The <em>New York Times</em>, for example, wrote that Facebook’s “A.I. found 99.5 percent of terrorist content on the site.”<a id="#refer-9" href="#endnote-9"><sup>9</sup></a> That, of course, is a&nbsp;very different proposition: Facebook’s claim concerned the ratio of content removed after being flagged as terror‐​related by automated tools versus human reporting, which should be unsurprising given that software can process vast amounts of content far more quickly than human brains. It is <em>not</em> the claim that software filters successfully detected 99.5 percent of all terror‐​related content uploaded to the site — which would be impossible to substantiate since, by definition, content not detected by either mechanism is omitted from the calculus. Nor does it tell us much about the false‐​positive ratio: how much content was misidentified as terror‐​related, or how often such content appeared in the context of posts either reporting on or condemning terrorist activities.</p> <p>There is ample reason to believe that such false positives impose genuine social cost. Algorithms may be able to determine that a&nbsp;post contains images of extremist content, but they are far less adept at reading contextual cues to determine whether the purpose of the post is to glorify violence, condemn it, or merely document it — something that may in certain cases even be ambiguous to a&nbsp;human observer. 
Journalists and human rights activists, for example, have complained that tech company crackdowns on violent extremist videos have inadvertently frustrated efforts to document human rights violations<a id="#refer-10" href="#endnote-10"><sup>10</sup></a>, and erased evidence of war crimes in Syria.<a id="#refer-11" href="#endnote-11"><sup>11</sup></a></p> <p>Just this month, a&nbsp;YouTube crackdown on white supremacist content resulted in the removal of a&nbsp;large number of historical videos posted by educational institutions, and by anti‐​racist activist groups dedicated to documenting and condemning hate speech.<a id="#refer-12" href="#endnote-12"><sup>12</sup></a></p> <p>Of course, such errors are often reversed by human reviewers — at least when the groups affected have enough know‐​how and public prestige to compel a&nbsp;reconsideration. Government mandates, however, alter the calculus. As three United Nations special rapporteurs wrote, objecting to a&nbsp;proposal in the European Union to require automated filtering, the threat of legal penalties was “likely to incentivize platforms to err on the side of caution and remove content that is legitimate or lawful.”<a id="#refer-13" href="#endnote-13"><sup>13</sup></a> If the failure to filter to the government’s satisfaction risks stiff fines, any cost‐​benefit analysis for platforms will favor significant overfiltering: Better to pull down ten benign posts than risk leaving up one that might expose them to penalties. For precisely this reason, the EU proposal has been roundly condemned by human rights activists<a id="#refer-14" href="#endnote-14"><sup>14</sup></a> and fiercely opposed by a&nbsp;wide array of civil society groups.<a id="#refer-15" href="#endnote-15"><sup>15</sup></a></p> <p>A recent high‐​profile case illustrates the challenges platforms face: the effort to restrict circulation of video depicting the brutal mass shooting of worshippers at a&nbsp;mosque in Christchurch, New Zealand. Legal scholar Kate Klonick documented the efforts of Facebook’s content moderation team for <em>The New Yorker</em><a id="#refer-16" href="#endnote-16"><sup>16</sup></a>, while reporters Elizabeth Dwoskin and Craig Timberg wrote about the parallel struggles of YouTube’s team for <em>The Washington Post</em><a id="#refer-17" href="#endnote-17"><sup>17</sup></a> — both accounts are illuminating and well worth reading.</p> <p>Though both companies were subject to vigorous condemnation by elected officials for failing to limit the video quickly or comprehensively enough, the published accounts make clear this was not for want of trying. Teams of engineers and moderators at both platforms worked around the clock to stop the spread of the video by increasingly aggressive means. Automated detection tools, however, were often frustrated by countermeasures employed by uploaders, who continuously modified the video until it could pass through the filters. This serves as a&nbsp;reminder that even if automated detection tools prove relatively effective at any given time, they are in a&nbsp;perennial arms race with determined humans probing for algorithmic blind spots.<a id="#refer-18" href="#endnote-18"><sup>18</sup></a> There was also the problem of users who had — perhaps misguidedly — uploaded parts of the video in order to condemn the savagery of the attack and evoke sympathy for the victims. 
Here, the platforms made a&nbsp;difficult real‐​time value judgment that the balance of equities favored an aggressive posture: categorical prohibition of the content regardless of context or intent, coupled with tight restrictions on searching and sharing of recently uploaded video.</p> <p>Both the decisions the firms made and the speed and adequacy with which they implemented them in a&nbsp;difficult circumstance will be — and should be — subject to debate and criticism. But it would be a&nbsp;grave error to imagine that broad legislative mandates are likely to produce better results than such context‐​sensitive judgments, or that smart software will somehow obviate the need for a&nbsp;difficult and delicate balancing of competing values.</p> <p>I thank the committee again for the opportunity to testify, and look forward to your questions.</p> <h2>Notes</h2> <p><a href="#refer-1" id="#endnote-1"><sup>1</sup></a> John Samples, “Why the Government Should Not Regulate Content Moderation of Social Media” (Cato Institute) <a href="https://www.cato.org/publications/policy-analysis/whygovernment-should-not-regulate-content-moderation-social-media#full">https://​www​.cato​.org/​p​u​b​l​i​c​a​t​i​o​n​s​/​p​o​l​i​c​y​-​a​n​a​l​y​s​i​s​/​w​h​y​g​o​v​e​r​n​m​e​n​t​-​s​h​o​u​l​d​-​n​o​t​-​r​e​g​u​l​a​t​e​-​c​o​n​t​e​n​t​-​m​o​d​e​r​a​t​i​o​n​-​s​o​c​i​a​l​-​m​e​d​i​a​#full</a></p> <p><a href="#refer-2" id="#endnote-2"><sup>2</sup></a> See, e.g., Kent Walker, “Four steps we’re taking today to fight terrorism online,” Google (June 18, 2017) <a href="https://www.blog.google/around-the-globe/google-europe/four-stepswere-taking-today-fight-online-terror/">https://www.blog.google/around-the-globe/google-europe/four-stepswere-taking-today-fight-online-terror/</a>; Monika Bickert and Brian Fishman, “Hard Questions: What Are We Doing to Stay Ahead of Terrorists?,” Facebook (November 8, 2018) <a href="https://newsroom.fb.com/news/2018/11/staying-ahead-of-terrorists/">https://​news​room​.fb​.com/​n​e​w​s​/​2​0​1​8​/​1​1​/​s​t​a​y​i​n​g​-​a​h​e​a​d​-​o​f​-​t​e​r​r​o​r​ists/</a>; “Terrorism and violent extremism policy,” Twitter (March 2019) <a href="https://help.twitter.com/en/rulesand-policies/violent-groups">https://​help​.twit​ter​.com/​e​n​/​r​u​l​e​s​a​n​d​-​p​o​l​i​c​i​e​s​/​v​i​o​l​e​n​t​-​g​roups</a></p> <p><a href="#refer-3" id="#endnote-3"><sup>3</sup></a> <em>National Socialist Party of America v. Village of Skokie</em>, 432 U.S. 43 (1977)</p> <p><a href="#refer-4" id="#endnote-4"><sup>4</sup></a> <em>Snyder v. Phelps</em>, 562 U.S. 443 (2011)</p> <p><a href="#refer-5" id="#endnote-5"><sup>5</sup></a> <em>Brandenburg v. Ohio</em>, 395 U.S. 444 (1969)</p> <p><a href="#refer-6" id="#endnote-6"><sup>6</sup></a> Kathleen Anne Ruane, “The Advocacy of Terrorism on the Internet: Freedom of Speech Issues and the Material Support Statutes,” Congressional Research Service Report R44626 (September 8, 2016) <a href="https://fas.org/sgp/crs/terror/R44626.pdf">https://​fas​.org/​s​g​p​/​c​r​s​/​t​e​r​r​o​r​/​R​4​4​6​2​6.pdf</a></p> <p><a href="#refer-7" id="#endnote-7"><sup>7</sup></a> <em>Manhattan Community Access Corp. v. 
Halleck</em>, No. 17–1702 (2019)</p> <p><a href="#refer-8" id="#endnote-8"><sup>8</sup></a> Alex Schultz and Guy Rosen, “Understanding the Facebook Community Standards Enforcement Report,” <a href="https://fbnewsroomus.files.wordpress.com/2018/05/understanding_the_community_standards_enforcement_report.pdf">https://​fbnews​roomus​.files​.word​press​.com/​2​0​1​8​/​0​5​/​u​n​d​e​r​s​t​a​n​d​i​n​g​_​t​h​e​_​c​o​m​m​u​n​i​t​y​_​s​t​a​n​d​a​r​d​s​_​e​n​f​o​r​c​e​m​e​n​t​_​r​e​p​o​r​t.pdf</a></p> <p><a href="#refer-9" id="#endnote-9"><sup>9</sup></a> Sheera Frenkel, “Facebook Says It Deleted 865 Million Posts, Mostly Spam,” <em>New York Times</em> (May 15, 2018). <a href="https://www.nytimes.com/2018/05/15/technology/facebook-removal-posts-fakeaccounts.html">https://​www​.nytimes​.com/​2​0​1​8​/​0​5​/​1​5​/​t​e​c​h​n​o​l​o​g​y​/​f​a​c​e​b​o​o​k​-​r​e​m​o​v​a​l​-​p​o​s​t​s​-​f​a​k​e​a​c​c​o​u​n​t​s​.html</a></p> <p><a href="#refer-10" id="#endnote-10"><sup>10</sup></a> Dia Kayyali and Raja Althaibani, “Vital Human Rights Evidence in Syria Is Disappearing from YouTube,” <a href="https://blog.witness.org/2017/08/vital-human-rightsevidence-syria-disappearing-youtube/">https://​blog​.wit​ness​.org/​2​0​1​7​/​0​8​/​v​i​t​a​l​-​h​u​m​a​n​-​r​i​g​h​t​s​e​v​i​d​e​n​c​e​-​s​y​r​i​a​-​d​i​s​a​p​p​e​a​r​i​n​g​-​y​o​u​tube/</a></p> <p><a href="#refer-11" id="#endnote-11"><sup>11</sup></a> Bernhard Warner, “Tech Companies Are Deleting Evidence of War Crimes,” <em>The Atlantic</em> (May 8, 2019). <a href="https://www.theatlantic.com/ideas/archive/2019/05/facebookalgorithms-are-making-it-harder/588931/">https://​www​.the​at​lantic​.com/​i​d​e​a​s​/​a​r​c​h​i​v​e​/​2​0​1​9​/​0​5​/​f​a​c​e​b​o​o​k​a​l​g​o​r​i​t​h​ms-ar…</a></p> <p><a href="#refer-12" id="#endnote-12"><sup>12</sup></a> Elizabeth Dwoskin, “How YouTube Erased History in Its Battle Against White Supremacy,” <em>Washington Post</em> (June 13, 2019). 
<a href="https://www.washingtonpost.com/technology/2019/06/13/how-youtube-erased-historyits-battle-against-white-supremacy/?utm_term=.e5391be45aa2">https://​www​.wash​ing​ton​post​.com/​t​e​c​h​n​o​l​o​g​y​/​2​0​1​9​/​0​6​/​1​3​/​h​o​w​-​y​o​u​t​u​b​e​-​e​r​a​s​e​d​-​h​i​s​t​o​r​y​i​t​s​-​b​a​t​t​l​e​-​a​g​a​i​n​s​t​-​w​h​i​t​e​-​s​u​p​r​e​m​a​c​y​/​?​u​t​m​_​t​e​r​m​=​.​e​5​3​9​1​b​e​45aa2</a></p> <p><a href="#refer-13" id="#endnote-13"><sup>13</sup></a> David Kaye, Joseph Cannataci, and Fionnuala Ní Aoláin “Mandates of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression; the Special Rapporteur on the right to privacy and the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism” <a href="https://spcommreports.ohchr.org/TMResultsBase/DownLoadPublicCommunicationFile?gId=24234">https://​spcomm​re​ports​.ohchr​.org/​T​M​R​e​s​u​l​t​s​B​a​s​e​/​D​o​w​n​L​o​a​d​P​u​b​l​i​c​C​o​m​m​u​n​i​c​a​t​i​o​n​F​i​l​e​?​g​I​d​=​24234</a></p> <p><a href="#refer-14" id="#endnote-14"><sup>14</sup></a> Faiza Patel, “EU ‘Terrorist Content’ Proposal Sets Dire Example for Free Speech Online” <em>Just Security</em>, <a href="https://www.justsecurity.org/62857/eu-terrorist-content-proposalsets-dire-free-speech-online/">https://​www​.just​se​cu​ri​ty​.org/​6​2​8​5​7​/​e​u​-​t​e​r​r​o​r​i​s​t​-​c​o​n​t​e​n​t​-​p​r​o​p​o​s​a​l​s​e​t​s​-​d​i​r​e​-​f​r​e​e​-​s​p​e​e​c​h​-​o​n​line/</a></p> <p><a href="#refer-15" id="#endnote-15"><sup>15</sup></a> “Letter to Ministers of Justice and Home Affairs on the Proposed Regulation on Terrorist Content Online,” <a href="https://cdt.org/files/2018/12/4-Dec-2018-CDT-Joint-LetterTerrorist-Content-Regulation.pdf">https://cdt.org/files/2018/12/4‑Dec-2018-CDT-Joint-LetterTerrorist-Content-Regulation.pdf</a></p> <p><a href="#refer-16" id="#endnote-16"><sup>16</sup></a> Kate Klonick, “Inside the Team at Facebook That Dealt With the Christchurch Shooting,” <em>The New Yorker</em> (April 25, 2019), <a href="https://www.newyorker.com/news/newsdesk/inside-the-team-at-facebook-that-dealt-with-the-christchurch-shooting">https://​www​.newyork​er​.com/​n​e​w​s​/​n​e​w​s​d​e​s​k​/​i​n​s​i​d​e​-​t​h​e​-​t​e​a​m​-​a​t​-​f​a​c​e​b​o​o​k​-​t​h​a​t​-​d​e​a​l​t​-​w​i​t​h​-​t​h​e​-​c​h​r​i​s​t​c​h​u​r​c​h​-​s​h​o​oting</a></p> <p><a href="#refer-17" id="#endnote-17"><sup>17</sup></a> Elizabeth Dwoskin and Craig Timberg “Inside YouTube’s Struggles to Shut Down Video of the New Zealand Shooting — and the Humans Who Outsmarted Its Systems,” <em>Washington Post</em> (March 18, 2019), <a href="https://www.washingtonpost.com/technology/2019/03/18/inside-youtubes-struggles-shutdown-video-new-zealand-shooting-humans-who-outsmarted-itssystems/?utm_term=.6a5916ba26c1">https://​www​.wash​ing​ton​post​.com/​t​e​c​h​n​o​l​o​g​y​/​2​0​1​9​/​0​3​/​1​8​/​i​n​s​i​d​e​-​y​o​u​t​u​b​e​s​-​s​t​r​u​g​g​l​e​s​-​s​h​u​t​d​o​w​n​-​v​i​d​e​o​-​n​e​w​-​z​e​a​l​a​n​d​-​s​h​o​o​t​i​n​g​-​h​u​m​a​n​s​-​w​h​o​-​o​u​t​s​m​a​r​t​e​d​-​i​t​s​s​y​s​t​e​m​s​/​?​u​t​m​_​t​e​r​m​=​.​6​a​5​9​1​6​b​a26c1</a></p> <p><a href="#refer-17" id="#endnote-17"><sup>18</sup></a> See, e.g., Hossein Hosseini, Sreeram Kannan, Baosen Zhang, and Radha Poovendran “Deceiving Google’s Perspective API Built for Detecting Toxic Comments,” <em>Arvix</em> (February 2017), <a href="https://arxiv.org/abs/1702.08138">https://​arx​iv​.org/​a​b​s​/​1​7​0​2​.​08138</a></p> </div> Tue, 25 Jun 2019 13:19:00 -0400 Julian Sanchez 
https://www.cato.org/publications/testimony/artificial-intelligence-counterterrorism-possibilities-limitations Is This Time Different? Schumpeter, the Tech Giants, and Monopoly Fatalism https://www.cato.org/multimedia/cato-daily-podcast/time-different-schumpeter-tech-giants-monopoly-fatalism Ryan Bourne, Caleb O. Brown <p>Remember MySpace? What about Kodak? These companies seemed to be unstoppable monopolies. So what happened? Ryan Bourne is author of the new Cato paper, “<a href="https://www.cato.org/publications/policy-analysis/time-different-schumpeter-tech-giants-monopoly-fatalism" target="_blank">Is This Time Different? Schumpeter, the Tech Giants, and Monopoly Fatalism.</a>”</p> Mon, 17 Jun 2019 03:00:00 -0400 Ryan Bourne, Caleb O. Brown https://www.cato.org/multimedia/cato-daily-podcast/time-different-schumpeter-tech-giants-monopoly-fatalism Julian Sanchez discusses antitrust investigations against the tech giants on KDMT’s Business for Breakfast with Jimmy Sengenberger https://www.cato.org/multimedia/media-highlights-radio/julian-sanchez-discusses-antitrust-investigations-against-tech Tue, 04 Jun 2019 11:50:00 -0400 Julian Sanchez https://www.cato.org/multimedia/media-highlights-radio/julian-sanchez-discusses-antitrust-investigations-against-tech CyberWork and the American Dream https://www.cato.org/multimedia/events/cyberwork-american-dream Matthew Feeney <p>The perceived threat of artificial intelligence (AI) to the American workforce and society more broadly has become a&nbsp;common topic of discussion among policymakers, academics, and the wider public. But is AI a&nbsp;threat? And if so, are there appropriate policy solutions? History is replete with examples of disruption caused by past technological advances. Are the lessons from those advances applicable to AI? These are just some of the questions addressed by the PBS television documentary <em>CyberWork and the American Dream</em>.</p> Mon, 22 Apr 2019 14:40:00 -0400 Matthew Feeney https://www.cato.org/multimedia/events/cyberwork-american-dream Antitrust and Big Tech https://www.cato.org/multimedia/cato-daily-podcast/antitrust-big-tech Kristian Stout, Caleb O. Brown <p>The benefits and rationale for subjecting large tech firms to antitrust claims seem less clear than the costs, according to Kristian Stout with the International Center for Law and Economics.</p> Tue, 16 Apr 2019 14:27:00 -0400 Kristian Stout, Caleb O. Brown https://www.cato.org/multimedia/cato-daily-podcast/antitrust-big-tech Cato Institute event, “Why the Government Should Not Regulate Content Moderation of Social Media,” airs on C‑SPAN https://www.cato.org/multimedia/media-highlights-tv/cato-institute-event-why-government-should-not-regulate-content Mon, 15 Apr 2019 10:39:00 -0400 John Samples, Matthew Feeney https://www.cato.org/multimedia/media-highlights-tv/cato-institute-event-why-government-should-not-regulate-content Who’s Afraid of Big Tech? — Panel 3: Free Speech in an Age of Social Media https://www.cato.org/multimedia/events/whos-afraid-big-tech-panel-3-free-speech-age-social-media Corynne McSherry, Thomas Kadri, Jonathan Rauch, Andrew Moylan, John Samples <p>News of foreign interference in elections and allegations of mismanagement have prompted lawmakers to take action. Executives from the largest and most popular technology companies have been called before congressional committees and accused of being bad stewards of their users’ privacy, failing to properly police their platforms, and engaging in politically motivated censorship. 
At the same time, companies such as Google and Amazon have been criticized for engaging in monopolistic practices.</p> Fri, 01 Mar 2019 18:01:00 -0500 Corynne McSherry, Thomas Kadri, Jonathan Rauch, Andrew Moylan, John Samples https://www.cato.org/multimedia/events/whos-afraid-big-tech-panel-3-free-speech-age-social-media Who’s Afraid of Big Tech? — Flash Talk: Online Ad Regulation: Necessary or a Danger to Free Speech? https://www.cato.org/multimedia/events/whos-afraid-big-tech-flash-talk-online-ad-regulation-necessary-or-danger-free Allen Dickerson <p>News of foreign interference in elections and allegations of mismanagement have prompted lawmakers to take action. Executives from the largest and most popular technology companies have been called before congressional committees and accused of being bad stewards of their users’ privacy, failing to properly police their platforms, and engaging in politically motivated censorship. At the same time, companies such as Google and Amazon have been criticized for engaging in monopolistic practices.</p> Fri, 01 Mar 2019 17:51:00 -0500 Allen Dickerson https://www.cato.org/multimedia/events/whos-afraid-big-tech-flash-talk-online-ad-regulation-necessary-or-danger-free Who’s Afraid of Big Tech? — Panel 2: Is Big Tech Too Big? https://www.cato.org/multimedia/events/whos-afraid-big-tech-panel-2-big-tech-too-big Matthew Stoller, Kristian Stout, Peter Van Doren <p>News of foreign interference in elections and allegations of mismanagement have prompted lawmakers to take action. Executives from the largest and most popular technology companies have been called before congressional committees and accused of being bad stewards of their users’ privacy, failing to properly police their platforms, and engaging in politically motivated censorship. At the same time, companies such as Google and Amazon have been criticized for engaging in monopolistic practices.</p> Fri, 01 Mar 2019 17:44:00 -0500 Matthew Stoller, Kristian Stout, Peter Van Doren https://www.cato.org/multimedia/events/whos-afraid-big-tech-panel-2-big-tech-too-big Who’s Afraid of Big Tech? — Flash Talk: The Time Is Now: A Framework for Comprehensive Privacy Protection and Digital Rights in the United States https://www.cato.org/multimedia/events/whos-afraid-big-tech-flash-talk-time-now-framework-comprehensive-privacy Burcu Kilic <p>News of foreign interference in elections and allegations of mismanagement have prompted lawmakers to take action. Executives from the largest and most popular technology companies have been called before congressional committees and accused of being bad stewards of their users’ privacy, failing to properly police their platforms, and engaging in politically motivated censorship. At the same time, companies such as Google and Amazon have been criticized for engaging in monopolistic practices.</p> Fri, 01 Mar 2019 17:40:00 -0500 Burcu Kilic https://www.cato.org/multimedia/events/whos-afraid-big-tech-flash-talk-time-now-framework-comprehensive-privacy Who’s Afraid of Big Tech? — Welcome Remarks and Panel 1: Big Brother in Big Tech https://www.cato.org/multimedia/events/whos-afraid-big-tech-welcome-remarks-panel-1-big-brother-big-tech Matthew Feeney, Alec Stapp, Ashkhen Kazaryan, Lindsey Barrett, Julian Sanchez <p>News of foreign interference in elections and allegations of mismanagement have prompted lawmakers to take action. 
Executives from the largest and most popular technology companies have been called before congressional committees and accused of being bad stewards of their users’ privacy, failing to properly police their platforms, and engaging in politically motivated censorship. At the same time, companies such as Google and Amazon have been criticized for engaging in monopolistic practices.</p> Fri, 01 Mar 2019 17:09:00 -0500 Matthew Feeney, Alec Stapp, Ashkhen Kazaryan, Lindsey Barrett, Julian Sanchez https://www.cato.org/multimedia/events/whos-afraid-big-tech-welcome-remarks-panel-1-big-brother-big-tech Custodians of the Internet by Tarleton Gillespie https://www.cato.org/cato-journal/winter-2019/custodians-internet-tarleton-gillespie Will Duffield <div class="lead text-default"> <p>In <em>Custodians of the Internet</em>, Tarleton Gillespie examines the governing institutions of the modern internet. Content moderation — the process of setting and enforcing rules concerning what may be published on social media platforms — is a&nbsp;rapidly evolving form of private governance. Though social media increasingly resembles, and is treated as, a&nbsp;21st‐​century public square, it is governed by an ecosystem of profit‐​maximizing private firms.</p> </div> , <div class="text-default"> <p>As Gillespie notes early in <em>Custodians</em>, “moderation is, in many ways, <em>the</em> commodity that platforms offer.” Social media firms provide speech platforms bundled with rules, or community standards, intended to provide a&nbsp;pleasant user experience. It can be difficult to get these rules right: “too little curation, and users may leave to avoid the toxic environment that has taken hold; too much moderation, and users may still go, rejecting the platform as either too intrusive or too antiseptic.” Decisions not to moderate are, necessarily, moderation decisions. Platforms are not, and have never been, neutral with respect to content, though this does not imply that they have partisan political biases.</p> <p>Taking a&nbsp;problem‐​centric approach, Gillespie works his way through social media governance crises of the past decade, illustrating how controversy attending potentially objectionable content, from pro‐​anorexia “thinspiration” posts to breastfeeding photos, has driven the promulgation of new rules. He employs these stories to sketch out the web of private regulations governing our online interactions.</p> <p>After introducing content moderation and its necessity, <em>Custodians</em> comes into its own in chapter four, as Gillespie lays out the sheer scale of the task facing platforms on a&nbsp;daily basis. He quotes Del Harvey of Twitter: “Given the scale that Twitter is at, a&nbsp;one‐​in‐​a‐​million chance happens 500 times a&nbsp;day … say 99.999 percent of tweets pose no risk to anyone … that tiny percentage of tweets remaining works out to 150,000 per month.” With millions of users posting dozens of times a&nbsp;day (or 2.23 billion users in Facebook’s case), the volume of speech on social media defies any traditional editorial comparison. With volume comes diversity. Few platforms draw their users from any single community, and most are international, allowing users with different norms and standards of offense to interact with one another. 
Prescreening this deluge of expression is impossible, so apart from some algorithmic filtering of very specific sorts of content, like child pornography, content moderation is a&nbsp;post hoc game of whack‐​a‐​mole.</p> <p>While the differing architectures and intended uses of particular social media platforms lend themselves to different rules and styles of moderation, platforms’ rulesets have steadily converged because these firms “structurally inhabit the same position — between many of the same competing aims, the same competing users, and between their user bases and lawmakers concerned with their behavior.” Nevertheless, from top‐​down platforms like Facebook and YouTube to the federated structures of Reddit and Twitch, there is still a&nbsp;great deal of diversity in platform governance. More niche competitors like Full30 or Vimeo distinguish themselves by diverging from YouTube’s policies concerning, respectively, firearms and nudity.</p> <p>For those concerned with the private rather than public nature of platform governance, convergence can be problematic. While the emergence of some shared best practices may be harmless, American platforms’ near‐​uniform adoption of European hate‐​speech prohibitions, especially in the wake of regulatory saber rattling by the European Commission, implies that something other than consumer‐​centric, market‐​driven decisionmaking is at work. Although much ink has been spilled about the private biases of social media firms, the ability of governments to launder censorship demands through ostensibly private content‐​moderation processes, evading constitutional limits on state authority, is far more concerning.</p> <p><em>Custodians</em> narrowly sidesteps technological determinism, instead highlighting often unappreciated differences between the infrastructures undergirding our physical and digital worlds to explain the use, or overuse, of certain technologies in content moderation. Noting that storefront retailers “can verify a&nbsp;buyer’s age only because the buyer can offer a&nbsp;driver’s license … an institutional mechanism that, for its own reasons, is deeply invested in reliable age verification,” Gillespie explains that without the means to utilize this infrastructure, and in the face of increasing pressure to serve age‐​appropriate content, internet platforms have doubled down on algorithmic age verification. Platforms may not be able to tell legitimate licenses from fakes, but they can deploy detailed behavioral data to glean the age of their users.</p> <p>Sometimes this “moderation by design” feels like an improvement over its physical antecedents. Finding that your Twitter account has been suspended is unpleasant, but it is probably less unpleasant than being physically hauled off your soapbox. In other examples, the move from physical to digital access controls seems disconcerting. It is one thing for adult magazines to be placed out of reach of children; it is another for them to be literally invisible to children (or anyone identified as a&nbsp;child based on an algorithmic assessment of their browsing habits).</p> <p>Geo‐​blocking, preventing users in certain locations from seeing certain pieces of content, raises similar concerns. However, in the face of demands from competing, intractable groups, it is often the easiest, if not the most principled, solution. 
When Pakistan demanded that Facebook remove an “Everybody Draw Muhammed Day” page, the platform’s options seemed binary: Facebook could either cave to the demand, effectively allowing Pakistan to alter its hate speech standards, or hold firm and risk losing access to Pakistani customers. Instead, Facebook chose to make the page inaccessible only to Pakistani users. This was not a&nbsp;politically costless decision, but Facebook received far less backlash than it might have had it taken the page down globally, or not at all. While technological half‐​measures have offered platforms some respite from demands made of their moderation efforts, they cannot resolve tensions inherent to intercommunal, value‐​laden questions. In many ways, the features that have made the platform internet so valuable to its users — the ability to transcend the fetters of identity or place, and speak, cheaply and instantaneously, to a&nbsp;mass audience — make universally acceptable moderation difficult, if not impossible.</p> <p>As the sophistication of moderation efforts has increased, efforts to influence the moderation process have grown ever more complex, both on and off platform. This is a&nbsp;trend <em>Custodians</em> could have explored in greater detail. While Gillespie touches on manipulation efforts when describing the human labor required to moderate content, the topic is discussed mostly in the context of platforms’ attempts to involve users in the moderation process. However, the frequency and quality of efforts to game the system have increased independently of platforms’ efforts to expand user agency.</p> <p>Instead of simply demanding takedowns in exchange for market access, states have begun to demand global content removals. Meanwhile, 4Chan users have attempted to make use of FOSTA, ostensibly an anti‐​sex trafficking bill, to get autonomous sensory meridian response (ASMR) recording makers banned from PayPal, alleging that the artists are engaged in sex work. (Listeners of ASMR recordings, which are often just people whispering, get a&nbsp;tingling, sensual sensation.) Advocacy groups and think tanks campaign for community standards changes intended to drive disfavored groups from the platform internet, and states have even placed agents at social media firms to spy on dissidents. Members of Congress regularly invite their favorite content creators to air their complaints about the moderation process on Capitol Hill. It is becoming ever more difficult for social media firms to moderate their walled gardens with the independence that legitimacy requires.</p> <p>While Gillespie explains in exacting detail the process of moderation and the problems moderators face, he concludes the book without offering much in the way of solutions. He calls for greater transparency from platforms and increased appreciation of the gravity and second‐​order effects of moderation. These are limited, valuable suggestions. Though Gillespie has published a&nbsp;more solution‐​centric addendum online that suggests legislative remedies, I&nbsp;cannot help but prefer the original ending. Given his unerring appreciation of moderation’s complexity throughout the book, the endorsement of blunt legislative “fixes” rings hollow. Yet, at this adolescent point in the internet’s history, there is great value in a&nbsp;book that makes the process of content moderation more legible, approachable, and understandable. Here, <em>Custodians of the Internet</em> is an unqualified success. 
Whether you like the current crop of social media platforms or hate them, no book will better equip you to appraise their actions.</p> <p>Will Duffield<br>Cato Institute</p> </div> Tue, 26 Feb 2019 03:00:00 -0500 Will Duffield https://www.cato.org/cato-journal/winter-2019/custodians-internet-tarleton-gillespie Seeking Intervention Backfired on Silicon Valley https://www.cato.org/policy-report/novemberdecember-2018/seeking-intervention-backfired-silicon-valley Drew Clark <div class="lead text-default"> <p>For nearly 30&nbsp;years now, Silicon Valley has been an American success story and one of the biggest drivers of the nation’s economic growth. America’s head start on the information revolution has done more to cement America’s dominant position in global affairs than a&nbsp;dozen aircraft carriers. The underlying factor was a&nbsp;kind of benign‐​neglect regulatory approach that left digital content companies largely free to do as they pleased, shielded from liability and operating under the guarantees of the First Amendment. The companies spawned by this revolution have grown to become some of the most valuable enterprises on the planet as they reshaped the modern world. Corporations like Twitter, Facebook, and Alphabet (better known by its flagship product Google) have made massive profits in an almost perfectly free‐​market environment.</p> </div> , <div class="text-default"> <div><em>“For whatsoever a man soweth, that shall he also reap.”</em><br /><br /></div> <p>[Photo: Drew Clark]</p> <p>That success has brought with it a familiar temptation: turning to the government to secure advantages in the marketplace. In so doing, these tech titans have played with fire, and now they see policies they initially supported being turned on them in ways they hadn’t imagined were possible. First, in the 1990s with the antitrust case against Microsoft, and again more recently in the battle with internet service providers over net neutrality, Silicon Valley looked to Washington. With those fateful decisions, the tech companies entangled the federal government in their industry in ways that are now coming back to haunt them.<br /><br /></p> <p><strong>NET NEUTRALITY</strong><br />On its face, the principle of net neutrality seems unobjectionable: internet service providers (ISPs) such as AT&amp;T, Verizon, and Comcast should enable access to all content and applications regardless of the source, without favoring or blocking particular products or websites. This has been the norm under which the internet has operated since its inception. The question for today is whether the government needs to enforce this norm via regulation or let market participants test different ways to improve the delivery of content over the internet.</p> <p>The regulation of telecommunications services has always been an ungainly mess.
From the 1910s until its breakup in the 1980s, AT&amp;T’s monopoly on telephone service was regulated first by the Interstate Commerce Commission (ICC) and then by the Federal Communications Commission (FCC) after its establishment during the New Deal era. Telephones were placed under Title II of the 1934 Communications Act, which had its origin in the ICC’s regulation of railroad common carriers. Title III governs frequencies used by radio and television broadcasters. Other sections regulate cable TV and satellites and set rules for the auction of airwaves for wireless cellular services.</p> <p>When the term “net neutrality” emerged in the public discussion in 2004, it was in response to entrepreneur Jeff Pulver’s creating a Voice‐​over‐​Internet‐​Protocol (VoIP) company. Pulver wanted to be free to develop software for internet telephone calls. Traditional telephone companies argued that internet telephony should be regulated just like they were. But then FCC chair Michael Powell articulated what he called the “four freedoms” of the internet: (a) freedom to access content, (b) freedom to use applications — like Jeff Pulver’s Free World Dialup VoIP, (c) freedom to attach personal devices to the network, and (d) freedom for consumers to obtain information about their ISP’s plans and policies. The Pulver Order, released within days of Powell’s “Preserving Internet Freedom” speech, said that VoIP was an information service. And information services, unlike telecommunications services, would not be regulated under the stricter regime of Title II.</p> <p>The 1996 Telecommunications Act left it to the FCC to decide whether or how to regulate broadband internet access from cable modems, digital subscriber lines (and later fiber optics), and wireless transmission. The approach under the administrations of Bill Clinton and George W. Bush was to put everything possible into Title I, which governed the agency’s “ancillary authority,” and to call these services “information services.” Doing so avoided the common‐​carrier rules that require neutrality and nondiscrimination in the delivery of content. At the same time, the cable and telecom and wireless companies were investing and building more broadband connections. They too wanted to get out from under the regulatory regime. In their view, the internet worked so well because government regulated it so little.</p> <p>Barack Obama had a much more pro‐​regulation worldview. He and his allies believed that the internet’s convention of allowing innovators to launch new products and services without seeking permission from media gatekeepers was threatened by the big communications companies. The cable industry operated by bundling content distribution with pay‐​television services. These vertically integrated cable system operators had the power to thwart new entrants from launching new channels. That dynamic is absent from the internet, but progressives feared that the internet was in danger of becoming like cable TV.</p> <p>But that’s not likely to happen. A provider’s blocking or throttling access to any website or service has happened so rarely that it’s laughable to characterize the possibility as any kind of problem. And yet strong passions have been stirred by the fear that big communications companies might mess with the consumer’s expectation of net neutrality.</p> <p>Obama’s first FCC chair, Julius Genachowski, took a “light touch” regulatory approach as he attempted to put net neutrality into law.
Genachowski kept wired internet access as an information service rather than a telecommunications service but added a narrow proscription against blocking and throttling. Verizon Communications sued and won, prompting the next FCC chair, Tom Wheeler, to try again. At first, this attempt appeared to be headed for the same defeat in court. But then Silicon Valley content giants, including Google, Facebook, Twitter, and Netflix, launched an all‐​out publicity war, enlisting celebrities and mobilizing young progressives to demand the adoption of net neutrality.</p> <p>The companies that drove the engine of America’s information technology machine essentially argued as follows: We provide the good stuff that you — the American consumer — want. You go to Google to get your searches answered. You want Facebook to keep up on posts from friends, families, and trusted content providers. Access to the content in the Apple iTunes store or to Amazon Prime streaming video subscriptions doesn’t need to be regulated because we tech giants compete vigorously among ourselves. But Washington does need to step in and regulate the telecom market because of a lack of competition among ISPs. And the FCC agreed in 2015 with what was officially dubbed the Open Internet Order.</p> <p>The argument for net neutrality might have served tech giants well under Obama, but it wasn’t as well received by the Trump administration. And for incumbent telecom providers, the new administration has been a time for political payback. On December 14, 2017, the Obama‐​era approach to net neutrality was starkly reversed by the Federal Communications Commission under Chair Ajit Pai.</p> <p>For years, many of the industry’s leading lights pressed the hardest for Washington to rescue them from the always‐​unpopular ISPs. Now some of the same companies, like Apple, are themselves the target of a trust‐​busting zeal among resurgent progressives in the Democratic Party and Steve Bannon–style nationalist populists.</p> <p>Major content companies like Google, Facebook, and Netflix feared that ISPs would seek to throttle their services as a way of extracting payment for prioritization. Particularly for data‐​intensive video‐​streaming services like Netflix and Google’s YouTube, this concern had a certain economic logic, even as it remained hypothetical. Having long courted Silicon Valley as a key constituency and facing a highly visible public demand with enthusiastic grassroots support on the left, Obama complied.</p> <p>The D.C. Circuit Court of Appeals upheld the new rules in June 2016, giving the FCC remarkably wide deference to do nearly whatever it wants with the internet. At first, it seemed like a major victory for progressives keen on heavier regulation. But with a change in administration, what the FCC now wants to do is very different from what it wanted to do just a few years ago.</p> <p>FCC Chair Pai effectively eliminated all of these new internet rules with the exception of a 2010 transparency requirement. Under the 2017 decision adopted by the agency on December 14, broadband providers can prioritize traffic for a fee or for affiliated companies, or block access to any or all websites and services, so long as they disclose those practices. Enforcement will be turned over to the FTC, on the theory that this change is a matter of antitrust and competition law rather than technology regulation.
It now takes the FCC completely out of the business of micromanaging disputes about broadband traffic between telecom and content companies.</p> <p>Silicon Valley’s regulations‐​for‐​thee‐​but‐​not‐​for‐​me attitude has come back to bite the tech giants. They want the strictest form of regulation for telecommunications providers but no scrutiny of themselves, and now the tables have been turned.</p> <p>Pai has not hesitated to point out the hypocrisy as he has moved to undo the net neutrality rules. In a November 29 speech in the lead‐​up to his net neutrality rollback, he said that the tech giants are “part of the problem” of viewpoint discrimination. “Indeed, despite all the talk about the fear that broadband providers <em>could</em> decide what internet content consumers can see, recent experience shows that so‐​called edge providers <em>are in fact</em> deciding what content they see. These providers routinely block or discriminate against content they don’t like.”</p> <p>As examples, Pai cited Twitter’s blocking Rep. Marsha Blackburn (R‑TN) from advertising her Senate campaign with a message about partial‐​birth abortion, Apple’s blocking an app for cigar aficionados, Google–YouTube’s demonetizing videos from conservative commentator Dennis Prager and his “Prager University,” plus “algorithms that decide what content you see (or don’t), but aren’t disclosed themselves” and “online platforms secretly editing certain users’ comments.” Others have described the need for clarity about algorithms as a form of “search neutrality.” The next day, for good measure, Pai blasted Facebook and Twitter for contributing to the rise of incivility in public discourse and “the breakdown in human interaction.”</p> <p>The Internet Association, a lobbying group for content companies like Google, Facebook, and Twitter, served up the standard response: “Websites and apps operate in a competitive environment with low barriers to entry where choice and competition are a click away. This stands in stark contrast to ISPs, where more than 60 percent of Americans have no choice in high speed broadband provider.” As the battle over regulating social media and search engines turns to Capitol Hill, we can expect legislators to vigorously debate whether consumers are better served by a “light touch” regulatory approach or by the truly free‐​market deregulation implemented by Pai.</p> <p>But if regulation returns, the tables have turned such that Google and Facebook would likely be subject to any such rules that get passed. As AT&amp;T CEO Randall Stephenson argued in a full‐​page advertisement in January, legislation “would provide consistent rules of the road for all internet companies across all websites, content, devices and applications.” In other words, no more free ride for tech companies at the expense of the telecom players: neutrality regulation either applies to everyone or to no one.
Eric Schmidt, who would later become CEO of Google, played a crucial role in rallying Silicon Valley against Microsoft, first as chief technology officer at Sun Microsystems and then as CEO of networking company Novell. Apple also played a role. The Clinton Justice Department put forward a theory about how Microsoft improperly leveraged the monopoly it had earned in the market for computer operating systems into the market for so‐​called middleware software. Hence, Microsoft was the original example of a “platform monopoly.”</p> <p>Yet the allegations, even as proved, never justified the demand for breakup. Once the D.C. Circuit Court of Appeals took that option off the table with a 2001 en banc decision, the Bush administration Justice Department settled the case in 2002 with a 10‐​year consent decree.</p> <p>Both Microsoft’s critics in Silicon Valley and its defenders among free‐​market economists acknowledged that the case needed to turn on the consumer welfare standard, per the appeals court: “To be condemned as exclusionary, a monopolist’s act must have an ‘anticompetitive effect.’ That is, it must harm the competitive process and thereby harm consumers.”</p> <p>Law professors Geoffrey Manne and Joshua Wright — in a 2011 paper on Google and antitrust — summarized decades’ worth of Supreme Court jurisprudence on Section 2 of the Sherman Act against attempted monopolization:</p> <ul><li>Mere possession of monopoly power is not an antitrust offense.</li> <li>The mere exercise of lawful monopoly power in the form of higher prices is not an antitrust violation.</li> <li>Courts must be concerned with the social costs of antitrust errors, and the error‐​cost framework is a desirable approach to developing standards that incorporate these concerns.</li> </ul><p>This “error‐​cost framework” requires the plaintiff, often the government, to prove exclusionary conduct. The accused monopolist has the opportunity to introduce evidence of pro‐​competitive effects. Unrebutted positive effects are then weighed against the consumer harms of exclusionary effects.</p> <p>Google appeared to be heading down the same path as Microsoft in the first Obama administration. Although estimates vary for market share in the United States, Google is undoubtedly dominant and currently has around 90 percent of global searches. The Justice Department imposed restrictions on Google’s acquisition of ITA Software’s airline flight‐​pricing information. And the Federal Trade Commission began a far more significant review of whether Google’s search results favored its own products over those of competitors.</p> <p>One of Microsoft’s former attorneys thought that Google had crossed the line without offering sufficient justification. Summarizing Google’s dilemma in the <em>Wall Street Journal</em> in September 2010, Charles “Rick” Rule wrote, “What goes around comes around.” Taking his cue from Microsoft’s experience, he said, “The last 10 years have shown that reasonable antitrust rules can be applied to prevent exclusionary conduct by dominant tech firms without destroying market forces.”</p> <p>Yet in January 2013, all five FTC commissioners held that Google provided more benefits than harms to competition.
“The totality of the evidence indicates that, in the main, Google adopted the design changes that the Commission investigated to improve the quality of its search results, and that any negative impact on actual or potential competitors was incidental to that purpose,” according to the official statement concluding the investigation. The rather modest changes agreed to by Google included not “scraping” the content of its rivals for specialized search results, as well as dropping contractual restrictions that made it harder for small businesses to advertise on competing search advertising platforms.</p> <p>That FTC decision not to sue Google is increasingly under fire, by conservatives as well as progressives. In late August, Sen. Orrin Hatch (R‑UT) asked that FTC Chair Joseph Simons revisit Google’s role in search and digital advertising. That request came after three days in which Trump went on the warpath against Google, and also Facebook and Amazon, saying that the companies may be in a “very antitrust situation.”</p> <p>Google had argued that in search operations — unlike with broadband providers — the costs to switch remain minimal. “Internet platforms are not natural monopolies. Although some digital companies exhibit scale economies, they do not produce undifferentiated products and face relatively low costs of competitive entry,” writes Glenn Manishin on the blog of the Computer and Communications Industry Association. “If digital platforms were natural monopolies, Google could not have dethroned Yahoo search, Myspace and Friendster would not have been surpassed by Facebook, and Amazon’s small business e‑commerce platform would not have been bested by the start‐​up Shopify.”</p> <p>Google’s competitors and critics employed what is becoming a more common tactic: go to Europe to make charges. Indeed, the European Commission, the executive of the European Union, has taken an aggressively protectionist approach that has singled out American tech giants. In June 2017, it announced a fine of €2.4 billion, or $2.8 billion, against Google. It was for the same claims dismissed by the FTC. In July 2018, the EU lodged its second fine: €4.3 billion ($5.1 billion) over allegedly illegal restrictions on Android device makers and mobile network operators. (It’s worth noting that Trump himself criticized the verdict in a tweet: “The European Union just slapped a Five Billion Dollar fine on one of our great companies, Google.”) A third EU case involving AdSense has not yet concluded. It’s still unclear whether American state attorneys general will also jump on board against Google, as they did in the Microsoft case — although the Justice Department and 14 state attorneys general did attend a meeting on September 25 to consider growing concerns about the power of social media giants.</p> <p>Of course, the tech industry has never been exempt from laws of general applicability. But the once‐​unvarnished “success story” narrative has been scrapped for fashionable tech‐​bashing, as seen in popular books such as Jonathan Taplin’s <em>Move Fast and Break Things: How Google, Facebook, and Amazon Cornered Culture and Undermined Democracy</em>; Franklin Foer’s <em>World without Mind: The Existential Threat of Big Tech</em>; and Tim Wu’s <em>The Attention Merchants</em>.</p> <p>Yet these now‐​criticized tech giants are all constantly pushing up against each other, and smaller companies, with innovative new offerings.
As tech journalist Scott Rosenberg writes: “One problem with today’s charges of monopolistic behavior is that there are so many monopolists this time around. And they’re all competing with one another!”</p> <p>As to where the Trump administration comes down on antitrust, Washington is still trying to parse the meaning of the Antitrust Division’s prosecution of AT&amp;T’s proposed merger with content company Time Warner, a $109 billion transaction — and its dubious decision to appeal U.S. District Court Judge Richard Leon’s merger approval. The Obama administration had blocked AT&amp;T’s proposed acquisition of T‑Mobile, a $39 billion transaction. But that was a “horizontal” merger proposal that would have lowered the number of nationwide cellular carriers from four to three. In a “vertical” merger of content and internet distribution, the better parallel is Comcast’s acquisition of NBCUniversal, a $30 billion transaction approved by the Justice Department in 2011 with modest conditions.</p> <p>Silicon Valley should take all this activity as a teachable moment. When the companies first went to the government in the 1990s to seek intervention against Microsoft, and when they pushed for FCC net neutrality rules more recently, they set a precedent that brought Washington into their industry. They took it for granted that regulators would never go after content platforms like their own, but now it is precisely those platforms that are squarely in the sights of many politicians.</p> </div> Mon, 10 Dec 2018 10:55:00 -0500 Drew Clark https://www.cato.org/policy-report/novemberdecember-2018/seeking-intervention-backfired-silicon-valley Ajit Pai and Kat Murti discuss “net neutrality” https://www.cato.org/multimedia/cato-audio/ajit-pai-kat-murti-discuss-net-neutrality Wed, 01 Aug 2018 03:00:00 -0400 Ajit Pai, Kat Murti https://www.cato.org/multimedia/cato-audio/ajit-pai-kat-murti-discuss-net-neutrality The Untold History of FCC Regulation https://www.cato.org/policy-report/mayjune-2018/untold-history-fcc-regulation <div class="lead text-default"> <p><em>Popular wisdom holds that, before the creation of the Federal Radio Commission, the radio spectrum was in chaos. But as former chief economist at the Federal Communications Commission (FCC) THOMAS HAZLETT documents in his new book, </em>The Political Spectrum: The Tumultuous Liberation of Wireless Technology, from Herbert Hoover to the Smartphone<em>, the real story of the radio spectrum is quite different. At a&nbsp;Cato Book Forum, Hazlett was joined by FCC Chairman AJIT PAI to discuss how FCC regulations have hampered innovation for decades, delaying the advent of everything from FM radio to the cell phone.</em><br><br></p> </div> , <div class="text-default"> <p><strong>THOMAS HAZLETT:</strong> Wireless is a technology so curious that it’s named for what it’s not. In 1939, at the World’s Fair, when wireless television made its debut, the demonstration featured a special television created out of glass, to counter rumors that it featured tiny actors on a miniature stage.</p> <p>It turned out to be real science, of course, and consumers came to embrace it. With the discovery of broadcasting in the 1920s, a mass‐​market radio service developed. And it was widely said at the time, as it still is today, that a “market failure” developed: there was a cacophony of competing voices, endemic spillovers creating static interference and inefficient externalities. There had to be centralized, administrative control.
Broadcasting stations could not, if left to their own devices, keep from destroying one another.</p> <p>[Photo: Thomas Hazlett]</p> <p>But, in fact, that was <em>not</em> the history. The robust emergence of radio was under a first‐​come, first‐​served system of property rights in frequencies. The rules were enforced by the U.S. Department of Commerce — the first regulator of radio was the secretary of commerce, Herbert Hoover, 1921–1928. And rules borrowed from common law created an orderly marketplace with brisk development.</p> <p><br />Up until about February 23, 1927, when the Radio Act was signed into law. The Radio Act overturned and preempted property rights in frequencies. Instead, a “public interest” standard would be used by regulators to set aside particular frequencies for particular tasks by particular licensees. A commission would define the wireless services, the communications technologies, and the business models allowed to compete.</p> <p>This change in law was not due to the chaos of markets, but sprang from a coalition of political and business interests. Political actors, led by Hoover, desired more discretion over who could broadcast and what they said. These policymakers aligned with major commercial radio station owners, incumbents who sought barriers to entry and provided the actual language of “public interest” for the 1927 Act.</p> <p>The upshot was that a government agency, not competitive market forces, would allocate airwaves. A golden resource in the budding information age was taken off the market. The “political spectrum” was born.</p> <p>For decades, this system blocked competitive forces and stymied innovation. Gradually, however, the restrictions from the central allocation system loosened. Why and how this process unfolded is still a bit of a mystery. But there are two stories that will illustrate this path to liberalization.</p> <p>Let’s start with one of the great inventors of the 20th century, Edwin Howard Armstrong. He was a Columbia University professor of physics, but he preferred to be addressed as “Major Armstrong” — he was very patriotic and had served in both world wars. He was one of the key contributors to AM technology, and in the early 1920s he was the largest shareholder in the Radio Corporation of America (RCA) due to the sale of his patents. In 1934 he came up with a better mousetrap, FM broadcasting. He had to get permission to deploy his new idea using radio spectrum — and it took some years for the planet’s preeminent expert on wireless technology to get his allocation. But finally he did, stations were built, and some 500,000 radio sets in the northeastern United States tuned into high fidelity radio. They experienced the rich, wonderful, quality reception that Armstrong had promised them.</p> <p>But in 1945, the FCC reconsidered and uprooted the entire band.
The FCC invoked a “sunspot theory” of radio interference, claiming that solar flares uniquely threatened the existing FM band. Armstrong, who would have been the first to worry about interference had the threat been real, objected violently to the move and introduced mountains of scientific evidence against it. To no effect. The FCC, under political pressure, eliminated the FM allocation, making all existing radios worthless. A new band was assigned, but by the time the new radios were designed, no one would buy them. Armstrong, distressed and humiliated, committed suicide in 1954. Had he lived to see FM radio given a straight‐​up chance to compete, as it finally was in the 1960s, he would have been proud to witness its almost instant domination of the incumbent AM technology.</p> <p>I tell the Armstrong FM story because that technology actually got <em>into</em> the marketplace before it was excluded. The great majority of wireless innovations never made it to market — nipped in the bud.</p> <p>Alas, progress slowly came. Let’s see it by fast‐​forwarding to 2005. In that year there was another idea for a <em>new ’n improved</em> radio. An entrepreneurial company in Cupertino, California, which had very nearly gone bust just a few years before, was thinking about how mobile phones could be made prettier, better, and vastly more functional. And so it was that Apple invented the iPhone. To work, the device would have to have access to airwaves. Just as had Edwin Armstrong, Steve Jobs needed spectrum.</p> <p>Yet by 2005, the regime called by its practitioners “mother may I” had been, at least in significant part, reformed. Apple did not have to ask Washington for permission to launch. Instead, the FCC had relaxed restrictions, granting mobile carriers wide latitude to use radio spectrum in flexible ways. This put the onus on the networks to manage their own frequency spaces.</p> <p>A new spectrum store was open for business, and the mobile carriers approached Apple to sell access to airwaves. Indeed, the networks bid fiercely against each other to host this consumer‐​pleasing innovation. The price Apple paid for airwave access was negative.</p> <p>The iPhone took the market by storm, selling in the hundreds of millions globally, instigating the smartphone revolution, and establishing the iconic consumer innovation of this century. And beyond that, a vast ecosystem emerged. There are millions of applications for iPhones and competing Android devices now in the radio space without approval from a commission. This is a level of complexity unheard of in the 1920s or ’30s, when it was said it would be too complicated for businesses to figure out where the rules of the road should be bent to accommodate interfering devices and services. The conflicts would be overwhelming unless carefully avoided by “public interest” spectrum allocations.</p> <p>In consumer welfare terms, almost the exact opposite was true. Until tight administrative controls were peeled back, the rules maintained a Quiet Zone. The radio spectrum was governed by passive‐​aggressive librarians. Regulators could hardly have known what would happen when the rules were loosened, but they feared it greatly. They sought to prevent the future.</p> <p>Reforms gradually forged room for new opportunities. By ceding spectrum property rights to competitors in the marketplace, experiments could be run and the march of technology accommodated. You probably don’t think much about the potential spectrum conflicts that come into play when you tap your smartphone icons.
But your Angry Birds and Pandora and Facebook; your map apps and Snapchat and Kindle; your ride share or your Twitter; your dog tracker and cat videos — each potentially interferes with the other. The complexity, barely background noise to you, is universes beyond what might be managed by a central authority. That regulatory structure had to fade away for the new wireless world to evolve.</p> <p>Now whole new sectors are created, and nary a thought is given to the fact that the platform it sits on is a liberalized, deregulated spectrum market that frees the competitive forces that were so recently thought not up to the task at hand. What Herbert Hoover asserted had to be done by the state, it turns out, can only be done by open markets.</p> <p>Proof of concept. Delegating spectrum coordination to private competitors, as now applies to perhaps one‐​fifth of the most valuable spectrum, has demonstrated its worth. The struggle now is to push reform far deeper into the “political spectrum,” unlocking more of nature’s wireless bounty. The successes are not illusory, and surely not the product of tiny actors on a miniature stage.</p> <p><strong>AJIT PAI:</strong> This book is an extraordinary read, and as I was going along chapter by chapter, I thought about one of my favorite philosophers. I refer of course to Yoda, who when instructing Luke in <em>The Empire Strikes Back</em> says, “You must unlearn what you have learned.” This book forced me to look at how some of the received wisdom we have accepted uncritically should actually be challenged.</p> <p>[Photo: Ajit Pai]</p> <p>There are a couple of different insights that I found especially salient — number one, that far from empowering the public, the FCC’s style of decisionmaking over the years actually empowers politicians and bureaucrats to make decisions. Many FCC regulators of both parties, across different eras and through different technological debates, found themselves creating a system that essentially vested control in themselves.</p> <p><br />In one passage, Professor Hazlett discusses the scarcity rationale for broadcast regulation and the FCC’s paradoxical restriction on cable entry, which would have provided more competition. As the professor puts it, quoting an FCC order from many years ago, “The circularity of this argument bears note. Broadcasting was regulated because spectrum was a physically scarce resource limited by nature. But when ‘spectrum in a tube’” — cable — “promised (or threatened) to relax that scarcity, the government was allowed to extend its powers to preserve the very limitations that justified regulation in the first place.” That is such a profound insight. The FCC had been regulating for decades on the basis of a premise which, because of an emerging technology, should have been called into question.
Yet the agency actually squelched that emerging technology and ultimately disserved consumers.</p> <p>Which leads to the second major insight that I got from the book in terms of public choice, which is how FCC decisionmaking typically has accommodated rent seeking. The example I liked in particular was the emergence of cellular — or the non‐​emergence of cellular. I’ve had the chance to meet the one and only Marty Cooper, who placed the very first cellular call in the early 1970s. And I remember after meeting him and seeing the prototype — this giant cell phone that he used to place that first call — wondering: Why did he place that call in the early 1970s, in the year I was born, yet cellular as we know it didn’t emerge until I was pretty much out of law school in the 1990s? The book goes to extraordinary lengths to detail exactly why that was, and one of the reasons was that the FCC was besieged by a couple of different entities that thought this was a competitive threat, or simply didn’t want the FCC to be focusing on this new and emerging technology.</p> <p>The other aspect of public choice that I thought was very interesting was how ultimately this way of decisionmaking has harmed consumers. And quite often this harm to consumer welfare comes under the guise of what the professor very humorously calls, throughout the book, “technical reasons.” For “technical reasons” we have to prohibit you from exploring this emerging technology, or we have to restrict output.</p> <p>To me, the “technical reasons” excuse was best shown in 1944, as the professor already mentioned, when the FCC got a couple of petitions with a bold proposal: toss every FM station off its assigned frequency and relocate the entire industry up the dial. All existing equipment — transmitters owned by stations, receivers owned by listeners — would become obsolete. Proponents claimed the frequency switch would help FM stations avoid ionic interference, a threat alleged to emanate from sunspots. Think about how many decades of consumer welfare were forestalled, or in the immediate years prohibited, because the agency essentially restricted the ability of Armstrong and other FM pioneers to engage in what they were doing best.</p> <p>The same thing happened with cellular. The professor quotes a study that concluded that, had the FCC proceeded directly to licensing from its 1970 allocation decision, cellular licenses could have been granted as early as 1972, and systems could have been operational in 1973. The study’s authors found that the FCC spectrum allocation process caused a 10- to 15‐​year delay in cellular service. And the professor suggests that actually might be on the <em>conservative</em> end of things.</p> <p>And that leads me to the last key insight, which is that the market mechanism, as he conceives it, has delivered far more value over the years than the amorphous and elastic public interest standard. And with respect to the former, of course, Ronald Coase features very prominently in the book. It really is incredible to think that in the late 1950s and early 1960s he was pioneering an idea that was almost quite literally laughed out of the academy, the halls of Congress, and even the halls of the FCC: this notion that assigning property rights and minimizing transaction costs would ultimately allow the asset itself to be allocated to its highest valued use.
The professor quotes a couple of my predecessors at the commission who said that the likelihood of any spectrum being auctioned would be akin to the Easter Bunny winning the Preakness.</p> <p>I think that the success of Professor Coase’s theory has been proven over the years. And as Professor Hazlett mentions in closing, “From electricity to water to pollution allowances to fishing rights, newly constructed markets have fashioned superior alternatives to command‐​and‐​control regulation.”</p> <p>The other piece of it, of course, is the public interest standard, and here the professor does a masterful job of elucidating why that standard, all too often, has been amorphous and has been subject to the interpretation of whatever particular majority happens to occupy the FCC. In the best example I can think of, Hazlett talks about how FCC staffers over the years would be given an assignment to approve a broadcast license renewal. “The FCC was well practiced in crafting eloquent documents detailing how any given assignment advanced ‘public interest, convenience or necessity.’ These statements, required by administrative law, laid a veneer of respectability over processes that might otherwise attract interest from journalists or prosecutors. In one revealing episode, a surprisingly self‐​confident FCC staffer — tasked with writing up a justification for a license award — asked the chairman of the Commission to describe the policy grounds for this selection.” (This is in the mid‐​1960s.) “The annoyed chairman responded ‘You’ll think of some.’” In another case, “the FCC voted to grant a company a TV license, and the staff wrote up an order of more than 100 pages explaining it. For reasons undisclosed, the FCC reconsidered and switched licensees. The staff dutifully revised its order using the original draft as a template, producing an equally glowing public interest justification for the new winner.” I think that makes it critical for us to focus on the facts, to think about principles of economics and to have a view as to consumer welfare as opposed to whatever parochial interest might be badgering us for this or that regulatory favor.</p> <p>The professor, as I said, offers some great insights, and I would like to think that over the last year and change we have tried to incorporate some of those insights in terms of structure and policy. Last year, I introduced my proposal to create an Office of Economic Analysis. Our hope is to make sure that economic reasoning is not just an afterthought at the FCC, but a <em>central</em> thought as we make our decisions. That is one way to insulate the agency from that kind of “ends justify the means” decisionmaking that I just described.</p> <p>Additionally, we are giving teeth to Section 7 of the Communications Act. No longer will an innovator have to sit around waiting for years for the FCC to figure out whether or not an invention is in the public interest. We now have a one‐​year deadline for making these determinations. And we are adopting more market‐​based solutions — flexible spectrum use, for example, has been a profound benefit to consumers the world over. Instead of determining what the spectrum shall be used for, dictating it from on high and expecting entrepreneurs to make use of it, we let innovators make that decision. And the results speak for themselves. 
That we have smartphones at all speaks to the fact that innovators have been able to devote the spectrum to its highest‐​valued use.</p> <p>Additionally, we want to minimize infrastructure burdens. Increasingly this is where the rubber meets the road. Next week, for example, we are going to be voting on an order modernizing our regulations to recognize that the networks of the future won’t look like the networks of the past. The small cells of the future and all the guts of the 5G networks need to be evaluated under a regulatory rubric that is different from the one that applied in decades past to 200‐​foot cell towers.</p> <p>Our hope is that both in terms of structure and in terms of policy we can make sure that we make decisions that are right for the American people, produce more consumer welfare, and most importantly ensure that when the sequel to this book is written, Chairman Ajit Pai is not going to be featured whatsoever. Except, hopefully, as an example of something that went right.</p> </div> Mon, 18 Jun 2018 18:18:00 -0400 https://www.cato.org/policy-report/mayjune-2018/untold-history-fcc-regulation Big Tech’s Big‐​Time, Big‐​Scale Problem https://www.cato.org/policy-report/mayjune-2018/big-techs-big-time-big-scale-problem Geoffrey A. Manne, Justin (Gus) Hurwitz <div class="lead text-default"> <p>High‐​tech and network industries have a&nbsp;long history of evoking populist scrutiny. New technologies frequently disrupt incumbent, often less centralized, business models and interfere with existing relationships between sellers and consumers. Inevitably, the paradigmatic small‐​town buggy manufacturer displaced by technological advance directs his ire against the large, distant car companies that make the automobiles responsible for his demise. Even consumers and business owners who benefit from enhanced efficiency or entirely new and beneficial products often end up feeling dependent on them. Adding to that a&nbsp;distrust of firms that operate in geographies or at scales that are distant from typical consumer experiences, critics express their concerns about firms with a&nbsp;single heuristic: big is bad.</p> </div> , <div class="text-default"> <p>Although often framed in more complex antitrust terms — large firms are accused of employing anticompetitive business practices, including the development of “predatory” innovations designed to expand their reach and thwart potential competition, for example — populist antipathy is, at root, fundamentally about the “bigness” of these high‐​tech firms. Companies that owe their success — and their size — to clever implementations of innovative technologies are ultimately decried not for their technology or their business models but for their expansive operations.
Standard Oil, AT&amp;T (in the early 20th century), IBM, AT&amp;T (in the late 20th century), Microsoft, and, most recently, Google, Facebook, Amazon, and (once again) AT&amp;T have regularly found themselves in the crosshairs of antitrust enforcers for growing large by besting (and, often, buying) their competitors.</p> <p><br />The size of these companies — among the largest in the American economy — endows them with the superficial appearance of market power, providing competitors and advocates with a rhetorical basis for antitrust action against them. But their problems also extend beyond mere allegations that they are too large: in each case, these companies have also engaged in some conduct disfavored by powerful political actors. The appearance of market power and the firms’ problematic‐​to‐​some conduct give rise to calls to use antitrust law to regulate their behavior — or, perhaps most troubling, to constrain their perceived power by breaking them up.</p> <p>This article is not an apologia for the bad acts of the modern tech industry. There is no question that some of today’s largest companies have transformed society and the economy over the years (not necessarily always for the better) and have engaged in arguably troubling conduct in the process. But whatever the beliefs of those calling for the breakup of Big Tech, the question remains whether it is wise to shoehorn broader social and political concerns into the narrow, economic remit of antitrust law.</p> <p>By and large, the market is sufficiently powerful to constrain potentially problematic conduct. Consider the discussion that has developed since the disclosure of Facebook’s relationship with Cambridge Analytica. Mark Zuckerberg has been scrambling to make public amends and to stave off a potentially devastating regulatory response, even going so far as to suggest that perhaps platforms like Facebook should be subject to some regulation. Despite those efforts, the reality is that the market is the most effective regulator, and at the time of this writing Facebook has lost over $75 billion in value.</p> <p>Alas, the urge to treat antitrust as a legal Swiss Army knife capable of correcting all manner of social and economic ills is apparently difficult to resist. Conflating size with market power, and market power with political power, many recent calls for regulation of the tech industry are framed in antitrust terms. Sen. Elizabeth Warren (D‑MA) is one of the worst offenders in this regard:</p> </div> , <blockquote class="blockquote"> <div> <p>Today in America competition is dying. Consolidation and concentration are on the rise in sector after sector. Concentration threatens our markets, threatens our economy, and threatens our democracy.</p> </div> </blockquote> <cite> </cite> , <div class="text-default"> <p>For Senator Warren the antidote is clear: “It is time to do what Teddy Roosevelt did: pick up the antitrust stick again.”</p> <p>And she is not alone. 
Abetted by a&nbsp;growing chorus of advocates and scholars on both the left and right, proponents of activist antitrust are now calling for invasive, “public‐​utility‐​style” regulation or even the dissolution of the world’s most innovative companies essentially because they seem “too big.” Unconstrained by a&nbsp;sufficient number of competitors, the argument goes, these firms impose all manner of alleged harms — from fake news to the demise of local retail to low wages to the veritable destruction of “democracy.” What is needed, they say, is industrial policy that shackles large companies or that mandates more, smaller firms.</p> <p>This view contradicts the past century’s worth of experience and learning. It would require jettisoning the crown jewel of modern antitrust law — the consumer welfare standard — and returning antitrust to an earlier era in which inefficient firms were protected from the burdens of competition at the expense of consumers. And doing so would put industrial regulation in the hands of would‐​be central planners, shielded from any politically accountable oversight.</p> <p><strong>WILSON, BRANDEIS, AND THE IGNORANT CONSUMER</strong><br>American antitrust law began with the Sherman Antitrust Act of 1890. The Sherman Act, named for Ohio Senator John Sherman, prohibited agreements “in restraint of trade” (that is, collusion) and “monopoliz[ation], or attempt[s] to monopolize.” Importantly, and contrary to common understandings on both the left and right, the purpose of the Sherman Antitrust Act was never particularly clear.</p> <p>There is ample evidence that it was intended both to proscribe business practices that harmed consumers and to allow politically preferred businesses to maintain high prices in the face of competition from politically disfavored businesses — never mind that modern economics roundly tells us that these two goals are incompatible. This ambiguity isn’t entirely surprising, both because Senator Sherman was fickle and petty in his own purposes for introducing the legislation and because the regnant economic theory of the day was relatively unsophisticated and would remain so for at least another several decades.</p> <p>The years surrounding the adoption of the Sherman Act were characterized by dramatic growth in the high‐​tech industries of the day — most notably manufacturing/​refining, railroads, and telecommunications — as well as corporate and conglomerate consolidation. For many, the purpose of the Sherman Act was to stem this growth — to prevent low prices and large firms from “driving out of business the small dealers and worthy men whose lives have been spent therein,” in the words of one of the early Supreme Court decisions applying the act. It failed to do so, however, and by the time of the presidential election of 1912, concern about large firms had developed as a&nbsp;divisive, populist issue. Woodrow Wilson was elected president largely on a&nbsp;big‐​is‐​bad antitrust platform.</p> <p>The key architect of that platform was Louis Brandeis. Brandeis played an important role in reshaping antitrust and industrial policy in the United States, helping to design the Clayton Antitrust Act and the Federal Trade Commission in 1914, both of which dramatically expanded federal antitrust law. Brandeis’s views were informed by a&nbsp;strong belief that large firms could become large only by illegitimate means and could not be trusted. 
Large firms, unlike their Main Street, mom‐​and‐​pop counterparts, operated primarily by deceiving consumers into buying unnecessary and lower‐​quality products. Stated bluntly, Brandeis’s views were informed by a&nbsp;belief that consumers were (in his own words) “servile, self‐​indulgent, indolent, ignorant.”</p> <p><strong>THE RISE AND FALL OF MID-CENTURY ANTITRUST</strong><br>As the 20th century progressed, antitrust economics and the study of industrial organization grew increasingly sophisticated. The most prominent early advance in antitrust economics was the development of the Structure‐​Conduct‐​Performance (SCP) paradigm, associated with University of California, Berkeley, economist Joe Bain. SCP held that the conduct of firms in an industry, and ultimately their performance, was a&nbsp;function of the overall structure of the industry. One of the predictions of the SCP model is that more‐​concentrated industries are inherently less competitive, allowing firms to employ anticompetitive conduct (like collusion) to raise prices. Profitability and market performance, in this view, are a&nbsp;function of market structure, not the relative efficiency of competing firms. SCP therefore generally prescribed reducing concentration — for instance, by breaking up firms or challenging mergers — as a&nbsp;way of making industries more competitive. Ultimately, the SCP model proved to be overly simplistic and fell out of favor not long after it was popularized.</p> <p>Both SCP and the Brandeisian view of antitrust espouse a&nbsp;preference for smaller firms, though they diverge on the harm of “bigness.” The Brandeisian view holds that big is bad per se, whereas SCP suggests that a&nbsp;market comprising multiple competing smaller firms is comparatively better than a&nbsp;highly concentrated one (which implies larger firms). Neither approach readily admits the possibility that big could be better under appropriate conditions, however.</p> <p>Yet the weight of subsequent economic research holds that large firms are frequently ideal economic actors for maximizing consumer welfare. Since the Industrial Revolution, and especially in the Information Age, it’s not unusual for efficient, competitive markets to comprise only a&nbsp;few big, innovative firms. Unlike the textbook models of monopoly markets, these markets tend to exhibit extremely high levels of research and development, continual product evolution, frequent entry, almost as frequent exit — and economies of scope and scale (i.e., “bigness”). Size simply does not correlate with anything recognizable as “consumer harm.”</p> <p>While perhaps counterintuitive, this observation means that, in many cases, modern antitrust law actually condones bigness — or, put differently, without additional factors to substantiate potential concern, antitrust law is fundamentally agnostic about the size of firms or the extent of market concentration.</p> <p>The classic example of the problem with the Brandeisian and SCP approaches to antitrust analysis is the 1966 <em>Von’s Grocery</em> case. In <em>Von’s Grocery</em>, the Supreme Court addressed the government’s challenge of the 1960 merger of Von’s Grocery and Shopping Bag Food Stores, two grocery chains in southern California that were succeeding in a&nbsp;rapidly changing and increasingly concentrated market for grocery stores.
Together, these chains controlled less than 8&nbsp;percent of a&nbsp;grocery market that was increasingly dominated by a&nbsp;smaller number of big‐​box supermarkets that were coming into existence as a&nbsp;result of business model innovation, changing demographics, affordable automobiles, and economies of scale enabled in part by new technology.</p> <p>The market share of the merged chains was insufficient to have any meaningful effect on prices, but it might have been sufficient to give the resulting retail chain the scale it needed to compete. Yet despite the lack of evidence of any anticompetitive effect from the merger, the Supreme Court affirmed the government’s challenge, adopting the SCP presumption against increased concentration even where there was no anticompetitive harm.</p> <p>In <em>Von’s Grocery</em>, this decision meant breaking up a&nbsp;merger that did not harm consumers, on the one hand, while preventing firms from remaining competitive in an evolving market by achieving efficient scale, on the other. As Justice Stewart noted in dissent:</p> </div> , <blockquote class="blockquote"> <div> <p>In fashioning its per se rule, based on the net arithmetical decline in the number of single store operators, the Court completely disregards the obvious procreative vigor of competition in the market as reflected in the turbulent history of entry and exit of competing small chains. … The Clayton Act was never intended by Congress for use by the Court as a&nbsp;charter to roll back the supermarket revolution.</p> </div> </blockquote> <cite> </cite> , <div class="text-default"> <p>In other words, by adopting a&nbsp;formalistic rule against increased concentration, the analysis in <em>Von’s Grocery</em> disregarded the more nuanced market dynamics that justified the merger, thus harming consumers, competitors, and dynamic competition.</p> <p>In the 1970s, antitrust economists increasingly questioned the small‐​is‐​good bias of Brandeisian and SCP antitrust. Prompted by cases like <em>Von’s Grocery</em>, antitrust economists realized that “small is good” as an antitrust ethos lacked empirical and intellectual justification. Moreover, preferring firm size as an analytical dimension for applying antitrust laws could often lead to perverse outcomes in which consumers were harmed and smaller, less efficient competitors were protected. Rather than focusing on naive proxies for conduct and performance, more probing analysis was needed.</p> <p><strong>ANTITRUST’S PARADOX: PROTECTING COMPETITORS HARMS CONSUMERS</strong><br>Robert Bork famously synthesized the lessons of these economists in <em>The Antitrust Paradox</em>, the 1978 <em>urtext</em> of modern antitrust. Bork argued that the best understanding of the purpose of American antitrust law is the protection of consumers against anticompetitive business practices, and that success on this front is best measured in terms of consumer welfare. Under the consumer welfare standard, we are not concerned with the structure of an industry in the abstract; we are concerned only with the extent to which particular actions of firms within that industry are likely to harm consumers.</p> <p>But Bork’s normative focus wasn’t merely the meaning of the Sherman Act or the significance of the consumer welfare standard.
For Bork, the paradox of antitrust is that antitrust law, meant to shield <em>consumers</em> from anticompetitive business practices, had come to be used to shield <em>competitors</em> from competition, at the expense of consumers’ welfare.</p> <p>By its nature, competition disrupts incumbent firms and existing markets. Firms develop new technologies or processes that allow them to offer better products or lower prices than their rivals. This benefits consumers and successful firms alike. Sometimes firms develop new products that disrupt markets entirely, putting a&nbsp;whole generation of firms out of business — again benefiting consumers and successful firms. Horses and buggies are replaced by cars; small grocers are replaced by supermarkets.</p> <p>What Bork saw was that antitrust law was being used by unsuccessful firms to constrain the competitive efforts of their rivals. Under the Brandeisian and SCP models, firms that sought to evade the pressures of competition, and managers who preferred extracting easy rents to giving their customers better products or lower prices, could point to virtually any threat to the status quo as evidence of anticompetitive conduct. If a&nbsp;firm developed a&nbsp;better product, the law could be used to punish its success if the firm grew too large. If a&nbsp;firm developed a&nbsp;better process that enabled it to offer lower prices, competitors could allege that such conduct was illegal because it was “predatory.”</p> <p><strong>THE POLITICAL ECONOMY OF ANTITRUST REGULATION</strong><br>Perhaps the greatest virtue of the consumer welfare standard is not that it is the best antitrust standard (although it is) — it’s simply that it is a&nbsp;standard. The story of antitrust law for most of the 20th century was one of standard‐​less enforcement for political ends. It was a&nbsp;tool by which any entrenched industry could harness the force of the state to maintain power or stifle competition.</p> <p>This is because competition, on its face, is virtually indistinguishable from anticompetitive behavior. Every firm strives to undercut its rivals, to put its rivals out of business, to increase its rivals’ costs, or to steal its rivals’ customers. The consumer welfare standard provides courts with a&nbsp;concrete mechanism for distinguishing between good and bad conduct, based not on the effect on rival firms but on the effect on consumers. Absent such a&nbsp;standard, any firm could potentially be deemed to violate the antitrust laws for any act that could impede its competitors.</p> <p>Compounding the problem, the operative text of the Sherman Act comprises about 170 frustratingly ambiguous words. It is difficult to escape the sense that advocates of eliminating or diluting the consumer welfare standard seek to co‐​opt the act’s terse ambiguity to invent a&nbsp;sort of “meta‐​legislation” that enacts, in effect, social preferences they couldn’t convince Congress to adopt outright.</p> <p>The same high‐​tech, scale‐​intensive industries that are likely to evoke superficial big‐​is‐​bad antitrust concerns are also likely to raise important social, legal, and political questions. The telephone and the railroad reshaped society; the computer began a&nbsp;reshaping of society that the <em>personal</em> computer continued and that is still ongoing in today’s internet era.</p> <p>Adapting to the changes wrought by these industries is one of the defining challenges of the 21st century.
It could well be the case, as Mark Zuckerberg suggests, that it’s time to regulate all or part of these industries. If so, the shape and scope of that regulation is a&nbsp;matter for political debate and social response. But antitrust law is not the proper vehicle for addressing open‐​ended questions of social and political values, disconnected from the economic effects of restraints on competition.</p> <p>One major risk of addressing these concerns through antitrust law — and of weakening the consumer welfare standard in the process — is that doing so short‐​circuits the social and political processes better suited to addressing them. Another risk is that such a&nbsp;standard‐​less antitrust law could be used to impose arbitrary market controls subject only to political whim. The earliest, worst impulses of American antitrust law catered to the would‐​be industrial planners of the early 20th century. Contemporary calls to weaken the consumer welfare standard are motivated by the demands of similar would‐​be planners to reshape American industry in their own idiosyncratic image. Regardless of whether that image of the American economy is good (it is not), such designs should be pursued through the legislative process, not by hollowing out the core of antitrust law and parasitically repurposing it for political ends.</p> <p>Today’s promoters of breaking up or strictly regulating Big Tech companies are primarily motivated by political concerns. These undeniably large and pervasive companies have social influence and economic power that rivals, and therefore threatens, would‐​be regulators who have spent their careers positioning themselves to wield the power of the state to advance what Thomas Sowell has dubbed the “vision of the anointed.” Thus, Amazon threatens to undermine an aesthetic vision of “local self‐​reliance”; Facebook and Google are “information gatekeepers” with the power to undermine preferred political narratives; and Uber exploits workers, evades social regulation, and threatens to hasten the decline of labor unions (and their campaign contributions).</p> <p>Ultimately, from Louis Brandeis to Elizabeth Warren, those who cloak their approach to market regulation in the mantle of “consumer protection” instead of “consumer welfare” start from the premise that consumers need protection — from both the market and themselves — and that learned regulators are best situated to offer that protection. A&nbsp;consumer protection standard is inherently ambiguous, affording regulators the power to structure markets in whatever manner they deem best for consumers. The consumer welfare standard, by contrast, restricts the conduct of firms and regulators alike, ensuring that both operate in the objective best interest of consumers.</p> <p>Properly construed, antitrust law has one focus: protecting consumers from anticompetitive conduct. This goal is encapsulated in the consumer welfare standard, and that standard accomplishes it well. We should not jeopardize that function merely for political expedience, let alone to implement, hastily and surreptitiously, a&nbsp;particular, politicized industrial policy.</p> </div> Mon, 18 Jun 2018 16:20:00 -0400 Geoffrey A.
Manne, Justin (Gus) Hurwitz https://www.cato.org/policy-report/mayjune-2018/big-techs-big-time-big-scale-problem #CatoDigital—Net Neutrality, Six Months Later https://www.cato.org/multimedia/events/catodigital-net-neutrality-six-months-later Ajit Pai, Kat Murti <p>On December 14, 2017, the Federal Communications Commission (FCC) voted to repeal “net neutrality,” a&nbsp;set of Obama‐​era regulations that had been enacted only in 2015. The outcry was outsized. Racist memes featuring FCC chairman Ajit Pai, who spearheaded the repeal effort, flooded the internet, while grassroots activists invaded Pai’s neighborhood, placing pamphlets with his face on his neighbors’ doorsteps, peering through the windows of his house, and taking photos of his young children inside. Faced with death threats, Pai canceled speaking engagements in the months following the vote out of concern for his safety. Minutes after the vote, New York Attorney General Eric Schneiderman announced his intent to lead a&nbsp;multistate lawsuit against the FCC to “stop illegal rollback of net neutrality.” Net neutrality supporters in more than 20 states quickly joined the suit. Powerful tech companies like Netflix, Reddit, Amazon, and Kickstarter called for the immediate restoration of net neutrality. Executive orders in New York and Montana imposed net neutrality requirements on internet service providers that had contracts with those states, and Washington recently became the first state to pass its own version of net neutrality. Even now, the debate continues, with Senate Democrats pushing to restore the net neutrality rules the FCC vote repealed. Yet what has really changed in the six months since the repeal vote? On Thursday, June 14, the six‐​month anniversary of the controversial FCC vote, please join the Cato Institute for a&nbsp;one‐​on‐​one interview with FCC chairman Ajit Pai. Pai will explain what net neutrality is, why he supported its repeal, and what comes next for the future of the internet.</p> Thu, 14 Jun 2018 10:35:00 -0400 Ajit Pai, Kat Murti https://www.cato.org/multimedia/events/catodigital-net-neutrality-six-months-later AT&T Ruling Tells Government: It’s Not 1948 Anymore https://www.cato.org/publications/commentary/att-ruling-tells-government-its-not-1948-anymore Walter Olson <div class="lead text-default"> <p>Judge to federal government: The entertainment business has moved on from the Truman era, and so has antitrust law.</p> </div> <div class="text-default"> <p>In 1948 the US Supreme Court&nbsp;<a href="https://supreme.justia.com/cases/federal/us/334/131/case.html" target="_blank">ordered&nbsp;</a>Hollywood studios to sell their movie theaters, following the then‐​popular idea that the government should police marketplace competition by restraining businesses’ vertical integration — or as we might put it these days, by ordering content kept separate from distribution.</p> <p>The surprise in 2018 is not so much that US District Judge Richard Leon&nbsp;<a href="http://money.cnn.com/2018/06/12/media/att-time-warner-ruling/index.html">rejected</a>&nbsp;the government’s challenge to the $85 billion AT&amp;T‑Time Warner merger. That much was expected by most antitrust watchers. The shock came from the stinging way he rejected the government’s evidence — using language such as “gossamer thin” and “poppycock.”</p> <p>That surprise wasn’t an unpleasant one for many.
Media and telecom&nbsp;<a href="https://www.reuters.com/article/us-usa-stocks/wall-street-flat-media-stocks-jump-after-att-ruling-idUSKBN1J91HI" target="_blank">stocks rose</a>&nbsp;on Wall Street, with the decision widely seen as green‐​lighting further hookups of cable and wireless distributors with content providers, such as&nbsp;<a href="https://variety.com/2018/biz/news/comcast-att-time-warner-ruling-21st-century-fox-disney-1202843088/" target="_blank">a potential Comcast deal</a>&nbsp;for 21st Century Fox.</p> <p>While “horizontal” challenges to mergers between competitors who sell to the same group of customers are alive and well, the government hadn’t gone all the way to a&nbsp;court decision in a&nbsp;vertical merger case&nbsp;<a href="https://law.justia.com/cases/federal/district-courts/FSupp/429/1271/1532053/" target="_blank">in 40&nbsp;years</a>&nbsp;(and it&nbsp;<a href="https://scholar.google.com/scholar_case?case=10221755252244746550&amp;hl=en&amp;as_sdt=6&amp;as_vis=1&amp;oi=scholarr" target="_blank">lost then, too</a>). It’s been&nbsp;<a href="https://hbr.org/2017/11/why-mergers-like-the-att-time-warner-deal-should-go-through" target="_blank">more than 30&nbsp;years&nbsp;</a>since the government successfully opposed a&nbsp;vertical merger, though it has sometimes negotiated&nbsp;<a href="https://www.ftc.gov/news-events/press-releases/2000/12/ftc-approves-aoltime-warner-merger-conditions" target="_blank">to attach strings</a>&nbsp;before letting a&nbsp;deal proceed.</p> <p>Until recently, media companies could do well at either the content end — like Time Warner, with its properties such as CNN, Turner and HBO — or at the distribution end, like AT&amp;T with its vast consumer base including cell phone and satellite users. You could be good at making shows even if you weren’t good at getting to know individual customers and their data.</p> <p>Now, amid rapid technological change, the advantage has shifted to companies that can do both, commissioning original programming while also knowing a&nbsp;lot in real time about who is watching and how, making informed predictions about what they might want to watch tomorrow or next year. Netflix, Hulu and Amazon, for example — with Facebook, Google and others coming up fast — can do both. Enterprises of this sort, the judge&nbsp;<a href="https://ecf.dcd.uscourts.gov/cgi-bin/show_public_doc?2017cv2511-146" target="_blank">wrote</a>, “have driven much of the recent innovation in the video programming and distribution industry.”</p> <p>The government’s own merger guidelines&nbsp;<a href="https://www.justice.gov/atr/non-horizontal-merger-guidelines" target="_blank">describe</a> vertical mergers as “not invariably innocuous,” a&nbsp;backhanded phrasing that points to the government’s uphill legal burden of showing that the case at hand is in some way exceptional. And while the Department of Justice tried, advancing among other things a&nbsp;theory that existing distributors such as cable companies would be squeezed in negotiations, the judge found after a&nbsp;full trial that it hadn’t come close to proving its case.
Sample problem: it had defined as a&nbsp;market a&nbsp;slice of the TV business so narrow that it excluded Netflix and the other new(ish) providers.</p> <p>Meanwhile, AT&amp;T’s lawyers pointed to large, uncontested savings the merger would yield for the company and its customers: The government’s own expert acknowledged that customers of AT&amp;T’s DirecTV and U‑verse services&nbsp;<a href="https://www.wsj.com/articles/decoding-judge-leon-s-at-t-time-warner-decision-1528845853" target="_blank">would</a>&nbsp;“pay a&nbsp;total of about $350 million less per year for their video distribution services.” Perhaps more important, the combination would promote innovation.&nbsp;<a href="https://ecf.dcd.uscourts.gov/cgi-bin/show_public_doc?2017cv2511-146" target="_blank">To quote</a>&nbsp;the opinion: “The merged entity could, for instance, gather and edit individual news clips from CNN throughout the day — all tailored to a&nbsp;given user’s interests — and deliver that news to the wireless customer for viewing on his or her fifteen‐​minute break.”</p> <p>The days of the Hollywood studio system are long gone, and so is the old antitrust law. It should be too late for Washington to block this deal at the altar; even trying would be as futile as attempting to separate Net from Flix or You from Tube.</p> </div> Wed, 13 Jun 2018 15:10:00 -0400 Walter Olson https://www.cato.org/publications/commentary/att-ruling-tells-government-its-not-1948-anymore Fake News and Our Real Problems https://www.cato.org/multimedia/cato-out-loud/fake-news-our-real-problems Will Rinehart An audio version of “<a href="https://www.cato-unbound.org/2017/12/05/will-rinehart/fake-news-our-real-problems">Fake News and Our Real Problems</a>,” the lead essay from the December 2017 issue of <em>Cato Unbound</em>, “<a href="https://www.cato-unbound.org/issues/december-2017/social-media-broken">Is Social Media Broken?</a>” Tue, 22 May 2018 15:26:00 -0400 Will Rinehart https://www.cato.org/multimedia/cato-out-loud/fake-news-our-real-problems Thomas W. Hazlett on the regulatory history of the radio spectrum https://www.cato.org/multimedia/cato-audio/thomas-w-hazlett-regulatory-history-radio-spectrum Tue, 01 May 2018 03:00:00 -0400 https://www.cato.org/multimedia/cato-audio/thomas-w-hazlett-regulatory-history-radio-spectrum