Topic: Telecom, Internet & Information Policy

Unconscious People Can’t Consent to Police Searches

A reasonable expectation of privacy is one of the most fundamental rights people hold in a free society. Accordingly, the Fourth Amendment prohibits warrantless searches, with few exceptions. Police officers in Wisconsin violated that right when, after arresting Gerald Mitchell for drunk driving, they drew his blood to test its alcohol content while he was unconscious. The state has attempted to excuse the officers by citing an implied-consent statute, which provides that simply driving on state roads constitutes consent to such searches.

The right to privacy is not absolute; police are allowed to search for evidence of a crime. But in doing so, they must follow procedures that comport with the Constitution. Johnson v. United States (1948) indicates that, before police conduct a search, the evidence should be judged by “a neutral and detached magistrate instead of being judged by the officer engaged in the often competitive enterprise of ferreting out crime.” The Fourth Amendment contains a simple requirement for law enforcement that is an effective bulwark against unreasonable searches: get a warrant first.

Unfortunately for Mitchell, the Wisconsin Supreme Court upheld this unconstitutional search under the “pervasively regulated business” exception, which allows for warrantless administrative inspections of certain highly regulated businesses. But this exception is quite narrow and designed to ensure regulatory compliance, not to facilitate evidence-gathering in cases of suspected criminal activity.

The U.S. Supreme Court has recognized only four types of business to which the exception applies: liquor sales, firearms dealing, running an automobile junkyard, and mining. None of these resembles the simple act of driving a vehicle. The state court thus erroneously conflated the licensing of a driver with a highly regulated business in order to justify an otherwise unreasonable search.

Gerald Mitchell is thus asking the U.S. Supreme Court to overturn the Wisconsin Supreme Court and find that this warrantless, non-consensual search violated his Fourth Amendment rights. Cato has joined the Rutherford Institute in filing an amicus brief in support of his petition. We argue for the basic notion that unconscious people can’t consent to anything, especially police searches, and that inspecting a coal mine for safety compliance—a justified exception to the warrant requirement—is not the same as searching a driver’s blood in an attempt to convict him of DUI.

A Conversation with Marietje Schaake (Part 2)

Marietje Schaake is a leading and influential voice in Europe on digital platforms and the digital economy. She is the founder of the European Parliament Intergroup on the Digital Agenda for Europe and has been a member of the European Parliament since 2009, representing the Dutch party D66, which is part of the Alliance of Liberals and Democrats for Europe (ALDE) political group. Schaake is spokesperson for the center/right group in the European Parliament on transatlantic trade and digital trade, and she is Vice-President of the European Parliament’s US Delegation. She has for some time advocated more regulation of and accountability for the digital platforms.

You can read Part 1 of this conversation here.

FR: I want to focus on the small players. People concerned about regulation say that if you only focus on the big players like Facebook, Google, or Twitter and how to regulate them, you will make it very difficult for the small players to stay in the market, because transaction costs and other costs connected to regulation will kill the small companies. Regulation becomes a way to lock in the existing regime and market shares, because it takes so many resources and so much money to stay in the market and compete. And new companies will never be able to enter the market. What do you say to that argument?

MS: It depends on how the regulations are made, but it is a real risk. It is the risk of the GDPR (the General Data Protection Regulation, https://www.wired.co.uk/article/what-is-gdpr-uk-eu-legislation-compliance-summary-fines-2018), and with filtering as suggested now. The size of a company is always a way to assess whether there is a problem, and I think we should do the same with these regulations, so that there could be a progressive liability depending on how big the company is, or some kind of mechanism that would help small or medium-sized companies deal with these requirements. Indeed, it is true that for companies that have billions of euros or dollars of revenue, it’s easy to deploy lots of people. A representative of Google yesterday (at a conference in the European Parliament) said they have 10,000 people working on content moderation. Those are extraordinary figures, and they are proportionate to the big impact of these companies, but if you are a small company you may not be able to do it, and this is always an issue. It’s not the first time we have been dealing with this. With every regulation the question is how hard compliance is for small and medium-sized enterprises.


FR: The challenge or threat from misinformation is also playing a big role in the debate about regulation and liability. We will soon have an election in Denmark. Sweden recently had an election where there was a big focus on misinformation, but it turns out that misinformation doesn’t work as well in Denmark as in the US or some other countries, because the public is more resilient. Why not focus more on resilience and less on regulation, so people have a choice? We are up against human nature; these things are triggered by tribalism and other human characteristics. To counter it you need education, media pluralism, and so on.

MS: I think you need to focus on both. First, what is choice if you have a few near monopolies dominating the market? Second, how much can we expect from citizens? If you look at the terms of service for a common digital provider that you and I use, they are quite lengthy. Is that a choice for a consumer? I think it’s nonsense. That’s one thing. Moreover, we are lucky because we are from countries where basic trust is relatively high, media pluralism exists, there are many political parties, and our governments will be committed to investing in education and media pluralism, knock on wood. How will this play out in a country like Italy, where basic trust is lower and there is less media pluralism? How are we ever going to overcome this with big tech? So I think there is a sufficient risk, if you look at the entire European Union, Hungary and other countries, that governments will not commit resources to what is right and will not create the kind of resilience that our societies already have. In the Netherlands trust in the media is among the highest, and it’s probably also because of a certain quality of life and a certain kind of freedom that people have enjoyed for a long time. Even in our country you see a lot of anti-system political parties rising, so it’s not a given that this balance will continue forever, because it requires public resources to be spent on media and other factors. So I think both are very important, and I don’t want to suggest that we should not involve people, but I don’t know if we can expect the average citizen to have the time, the ability, and the access to information it would take to make them resilient enough on their own.

FR: Do you think a version of the German “Facebook law,” with its delegation of law enforcement to the digital platforms, will make it to the agenda of lawmakers in the European Parliament?

MS: No, I think there are too many flaws in it. It’s bad. Some form of responsibility on behalf of companies to take down information will exist, but I hope the law will be the primary tool. The companies will take down content measured against the law, with the proper safeguards and proportionality. If there are incentives like big fines to be overly ambitious in taking down information, that’s a risk. But on the other hand, the platforms as private companies already have all the freedom they want to take down any information with a reference to their terms of use. We are assuming that they are going to take the law as guidance, but nothing indicates they will. In fact, Facebook doesn’t accept breastfeeding pictures, so they are already setting new social norms. A new generation may grow up thinking breastfeeding is obscene. The platforms are already regulating speech, and people who are scared about regulation should understand that it is Mark Zuckerberg who is regulating speech right now.

FR: Recently the EU praised the Code of Conduct to fight hate speech online that it signed with the tech companies in 2016. A lot of speech has been taken down; according to the EU, 89 percent of flagged content was removed within 24 hours in the past year. But my question is: Do we know how much speech has been taken down that should not have been taken down?

MS: No, we don’t know.

FR: That will concern those who value free speech. You have the law, you have community standards, and then you have a mob mentality, i.e., the people who complain the most and scream the loudest will have their way, and they will set the standards. So if you organize people to complain about certain content, it will be taken down to make life easier for Facebook and Twitter and Google.

MS: Yes.

FR: So you agree that it’s a concern?

MS: It’s a huge concern. If you believe in freedom of expression, which I know you do, and I think it’s one of the most important rights, one so many people have fought for, why would we give it up? Even a little erosion of freedom of expression is a huge danger. It is therefore a risk to put responsibility on these companies to take down content without a check against the law, and to allow these companies to set their own terms of use that can be at complete odds with the law and with social norms (consider the restrictions on breastfeeding photos, on Italian Renaissance statues deemed pornographic, or on the photo of a naked girl hit by napalm in Vietnam). Let me give you an example from my own experience.

I gave a speech here in parliament. It was a very innocent and clearly political speech, but it was taken down by YouTube. They said it was marked as spam, which I don’t believe; I have never posted anything that was labeled spam. What I think happened was that my speech was about banning goods and trade that can be used for torture and the death penalty. I think the machine flagged “torture” because torture is bad, but a political debate about torture is not bad. I took a screenshot showing that YouTube took it down, posted it on Twitter, and said “wow, see what happened,” and they were on the phone within two hours. But that’s not the experience most people (including the people I represent) will have. That’s the danger. We also know of examples where Russians flagged Ukrainian websites, which were then taken down. And if that happens to a political candidate in the last 24 hours before an election, it could be decisive, even if the companies say they’ll restore the content within 24 hours.

FR: I spoke to a representative from one of the tech companies who said that when they consult German lawyers about whether something is legal or not, they get three different answers from three different lawyers. He said that his company would be willing to do certain things on behalf of the government, but that requires clear rules, and today the rules aren’t clear.

MS: Right, so now you see incentives coming from the companies as well. It’s no longer working for them to take on all these responsibilities, whether they are pushed to do so or just asked to do it. The fact that they have to do things is also a consequence of them saying “don’t regulate us, we can fix this.” I think it’s a slippery slope. I don’t want to see privatized law enforcement. What if Facebook is bought by Alibaba tomorrow? How happy would we be?

FR: I want to ask you about monopolies, competition and regulation. If you go back to 2007, MySpace was the biggest platform; then it was outcompeted by Facebook. As you say, there are concerns about the way Facebook manages our data and about its business model, with ads and sensational news driving traffic and getting more eyeballs. But why not let the market sort things out? If there is dissatisfaction with the way Facebook runs its business and handles our data, why not set up a competing company based on a different business model that will satisfy customers’ needs?

MS: States don’t build companies in Europe.

FR: I had private companies in mind. Netflix has a subscription model; wouldn’t a digital platform like Facebook be able to do the same?

MS: I think it would be difficult now, because there is a lock-in effect. In Europe we are trying to provide people with the ability to take their data out again. If you have used Gmail for 12 years, your pictures and your correspondence with your family and loved ones, with your boss and colleagues, could all be in there, and you want to take all those data with you. It’s your correspondence, it’s private, you may need it for your personal records. You may have filed your taxes and saved your returns and receipts in the cloud. If you are not able to move that data to another place, then competition exists only in theory. Also, if you look at Facebook, almost everybody is on Facebook now. For somebody else to start from scratch and reach everybody is very difficult. It’s not impossible, but it’s difficult. And for those models to make money, the question is how much customers are willing to pay under a subscription model.

Facebook and Google already have so much data about us. Even if I am not on Facebook, but all my friends are, then a sketch of my identity emerges because I am the empty spot between everybody else. If people start posting pictures of a birthday party with the 10 people who are on Facebook and the one person who is not, and then somebody says “I can’t wait to go on holiday with Marietje” or whatever, then at some point it becomes clear who I am, even if I am not on the platform. So they already know so much, and they already have access to so much data about people’s behaviour, that effectively it will be very hard for any competitor to get close, and we have seen it in practice. Why hasn’t there been more competition?

FR: Do you compare notes with US lawmakers on this? And do you see that your positions are getting closer to one another?

MS: Yes.

FR: Can you say a bit more about that?

MS: First of all, the talk has changed. The Europeans were dismissed as being jealous of US companies and therefore proposing regulations, i.e., we were supposedly proposing regulations in order to destroy US competitors. I don’t think that’s true, but this stereotypical view has been widespread. Also, we were accused of being too emotional about this, so we were dismissed as being irrational, which is quite insulting, but not unusual when Americans look at Europeans. I think we are in a different place now, with a privacy law in California, with New York Times editorials about the need for tougher competition regulations, with senators proposing more drastic measures, with organizations like the Center for Humane Technology focusing on time well spent, and with Apple hiring people to focus on privacy issues. Recall also the conversations about inequality in San Francisco. We have a flow of topics and conversations suggesting that the excessive outcomes of this platform economy need boundaries. I think this has become more and more accepted. The election of Donald Trump was probably the tipping point. We learned later how Facebook and others had been manipulated.

FR: You said that the problem with these companies is that they have become so powerful and therefore we need to regulate them. Is the line between public and private more blurred in Europe than in the US? You focus on power no matter whether it’s the government or a private company when it comes to protection of free speech, while in the US the First Amendment deals exclusively with the government. Do you see that as a fundamental distinction between Europe and the US?

MS: There are more articulated limitations on speech in Europe: for example, Holocaust denial, hate speech, and other forms of expression may be prohibited by law. I think there is another context here that matters. Americans in general trust private companies more than they trust the government, and in Europe, roughly speaking, it’s the other way round, so intuitively most people in Europe would prefer safeguards coming from law to trusting the market to regulate itself. That might be more important than the line between private and public or the First Amendment compared to European free speech doctrine.

Cross-posted at Techdirt
https://www.techdirt.com/articles/20190215/11321741603/conversation-with…

A Conversation with Marietje Schaake (Part 1)

Marietje Schaake is a leading and influential voice in Europe on digital platforms and the digital economy. She is the founder of the European Parliament Intergroup on the Digital Agenda for Europe and has been a member of the European Parliament since 2009, representing the Dutch party D66, which is part of the Alliance of Liberals and Democrats for Europe (ALDE) political group. Schaake is spokesperson for the center/right group in the European Parliament on transatlantic trade and digital trade, and she is Vice-President of the European Parliament’s US Delegation. She has for some time advocated more regulation of and accountability for the digital platforms.

Recently, I sat down with Marietje Schaake in a café in the European Parliament in Brussels to talk about what’s on the agenda in Europe when it comes to digital platforms and possible regulation.

Roger McNamee’s Facebook Critique

In a recent Time magazine article, Roger McNamee offers an agitated criticism of Facebook, adapted from his book Zucked: Waking Up to the Facebook Catastrophe.  Facebook “has a huge impact on politics and social welfare,” he claims, and “has done things that are truly horrible.”  Facebook, he says, is “terrible for America.”

McNamee suggests his “history with the company made me a credible voice.” From 2005 to 2015, McNamee was one of a half dozen managing directors of Elevation Partners, a $1.9 billion private equity firm that bought and sold shares in eight companies, including such oldies as Forbes and Palm. U2 singer Bono was a co-founder. Other partners included two former executives from Apple and one from Yahoo. Another is married to the sister of Facebook’s COO. Such investors are not necessarily disinterested observers, much less policy experts.

Between November 2009 and June 2010, Elevation Partners invested $210 million for 1% of Facebook. That was early, but two years after Microsoft made a larger investment. Back then, McNamee and other investors had face time with Zuckerberg.

McNamee supposedly became alarmed while perusing “Bay Area for Bernie” on Facebook and finding suspicious memes critical of Hillary. Later, he imagined the Brexit vote must have been due to misleading Facebook posts (as if British tabloids and TV were silent). “Brexit happens in June,” he says, “and then I think, Oh my god, what if it’s possible that in a campaign setting, the candidate that has the more inflammatory message gets a structural advantage from Facebook? And then in August, we hear about Manafort, so we need to introduce the Russians into the equation.”

He suggests goofy Facebook ads by Russian trolls stole the U.S. election from Clinton. Actually, the Mueller indictment said the Internet Research Agency “allegedly used social media and other internet platforms to address a wide variety of topics to inflame political debates, frequently taking both sides of divisive issues.” Such political trolling for fun and profit (clicks generate advertising money) is commonplace in Russia, and also at home in the USA.

What’s Missing from Facebook’s Oversight Board

Facebook has set out a draft charter for an “Oversight Board for Content Decisions.” This document represents the first concrete step toward the “Supreme Court” for content moderation suggested by Mark Zuckerberg. The draft charter outlines the board itself and poses several related questions for interested parties. I will offer thoughts on those questions in upcoming blog posts. I begin here not with a question posed by Facebook but rather with two values I think get too little attention in the charter: legitimacy and deliberation.

The draft charter mentions “legitimacy” once: “The public legitimacy of the board will grow from the transparent, independent decisions that the board makes.” Legitimacy is commonly defined as conforming to law or existing rules (see, for example, the American Heritage Dictionary). But Facebook is clearly thinking more broadly, and they are wise to do so. Those who remove content from Facebook (and the board that judges the propriety of those removals) have considerable power. The authors of banned content acquire at least a certain stigma and may incur a broader social censure. Facebook has every legal right to remove the content, but they also need public acceptance of this power to impose costs on others. Absent that acceptance, the oversight board might become just another site of irreconcilable political differences or worse, “the removed” will call in the government to make things right. The oversight board should achieve many goals, but its architects might think first about its legitimacy.

The term “deliberation” also gets one mention in the draft charter: “Members will commit themselves not to reveal private deliberations except as expressed in official board explanations and decisions.” So there will be deliberations, and they will not be public (more on this in later posts about transparency). The case for deliberation is strengthened by considering its absence.

The draft could have said “members will commit themselves not to reveal private voting….” In a pure stakeholder model of the board, members would accurately represent the Facebook community (that is, they would be diverse). Members would consider the case before them and vote to advance the interests of those they represent. No deliberation would be necessary, though talk among members might be permitted. And, of course, such voting could be both transparent and independent. But the decision would be a mere weighing of interests rather than a consideration of reasons.

Why would those disappointed by the decision nonetheless consider it legitimate? Facebook could say to the disappointed: The board has final say on appeals of content moderation (after all, it’s in the terms of service you signed), and this is their decision. Logically that deduction might do the trick, but I think a somewhat different process might increase the legitimacy of the content moderation in the eyes of the disappointed. 

Consider a deliberative model for the board. A subset of the board meets and discusses the case before them. Arguments are offered, values probed, and conclusions reached. The votes on the case would then be informed by the prior deliberation. Members will represent the larger community in its many facets, but the path from representation to voting will include a collective giving and taking of reasons. That difference, I think, makes the deliberative model more likely to gain legitimacy. Simply losing a vote can seem like an expression of power. Losing an argument is more acceptable, and later the argument might be renewed with a different outcome.

The importance of deliberation implicates other values in the charter, especially independence. The draft places great weight on the independence of the board from Facebook. That emphasis is understandable. Critics have said Facebook will turn a blind eye to dangerous speech because it attracts attention and thereby advertising dollars (Mark Zuckerberg has rebutted this criticism). The emphasis on independence from the business contains a truth: a board dedicated to maximizing Facebook’s quarterly returns might have a hard time gaining legitimacy. But the board’s deliberations should not be completely independent of Facebook. Facebook needs to make money to exist. Doing great harm to Facebook as a business cannot be part of the remit of the board.

Here, as so often in life, James Madison has something valuable to add. In Federalist 10, Madison argues that political institutions should be designed to protect the rights of citizens and to advance “the permanent and aggregate interests of the community.” Facebook is a community. The Community Standards (and the board’s interpretation of them) should serve the permanent and aggregate interests of that community. The prosperity of the company (though perhaps not necessarily at every margin) is surely in the interest of the community. The interests represented on the board are a starting point for understanding the interests of that community, but in themselves they are not enough for that.  Deliberation might be the bridge between those interests and the “permanent and aggregate interests of the community.” Looked at that way, most users would have a reason to believe in the legitimacy of a deliberative board as opposed to a board of stakeholders.

Facebook’s draft charter evinces hard work and thought. But it could benefit from more focus on the conditions for the legitimacy of the oversight board. Deliberation (rather than simple interest representation) is part of the answer to the legitimacy question. As deliberations go forward, perhaps the charter’s framers might give more attention to how institutional design can foster deliberation.

Can Pluralism Work Online?

The Wall Street Journal reports that Facebook has consulted with conservative individuals and groups about its content moderation. Recently I suggested that social media managers would be inclined to give stakeholders a voice (though not a veto) on content moderation policies. Some on the left were well ahead in this game, proposing that the tech companies essentially turn over content moderation of “hate speech” to them. Giving voice to the right represents a kind of rebalancing of the play of political forces. 

I argued earlier that looking to stakeholders had a flaw. These groups would be highly organized representatives of their members but not of most users of a platform. The infamous “special interests” of regular politics would thus come to dominate social media content moderation, which in turn would have trouble generating legitimacy with users and the larger world outside the internet.

But another possibility exists, which might be called “pluralism.” Both left and right are organized and thus are stakeholders. Social media managers recognize and seek advice from both sides about content moderation. But the managers retain the right to decide the “content” part of content moderation. The groups are not happy, but we settle into a stable equilibrium that over time becomes a de facto speech regime for social media.

A successful pluralism is possible. A lot will depend on the managers rapidly developing the political skills necessary to the task. They may be honing such skills. Facebook’s efforts with conservatives amount to more than hiring the usual suspects to get out of a jam. Twitter apparently followed conservative advice and verified a pro-gun Parkland survivor, an issue of considerable importance to conservative web pundits, given the extent of institutional support for the March for Our Lives movement. Note that I am not saying the Right will win out, but rather that the companies may be able to manage a balanced system of oversight.

But there will be challenges for this model.  

Spending decisions by Congress are often seen as a case of pluralist bargaining. Better organized or more skillful groups get more from the appropriations process; those who lose out can be placated with “side payments” to make legislation possible. Overall you get spending bills that no one completely likes, but everyone can live with until the next appropriations cycle. (I know that libertarians reject this sort of pluralism, but I am not discussing what should be, rather what is, as a way of understanding private content moderation.)

Here’s the challenge. The groups trying to affect social media content moderation are not bargaining over money. The left believes much of the rhetoric of the right has no place on any platform. The right notes that most social media employees lean left and wonders whether their effort to cleanse the platforms begins with Alex Jones and ends with Charles Murray (i.e., everyone on the right). The right is thus tempted to call in a fourth player in the pluralist game of content moderation: the federal government. Managing pluralist competition and bargaining is a lot harder in a time of culture wars, as Facebook and Google have discovered.

Transparency will not help matters. The Journal article mentioned earlier states: 

For users frustrated by the lack of clarity around how these companies make decisions, the added voices have made matters even murkier. Meetings between companies and their unofficial advisers are rarely publicized, and some outside groups and individuals have to sign nondisclosure agreements. 

Murkiness has its value! In this case, it allows candid discussions between the tech companies and various representatives of the left and the right. Those conversations might build trust between the companies and the groups from the left and the right, and maybe even among the groups. The left might stop thinking democracy is threatened online, and the right might conclude they are not eventually going to be pushed off the platforms. We might end up with rules for online speech that no one completely likes and yet are better than all realistic alternatives.

Now imagine that everything about private content moderation is made public. For some, allowing speech on a platform will become compromising with “hate.” (Even if a group’s leaders don’t actually believe that, they would be required to say it for political reasons). Suppressing harassment or threats will frighten others and foster calls for government intervention to protect speech online. Our culture wars will endlessly inform the politics of content moderation. That outcome is unlikely to be the best we can hope for in an era when most speech will be online. 


The Politics of Public-Private Censorship

A month ago the novelist Jay Seliger asked “Is there an actual Facebook crisis, or media narrative about Facebook crisis?” After two years of criticism of the company, he noted, its users are still on board. Indeed, you might have to pay them $1,000 to give up Facebook for one year. Seliger remarks that an earlier New York Times story “reads a lot like a media narrative that has very little to do with users’ actual lives.”

Seliger asserts that Facebook is “a Girardian scapegoat for a media ecosystem that is unable or unwilling to consider its own role” in the election of Donald Trump. (On Rene Girard see this). I don’t know about the culpability of the “media ecosystem,” but the ferocity of the campaign against Facebook suggests something more at work than a concern about privacy and the use of online data.

Many people were horrified and surprised by Trump’s election. But Trump himself, his campaign, and those who voted for him bear responsibility for his election; to be more accurate, those who voted for him in a small number of states like Michigan and Wisconsin put him in the White House.

It is difficult to believe that Facebook’s managers were dumb enough to take sides in a presidential campaign, least of all the side of Donald Trump. Brad Parscale, who ran Trump’s digital operation in 2016, says plausibly that Facebook gave the campaign as much assistance as it would give any multi-million dollar advertising customer. The company sent a person to serve as a “living manual” for the platform and to fix it quickly when it did not work.
