Tag: Internet

Online Privacy and Regulation by Default

My colleague Jim Harper and I have been having a friendly internal argument about Internet privacy regulation that strikes me as having potential implications for other contexts, so I thought I might as well pick it up here in case it’s of interest to anyone else. Unsurprisingly, neither of us are particularly sanguine about elaborate regulatory schemes—and I’m sympathetic to the general tenor of his recent post on the topic. But unlike Jim, as I recently wrote here, I can think of two rules that might be appropriate: A notice requirement that says third-party trackers must provide a link to an ordinary-language explanation of what information is being collected, and for what purpose, combined with a clear rule making those stated privacy policies enforceable in court. Jim regards this as paternalistic meddling with online markets; I regard it as establishing the conditions for the smooth functioning of a market. What do those differences come down to?

First, a question of expectations. Jim thinks it’s unreasonable for people to expect any privacy in information they “release” publicly—and when he’s talking about messages posted to public fora or Facebook pages, that’s certainly right. But it’s not always right, and as we navigate the Internet our computers can be coaxed into “releasing” information in ways that are far from transparent to the ordinary user. Consider this analogy. You go to the mall to buy some jeans; you’re out in public and clearly in plain view of many other people—most of whom, in this day and age, are probably carrying cameras built into their cell phones. You can hardly complain about being observed, and possibly caught on camera, as you make your way to the store. But what about when you make your way to the changing room at The Gap to try on those jeans? If the management has placed an unobtrusive camera behind a mirror to catch shoplifters, can the law require that the store post a sign informing you that you’re being taped in a location and context where—even though it’s someone else’s property—most people would expect privacy? Current U.S. law does, and really it’s just one special case of the law laying down default rules to stabilize expectations.  I think Jim sees the reasonable expectation in the online context as “everything is potentially monitored and archived all the time, unless you’ve explicitly been warned otherwise.” Empirically, this is not what most people expect—though they might begin to as a result of a notice requirement.

Now, as Jim well knows, there are many cases in which the law sets defaults to stabilize expectations. Under the common law doctrine of implied warranty, when you go out and buy a toaster, you do not explicitly write out a contract in which it’s stipulated that the thing will turn on when you get home and plug it in, that it will toast bread without bursting into flames, and so on. Markets would not function terribly well if you did have to do this constantly. Rather, it’s understood that there are some minimal expectations built into the transaction—toasters toast bread!—unless the seller provides explicit notice that this is an “as is” sale. This brings us to a second point of divergence: Like Jim, I think the evolutionary mechanism of the common law is generally the best way to establish these market-structuring defaults. Unlike Jim, I think sometimes it’s appropriate to resort to statute instead. This story from Techdirt should suggest why:

It’s still not entirely clear which online agreements are actually enforceable and which aren’t. We’ve seen cases go both ways, with a recent ruling even noting that terms that are a hyperlink away, rather than on the agreement page itself, may be enforceable. But the latest case, involving online retailer Overstock, went in the other direction. A court found that Overstock’s arbitration requirement was unenforceable because, as “browserwrap,” the user was not adequately notified. Eventually, it seems that someone’s going to have to make it clear what sorts of online terms are actually enforceable (if any). Until then, we’re going to see a lot more lawsuits like this one.

Evolutionary mechanisms are great, but they’re also slow, incremental, and in the case of the common law typically parasitic on the parallel evolution of broader social norms and expectations. That makes the common law an uneasy fit with novel and rapidly changing technological platforms for interaction. The tradeoff is that, while the discovery process is slow, it tends to settle on efficient rules. But sometimes having a clear rule is actually more important—maybe significantly more important—than getting the rule just right. These features seem to me to weigh in favor of allowing Congress, not to dictate what standards of privacy must look like, but to step in and lay down public default rules that provide a stable basis for informed consumers and sellers to reach their own mutually beneficial agreements.

Finally, there’s the question of whether it’s constitutionally appropriate for federal legislators, rather than courts, to make that kind of decision. I scruple to say how “the Founders intended” the Constitution to apply to e-commerce, but even on a very narrow reading of the Commerce Clause, this seems to fall safely within the purview of a power to “make regular” commerce between the several states by establishing uniform rules for transactions across a network that pays no heed to state boundaries. A patchwork of divergent standards imposed by judges and state legislators does not strike me as an especially market-friendly response to people’s online privacy concerns, but that appears to be the alternative. If there’s a way to address those concerns that’s both constitutionally appropriate and works by enabling informed choice and contract rather than nannying consumers or micromanaging business practices, then it seems to me that it makes sense for supporters of limited government to point that solution out.

A Bizarre Privacy Indictment

Page one of today’s Washington Times—above the fold—has a fascinating story indicting the White House for failing to disclose that it will collect and retain material posted by visitors to its pages on social networking sites like Facebook and YouTube. The story is fascinating because so much attention is being paid to it. (It was first reported, as an aside at least, by Major Garrett on Fox News a month ago.)

The question here is not over the niceties of the Presidential Records Act, which may or may not require collection and storage of the data. It’s over people’s expectations when they use the Internet.

Marc Rotenberg, president of the Electronic Privacy Information Center, said the White House signaled that it would insist on open dealings with Internet users and, in fact, should feel obliged to disclose that it is collecting such information.

Of course, the White House is free to disclose or announce anything it wants. It might be nice to disclose this particular data practice. But is it really a breach of privacy—and, through failure to notify, transparency—if there isn’t a distinct disclosure about this particular data collection?

Let’s talk about what people expect when they use the Internet and social networking sites. Though the Internet is a gigantic copying machine, some may not know that data is collected online. They may imagine that, in the absence of notice, the data they post will not be warehoused and redistributed, even though that’s exactly what the Internet does.

There can be special problems when it is the government collecting the information. The White House’s “flag [at] whitehouse.gov” tip line was concerning because it asked Americans to submit information about others. There is a history of presidents amassing “enemies” lists. But this is not the complaint with White House tracking of data posted on its social networking sites.

People typically post things online because they want publicity for those things—often they want publicity for the fact that they are the ones posting, too. When they write letters, they give publicity to the information in the letter and the fact of having sent it. When they hold up signs, they seek publicity for the information on the signs, and their own role in publicizing it.

How strange that taking note of the things people publicize is taken as a violation of their privacy, and that failing to notify people that they will be observed and recorded is counted as a failure of transparency.

America, for most of what you do, you do not get “notice” of the consequences. Instead, in the real world and online, you grown-ups are “on notice” that information you put online can be copied, stored, retransmitted, and reused in countless ways. Aside from uses that harm you, you have little recourse against that after you have made the decision to release information about yourself.

The White House is not in the wrong here. If there’s a lesson, it’s that people are responsible for their own privacy and need to be aware of how information moves in the online environment.

Picture Don Draper Stamping on a Human Face, Forever

Last week, a coalition of 10 privacy and consumer groups sent letters to Congress advocating legislation to regulate behavioral tracking and advertising, a phrase that actually describes a broad range of practices used by online marketers to monitor and profile Web users for the purpose of delivering targeted ads. While several friends at the Tech Liberation Front have already weighed in on the proposal in broad terms – in a nutshell: they don’t like it – I think it’s worth taking a look at some of the specific concerns raised and remedies proposed. Some of the former strike me as being more serious than the TLF folks allow, but many of the latter seem conspicuously ill-tailored to their ends.

First, while it’s certainly true that there are privacy advocates who seem incapable of grasping that not all rational people place an equally high premium on anonymity, it strikes me as unduly dismissive to suggest, as Berin Szoka does, that it’s inherently elitist or condescending to question whether most users are making informed choices about their privacy. If you’re a reasonably tech-savvy reader, you probably know something about conventional browser cookies, how they can be used by advertisers to create a trail of your travels across the Internet, and how you can limit this.  But how much do you know about Flash cookies? Did you know about the old CSS hack I can use to infer the contents of your browser history even without tracking cookies? And that’s without getting really tricksy. If you knew all those things, congratulations, you’re an enormous geek too – but normal people don’t.  And indeed, polls suggest that people generally hold a variety of false beliefs about common online commercial privacy practices.  Proof, you might say, that people just don’t care that much about privacy or they’d be attending more scrupulously to Web privacy policies – except this turns out to impose a significant economic cost in itself.
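For readers who would rather see the mechanics than take them on faith, here is a minimal sketch of how a conventional third-party cookie becomes a cross-site trail. This is a Python toy, not anything a real ad network runs, and every name and hostname in it is invented for illustration.

```python
# Minimal simulation of third-party cookie tracking. An ad server embedded
# on many sites hands each new browser a random ID cookie; because the
# browser sends that cookie back from every page carrying the ad, the
# server's log for one ID becomes a trail of that browser's travels.
import uuid

class AdServer:
    def __init__(self):
        self.visits = {}  # cookie ID -> list of referring sites

    def serve_ad(self, cookie_jar, referring_site):
        # Assign a tracking cookie on first contact, reuse it afterward.
        cookie_id = cookie_jar.get("track_id")
        if cookie_id is None:
            cookie_id = str(uuid.uuid4())
            cookie_jar["track_id"] = cookie_id
        self.visits.setdefault(cookie_id, []).append(referring_site)

# One browser's cookie jar persists across sites; that is the whole trick.
ad_server = AdServer()
cookie_jar = {}
for site in ["news.example", "shoes.example", "health.example"]:
    ad_server.serve_ad(cookie_jar, site)

trail = ad_server.visits[cookie_jar["track_id"]]
print(trail)  # ['news.example', 'shoes.example', 'health.example']
```

The point of the toy is how little machinery the basic technique requires; Flash cookies and history sniffing are harder for users to see precisely because they do not even leave an entry in the ordinary cookie jar.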

The truth is, if we were dealing with a frictionless Coasean market of fully informed users, regulation would not be necessary, but it would not be especially harmful either, because users who currently allow themselves to be tracked would all gladly opt in. In the real world, though, behavioral economics suggests that defaults matter quite a lot: Making informed privacy choices can be costly, and while an opt-out regime will probably yield tracking of some who, given full information and frictionless choice, would prefer not to be tracked, an opt-in regime will likely prevent tracking of folks who don’t actually object to it. And preventing that tracking has real social costs of its own, as Berin and Adam Thierer have taken pains to point out. In particular, it merits emphasis that behavioral advertising is regarded by many as providing a viable business model for online journalism, where contextual advertising tends not to work very well: There aren’t a lot of obvious products to tie in to an important investigative story about municipal corruption. Either way, though, the outcome is shaped by the default rule about the level of monitoring users are presumed to consent to. So which set of defaults ought we to prefer?

Here’s why I still come down mostly on Adam and Berin’s side, and against many of the regulatory remedies proposed. At the risk of stating the obvious, users start with de facto control of their data. Slightly less obvious: While users will tend to have heterogeneous privacy preferences – that’s why setting defaults either way is tricky – individual users will often have fairly homogeneous preferences across many different sites. Now, it seems to be an implicit premise of the argument for regulation that the friction involved in making lots of individual site-by-site choices about privacy will yield oversharing. But the same logic cuts in both directions: Transactional friction can block efficient departures from a high-privacy default as well. Even a default that optimally reflects the median user’s preferences or reasonable expectations is going to flub it for the outliers. If the variance in preferences is substantial, and if different defaults entail different levels of transactional friction, nailing the default is going to be less important than choosing the rule that keeps friction lowest. Given that most people do most of their Web surfing on a relatively small number of machines, this makes the browser a much more attractive locus of control. In terms of a practical effect on privacy, the coalition members would probably achieve more by persuading Mozilla to ship Firefox set to reject third-party cookies out of the box than through any legislation they’re likely to get – and indeed, that would probably have a more devastating effect on the behavioral ad market. Less bluntly, browsers could include a startup option that asks users whether they want to import an exclusion list maintained by their favorite force for good.
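To make the browser-as-locus-of-control point concrete, here is a toy sketch of the exclusion-list idea: the browser simply withholds cookies from third-party hosts that appear on an imported blocklist. The hostnames and function names below are hypothetical, and a real browser would of course hook this into its network stack rather than a standalone function.

```python
# Toy sketch of a browser-side exclusion list: before attaching cookies to
# an outgoing request, the browser checks the request's host against a
# blocklist imported at startup. Hostnames here are hypothetical.
EXCLUSION_LIST = {"tracker.example", "ads.example"}

def should_send_cookies(request_host, page_host, blocklist=EXCLUSION_LIST):
    """Return False for third-party requests to hosts on the blocklist."""
    is_third_party = request_host != page_host
    return not (is_third_party and request_host in blocklist)

# First-party cookies still flow; listed third parties are cut off.
print(should_send_cookies("news.example", "news.example"))     # True
print(should_send_cookies("ads.example", "news.example"))      # False
print(should_send_cookies("widgets.example", "news.example"))  # True
```

One decision made once, at the browser, substitutes for thousands of site-by-site choices – which is exactly why the default baked into that one decision matters so much.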

On the model proposed by the coalition, individuals have to make affirmative decisions about what data collection to permit for each Web site or ad network at least once every three months, and maybe each time they clear their cookies. If you think almost everyone would, if fully informed, opt out of such collection, this might make sense. But if you take the social benefits of behavioral targeting seriously, this scheme seems likely to block a lot of efficient sharing. Browser-based controls can still be a bit much for the novice user to grapple with, but programmers seem to be getting better and better at making it easier and more automatic for users to set privacy-protective defaults. If the problem with the unregulated market is supposed to be excessive transaction costs, it seems strange to lock in a model that keeps those costs high even as browser developers are finding ways to streamline the process. It’s also worth considering whether such rules wouldn’t have the perverse consequence of encouraging consolidation among behavioral trackers. The higher the bar is set for consent to monitoring, the more that consent effectively becomes a network good, which may encourage concentration of data in a small number of large trackers – not, presumably, the result privacy advocates are looking for. Finally – and for me this may be the dispositive point – it’s worth remembering that while American law is constrained by national borders, the Internet is not. And it seems to me that there’s a very real danger of giving the least savvy users a false sense of security – the government is on the job guarding my privacy! no need to bother learning about cookies! – when they may routinely and unwittingly be interacting with sites beyond the reach of domestic regulations.

There are similar practical difficulties with the proposal that users be granted a right of access to behavioral tracking data about them.  Here’s the dilemma: Any requirement that trackers make such data available to users is a potential security breach, which increases the chances of sensitive data falling into the wrong hands. I may trust a site or ad network to store this information for the purpose of serving me ads and providing me with free services, but I certainly don’t want anyone who sends them an e-mail with my IP address to have access to it. The obvious solution is for them to have procedures for verifying the identity of each tracked user – but this would appear to require that they store still more information about me in order to render tracking data personally identifiable and verifiable. A few ways of managing the difficulty spring to mind, but most defer rather than resolve the problem, and add further points of potential breach.

That doesn’t mean there’s no place for government or policy change here, but it’s not always the one the coalition endorses. Let’s look  more closely at some of their specific concerns and see which, if any, are well-suited to policy remedies. Only one really has anything to do with behavioral advertising, and it’s easily the weakest of the bunch. The groups worry that targeted ads – for payday loans, sub-prime mortgages, or snake-oil remedies – could be used to “take advantage of vulnerable consumers.” It’s not clear that this is really a special problem with behavioral ads, however: Similar targeting could surely be accomplished by means of contextual ads, which are delivered via relevant sites, pages, or search terms rather than depending on the personal characteristics or browsing history of the viewer – yet the groups explicitly aver that no new regulation is appropriate for contextual advertising. In any event, since whatever problem exists here is a problem with ads, the appropriate remedy is to focus on deceptive or fraudulent ads, not the particular means of delivery. We already, quite properly, have rules covering dishonest advertising practices.

The same sort of reply works for some of the other concerns, which are all linked in some more specific way to the collection, dissemination, and non-advertising use of information about people and their Web browsing habits. The groups worry, for instance, about “redlining” – the restriction or denial of access to goods, services, loans, or jobs on the basis of traits linked to race, gender, sexual orientation, or some other suspect classification. But as Steve Jobs might say, we’ve got an app for that: It’s already illegal to turn down a loan application on the grounds that the applicant is African American. There’s no special exemption for the case where the applicant’s race was inferred from a Doubleclick profile. But this actually appears to be something of a redlining herring, so to speak: When you get down into the weeds, the actual proposal is to bar any use of data collected for “any credit, employment, insurance, or governmental purpose or for redlining.” This seems excessively broad; it should suffice to say that a targeter “cannot use or disclose information about an individual in a manner that is inconsistent with its published notice.”

Particular methods of tracking may also be covered by current law, and I find it unfortunate that the coalition letter lumps together so many different practices under the catch-all heading of “behavioral tracking.” Most behavioral tracking is either done directly by sites users interact with – as when Amazon uses records of my past purchases to recommend new products I might like – or by third party companies whose ads place browser cookies on user computers. Recently, though, some Internet Service Providers have drawn fire for proposals to use Deep Packet Inspection to provide information about their users’ behavior to advertising partners – proposals thus far scuppered by a combination of user backlash and congressional grumbling. There is at least a colorable argument to be made that this practice would already run afoul of the Electronic Communications Privacy Act, which places strict limits on the circumstances under which telecom providers may intercept or share information about the contents of user communications without explicit permission. ECPA is already seriously overdue for an update, and some clarification on this point would be welcome. If users do wish to consent to such monitoring, that should be their right, but it should not be by means of a blanket authorization in eight-point type on page 27 of a terms-of-service agreement.

Similarly welcome would be some clarification on the status of such behavioral profiles when the government comes calling. It’s an unfortunate legacy of some technologically atavistic Supreme Court rulings that we enjoy very little Fourth Amendment protection against government seizure of private records held by third parties – the dubious rationale being that we lose our “reasonable expectation of privacy” in information we’ve already disclosed to others outside a circle of intimates. While ECPA seeks to restore some protection of that data by statute, we’ve made it increasingly easy in recent years for the government to seek “business records” by administrative subpoena rather than court order. It should not be possible to circumvent ECPA’s protections by acquiring, for instance, records of keyword-sensitive ads served on a user’s Web-based e-mail.

All that said, some of the proposals offered up seem, while perhaps not urgent, less problematic. Requiring some prominent link to a plain-English description of how information is collected and used constitutes a minimal burden on trackers – responsible sites already maintain prominent links to privacy policies anyway – and serves the goal of empowering users to make more informed decisions. I’m also warily sympathetic to the idea of giving privacy policies more enforcement teeth – the wariness stemming from a fear of incentivizing frivolous litigation. Still, the status quo is that sites and ad networks profitably elicit information from users on the basis of stated privacy practices, but often aren’t directly liable to consumers if they flout those promises, unless the consumer can show that the breach of trust resulted in some kind of monetary loss.

Finally, a quick note about one element of the coalition recommendations that neither they nor their opponents seem to have discussed much – the insistence that there be no federal preemption of state privacy law. I assume what’s going on here is that the privacy advocates expect some states to be more protective of privacy than Congress or the FTC would be, and want to encourage that, while libertarians are more concerned with keeping the federal government from getting involved at all. But really, if there’s an issue that was made for federal preemption, this is it. A country where vendors, advertisers, and consumers on a borderless Internet have to navigate 50 flavors of privacy rules to sell a banner ad or an iTunes track does not sound particularly conducive to privacy, commerce, or informed consumer choice.

600 Billion Data Points Per Day? It’s Time to Restore the Fourth Amendment

Jeff Jonas has published an important post: “Your Movements Speak for Themselves: Space-Time Travel Data is Analytic Super-Food!”

More than you probably realize, your mobile device is a digital sensor, creating records of your whereabouts and movements:

Mobile devices in America are generating something like 600 billion geo-spatially tagged transactions per day. Every call, text message, email and data transfer handled by your mobile device creates a transaction with your space-time coordinate (to roughly 60 meters accuracy if there are three cell towers in range), whether you have GPS or not. Got a Blackberry? Every few minutes, it sends a heartbeat, creating a transaction whether you are using the phone or not. If the device is GPS-enabled and you’re using a location-based service your location is accurate to somewhere between 10 and 30 meters. Using Wi-Fi? It is accurate below 10 meters.
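The accuracy figures in that passage fall out of simple geometry: with distance estimates to three towers at known positions, the network can solve for the handset’s coordinates. The sketch below shows the arithmetic on a flat plane with made-up tower positions and no measurement noise; it reflects no carrier’s actual system, where range error and tower geometry are what degrade the answer to tens of meters.

```python
# Rough sketch of how three towers at known positions locate a handset:
# each tower estimates its distance to the phone (from signal timing or
# strength), and the intersection of the three range circles gives the
# position. Flat-plane geometry, made-up coordinates in meters, no noise.
import math

def trilaterate(t1, t2, t3, r1, r2, r3):
    """Solve for (x, y) given three tower positions and measured ranges."""
    (x1, y1), (x2, y2), (x3, y3) = t1, t2, t3
    # Subtracting pairs of circle equations leaves two linear equations.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

towers = [(0, 0), (1000, 0), (0, 1000)]
phone = (250.0, 400.0)
ranges = [math.dist(phone, t) for t in towers]
print(trilaterate(*towers, *ranges))  # recovers approximately (250.0, 400.0)
```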

The process of deploying this data to markedly improve our lives is underway. A friend of Jonas’s says that space-time travel data, used to reveal traffic tie-ups, shaves two to four hours off his commute each week. When it is put to full use, “the world we live in will fundamentally change. Organizations and citizens alike will operate with substantially more efficiency. There will be less carbon emissions, increased longevity, and fewer deaths.”

This progress is not without cost:

A government not so keen on free speech could use such data to see a crowd converging towards a protest site and respond before the swarm takes form – detected and preempted, this protest never happens. Or worse, it could be used to understand and then undermine any political opponent.

Very few want government to be able to use this data as Jonas describes, and not everybody wants to participate in the information economy quite so robustly. But the public can’t protect itself against what it can’t see. So Jonas invites holders of space-time data to reveal it:

[O]ne way to enlighten the consumer would involve holders of space-time-travel data [permitting] an owner of a mobile device the ability to also see what they can see:

(a) The top 10 places you spend the most time (e.g., 1. a home address, 2. a work address, 3. a secondary work facility address, 4. your kids school address, 5. your gym address, and so on);

(b) The top three most predictable places you will be at a specific time when on the move (e.g., Vegas on the 215 freeway passing the Rainbow exit on Thursdays 6:07 - 6:21pm – 57% of the time);

(c) The first name and first letter of the last name of the top 20 people that you regularly meet-up with (turns out to be wife, kids, best friends, and co-workers – and hopefully in that order!)

(d) The best three predictions of where you will be for more than one hour (in one place) over the next month, not counting home or work.
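None of this requires sophisticated analytics. As a toy illustration of item (a), a few lines over a log of timestamped pings suffice to rank the places where a device lingers. Every coordinate below is invented, and real systems would cluster far more carefully than this naive grid-rounding.

```python
# Toy version of item (a): ranking the places where a device lingers,
# given nothing but a log of coordinate pings. Rounding to three decimal
# places bins pings within roughly a hundred meters of one another.
from collections import Counter

pings = [
    (38.8977, -77.0366),  # say, "home" -- three pings
    (38.8977, -77.0366),
    (38.8978, -77.0367),
    (38.9072, -77.0369),  # "work" -- two pings
    (38.9072, -77.0369),
    (38.8501, -77.0502),  # "gym" -- one ping
]

def top_places(pings, n=3):
    bins = Counter((round(lat, 3), round(lon, 3)) for lat, lon in pings)
    return bins.most_common(n)

print(top_places(pings))  # the top-ranked bin is the "home" cell, count 3
```

The same counting, keyed on day-of-week and hour instead of raw coordinates, gets you items (b) and (d); joining two devices’ logs gets you item (c).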

Google’s Android and Latitude products are candidates to take the lead, he says, and I agree. Google collectively understands both openness and privacy, and it’s still nimble enough to execute something like this. Other mobile providers would be forced to follow this innovation.

What should we do to reap the benefits while minimizing the costs? The starting point is you: It is your responsibility to deal with your mobile provider as an adult. Have you read your contract? Have you asked them whether they collect this data, how long they keep it, whether they share it, and under what terms?

Think about how you can obscure yourself. Put your phone in airplane mode when you are going someplace unusual - or someplace usual. (You might find that taking a break from being connected opens new vistas in front of your eyes.) Trade phones with others from time to time. There are probably hacks to mobile phone systems that could allow people to protect themselves to some degree.

Privacy self-help is important, but obviously it can be costly. And you shouldn’t have to obscure yourself from your mobile communications provider, giving up the benefits of connected living, to maintain your privacy from government.

The emergence of space-time travel data begs for restoration of Fourth Amendment protections in communications data. In my American University Law Review article, “Reforming Fourth Amendment Privacy Doctrine,” I described the sorry state of the Fourth Amendment as to modern communications.

The “reasonable expectation of privacy” doctrine that arose out of the Supreme Court’s 1967 Katz decision is wrong—it isn’t even founded in the majority holding of the case. The “third-party doctrine,” following Katz in a pair of early 1970s Bank Secrecy Act cases, denies individuals Fourth Amendment claims on information held by service providers. Smith v. Maryland brought it home to communications in 1979, holding that people do not have a “reasonable expectation of privacy” in the telephone numbers they dial. (Never mind that they actually have privacy—the doctrine trumps it.)

Concluding, apropos of Jonas’ post, I wrote:

These holdings were never right, but they grow more wrong with each step forward in modern, connected living. Incredibly deep reservoirs of information are constantly collected by third-party service providers today.

Cellular telephone networks pinpoint customers’ locations throughout the day through the movement of their phones. Internet service providers maintain copies of huge swaths of the information that crosses their networks, tied to customer identifiers. Search engines maintain logs of searches that can be correlated to specific computers and usually the individuals that use them. Payment systems record each instance of commerce, and the time and place it occurred.

The totality of these records are very, very revealing of people’s lives. They are a window onto each individual’s spiritual nature, feelings, and intellect. They reflect each American’s beliefs, thoughts, emotions, and sensations. They ought to be protected, as they are the modern iteration of our “papers and effects.”

This “Cyberwar” Is a Cybersnooze

The AP and other sources have been reporting on a “cyberattack” affecting South Korea and U.S. government Web sites, including the White House, Secret Service and Treasury Department.

Allegedly mounted by North Korea, this attack puts various “cyber” threats in perspective. Most Americans will probably not know about it, and the ones who do will learn of it by reading about it. Only a tiny percentage of people will notice the absence of the Web sites attacked. (An update to the story linked above notes that several agencies and entities “blunted” the attacks, as well-run Web sites will do.)

This is the face of “cyberwar,” which has little strategic value and little capacity to do real damage. This episode also underscores the fact that “cyberterrorism” cannot exist – because this kind of attack isn’t terrifying.

As I said in my recent testimony before the House Science Committee, it is important to secure Web sites, data, and networks against all threats, but this can be done and is being done methodically and successfully – if imperfectly – by the distributed owners and controllers of all our nation’s “cyber” assets. Hyping threats like “cyberwar” and “cyberterror” is not helpful.

Some Thinking on “Cyber”

Last week, I had the opportunity to testify before the House Science Committee’s Subcommittee on Technology and Innovation on the topic of “cybersecurity.” I have been reluctant to opine on it because of its complexity, but I did issue a short piece a few months ago arguing against government-run cybersecurity. That piece was cited prominently in the White House’s “Cyberspace Policy Review” and – blamo! – I’m a cybersecurity expert.

Not really – but I have been forming some opinions at a high level of generality that are worth making available. They can be found in my testimony, but I’ll summarize them briefly here.

First, “cybersecurity” is a term so broad as to be meaningless. Yes, we are constructing a new “space” analogous to physical space using computers, networks, sensors, and data, but we can no more secure “cyberspace” in its entirety than we can secure planet Earth and the galaxy. Instead, we secure the discrete things that are important to us – houses, cars, buildings, power lines, roads, private information, money, and so on. And we secure these things in thousands of different ways. We should secure “cyberspace” the same way – thousands of different ways.

By “we,” of course, I don’t mean the collective. I mean that each owner or controller of a prized thing should look out for its security. It’s the responsibility of designers, builders, and owners of houses, for example, to ensure that they properly secure the goods kept inside. It’s the responsibility of individuals to secure the information they wish to keep private and the money they wish to keep. It is the responsibility of network operators to secure their networks, data holders to secure their data, and so on.

Second, “cyber” threats are being over-hyped by a variety of players in the public policy area. Invoking “cyberterrorism” or “cyberwar” is near-boilerplate in white papers addressing government cybersecurity policy, but there is very limited strategic logic to “cyberwarfare” (aside from attacking networks during actual war-time), and “cyberterrorism” is a near-impossibility. You’re not going to panic people – and that’s rather integral to terrorism – by knocking out the ATM network or some part of the power grid for a period of time.

(We weren’t short of careless discussions about defending against “cyber attack,” but L. Gordon Crovitz provided yet another example in yesterday’s Wall Street Journal. As Ben Friedman pointed out, Evgeny Morozov has the better of it in the most recent Boston Review.)

This is not to deny the importance of securing digital infrastructure; it’s to say that it’s serious, not scary. Precipitous government cybersecurity policies – especially to address threats that don’t even have a strategic logic – would waste our wealth, confound innovation, and threaten civil liberties and privacy.

In the cacophony over cybersecurity, an important policy seems to be getting lost: keeping true critical infrastructure offline. I noted Senator Jay Rockefeller’s (D-WV) awesomely silly comments about cybersecurity a few months ago. They were animated by the premise that all the good things in our society should be connected to the Internet or managed via the Internet. This is not true. Removing true critical infrastructure from the Internet takes care of the lion’s share of the cybersecurity problem.

Since 9/11, the country has suffered significant “critical-infrastructure inflation” as companies gravitate to the special treatments and emoluments government gives owners of “critical” stuff. If “criticality” is to be a dividing line for how assets are treated, it should be tightly construed: If the loss of an asset would immediately and proximately threaten life or health, that makes it critical. If danger would materialize over time, that’s not critical infrastructure – the owners need to get good at promptly repairing their stuff. And proximity is an important limitation, too: The loss of electric power could kill people in hospitals, for example, but ensuring backup power at hospitals can intervene and relieve us of treating the entire power grid as “critical infrastructure,” with all the expense and governmental bloat that would entail.

So how do we improve the state of cybersecurity? It’s widely believed that we are behind on it. Rather than figuring out how to do cybersecurity – which is impossible – I urged the committee to consider what policies or legal mechanisms might get these problems figured out.

I talked about a hierarchy of sorts. First, contract and contract liability. The government is a substantial purchaser of technology products and services – and a highly knowledgeable one, thanks to entities like the National Institute of Standards and Technology. Yes, I would like it to be a smaller purchaser of just about everything, but while it is a large market actor, it can drive standards and practices (like secure settings by default) into the marketplace that redound to the benefit of the cybersecurity ecology. The government could also write contracts that rely on contract liability: when products or services fail to serve the purposes for which they’re intended, including security, sellers would lose money. That would focus them as well.
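The “secure settings by default” practice mentioned above is easy to illustrate. Here is a minimal sketch – the `ServerConfig` class and its fields are invented for illustration, not drawn from any actual procurement standard – showing the idea that a product should ship with protective options switched on, so a buyer must deliberately opt out of security rather than opt in:

```python
from dataclasses import dataclass

@dataclass
class ServerConfig:
    """Hypothetical product configuration: protective options are
    enabled unless the operator explicitly turns them off."""
    tls_required: bool = True            # traffic encrypted by default
    remote_admin_enabled: bool = False   # no remote admin out of the box
    default_password_allowed: bool = False  # operator must set a password

    def is_hardened(self) -> bool:
        # A configuration counts as hardened only if every
        # protective default is still in place.
        return (self.tls_required
                and not self.remote_admin_enabled
                and not self.default_password_allowed)

# A buyer who deploys the product untouched gets a hardened setup;
# weakening it requires an explicit, visible choice.
out_of_box = ServerConfig()
print(out_of_box.is_hardened())  # True
```

A large purchaser that refuses to buy anything failing an `is_hardened()`-style check pushes that default out to every other customer of the same product line, which is the market mechanism the paragraph describes.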

A prominent report by a working group at the Center for Strategic and International Studies – co-chaired by one of my fellow panelists before the Science Committee last week, Scott Charney of Microsoft – argued strenuously for cybersecurity regulation.

But that raises the question of what regulation would say. Regulation is poorly suited to the process of discovering how to solve new problems amid changing technology and business practices.

There is some market failure in the cybersecurity area. Insecure technology can harm networks and users of networks, and these costs don’t accrue to the people selling or buying technology products. To get them to internalize these costs, I suggested tort liability rather than regulation. While courts discover the legal doctrines that unpack the myriad complex problems with litigating about technology products and services, they will force technology sellers and buyers to figure out how to prevent cyber-harms.

Government has a role in preventing people from harming each other, of course, and the common law could develop to meet “cyber” harms if it is left to its own devices. Tort litigation has been abused, to be sure, and the established corporate sector prefers regulation: it is a stable environment for them, it helps them exclude competition, and they can use it to avoid liability for causing harm, making it easier to lag on security. Litigation isn’t preferable, and we don’t want lots of it – we just want the incentive structure tort liability creates.

As the distended policy issue it is, “cybersecurity” is ripe for shenanigans. Aggressive government agencies are looking to get regulatory authority over the Internet, computers, and software. Some of them wouldn’t mind getting to watch our Internet traffic, of course. Meanwhile, the corporate sector would like to use government to avoid the hot press of market competition, while shielding itself from liability for harms it may cause.

The government must secure its own assets and resources – that’s a given. Beyond that, not much good can come from government cybersecurity policy, except the occasional good, long blog post.

Morozov vs. Cyber-Alarmism

I’m no information security expert, but you don’t have to be to realize that an outbreak of cyber-alarmism afflicts American pundits and reporters.

As Jim Harper and Tim Lee have repeatedly argued (with a little help from me), while the internet created new opportunities for crime, spying, vandalism and military attack, the evidence that the web opens a huge American national security vulnerability comes not from events but from improbable what-ifs. That idea is, in other words, still a theory. Few pundits bother to point out that hackers don’t kill, that cyberspies don’t seem to have stolen many (or any?) important American secrets, and that our most critical infrastructure is not run on the public internet and thus is relatively invulnerable to cyberwhatever. They never note that to the extent that future wars have an online component, this redounds to the U.S. advantage, given our technological prowess.  Even the Wall Street Journal and New York Times recently published breathless stories exaggerating our vulnerability to online attacks and espionage.

So it’s good to see that the July/August Boston Review has a terrific article by Evgeny Morozov taking on the alarmists. He provides not only a sober net assessment of the various worries categorized by the vague modifier “cyber” but even offers a theory about why hype wins.

Why is there so much concern about “cyber-terrorism”? Answering a question with a question: who frames the debate? Much of the data are gathered by ultra-secretive government agencies—which need to justify their own existence—and cyber-security companies—which derive commercial benefits from popular anxiety. Journalists do not help. Gloomy scenarios and speculations about cyber-Armageddon draw attention, even if they are relatively short on facts.

I agree.