Government-Run Cyber Security? No, Thanks


Most people assume, and it's probably true, that our nation's networks and databases aren't secure enough. The risks range from corporate espionage to data breach and identity fraud to "cyber warfare." The White House is taking on this problem: it's conducting a 60-day cyber security review. The review should explicitly deny federal responsibility for securing private infrastructure.

The president regards his budget as a "blueprint for America's future." His opponent in the recent election wanted to be commander-in-chief of the United States. So it wouldn't be surprising if the review set the stage for a federal takeover of communications networks in the name of cyber security. But owning cyber security may be an unappealing prospect even for federal authorities with an expansive view of their roles. The surveillance needed for government-run cyber security would create prohibitive threats to civil liberties and privacy. And government folks seem aware that they don't know how to do cyber security any better than anyone else.

How do you improve security without exploding government power? How do you do it without giving the government de facto surveillance over the Internet? And, most importantly, how do you actually figure out how to do it?

The economic statement of the problem is this: Network operators, data owners, and users sometimes create externalities, risks to others that don't affect their own bottom lines. Getting them to internalize those risks can be done one of two ways: regulation, where you mandate it, or liability, where you make them pay for harms they cause others. Regulation and liability each have strengths and weaknesses, but a liability regime is ultimately superior.

One of the main problems with regulation, especially in a dynamic field like technology, is that it requires a small number of people to figure out how things are going to work for an unknown and indefinite future. Those kinds of smarts simply don't exist. So regulators often punt: When the Financial Services Modernization Act tasked the Federal Trade Commission with figuring out how to secure financial information, it didn't. Instead, the "Safeguards Rule" simply requires financial institutions to have a security plan. If something goes wrong, the FTC will go back in and either find the plan lacking or find that it was violated, much like the body-bagging the SEC does.

Another weakness of regulation is that it tends to be too broad. In an area where risks exist, regulators will ban entire swaths of behavior rather than selecting among the good and bad. In 1998, for example, Congress passed the Children's Online Privacy Protection Act, and the FTC set up an impossible-to-navigate regime for parental approval of the websites their children could use. Today, no child has been harmed by a site that complies with COPPA because there really aren't any. The market for serving children entertaining and educational content is a shadow of what it could be.

Regulators and regulatory agencies are also subject to "capture." In his recent caution against network neutrality regulation, Tim Lee shows how industries have historically co-opted the agencies intended to control them and turned those agencies toward insulating incumbents from competition.

And regulation often displaces individual justice. The Fair Credit Reporting Act preempted state law causes of action against credit bureaus, which thus cannot be held liable for defamation when their reports wrongfully cause someone to be denied credit. "Privacy" regulations under the Health Insurance Portability and Accountability Act gave enforcement powers to an obscure office in the Department of Health and Human Services. While a compliance kabuki dance goes on overhead, people who have suffered privacy violations are diverted to seeking redress by the grace of a federal agency.

Tort liability is based on the idea that someone who does harm, or allows harm to occur, should be responsible to the injured party. When a person drives a car, builds a building, runs a hotel, or installs a light switch, he or she owes it to anyone who might be injured to keep them safe. A rule of this type could apply to owners and operators of networks and databases.

A liability regime is better at discovering and solving problems than regulation. Owners faced with paying for harms they cause will use the latest knowledge and their intimacy with their businesses to protect the public. Like regulation, a liability regime won't catch a new threat the first time it appears, but as soon as a threat is known, all actors must improve their practices to meet it. Unlike regulations, which can take decades to update, liability updates automatically.

Liability also leaves more room for innovation. Anything that causes harm is forbidden, but anything that does not cause harm is allowed. Entrepreneurs who are free to experiment will discover consumer-beneficial products and services that improve health, welfare, life, and longevity.

Liability rules aren't always crystal clear, of course, but when cases of harm are alleged in tort law, the parties meet in a courtroom before a judge who neutrally adjudicates what harm was done and who is responsible. When an agency enforces its own regulation, it's not neutral: Agencies work to "send messages," to protect their powers and budgets, and to foster future careers for their staffs.

Especially in the high-tech world, it's hard to prove causation. The forensic skill to determine who was responsible for an information-age harm is still too rare. But regulation is equally subject to evasion. And liability acts not through lawsuits won, but by creating a protective incentive structure.

One risk unique to liability is that advocates will push to do more with it than compensate actual harms. Some would treat the creation of risk as a "harm," arguing, for example, that companies should pay someone or do something about potential identity fraud just because a data breach created the risk of it. They often should, but blanket regulations like that actually promote too much information security, lowering consumer welfare as people are protected against things that don't actually harm them.

As complex and changing as cyber security is, the federal government has no capability to institute a protective program for the entire country. While it secures its own networks, the federal government should encourage the adoption of state common law duties that require network operators, data owners, and computer users to secure their own infrastructure and assets. (They in turn will divide up responsibility efficiently by contract.) This is the best route to discovering and patching security flaws in all the implements of our information economy and society.

The White House's 60-day cyber security review should explicitly deny federal responsibility for securing private communications infrastructure. This is the best way forward, and an essential route if we are to keep the government from monitoring and controlling Americans' private communications.