Assessing Cybersecurity Activities at NIST and DHS


Cybersecurity is a bigger, more multi-faceted problem than the government can solve, and it certainly cannot solve the whole range of cybersecurity problems quickly.

With a few exceptions, cybersecurity is less urgent than many commentators allege. No one argues, of course, that cybersecurity is unimportant.

The policy of keeping true critical infrastructure off the public Internet has been lost in the “cybersecurity” cacophony. It is a simple security practice that will take care of many threats against truly essential assets.

The goal of policymakers should be not to solve cybersecurity, but to determine the systems that will best discover and propagate good security technology and practices.

As a market participant, the federal government is well positioned to affect the cybersecurity ecology positively, with NIST standards integral to that process. The federal government may also advance cybersecurity by shifting risk to sellers of technology by contract.

For the market failure that is on exhibit when insecure technology harms networks or other users, liability is preferable to regulation for discovering who should bear responsibility.

When the federal government abandons its role of market participant and becomes a market dominator, regulator, “partner,” or investor with private sector entities, a number of risks arise, including threats to privacy and civil liberties, weakened competition and innovation, and waste of taxpayer dollars.


Chairman Wu, Ranking Member Smith, and members of the subcommittee, thank you for inviting me to address you in this hearing on the cybersecurity activities of the National Institute of Standards and Technology and the Department of Homeland Security. The hearings you have conducted so far are a valuable contribution to the national discussion, as I hope my participation in this hearing will be valuable as well.

My name is Jim Harper, and I am director of information policy studies at the Cato Institute. In that role, I study and write about the difficult problems of adapting law and policy to the challenges of the information age. I also maintain an online federal spending resource called WashingtonWatch.com. Cato is a market liberal, or libertarian, think tank, and I pay special attention to preserving and restoring our nation’s founding, constitutional traditions of individual liberty, limited government, free markets, peace, and the rule of law.

I serve as an advisor to the Department of Homeland Security on its Data Privacy and Integrity Advisory Committee, and my primary focus in general is on privacy and civil liberties. I am not a technologist or a cybersecurity expert, but a lawyer familiar with technology and security issues. As a former committee counsel in both the House and Senate, I also blend an understanding of lawmaking and regulatory processes with technology and security. I hope this background and my perspective enhance your consideration of the many challenging issues falling under the name “cybersecurity.”

In my testimony, I will spend a good deal of time on fundamental problems in cybersecurity and the national cybersecurity discussion so far. I will then apply this thinking to some of the policies NIST, DHS, and other agencies are working on.

The Use and Misuse of “Cyberspace” and “Cybersecurity”

One of the profound challenges you face in setting “cybersecurity” policy is the framing of the issue. “Cyberspace” is insecure, we all believe, and by making it integral to our lives, we are importing insecurity, as individuals and as a nation.

In some senses this is true, and “securing cyberspace” is a helpful way of thinking about the problem. But it also promotes overgeneralization, suggesting that a bounded set of behaviors called “cybersecurity” can resolve things.

A new world or “space” is indeed coming into existence through the development of communications networks, protocols, software, sensors, commerce, and content. In many ways, this world is distinct and different from the physical space that we occupy. In “cyberspace,” we now do many of the things we used to do only in physical space: we shop, debate, read the news, work, gossip, manage our financial affairs, and so on. Businesses and government agencies, of course, conduct their operations in the new “cyberspace” as well.

It is even helpful to extend this analogy and imagine “cyberspace” as organized like the physical world. Think of personal computers as people’s homes. Their attachments to the network analogize to driveways, which connect to roads and then highways. (Perhaps phones and handheld devices are data-bearing cars and motorcycles.) Emails, financial files, and pictures are the personal possessions that could be stolen out of houses and private vehicles, leading to privacy loss.

Corporate and government networks are cyberspace’s office buildings. Business data, personnel files, and intellectual property are the goods that sometimes get left on the loading dock or on the desk in an executive’s office overnight. They can be stolen from the “office buildings” in data breaches.

How do you secure these places and things from theft, both casual and organized? How do you prevent fires, maintain water and electric service, ensure delivery of food, and prevent outbreaks of disease? How do you defend against military invasion or weapons of mass destruction in this all-new “space”?

These problems are harder to solve in some senses, and not as hard to solve in others. Consider, for example, that the “houses” and “office buildings” of cyberspace can be reconstituted in minutes or hours if software and data have been properly backed up. Lost possessions can be “regained” just as quickly, though copies of them may permanently be found elsewhere. “Cyberspace” has many resiliencies that real space lacks.

On the other hand, “diseases” (new exploits) multiply much more quickly and broadly than in the real world. “Cyber-public-health” measures like mandated vaccinations (the required use of security protocols) are important, though they may be unreliable. On a global public medium like the Internet, they would have to be mandated by an authority or authorities with global jurisdiction and authority over every computing device, which is unlikely and probably undesirable.

The analogy between cyberspace and real space shows that “cybersecurity” is not a small universe of problems, but thousands of different problems that will be handled in thousands of different ways by millions of people over the coming decades. Securing cyberspace means tackling thousands of technology problems, business problems, economics problems, and law enforcement problems.

In my opinion, if it takes decades to come up with solutions, that is fine. The security of things in “real” space has developed in an iterative process over hundreds and, in some cases, thousands of years. Even “simple” security devices like doors, locks, and windows involve fascinating and intricate security, utility, and convenience trade-offs that are hard even for experts to summarize.

Many would argue, of course, that we do not have decades to figure out cybersecurity. But I believe that, with few exceptions, most of these assertions are mistaken. Your ability to craft sound cybersecurity policies for the government is threatened by the breathlessness of public discussion that is common in this field.

Calm Down, Slow Down

Overuse of urgent rhetoric is a challenge to setting balanced cybersecurity policy. Threat exaggeration has become boilerplate in the cybersecurity area, it seems, and while cybersecurity is important, overstatement of the problems will promote imbalanced responses that are likely to sacrifice our wealth, progress, and privacy.

For example, comparisons between “cyberattack” and conventional military attack are overwrought. As one example (which I select only because it is timely), the Center for a New American Security is hosting a cybersecurity event this week, and the language of the invitation says: “[A] cyberattack on the United States’ telecommunications, electrical grid, or banking system could pose as serious a threat to U.S. security as an attack carried out by conventional forces.”[1]

As a statement of theoretical extremes, it is true: The inconvenience and modest harms posed by a successful crack of our communications or data infrastructure could be more serious than an invasion by an ill-equipped, small army. But as a serious assertion about real threats, an attack by conventional forces (however unlikely) would be entirely more serious than any realistic cyberattack. We would stand to lose national territory, which cannot be reconstituted by rebooting, repairing software, and reloading backed-up files.

The Center for Strategic and International Studies’ influential report, Securing Cyberspace for the 44th Presidency, said similarly that cybersecurity “is a strategic issue on par with weapons of mass destruction and global jihad.”[2] Many weapons of mass destruction are less destructive than people assume, and the threat of global jihad appears to be waning, but threats to our communications networks, computing facilities, and data stores pale in comparison to true WMD like nuclear weapons. Controlling the risk of nuclear attack remains well above cybersecurity in any sound ranking of strategic national priorities.

It is a common form of threat exaggeration to cite the raw number of attacks on sensitive networks, like the Department of Defense’s. It suffers hundreds of millions of attacks per year. But happily, most of these “attacks” are repetitious uses of the same attack. They are mounted by “script kiddies”: unsophisticated know-nothings who get copies of others’ attacks and run them on the chance that they will find an open door.

The defense against this is to continually foreclose attacks and genres of attack as they develop, the way the human body develops antibodies to germs and viruses. Securing against these attacks is important work, and it is not always easy, but it is an ongoing, stable practice in network management and a field of ongoing study in computer science. The attacks may continue to come in the millions, but this is less concerning when immunities and failsafes are in place and continuously being updated.

In his generally balanced speech on cybersecurity, President Obama cited a threat he termed “weapons of mass disruption.”[3] Again, analogy to the devastation that might be done by nuclear weapons is misleading. Inconvenience and disruption are bad things; they can be costly and, in the extreme case, deadly. Cybersecurity is important, but securing against the use of real weapons on the U.S. and its people is a more important government role.

In a similar vein, a commentator on the National Journal’s national security experts blog recently said, “Cyberterrorism is here to stay and will grow bigger.”[4] Cyberterrorism is not here, and thus it is not in a position to stay.

Provocative statements of this type lack a key piece of foundation: They do not rest on a sound strategic model whereby opponents of the United States and U.S. power would use the capabilities they actually have to gain strategic advantage.

Take cyberterrorism. With communications networks, computing infrastructure, and data stores under regular attack from a variety of quarters (and regularly strengthening to meet them), it is highly unlikely that terrorists can pull off a cybersecurity event disruptive enough to instill widespread fear of further disruption. Fear is a necessary element for terrorism to work its will, of course. The impotence of computer problems to instill fear renders “cyberterrorism” an unlikely threat. This is not to deny the importance of preventing the failure of infrastructure, of course.

Cyberattacks by foreign powers have a similarly implausible strategic logic. The advantage gained by a disabling attack on private and civilian government infrastructure would be largely economic, with perhaps some psychological effects. Such attacks would not plausibly “soften up” the United States for invasion. But committing such attacks would risk harsh responses if the perpetrators were found, and conventional intelligence methods are undoubtedly keenly tuned to doing so. Ultimately, a foreign government’s cyberattack on the United States would have to be a death blow, as it would risk eliciting ruinous responses. This makes it very unlikely that a cyberattack on civilian infrastructure would be a tool of true war.

Attacking military communications infrastructure and data does have a rational strategic logic, of course. And the testimony your committee received from Dr. Leheny of the Defense Advanced Research Projects Agency at your June 16 hearing illustrates some of what the Defense Department is doing to anticipate and prevent attacks on this true critical infrastructure.

The more plausible strategic use of attacks on communications and data infrastructure is not “cyberterrorism” or “cyberattack,” but what might be called “cybersapping”: infiltrating networks to gain business intelligence, intellectual property, money, personal and financial data, and perhaps strategic government information. These infiltrations can slowly degrade the advantages that the U.S. economy and government have over others. They are important to address diligently and promptly. But they are not a reason to panic and overreact.

A final example of cybersecurity boilerplate that deserves mention is the alleged weakness of military information systems. The story that confidential files about the Joint Strike Fighter were compromised earlier this year has become a standard dire warning about our national vulnerability. But many are conveniently forgetting the other half of the story, even though it is available right there in some of the earliest reporting. According to a contemporaneous story on CNN.com:

[O]fficials insisted that none of the information accessed was highly sensitive data. The plane uses stealth and other highly sensitive electronic equipment, but it does not appear that information on those systems was compromised, because it is stored on computers that are not connected to the Internet, according to the defense officials.[5]

The compromise of some data about the Joint Strike Fighter is regrettable, but this is also a story of cybersecurity success. The key security policy of keeping the most sensitive data away from the public Internet successfully protected that data. The Department of Defense deserves credit for instituting and maintaining that policy.

Cybersecurity is important, but exaggerating threats and failures as a matter of routine will lead to poor policymaking. Do not let the urgency of many statements about cybersecurity “buffalo” you into precipitous, careless, and intrusive policies.

Exhortation about some cybersecurity policies seems to be pushing others off the table, like the policy so successful at protecting the most important information about the Joint Strike Fighter. The simple, elegant policy of keeping truly critical infrastructure off the public Internet is not receiving enough discussion.

Critical Infrastructure: Off the Internet

At the confirmation hearing of Commerce Secretary Gary Locke earlier this year, Senator Jay Rockefeller stated his view of the cybersecurity problem in no uncertain terms. Of cyberattack, he said:

It’s an act which can shut this country down: shut down its electricity system, its banking system, shut down really anything we have to offer. It is an awesome problem.… It is a fearsome, awesome problem.[6]

What is fearsome is the embedded premise that everything important to our country would be put on the Internet rather than controlled over separate, dedicated networks. This is not true, as the Joint Strike Fighter example illustrates. And it turns out that many important functions in government and society are indeed handled by dedicated communications networks.

Cato Institute adjunct fellow Timothy B. Lee, a Ph.D. student in computer science at Princeton University and an affiliate of the Center for Information Technology Policy, commented on the Estonian cyberattacks last year:

[S]ome mission-critical activities, including voting and banking, are carried out via the Internet in some places. But to the extent that that’s true, the lesson of the Estonian attacks isn’t that the Internet is “critical infrastructure” on par with electricity and water, but that it’s stupid to build “critical infrastructure” on top of the public Internet. There’s a reason that banks maintain dedicated infrastructure for financial transactions, that the power grid has a dedicated communications infrastructure, and that computer security experts are all but unanimous that Internet voting is a bad idea.[7]

Tim has also noted that the Estonia attacks did not reach parliament, ministries, banks, and media, just their Web sites. Access to some businesses and government agencies went down, but their core functions were not compromised.

Yet this policy, of keeping critical functions away from the Internet, has received almost no discussion in the recent major reports on cybersecurity. The White House’s Cyberspace Policy Review did not highlight this approach,[8] and the President’s speech presenting the review did not either. The CSIS report also did not emphasize this simple, straightforward method for securing truly critical functions.

Where security is truly at a premium, the lion’s share of securing infrastructure against cyberattack can be achieved by the simple policy of fully decoupling it from the Internet.

“Criticality” has become a popular line to draw in discussions of cybersecurity, of course, and the meaning of the term is in no way settled. A 2003 Congressional Research Service report explored the dimensions of the concept at the time.[9] My study of “criticality” is cursory, but the CSIS report’s suggestion is sensible, if loosely drawn:

[C]ritical means that, if the function or service is disrupted, there is immediate and serious damage to key national functions such as U.S. military capabilities or economic performance. It does not mean slow erosion or annoying disruptions.[10]

In my mind, criticality should probably turn on whether compromise of the resource would immediately and proximately endanger life and health. Immediacy is an important limitation because resources that can be promptly repaired to prevent harm should be made resilient that way rather than treated as critical infrastructure.

Proximity to harm is also important to prevent “criticality” grade-inflation. The loss of electric power for even an hour will kill people on respirators in hospitals, for example, but the proximate solution to such foreseeable risks is to have backup power systems at hospitals, not to make the entire electricity grid critical infrastructure on that basis.

If it is to be a focal point for cybersecurity policies, the notion of “critical infrastructure” must be sharply circumscribed. Given the special treatment accorded critical infrastructure by government, private entities will all clamor for that status, and the government will be stuck protecting thousands of things that are kind of important, rather than the networks and data that are immediately needed for protecting life and health.

Keeping the small universe of truly critical infrastructure entirely separate from the public Internet, and encouraging private operators of critical infrastructure to do so, is a policy that has not received enough discussion so far. It deserves a great deal more.

But this is one among dozens of policy choices to deal with thousands of problems. The many complex challenges lumped together as “cybersecurity” cannot be solved by any one expert, group of experts, legislature, regulatory body, or commission. It has too many moving parts.

Rather than trying to address cybersecurity in toto, I recommend addressing the problem at a level once removed: by asking what systems we should use to address cybersecurity. There are a variety of social mechanisms, each with merits and demerits.

Cybersecurity Through Contract

In my testimony so far, I have argued against overgeneralization and overheated rhetoric around cybersecurity. Cybersecurity is many different problems, only some of which are urgent.

None of this is to deny that cybersecurity is a serious and important challenge. I applaud the work of the Defense Department to secure its critical information, and find very interesting DARPA’s innovative work to develop networks over which our military branches can conduct their very important functions. These are two examples among many government-wide efforts to secure true critical infrastructure.

But what about the rest of the country’s communications and data infrastructure? Is the entire nation’s cyberstuff a “strategic national asset,” as the president suggested in his speech on cybersecurity?[11] Should it all come under a military or quasi-military command-and-control operation?

The CSIS study called for a “comprehensive national security strategy for cyberspace” and stated accordingly and unflinchingly that the government should “regulate cyberspace.”[12] The report also laid our cybersecurity woes at the feet of the market: “We have deferred to market forces in the hope that they would produce enough security to mitigate national security threats. It is not surprising that … industrial organization and overreliance on the market has not produced success.”[13]

Competition and markets should not be passed over in favor of regulation. Indeed, the argument for regulation begs the central question: What do we want from our technical infrastructures so that we have appropriate security? What would a cybersecurity regulation say? Nobody yet knows.

To illustrate, FISMA, the Federal Information Security Management Act, has not taken care of cybersecurity for the federal government. Federal chief information security officers and others rightly criticize the government’s self-regulation for its focus on compliance reporting and paperwork at the expense of addressing known problems.[14]

If the federal government knew how to do cybersecurity well, FISMA would be a to-do list that more or less secured the federal enterprise. We would not have the cybersecurity problem all agree we have. But the practices that lead to successful cybersecurity have not yet been discovered. Regulations to implement these undiscovered practices would not help.

Success in cybersecurity is not easy to define. Professor Ed Felten from Princeton University’s Center for Information Technology Policy points out that the ideal is not perfect security, but optimal security: the efficient point where investments in security avoid equal or greater losses.[15] Communications and computing devices are meant to process, display, and transmit information that they often acquire from other resources. To make them useful, we must embrace the risk of opening them up to other computers, software, and data. Some level of insecurity is what makes the Internet, computing, and “cyberspace” so useful and valuable.
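The trade-off Felten describes can be put in numbers. The following is a minimal toy sketch, not Felten’s own model, and every figure and the shape of the loss curve are hypothetical: spending on security reduces expected losses, but with diminishing returns, so total cost is minimized well short of near-perfect security.

```python
import math

# Toy model of "optimal, not perfect" security. All numbers are
# hypothetical; the point is the shape of the trade-off.

def expected_loss(spend, base_loss=1000.0, decay=0.01):
    """Hypothetical expected loss remaining at a given security spend."""
    return base_loss * math.exp(-decay * spend)

def total_cost(spend):
    """Security spending plus the expected loss that remains."""
    return spend + expected_loss(spend)

# Search spending levels for the minimum total cost.
best = min(range(0, 1001), key=total_cost)

print(best)               # the "optimal security" point
print(total_cost(best))   # lower than either extreme:
print(total_cost(0))      # spending nothing leaves large expected losses
print(total_cost(1000))   # near-perfect security costs more than it saves
```

Past the minimum, each additional dollar of security buys back less than a dollar of avoided loss, which is the sense in which perfect security is not the goal.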

Again, the question is what processes we can use to discover optimal or near-optimal cybersecurity products and behaviors, then propagate them throughout the society.

Criticisms of the market are not misplaced, though they may be misfocused. The market for communications and computing technologies is very immature. Many products are rushed to market without adequate security testing. Many are delivered with insecure settings enabled by default. My impression also is that most are sold without any warranty of fitness for the purposes users will put them to, leaving all risk of failure with buyers who are poorly positioned to make sound security judgments. There are several ways to address these problems.

As this committee is aware, the federal government is one of the largest purchasers, if not the largest purchaser, of information technology in the world. This is not the preferred state of affairs from my perspective, but there is no reason to deny that its purchasing decisions can affect the improvement of products available on the market.

Thanks to entities like the National Institute of Standards and Technology, the federal government is also one of the most sophisticated purchasers of technology. As other witnesses and advocates have articulated better than I can, the government can drive maturation in the market for technology products by setting standards and defaults for the products and services it buys.

The federal government can also insist on shifting the risk of loss from the buyer to the seller. Contracts with technology sellers can include guarantees that their products are fit for the purposes to which they will be put, including, of course, secure operation.

Federal buyers should expect to pay more if they demand fitness and security guarantees, of course, but more secure products have more value. Sellers will have to do more thorough development and more rigorous security testing. Because they currently bear little or no risk of loss, technology sellers will probably howl at the prospect of bearing risk, but ready to step in will be technology sellers willing to produce better, more secure, and more reliable products for the premium that gets them.

As a large market participant, the federal government can have a good influence on the security ecology without resorting to intrusive regulation. Whether it creates a “gold standard” for security in technologies purchased in the private sector, or whether it moves the market toward contract-based liability for technology sellers, the federal government can help the technology market mature.

Cybersecurity Through Tort Liability

There is more to criticism of the market for cybersecurity than “lack of maturity,” however. There is also an arguable market failure in the area of technology products and services, caused by a lack of maturity in the law. I was pleased that the executive summary of the White House Cyberspace Policy Review cited a short paper I wrote arguing that updated tort law would be superior to regulation for curing the market.[16]

A market failure exists when the market price of a good does not include the costs or benefits of externalities (harmful or beneficial side effects that occur in the production, distribution, or consumption of a good). Producers or consumers may have little incentive to alter activities that contribute to air pollution, for example, when the costs of pollution do not affect their costs. Likewise, users of computers that are insecure may harm the network or other users, such as when malware infects a computer and uses it to launch spam or distributed denial-of-service attacks.
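The externality logic can be illustrated with a toy calculation. All figures below are hypothetical: a machine operator who bears only its own losses may rationally leave the machine insecure, while liability for harm to others changes the arithmetic.

```python
# Hypothetical figures illustrating an un-internalized externality.
SECURING_COST = 50.0      # cost of hardening one machine
OWN_EXPECTED_LOSS = 30.0  # operator's own expected loss if compromised
THIRD_PARTY_HARM = 120.0  # expected harm to the network and other users
                          # (spam, distributed denial-of-service traffic)

# Without liability, only private costs enter the decision:
# securing costs more than the operator itself expects to lose.
secures_without_liability = SECURING_COST < OWN_EXPECTED_LOSS

# Liability for harm caused to others internalizes the externality,
# so the full social cost of a compromise enters the decision.
secures_with_liability = SECURING_COST < OWN_EXPECTED_LOSS + THIRD_PARTY_HARM

print(secures_without_liability)  # False
print(secures_with_liability)     # True
```

With these numbers, the socially efficient choice is to secure the machine, but the operator only makes that choice when the harm to third parties is brought back into its own calculation.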

When there is no contractual relationship between the parties, getting network operators, data owners, and computer users to internalize risks can be done one of two ways: regulation (you mandate certain behaviors) or liability (you make them pay for harms they cause others). Regulation and liability each have strengths and weaknesses, but I believe a liability regime is ultimately superior.

One of the main problems with regulation, especially in a dynamic field like technology, is that it requires a small number of people to figure out how things are going to work for an unknown and indefinite future. Those kinds of smarts do not exist.

So regulators often punt: When the Financial Services Modernization Act tasked the Federal Trade Commission with figuring out how to secure financial information, it did not do that. Instead, the “Safeguards Rule”[17] (similarly to FISMA) simply requires financial institutions to have a security plan. If something goes wrong, the FTC will go back in and either find the plan lacking or find that it was violated.

Another weakness of regulation is that it tends to be too broad. In an area where risks exist, regulation will ban entire swaths of behavior rather than selecting among the good and bad. In 1998, for example, Congress passed the Children’s Online Privacy Protection Act, and the FTC set up an impossible-to-navigate regime for parental approval of the websites their children could use.[18] Today, no child has been harmed by a site that complies with COPPA, because compliant sites are so rare. The market for serving children entertaining and educational content is a shadow of what it could be.

Regulators and regulatory agencies are also subject to “capture.” Industries have historically co-opted the agencies intended to control them and turned those agencies toward insulating incumbents from competition.[19]

And regulation often displaces individual justice. The Fair Credit Reporting Act preempted state law causes of action against credit bureaus, which thus cannot be held liable for defamation when their reports wrongfully cause someone to be denied credit. “Privacy” regulations under the Health Insurance Portability and Accountability Act gave enforcement powers to an obscure office in the Department of Health and Human Services. While a compliance kabuki dance goes on overhead, people who have suffered privacy violations are diverted to seeking redress by the grace of a federal agency.

Tort liability is based on the idea that someone who does harm, or allows harm to occur, should be responsible to the injured party. The role of law and government is to prevent individuals from harming one another. When a person drives a car, builds a building, runs a hotel, or installs a light switch, he or she owes it to anyone who might be injured to keep them safe. A rule of this type could apply to owners and operators of networks and databases, and possibly even to software writers and computer owners.

A liability regime is better at discovering and solving problems than regulation. Owners faced with paying for harms they cause will use the latest knowledge and their intimacy with their businesses to protect the public. Like regulation, a liability regime will not catch a new threat the first time it appears, but as soon as a threat is known, all actors must improve their practices to meet it. Unlike regulations, which can take decades to update, liability updates automatically.

Liability also leaves more room for innovation. Anything that causes harm is forbidden, but anything that does not cause harm is allowed. Entrepreneurs who are free to experiment will discover consumer-beneficial products and services that improve health, welfare, life, and longevity.

Liability rules are not always crystal clear, of course, but when cases of harm are alleged in tort law, the parties meet in a courtroom before a judge, and the judge neutrally adjudicates what harm was done and who is responsible. When an agency enforces its own regulation, it is not neutral: Agencies work to “send messages,” to protect their powers and budgets, and to foster future careers for their staffs.

Especially in the high-tech world of today, it is hard to prove causation. The forensic skill to determine who was responsible for an information-age harm is still too rare. But regulation is equally subject to evasion. And liability acts not through lawsuits won, but by creating a protective incentive structure.

One risk unique to liability is that advocates will push to do more with it than compensate actual harms. Some would treat the creation of risk as a “harm,” arguing, for example, that companies should pay someone or do something about potential identity fraud just because a data breach created the risk of it. They often should, but blanket regulations like that actually promote too much information security, lowering consumer welfare as people are protected against things that do not actually harm them.

It is also true that the tort liability system has been abused in some cases. Plaintiffs’ bars have sought to turn litigation into another regulatory mechanism, or a cash cow. State common law reforms to meet these challenges are in order; dismissing the common law out of hand is not.

There are dozens of complexities to how the tort law would operate in the cybersecurity area, of course. The common law is a system of discovery that crafts doctrines to meet emerging challenges. I cannot predict each challenge common law courts would encounter and how they would address them, but the growth of common law doctrines to prevent harm is an important alternative to the heavy hand of regulation.

As complex and changing as cybersecurity is, the federal government has no capability to institute a protective program for the entire country. While it secures its own networks, the federal government should observe the growth of state common law duties that require network operators, data owners, and computer users to secure their own infrastructure and assets. (They in turn will divide up responsibility efficiently by contract.) This is the best route to discovering and patching security flaws in all the implements of our information economy and society.

Between the two, contract and tort liability can provide a seamless web of cybersecurity incentives, spreading risks to the parties most capable of controlling them and bearing their costs. Regulation pushes responsibility to protect where it is politically palatable, not where it is economically most efficient or best done. Regulation often shields the private sector from liability, foisting risk onto the public, one of the concerns I will turn to next.

Standards, Public-Private Partnerships, and the Risks Thereof

As a market participant, the federal government can play an important role in promoting secure products and practices. When it leaves the role of market participant and becomes a market dominator, a regulator, a "partner," or an investor with private-sector entities, a number of risks arise, including threats to privacy and civil liberties, weakened competition and innovation, and waste of taxpayer dollars. I will address selected examples of NIST and DHS activity in that light.

As a standard-setting organization for the federal government, NIST is a valuable resource, not just for the government but for the cybersecurity ecology. But standards are tricky business: what may be appropriate in one context may not be in another.

An area of keen interest to me as an advocate for privacy and civil liberties is the avoidance of a national ID system in the United States. My book, Identity Crisis: How Identification Is Overused and Misunderstood, sought to reveal the demerits of having a U.S. national ID. The REAL ID Act of 2005, which attempted to create a national ID system in the United States, has foundered for a variety of reasons. Unfortunately, a bill recently introduced in the Senate would seek to revive this national ID program.[20]

Accurate identification or "identity security" is important in some contexts, but less so in others. Anonymity and obscurity are important protections for Americans' privacy and freedom to speak and act as they wish. Ultimately, I believe a diverse and competitive identity and credentialing system will deliver all the benefits that digital identity systems can provide, without the surveillance.

So I was concerned to see one bullet point in the testimony of Cita Furlani from NIST at your recent joint hearing. She characterized NIST's identity and credentialing management standard for federal employees and contractors (FIPS 201) as "becoming the de facto national standard."[21]

It is unclear exactly what this means, of course, and I do not view FIPS 201 as the foremost threatened national ID standard at this time. But the needs in identity and credentialing outside the federal government are quite different from those within the government. The same market dominance that makes the federal government such a potential boon to cybersecurity could make it an equal bane to privacy and civil liberties should FIPS 201 be adopted widely by state governments for their employees, by states for their drivers' licenses and IDs, and in private-sector employment and access control. The same is probably true of other standards in other ways.

Cybersecurity standard-setting for federal government purchasing and use should present few problems. It can often be beneficial when it drives forward the cybersecurity marketplace. But pressing standards onto the private sector where they are not a good fit, in delicate areas such as personal information handling, creates concerns.

Professor Schneider from Cornell said it well in your first hearing of this series:

[T]he Internet is as much a social construct as a technological one, and we need to understand what effects proposed technological changes could have; forgoing social values like anonymity and privacy (in some sense, analogous to freedom of speech and assembly) in order to make the Internet more trustworthy might significantly limit the Internet's utility to some, and thus not be seen as progress.[22]

A different array of concerns arises from nominal "public-private partnerships." The concept is much ballyhooed among governments and corporations because it suggests happiness and cooperation. But I am not enthusiastic about a joining of hands between the government and the corporate sector.

Public-private partnerships take many forms, of course. The least objectionable are information-sharing arrangements like the Department of Homeland Security's US-CERT, or United States Computer Emergency Readiness Team. But consumers, society, and our economy do not get the best from corporations when they cooperate, much less when they cooperate with government. Markets squeeze the most out of the business sector when competitors are nakedly pitted against each other and forced to compete on every dimension of their products and services, including cybersecurity.

Programs like US-CERT run the risk of diminishing competition and innovation in cybersecurity. Vulnerability warning is not a public good; it can be provided privately by companies competing against each other to do the best job for their clients. "Free" taxpayer-funded vulnerability warning will tend to squeeze private providers out of the market.

This risks lowering overall consumer welfare, especially if it leads to cybersecurity monoculture. "Monoculture" is the idea that uniformity among security systems is a weakness. In a security monoculture, one flaw could be exploited in many domains at once, bringing them all down and creating problems that would not have materialized in a diverse security environment.

With US-CERT this is only a risk. Public-private partnerships of other stripes raise more powerful concerns.

Earlier in my testimony, I wrote about how liability can promote cybersecurity. It is equally the case that the absence of liability can degrade security. If public-private partnerships confuse lines of responsibility for security, the results can be very bad indeed.

Consider how responsibility for passenger air transportation was mixed before the 9/11 attacks. Airlines nominally provided security, but they had to obey the dictates of the Federal Aviation Administration. Were something bad to happen, both entities were in a position to deny responsibility.

Flying a plane into a building had been written about in a 1994 novel, and kamikaze attacks were, of course, a tactic of the Japanese in World War II, but on 9/11 hijacking protocols had not been seriously revamped since the 1970s, when absconding to Cuba was the chief goal of most airline takeovers.

After 9/11, neither airlines nor the Federal Aviation Administration shouldered responsibility. The airlines moved swiftly to capitalize on emotion and patriotism, getting Congress to shield them from liability, give them an infusion of taxpayer dollars, and take over their security obligations. This "public-private partnership" in security was a disaster from start to finish, and remains so. The parties ultimately bearing the loss, and still at risk today, were the American taxpayer and traveler.

This illustration is not to suggest that cybersecurity failures threaten attacks equivalent to 9/11. It is simply to suggest that the better role of the government is to stand apart from industry and to arbitrate liability when a company has failed to meet its contractual or tort-based obligations.

Public-private partnerships may also be conduits for transferring taxpayer funds to corporations, or to universities that do research for corporations. While reviewing the testimonies presented to you in earlier hearings, I was impressed by the nearly uniform requests for taxpayer money.

Much of the money requested would go to research that industry needs to do a good job. In other words, it is research they would fund themselves in the absence of a subsidy. Using a small amount of money taken from each taxpayer, Congress can give money to corporations and claim a role in the production of security, even though the corporations would have put their own money to that use themselves. This is another form of "partnership" where the American taxpayer loses.

When the federal government abandons the role of market participant and neutral arbiter, difficulties arise. Though NIST standards are useful for the federal government, and many of them can apply well in the private sector, they should not be forced on the private sector when the government is market-dominant. Government-corporate collaboration raises many risks: security monoculture, mixed responsibility and weakened security, and simple waste of taxpayer dollars.

Cybersecurity is special, but not so special that principles about the limited role of government should go by the wayside. We will get the best security and the best deal for taxpayers and the public if the government remains within its proper sphere.


Cybersecurity is a huge topic, and I have ranged widely across it in my imperfect testimony. I hope it is clearer that "cybersecurity" is a bigger, more multi-faceted problem than the government can solve, and government certainly cannot solve the whole range of cybersecurity problems quickly.

Happily, with a few exceptions, cybersecurity is also less urgent than many commentators allege. "Cyberattack" or "cyberterrorism" might be replaced by "cybersapping" of the country's assets and technology as the threat we should promptly and diligently address. There is no argument, of course, that cybersecurity is not important.

I am concerned that the policy of keeping true critical infrastructure off the public Internet has been lost in the cybersecurity cacophony. It is a simple, elegant practice that will take care of many threats against truly essential assets.

The government will not fix the nation's cybersecurity. Your goal as policymakers should be one level removed: to determine the system that will best discover and propagate good cybersecurity practices.

As a market participant, the federal government is well positioned to affect the cybersecurity ecology positively, with NIST standards integral to that process. The federal government may also advance cybersecurity by shifting risk to sellers of technology by contract.

For the market failure that is on exhibit when insecure technology harms networks or other users, liability is a preferable mechanism to regulation for discovering who should bear the responsibility to protect.

When the federal government abandons its role of market participant and becomes a market dominator, regulator, "partner," or investor with private-sector entities, a number of risks arise, including threats to privacy and civil liberties, weakened competition and innovation, and waste of taxpayer dollars.

I appreciate the chance to share these ideas with you, and I hope that they will aid the committee's deliberations.

[1] Center for a New American Security, "Developing a National Cybersecurity Strategy" web page (visited June 23, 2009), http://www.cnas.org/node/2818.

[2] CSIS Commission on Cybersecurity for the 44th Presidency, "Securing Cyberspace for the 44th Presidency," p. 15 (2008), http://www.csis.org/media/csis/pubs/081208_securingcyberspace_44.pdf [hereinafter "CSIS Report"].

[6] See "Jay Rockefeller: Internet Should Have Never Existed," YouTube (posted Mar. 20, 2009), http://www.youtube.com/watch?v=Ct9xzXUQLuY.

[9] John Moteff et al., Resources, Science, and Industry Division, Congressional Research Service, "Critical Infrastructures: What Makes an Infrastructure Critical?", CRS Order Code RL31556 (updated Jan. 29, 2003), http://www.fas.org/irp/crs/RL31556.pdf.

[10] CSIS Report, p. 44.

[12] CSIS Report, pp. 1–2.

[13] CSIS Report, p. 12.

[16] Much of Jim Harper, "Government-Run Cyber Security? No, Thanks," Cato Institute TechKnowledge #123 (March 13, 2009), https://www.cato.org/tech/tk/090313-tk.html, is incorporated into this testimony.

[18] See Federal Trade Commission, "You, Your Privacy Policy, and COPPA: How to Comply with the Children's Online Privacy Protection Act" web page (visited June 23, 2009), http://www.ftc.gov/bcp/edu/pubs/business/idtheft/bus51.pdf.

[19] See Timothy B. Lee, "The Durable Internet: Preserving Network Neutrality without Regulation," Cato Policy Analysis #626 (Nov. 12, 2008), https://www.cato.org/pub_display.php?pub_id=9775.

[21] Testimony of Ms. Cita Furlani, Director, Information Technology Laboratory, National Institute of Standards and Technology (NIST), to a hearing entitled "Agency Response to Cyberspace Policy Review," Subcommittee on Technology & Innovation, Committee on Science and Technology, United States House of Representatives, p. 4 (June 16, 2009), http://democrats.science.house.gov/Media/file/Commdocs/hearings/2009/Tech/16jun/Furlani_Testimony.pdf.

[22] Testimony of Dr. Fred B. Schneider, Samuel B. Eckert Professor of Computer Science, Cornell University, to a hearing entitled "Cyber Security R&D," Subcommittee on Technology & Innovation, Committee on Science and Technology, United States House of Representatives, p. 4 (June 10, 2009), http://democrats.science.house.gov/Media/file/Commdocs/hearings/2009/Research/10jun/Scheider_Testimony.pdf.

Jim Harper

Subcommittee on Technology & Innovation
Committee on Science and Technology
United States House of Representatives