
Assessing Cybersecurity Activities at NIST and DHS

June 25, 2009 • Testimony

Cybersecurity is a bigger, more multifaceted problem than the government can solve, and government certainly cannot solve the whole range of cybersecurity problems quickly.

With a few exceptions, cybersecurity is less urgent than many commentators allege. No one argues, of course, that cybersecurity is unimportant.

The policy of keeping true critical infrastructure off the public Internet has been lost in the “cybersecurity” cacophony. It is a simple security practice that will take care of many threats against truly essential assets.

The goal of policymakers should be not to solve cybersecurity, but to determine the systems that will best discover and propagate good security technology and practices.

As a market participant, the federal government is well positioned to affect the cybersecurity ecology positively, with NIST standards integral to that process. The federal government may also advance cybersecurity by shifting risk to technology sellers through contract.

For the market failure on display when insecure technology harms networks or other users, liability is preferable to regulation for discovering who should bear responsibility.

When the federal government abandons its role of market participant and becomes a market dominator, regulator, “partner,” or investor with private sector entities, a number of risks arise, including threats to privacy and civil liberties, weakened competition and innovation, and waste of taxpayer dollars.

Introduction

Chairman Wu, Ranking Member Smith, and members of the subcommittee, thank you for inviting me to address you in this hearing on the cybersecurity activities of the National Institute of Standards and Technology and the Department of Homeland Security. The hearings you have conducted so far are a valuable contribution to the national discussion, as I hope my participation in this hearing will be valuable as well.

My name is Jim Harper and I am director of information policy studies at the Cato Institute. In that role, I study and write about the difficult problems of adapting law and policy to the challenges of the information age. I also maintain an online federal spending resource called WashingtonWatch.com. Cato is a market liberal, or libertarian, think tank, and I pay special attention to preserving and restoring our nation’s founding, constitutional traditions of individual liberty, limited government, free markets, peace, and the rule of law.

I serve as an advisor to the Department of Homeland Security on its Data Privacy and Integrity Advisory Committee, and my primary focus is privacy and civil liberties. I am not a technologist or a cybersecurity expert, but a lawyer familiar with technology and security issues. As a former committee counsel in both the House and Senate, I also blend an understanding of lawmaking and regulatory processes with technology and security. I hope this background and my perspective enhance your consideration of the many challenging issues falling under the name “cybersecurity.”

In my testimony, I will spend a good deal of time on fundamental problems in cybersecurity and the national cybersecurity discussion so far. I will then apply this thinking to some of the policies NIST, DHS, and other agencies are working on.

The Use and Misuse of “Cyberspace” and “Cybersecurity”

One of the profound challenges you face in setting “cybersecurity” policy is the framing of the issue. “Cyberspace” is insecure, we all believe, and by making it integral to our lives, we are importing insecurity, as individuals and as a nation.

In some senses this is true, and “securing cyberspace” is a helpful way of thinking about the problem. But it also promotes overgeneralization, suggesting that a bounded set of behaviors called “cybersecurity” can resolve things.

A new world or “space” is indeed coming into existence through the development of communications networks, protocols, software, sensors, commerce, and content. In many ways, this world is distinct and different from the physical space that we occupy. In “cyberspace,” we now do many of the things we used to do only in physical space: we shop, debate, read the news, work, gossip, manage our financial affairs, and so on. Businesses and government agencies, of course, conduct their operations in the new “cyberspace” as well.

It is even helpful to extend this analogy and imagine “cyberspace” as organized like the physical world. Think of personal computers as people’s homes. Their attachments to the network analogize to driveways, which connect to roads and then highways. (Perhaps phones and handheld devices are data-bearing cars and motorcycles.) Emails, financial files, and pictures are the personal possessions that could be stolen out of houses and private vehicles, leading to privacy loss.

Corporate and government networks are cyberspace’s office buildings. Business data, personnel files, and intellectual property are the goods that sometimes get left on the loading dock or on the desk in an executive’s office overnight. They can be stolen from the “office buildings” in data breaches.

How do you secure these places and things from theft, both casual and organized? How do you prevent fires, maintain water and electric service, ensure delivery of food, and prevent outbreaks of disease? How do you defend against military invasion or weapons of mass destruction in this all‐​new “space”?

These problems are harder to solve in some senses, and not as hard to solve in others. Consider, for example, that the “houses” and “office buildings” of cyberspace can be reconstituted in minutes or hours if software and data have been properly backed up. Lost possessions can be “regained” just as quickly, though copies of them may permanently be found elsewhere. “Cyberspace” has many resiliencies that real space lacks.

On the other hand, “diseases” (new exploits) multiply much more quickly and broadly than in the real world. “Cyber-public-health” measures like mandated vaccinations (the required use of security protocols) are important, though they may be unreliable. On a global public medium like the Internet, they would have to be mandated by an authority or authorities with global jurisdiction over every computing device, which is unlikely and probably undesirable.

The analogy between cyberspace and real space shows that “cybersecurity” is not a small universe of problems, but thousands of different problems that will be handled in thousands of different ways by millions of people over the coming decades. Securing cyberspace means tackling thousands of technology problems, business problems, economics problems, and law enforcement problems.

In my opinion, if it takes decades to come up with solutions, that is fine. The security of things in “real” space has developed in an iterative process over hundreds and, in some cases, thousands of years. Even “simple” security devices like doors, locks, and windows involve fascinating and intricate security, utility, and convenience trade-offs that are hard even for experts to summarize.

Many would argue, of course, that we do not have decades to figure out cybersecurity. But I believe that, with few exceptions, most of these assertions are mistaken. Your ability to craft sound cybersecurity policies for the government is threatened by the breathlessness of public discussion that is common in this field.

Calm Down, Slow Down

Overuse of urgent rhetoric is a challenge to setting balanced cybersecurity policy. Threat exaggeration has become boilerplate in the cybersecurity area, it seems, and while cybersecurity is important, overstatement of the problems will promote imbalanced responses that are likely to sacrifice our wealth, progress, and privacy.

For example, comparisons between “cyberattack” and conventional military attack are overwrought. As one example (which I select only because it is timely), the Center for a New American Security is hosting a cybersecurity event this week, and the language of the invitation says: “[A] cyberattack on the United States’ telecommunications, electrical grid, or banking system could pose as serious a threat to U.S. security as an attack carried out by conventional forces.”[1]

As a statement of theoretical extremes, it is true: The inconvenience and modest harms posed by a successful crack of our communications or data infrastructure could be more serious than an invasion by an ill-equipped, small army. But as a serious assertion about real threats, an attack by conventional forces (however unlikely) would be far more serious than any realistic cyberattack. We would stand to lose national territory, which cannot be reconstituted by rebooting, repairing software, and reloading backed-up files.

The Center for Strategic and International Studies’ influential report, Securing Cyberspace for the 44th Presidency, said similarly that cybersecurity “is a strategic issue on par with weapons of mass destruction and global jihad.”[2] Many weapons of mass destruction are less destructive than people assume, and the threat of global jihad appears to be waning, but threats to our communications networks, computing facilities, and data stores pale in comparison to true WMD like nuclear weapons. Controlling the risk of nuclear attack remains well above cybersecurity in any sound ranking of strategic national priorities.

It is a common form of threat exaggeration to cite the raw number of attacks on sensitive networks, like the Department of Defense’s, which suffers hundreds of millions of attacks per year. But, happily, most of these “attacks” are repetitious uses of the same attack, mounted by “script kiddies”: unsophisticated know-nothings who get copies of others’ attacks and run them on the chance that they will find an open door.

The defense against this is to continually foreclose attacks and genres of attack as they develop, the way the human body develops antibodies to germs and viruses. Securing against these attacks is important work, and it is not always easy, but it is an ongoing, stable practice in network management and a field of ongoing study in computer science. The attacks may continue to come in the millions, but this is less concerning when immunities and failsafes are in place and continuously being updated.
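
To make the “antibody” dynamic concrete, here is a minimal sketch in Python. It assumes a simple substring-matching filter, and the attack patterns are hypothetical stand-ins for real signatures; production intrusion-detection systems are far more sophisticated. The point it illustrates is only that once an attack genre is captured, its endless repetition by script kiddies is foreclosed at near-zero marginal cost.

```python
# Minimal sketch of signature-based filtering, the "antibody" response:
# once an attack genre is captured as a signature, endless repetitions of
# it are dropped cheaply. Patterns below are hypothetical stand-ins.

KNOWN_ATTACK_SIGNATURES = {
    "../../etc/passwd",   # directory-traversal probe
    "' OR '1'='1",        # trivial SQL-injection probe
    "<script>alert(",     # reflected-XSS probe
}

def is_known_attack(payload: str) -> bool:
    """True if the payload matches any already-foreclosed attack genre."""
    return any(signature in payload for signature in KNOWN_ATTACK_SIGNATURES)

def handle(payload: str) -> str:
    if is_known_attack(payload):
        return "dropped"             # cheap, automatic, like an antibody
    return "passed to application"   # novel traffic still needs scrutiny

# A millionth repetition of a stale exploit is a non-event...
print(handle("GET /index.php?id=' OR '1'='1"))  # dropped
# ...while ordinary traffic flows through.
print(handle("GET /index.php?id=42"))           # passed to application
```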

In his generally balanced speech on cybersecurity, President Obama cited a threat he termed “weapons of mass disruption.”[3] Again, analogy to the devastation that might be done by nuclear weapons is misleading. Inconvenience and disruption are bad things; they can be costly and, in the extreme case, deadly (again, cybersecurity is important), but securing against the use of real weapons on the U.S. and its people is a more important government role.

In a similar vein, a commentator on the National Journal’s national security experts blog recently said, “Cyberterrorism is here to stay and will grow bigger.”[4] Cyberterrorism is not here, and thus it is not in a position to stay.

Provocative statements of this type lack a key piece of foundation: They do not rest on a sound strategic model whereby opponents of the United States and U.S. power would use the capabilities they actually have to gain strategic advantage.

Take cyberterrorism. With communications networks, computing infrastructure, and data stores under regular attack from a variety of quarters, and regularly strengthening to meet those attacks, it is highly unlikely that terrorists can pull off a cybersecurity event disruptive enough to instill widespread fear of further disruption. Fear is a necessary element for terrorism to work its will, of course. The impotence of computer problems to instill fear renders “cyberterrorism” an unlikely threat. This is not to deny the importance of preventing the failure of infrastructure, of course.

Cyberattacks by foreign powers have a similarly implausible strategic logic. The advantage gained by a disabling attack on private and civilian government infrastructure would be largely economic, with perhaps some psychological effects. Such attacks would not plausibly “soften up” the United States for invasion. But committing such attacks would risk harsh responses if the perpetrators were found, and conventional intelligence methods are undoubtedly keenly tuned to doing so. Ultimately, a foreign government’s cyberattack on the United States would have to be a death blow, as it would risk eliciting ruinous responses. This makes it very unlikely that a cyberattack on civilian infrastructure would be a tool of true war.

Attacking military communications infrastructure and data does have a rational strategic logic, of course. And the testimony your committee received from Dr. Leheny of the Defense Advanced Research Projects Agency at your June 16 hearing illustrates some of what the Defense Department is doing to anticipate and prevent attacks on this true critical infrastructure.

The more plausible strategic use of attacks on communications and data infrastructure is not “cyberterrorism” or “cyberattack,” but what might be called “cybersapping”: Infiltrating networks to gain business intelligence, intellectual property, money, personal and financial data, and perhaps strategic government information. These infiltrations can slowly degrade the advantages that the U.S. economy and government have over others. They are important to address diligently and promptly. But they are not a reason to panic and overreact.

A final example of cybersecurity boilerplate that deserves mention is the alleged weakness of military information systems. The story that confidential files about the Joint Strike Fighter were compromised earlier this year has become a standard dire warning about our national vulnerability. But many are conveniently forgetting the other half of the story, even though it is available right there in some of the earliest reporting. According to a contemporaneous story on CNN.com:

[O]fficials insisted that none of the information accessed was highly sensitive data. The plane uses stealth and other highly sensitive electronic equipment, but it does not appear that information on those systems was compromised, because it is stored on computers that are not connected to the Internet, according to the defense officials.[5]

The compromise of some data about the Joint Strike Fighter is regrettable, but this is also a story of cybersecurity success. The key security policy of keeping the most sensitive data away from the public Internet successfully protected that data. The Department of Defense deserves credit for instituting and maintaining that policy.

Cybersecurity is important, but exaggerating threats and failures as a matter of routine will lead to poor policymaking. Do not let the urgency of many statements about cybersecurity “buffalo” you into precipitous, careless, and intrusive policies.

Exhortations about some cybersecurity policies seem to be pushing others off the table, like the policy so successful at protecting the most important information about the Joint Strike Fighter. The simple, elegant policy of keeping truly critical infrastructure off the public Internet is not receiving enough discussion.

Critical Infrastructure: Off the Internet

At the confirmation hearing of Commerce Secretary Gary Locke earlier this year, Senator Jay Rockefeller stated his view of the cybersecurity problem in no uncertain terms. Of cyberattack, he said:

It’s an act which can shut this country down: shut down its electricity system, its banking system, shut down really anything we have to offer. It is an awesome problem. … It is a fearsome, awesome problem.[6]

What is fearsome is the embedded premise that everything important to our country would be put on the Internet rather than controlled over separate, dedicated networks. This is not true, as the Joint Strike Fighter example illustrates. And it turns out that many important functions in government and society are indeed handled by dedicated communications networks.

Cato Institute adjunct fellow Timothy B. Lee, a Ph.D. student in computer science at Princeton University and an affiliate of the Center for Information Technology Policy, commented on the Estonian cyberattacks last year:

[S]ome mission-critical activities, including voting and banking, are carried out via the Internet in some places. But to the extent that that’s true, the lesson of the Estonian attacks isn’t that the Internet is “critical infrastructure” on par with electricity and water, but that it’s stupid to build “critical infrastructure” on top of the public Internet. There’s a reason that banks maintain dedicated infrastructure for financial transactions, that the power grid has a dedicated communications infrastructure, and that computer security experts are all but unanimous that Internet voting is a bad idea.[7]

Tim has also noted that the Estonian attacks did not reach parliament, ministries, banks, and media, just their Web sites. Access to some businesses and government agencies went down, but their core functions were not compromised.

Yet this policy of keeping critical functions away from the Internet has received almost no discussion in the recent major reports on cybersecurity. The White House’s Cyberspace Policy Review did not highlight this approach,[8] and the President’s speech presenting the review did not either. The CSIS report also did not emphasize this simple, straightforward method for securing truly critical functions.

Where security is truly at a premium, the lion’s share of securing infrastructure against cyberattack can be achieved by the simple policy of fully decoupling it from the Internet.
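
A toy model makes the logic of decoupling plain. In the Python sketch below (the network names are invented), an attacker starting from the public Internet can reach exactly the machines transitively connected to it, so a control network with no link to that graph presents no remote attack path at all.

```python
# Toy model of network reachability. Node names are invented. An attacker
# starting from "internet" can reach exactly the nodes transitively linked
# to it; a fully decoupled control network never appears in that set.

from collections import deque

links = {
    "internet":       {"corporate_web", "email_gateway"},
    "corporate_web":  {"corporate_lan"},
    "email_gateway":  {"corporate_lan"},
    "corporate_lan":  set(),
    # The control network has no link, direct or indirect, to the above:
    "grid_control":   {"substation_rtu"},
    "substation_rtu": set(),
}

def reachable_from(start: str) -> set[str]:
    """Breadth-first search over the link graph."""
    seen, queue = {start}, deque([start])
    while queue:
        for neighbor in links[queue.popleft()]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

exposed = reachable_from("internet")
print("grid_control" in exposed)   # False: no remote attack path exists
print("corporate_lan" in exposed)  # True: everything linked is exposed
```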

“Criticality” has become a popular line to draw in discussions of cybersecurity, of course, and the meaning of the term is in no way settled. A 2003 Congressional Research Service report explored the dimensions of the concept at the time.[9] My study of “criticality” is cursory, but the CSIS report’s suggestion is sensible, if loosely drawn:

[C]ritical means that, if the function or service is disrupted, there is immediate and serious damage to key national functions such as U.S. military capabilities or economic performance. It does not mean slow erosion or annoying disruptions.[10]

In my mind, criticality should probably turn on whether compromise of the resource would immediately and proximately endanger life and health. Immediacy is an important limitation because resources that can be promptly repaired to prevent harm should be made resilient that way rather than treated as critical infrastructure.

Proximity to harm is also important to prevent “criticality” grade inflation. The loss of electric power for even an hour will kill people on respirators in hospitals, for example, but the proximate solution to such foreseeable risks is to have backup power systems at hospitals, not to make the entire electricity grid critical infrastructure on that basis.
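
The test I am proposing can be restated as a simple decision rule. The sketch below, in Python with invented example assets, is only an illustration of how the two limiting conditions, immediacy and proximity, narrow the category; it is not a proposed statutory definition.

```python
# Illustration of the proposed criticality test: an asset is critical only
# if its compromise would immediately AND proximately endanger life or
# health. The example assets below are invented.

from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    endangers_life_or_health: bool  # would compromise threaten life or health?
    harm_is_immediate: bool         # or can prompt repair prevent the harm?
    harm_is_proximate: bool         # or should a nearer safeguard (backup
                                    # generators, say) absorb the risk instead?

def is_critical(asset: Asset) -> bool:
    return (asset.endangers_life_or_health
            and asset.harm_is_immediate
            and asset.harm_is_proximate)

assets = [
    # The grid does not inherit criticality from respirator patients,
    # because hospital backup power is the proximate safeguard.
    Asset("regional electric grid", True, True, False),
    Asset("hospital backup power system", True, True, True),
    Asset("agency public web site", False, False, False),
]

for asset in assets:
    print(f"{asset.name}: {'critical' if is_critical(asset) else 'not critical'}")
```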

If it is to be a focal point for cybersecurity policies, the notion of “critical infrastructure” must be sharply circumscribed. Given the special treatment accorded critical infrastructure by government, private entities will all clamor for that status, and the government will be stuck protecting thousands of things that are kind of important, rather than the networks and data that are immediately needed for protecting life and health.

Keeping the small universe of truly critical infrastructure entirely separate from the public Internet, and encouraging private operators of critical infrastructure to do so, is a policy that has not received enough discussion so far. It deserves a great deal more.

But this is one among dozens of policy choices for dealing with thousands of problems. The many complex challenges lumped together as “cybersecurity” cannot be solved by any one expert, group of experts, legislature, regulatory body, or commission. The problem has too many moving parts.

Rather than trying to address cybersecurity in toto, I recommend addressing the problem at a level once removed: by asking what systems we should use to address cybersecurity. There are a variety of social mechanisms, each with merits and demerits.

Cybersecurity Through Contract

In my testimony so far, I have argued against overgeneralization and overheated rhetoric around cybersecurity. Cybersecurity is many different problems, only some of which are urgent.

None of this is to deny that cybersecurity is a serious and important challenge. I applaud the work of the Defense Department to secure its critical information, and I find DARPA’s innovative work to develop networks over which our military branches can conduct their vital functions very interesting. These are two examples among many government-wide efforts to secure true critical infrastructure.

But what about the rest of the country’s communications and data infrastructure? Is the entire nation’s cyberstuff a “strategic national asset,” as the president suggested in his speech on cybersecurity?[11] Should it all come under a military or quasi-military command-and-control operation?

The CSIS study called for a “comprehensive national security strategy for cyberspace” and stated accordingly and unflinchingly that the government should “regulate cyberspace.”[12] The report also laid our cybersecurity woes at the feet of the market: “We have deferred to market forces in the hope that they would produce enough security to mitigate national security threats. It is not surprising that … industrial organization and overreliance on the market has not produced success.”[13]

Competition and markets should not be passed over in favor of regulation. Indeed, the argument for regulation leaves the central questions unanswered: What do we want from our technical infrastructures so that we have appropriate security? What would a cybersecurity regulation say? Nobody yet knows.

To illustrate, FISMA, the Federal Information Security Management Act, has not taken care of cybersecurity for the federal government. Federal chief information security officers and others rightly criticize the government’s self-regulation for its focus on compliance reporting and paperwork at the expense of addressing known problems.[14]

If the federal government knew how to do cybersecurity well, FISMA would be a to-do list that more or less secured the federal enterprise. We would not have the cybersecurity problem all agree we have. But the practices that lead to successful cybersecurity have not yet been discovered. Regulations to implement these undiscovered practices would not help.

Success in cybersecurity is not easy to define. Professor Ed Felten of Princeton University’s Center for Information Technology Policy points out that the ideal is not perfect security, but optimal security: the efficient point beyond which further investments in security would cost more than the losses they prevent.[15] Communications and computing devices are meant to process, display, and transmit information that they often acquire from other sources. To make them useful, we must embrace the risk of opening them up to other computers, software, and data. Some level of insecurity is what makes the Internet, computing, and “cyberspace” so useful and valuable.
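
Felten’s point can be put formally. In the notation below (mine, not his), s is the level of security investment, c(s) its cost, and L(s) the expected loss from remaining insecurity:

```latex
% Formalization of "optimal, not perfect, security" (notation mine, not Felten's).
% s    = level of security investment
% c(s) = cost of that investment (increasing in s)
% L(s) = expected loss from remaining insecurity (decreasing in s)
\[
  s^{*} = \arg\min_{s \ge 0} \bigl[\, c(s) + L(s) \,\bigr]
  \qquad\Longrightarrow\qquad
  c'(s^{*}) = -L'(s^{*}).
\]
% Beyond s*, each marginal dollar of security prevents less than a dollar
% of expected loss; perfect security (driving L to zero) is not worth reaching.
```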

Again, the question is what processes we can use to discover optimal or near-optimal cybersecurity products and behaviors, then propagate them throughout the society.

Criticisms of the market are not misplaced, though they may be misfocused. The market for communications and computing technologies is very immature. Many products are rushed to market without adequate security testing. Many are delivered with insecure settings enabled by default. My impression also is that most are sold without any warranty of fitness for the purposes users will put them to, leaving all risk of failure with buyers who are poorly positioned to make sound security judgments. There are several ways to address these problems.

As this committee is aware, the federal government is one of the largest purchasers, if not the largest purchaser, of information technology in the world. This is not the preferred state of affairs from my perspective, but there is no reason to deny that its purchasing decisions can affect the improvement of products available on the market.

Thanks to entities like the National Institute of Standards and Technology, the federal government is also one of the most sophisticated purchasers of technology. As other witnesses and advocates have articulated better than I can, the government can drive maturation in the market for technology products by setting standards and defaults for the products and services it buys.

The federal government can also insist on shifting the risk of loss from the buyer to the seller. Contracts with technology sellers can include guarantees that their products are fit for the purposes to which they will be put, including, of course, secure operation.

Federal buyers should expect to pay more if they demand fitness and security guarantees, of course, but more secure products have more value. Sellers will have to do more thorough development and more rigorous security testing. Because they currently bear little or no risk of loss, incumbent technology sellers will probably howl at the prospect of bearing it, but other sellers, willing to produce better, more secure, and more reliable products for the premium that earns them, will be ready to step in.
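
A back-of-the-envelope calculation, with all numbers invented for illustration, shows why the premium can be worth paying: when the seller can reduce breach risk more cheaply than the buyer can bear it, shifting the risk by contract lowers the buyer’s total expected cost even as the sticker price rises.

```python
# Back-of-the-envelope economics of shifting breach risk to the seller
# by contract. All numbers are invented for illustration.

breach_probability_as_is = 0.10       # per year, product sold "as is"
breach_probability_hardened = 0.02    # seller invests in testing and hardening
expected_loss_per_breach = 1_000_000  # borne by whoever holds the risk

price_as_is = 100_000
hardening_cost = 30_000               # seller's extra development and testing
# A risk-bearing seller prices in hardening plus its residual liability:
price_with_guarantee = (price_as_is + hardening_cost
                        + breach_probability_hardened * expected_loss_per_breach)

total_cost_as_is = (price_as_is
                    + breach_probability_as_is * expected_loss_per_breach)
total_cost_guaranteed = price_with_guarantee  # seller now holds the risk

print(f"buyer's total expected cost, risk on buyer:  ${total_cost_as_is:,.0f}")
print(f"buyer's total expected cost, risk on seller: ${total_cost_guaranteed:,.0f}")
# $200,000 versus $150,000: the higher sticker price still leaves the buyer ahead.
```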

As a large market participant, the federal government can have a good influence on the security ecology without resorting to intrusive regulation. Whether it creates a “gold standard” for security in technologies purchased in the private sector, or whether it moves the market toward contract-based liability for technology sellers, the federal government can help the technology market mature.

Cybersecurity Through Tort Liability

There is more to criticism of the market for cybersecurity than “lack of maturity,” however. There is also an arguable market failure in the area of technology products and services, caused by a lack of maturity in the law. I was pleased that the executive summary of the White House Cyberspace Policy Review cited a short paper I wrote arguing that updated tort law would be superior to regulation for curing the market.[16]

A market failure exists when the market price of a good does not include the costs or benefits of externalities (harmful or beneficial side effects that occur in the production, distribution, or consumption of a good). Producers or consumers may have little incentive to alter activities that contribute to air pollution, for example, when the costs of pollution do not affect their own costs. Likewise, users of insecure computers may harm the network or other users, such as when malware infects a computer and uses it to launch spam or distributed denial-of-service attacks.
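
The externality logic can be illustrated with invented numbers. In the sketch below, securing a machine is not worth it to the owner privately, yet is clearly worth it socially, because most of the expected harm falls on third parties:

```python
# Externality arithmetic (all numbers invented). The owner of an insecure
# machine weighs only private costs, so harm exported to third parties
# does not enter the decision: the textbook market-failure condition.

security_cost = 50            # owner's cost to secure the machine
infection_probability = 0.20  # chance of infection if left unsecured
private_loss = 100            # owner's own loss if infected
external_loss = 2_000         # spam/DDoS harm the machine does to others

private_expected_harm = infection_probability * private_loss                   # 20
social_expected_harm = infection_probability * (private_loss + external_loss)  # 420

print(f"owner secures on private incentives alone? {security_cost < private_expected_harm}")
print(f"securing is socially efficient?            {security_cost < social_expected_harm}")
# Liability for the external harm would make the two answers agree.
```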

When there are no contractual relations between the parties, getting network operators, data owners, and computer users to internalize risks can be done in one of two ways: regulation, which mandates certain behaviors, or liability, which makes them pay for harms they cause others. Regulation and liability each have strengths and weaknesses, but I believe a liability regime is ultimately superior.

One of the main problems with regulation, especially in a dynamic field like technology, is that it requires a small number of people to figure out how things are going to work for an unknown and indefinite future. Those kinds of smarts do not exist.

So regulators often punt: When the Financial Services Modernization Act tasked the Federal Trade Commission with figuring out how to secure financial information, the Commission did not do that. Instead, the “Safeguards Rule”[17] (much like FISMA) simply requires financial institutions to have a security plan. If something goes wrong, the FTC will go back in and either find the plan lacking or find that it was violated.

Another weakness of regulation is that it tends to be too broad. In an area where risks exist, regulation will ban entire swaths of behavior rather than selecting among the good and bad. In 1998, for example, Congress passed the Children’s Online Privacy Protection Act, and the FTC set up an impossible-to-navigate regime for parental approval of the websites their children could use.[18] Today, no child has been harmed by a site that complies with COPPA, because compliant sites are so rare. The market for serving children entertaining and educational content is a shadow of what it could be.

Regulators and regulatory agencies are also subject to “capture.” Industries have historically co-opted the agencies intended to control them and turned those agencies toward insulating incumbents from competition.[19]

And regulation often displaces individual justice. The Fair Credit Reporting Act preempted state law causes of action against credit bureaus, which thus cannot be held liable for defamation when their reports wrongfully cause someone to be denied credit. “Privacy” regulations under the Health Insurance Portability and Accountability Act gave enforcement powers to an obscure office in the Department of Health and Human Services. While a compliance kabuki dance goes on overhead, people who have suffered privacy violations are diverted to seeking redress by the grace of a federal agency.

Tort liability is based on the idea that someone who does harm, or allows harm to occur, should be responsible to the injured party. The role of law and government is to prevent individuals from harming one another. When a person drives a car, builds a building, runs a hotel, or installs a light switch, he or she owes it to anyone who might be injured to keep them safe. A rule of this type could apply to owners and operators of networks and databases, and possibly even to software writers and computer owners.

A liability regime is better at discovering and solving problems than regulation. Owners faced with paying for harms they cause will use the latest knowledge and their intimacy with their businesses to protect the public. Like regulation, a liability regime will not catch a new threat the first time it appears, but as soon as a threat is known, all actors must improve their practices to meet it. Unlike regulations, which can take decades to update, liability updates automatically.

Liability also leaves more room for innovation. Anything that causes harm is forbidden, but anything that does not cause harm is allowed. Entrepreneurs who are free to experiment will discover consumer-beneficial products and services that improve health, welfare, life, and longevity.

Liability rules are not always crystal clear, of course, but when cases of harm are alleged in tort law, the parties meet in a courtroom before a judge, and the judge neutrally adjudicates what harm was done and who is responsible. When an agency enforces its own regulation, it is not neutral: Agencies work to “send messages,” to protect their powers and budgets, and to foster future careers for their staffs.

Especially in the high-tech world of today, it is hard to prove causation. The forensic skill to determine who was responsible for an information-age harm is still too rare. But regulation is equally subject to evasion. And liability acts not through lawsuits won, but by creating a protective incentive structure.

One risk unique to liability is that advocates will push to do more with it than compensate actual harms. Some would treat the creation of risk as a “harm,” arguing, for example, that companies should pay someone or do something about potential identity fraud just because a data breach created the risk of it. They often should, but blanket regulations like that actually promote too much information security, lowering consumer welfare as people are protected against things that do not actually harm them.

It is also true that the tort liability system has been abused in some cases. Plaintiffs’ bars have sought to turn litigation into another regulatory mechanism, or a cash cow. State common law reforms to meet these challenges are in order; dismissing the common law out of hand is not.

There are dozens of complexities to how the tort law would operate in the cybersecurity area, of course. The common law is a system of discovery that crafts doctrines to meet emerging challenges. I cannot predict each challenge common law courts would encounter and how they would address them, but the growth of common law doctrines to prevent harm is an important alternative to the heavy hand of regulation.

As complex and changing as cybersecurity is, the federal government has no capability to institute a protective program for the entire country. While it secures its own networks, the federal government should observe the growth of state common law duties that require network operators, data owners, and computer users to secure their own infrastructure and assets. (They in turn will divide up responsibility efficiently by contract.) This is the best route to discovering and patching security flaws in all the implements of our information economy and society.

Between the two, contract and tort liability can provide a seamless web of cybersecurity incentives, spreading risks to the parties most capable of controlling them and bearing their costs. Regulation pushes responsibility to protect where it is politically palatable, not where it is economically most efficient or best done. Regulation often shields the private sector from liability, foisting risk onto the public, one of the concerns I will turn to next.

Standards, Public-Private Partnerships, and the Risks Thereof

As a market participant, the federal government can play an important role in promoting secure products and practices. When it leaves the role of market participant and becomes a market dominator, a regulator, a “partner,” or investor with private sector entities, a number of risks arise, including threats to privacy and civil liberties, weakened competition and innovation, and waste of taxpayer dollars. I will address selected examples of NIST and DHS activity in that light.

As a standard-setting organization for the federal government, NIST is a valuable resource, not just for the government but for the cybersecurity ecology. But standards are tricky business. What may be appropriate in one context may not be in another.

An area of keen interest to me as an advocate for privacy and civil liberties is the avoidance of a national ID system in the United States. My book, Identity Crisis: How Identification is Overused and Misunderstood, sought to reveal the demerits in having a U.S. national ID. The REAL ID Act of 2005, which attempted to create a national ID system in the United States, has foundered for a variety of reasons. Unfortunately, a bill recently introduced in the Senate would seek to revive this national ID program.[20]

Accurate identification or “identity security” is important in some contexts, but less so in others. Anonymity and obscurity are important protections for Americans’ privacy and freedom to speak and act as they wish. Ultimately, I believe a diverse and competitive identity and credentialing system will deliver all the benefits that digital identity systems can provide, without the surveillance.

So I was concerned to see one bullet point in the testimony of Cita Furlani from NIST at your recent joint hearing. She characterized NIST’s identity and credentialing management standard for federal employees and contractors (FIPS 201) as “becoming the de facto national standard.”[21]

It is unclear exactly what this means, of course, and I do not view FIPS 201 as the foremost threatened national ID standard at this time. But the needs in identity and credentialing outside the federal government are quite different from those within the government. The same market dominance that makes the federal government such a potential boon to cybersecurity could make it an equal bane to privacy and civil liberties should FIPS 201 be adopted widely by state governments for their employees, by states for their drivers’ licenses and IDs, and in private-sector employment and access control. The same is probably true of other standards in other ways.

Cybersecurity standard-setting for federal government purchasing and use should present few problems. It can often be beneficial when it drives forward the cybersecurity marketplace. But pressing standards onto the private sector where they are not a good fit, in delicate areas such as personal information handling, creates concerns.

Professor Schneider from Cornell said it well in your first hearing of this series:

[T]he Internet is as much a social construct as a technological one, and we need to understand what effects proposed technological changes could have; forgoing social values like anonymity and privacy (in some sense, analogous to freedom of speech and assembly) in order to make the Internet more trustworthy might significantly limit the Internet’s utility to some, and thus not be seen as progress.[22]

A different array of concerns arises from nominal “public-private partnerships.” The concept is much ballyhooed among governments and corporations because it suggests happiness and cooperation. But I am not enthusiastic about a joining of hands between the government and the corporate sector.

Public-private partnerships take many forms, of course. The least objectionable are information-sharing arrangements like the Department of Homeland Security’s US-CERT, or United States Computer Emergency Readiness Team. But consumers, the society, and our economy do not get the best from corporations when they cooperate, much less when they cooperate with government. Markets squeeze the most out of the business sector when competitors are nakedly pitted against each other and forced to compete on every dimension of their products and services, including cybersecurity.

Programs like US-CERT run the risk of diminishing competition and innovation in cybersecurity. Vulnerability warning is not a public good; it can be provided privately by companies competing against each other to do the best job for their clients. “Free” taxpayer-funded vulnerability warning will tend to squeeze private providers out of the market.

This risks lowering overall consumer welfare, especially if it leads to cybersecurity monoculture. “Monoculture” is the idea that uniformity among security systems is a weakness. In a security monoculture, one flaw could be exploited in many domains at once, bringing them all down and creating problems that would not have materialized in a diverse security environment.
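
A toy calculation, with invented parameters, illustrates the monoculture concern: one working exploit against a shared defense fells every system running it, while the same exploit against a diverse population fells only the vulnerable fraction.

```python
# Toy illustration of security "monoculture" (parameters invented): one
# working exploit against a defense product fells every system running it.

import random

random.seed(0)  # deterministic for illustration
NUM_SYSTEMS = 1_000
DEFENSE_PRODUCTS = ["A", "B", "C", "D"]

monoculture = ["A"] * NUM_SYSTEMS  # everyone runs the same defense
diverse = [random.choice(DEFENSE_PRODUCTS) for _ in range(NUM_SYSTEMS)]

def compromised(population: list[str], exploited: str = "A") -> int:
    """Count systems felled by a single exploit against one defense."""
    return sum(1 for product in population if product == exploited)

print(f"monoculture: {compromised(monoculture)} of {NUM_SYSTEMS} systems down")
print(f"diverse:     {compromised(diverse)} of {NUM_SYSTEMS} systems down")
# Roughly 1,000 versus roughly 250: diversity confines the damage.
```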

With US-CERT this is only a risk. Public-private partnerships of other stripes raise more powerful concerns.

Earlier in my testimony, I wrote about how liability can promote cybersecurity. It is equally the case that the absence of liability can degrade security. If public‐​private partnerships confuse lines of responsibility for security, the results can be very bad indeed.

Consider how responsibility for passenger air transportation was mixed before the 9/11 attacks. Airlines nominally provided security, but they had to obey the dictates of the Federal Aviation Administration. If something bad happened, each entity was in a position to deny responsibility.

Flying a plane into a building had been written about in a 1994 novel (and kamikaze attacks were, of course, a tactic of the Japanese in World War II), but on 9/11 hijacking protocols had not been seriously revamped since the 1970s, when absconding to Cuba was the chief goal of most airline takeovers.

After 9/11, neither airlines nor the Federal Aviation Administration shouldered responsibility. The airlines moved swiftly to capitalize on emotion and patriotism, getting Congress to shield them from liability, give them an infusion of taxpayer dollars, and take over their security obligations. This “public-private partnership” in security was a disaster from start to finish, and remains so. The party ultimately bearing the loss, and still at risk today, was the American taxpayer and traveler.

This illustration is not to suggest that cybersecurity failures threaten attacks equivalent to 9/11. It is simply to suggest that the better role of the government is to stand apart from industry and to arbitrate liability when a company has failed to meet its contractual or tort-based obligations.

Public-private partnerships may also be conduits for transferring taxpayer funds to corporations, or to universities that do research for corporations. While reviewing the testimonies presented to you in earlier hearings, I was impressed by the nearly uniform requests for taxpayer money.

Much of the money requested would go to research that industry needs to do a good job. In other words, it is research they would fund themselves in the absence of a subsidy. Using a small amount of money taken from each taxpayer, Congress can give money to corporations and claim a role in the production of security, even though the corporations would have put their own money to that use themselves. This is another form of “partnership” where the American taxpayer loses.

When the federal government abandons the role of market participant and neutral arbiter, difficulties arise. Though NIST standards are useful for the federal government, and many of them can apply well in the private sector, they may not be appropriately forced on the private sector when the government is market-dominant. Government-corporate collaboration raises many risks: security monoculture; mixed responsibility and weakened security; and simple waste of taxpayer dollars.

Cybersecurity is special, but not so special that principles about the limited role of government should go by the wayside. We will get the best security and the best deal for taxpayers and the public if the government remains within its proper sphere.

Conclusion

Cybersecurity is a huge topic, and I have ranged widely across it in my imperfect testimony. I hope it is clearer that “cybersecurity” is a bigger, more multifaceted problem than the government can solve, and government certainly cannot solve the whole range of cybersecurity problems quickly.

Happily, with a few exceptions, cybersecurity is also less urgent than many commentators allege. “Cyberattack” or “cyberterrorism” might be replaced by “cybersapping” of the country’s assets and technology as the threat we should promptly and diligently address. No one argues, of course, that cybersecurity is unimportant.

I am concerned that the policy of keeping true critical infrastructure off the public Internet has been lost in the cybersecurity cacophony. It is a simple, elegant practice that will take care of many threats against truly essential assets.

The government will not fix the nation’s cybersecurity. Your goal as policymakers should be one level removed: to determine the system that will best discover and propagate good cybersecurity practices.

As a market participant, the federal government is well positioned to affect the cybersecurity ecology positively, with NIST standards integral to that process. The federal government may also advance cybersecurity by shifting risk to technology sellers through contract.

For the market failure on display when insecure technology harms networks or other users, liability is preferable to regulation as a mechanism for discovering who should bear the responsibility to protect.

When the federal government abandons its role of market participant and becomes a market dominator, regulator, “partner,” or investor with private sector entities, a number of risks arise, including threats to privacy and civil liberties, weakened competition and innovation, and waste of taxpayer dollars.

I appreciate the chance to share these ideas with you, and I hope that they will aid the committee’s deliberations.


[1] Center for a New American Security, “Developing a National Cybersecurity Strategy” web page (visited June 23, 2009) http://www.cnas.org/node/2818.

[2] CSIS Commission on Cybersecurity for the 44th Presidency, “Securing Cyberspace for the 44th Presidency,” p. 15 (2008) http://www.csis.org/media/csis/pubs/081208_securingcyberspace_44.pdf [hereinafter “CSIS Report”].

[6] See “Jay Rockefeller: Internet Should Have Never Existed,” YouTube (posted Mar. 20, 2009) http://www.youtube.com/watch?v=Ct9xzXUQLuY.

[9] John Moteff et al., Resources, Science, and Industry Division, Congressional Research Service, “Critical Infrastructures: What Makes an Infrastructure Critical?”, CRS Order Code RL31556 (updated Jan. 29, 2003) http://www.fas.org/irp/crs/RL31556.pdf.

[10] CSIS Report, p. 44.

[12] CSIS Report, pp. 1–2.

[13] CSIS Report, p. 12.

[16] Much of Jim Harper, “Government-Run Cyber Security? No, Thanks,” Cato Institute TechKnowledge #123 (March 13, 2009) https://www.cato.org/tech/tk/090313-tk.html, is incorporated into this testimony.

[18] See Federal Trade Commission, “You, Your Privacy Policy, and COPPA: How to Comply with the Children’s Online Privacy Protection Act” web page (visited June 23, 2009) http://www.ftc.gov/bcp/edu/pubs/business/idtheft/bus51.pdf.

[19] See Timothy B. Lee, “The Durable Internet: Preserving Network Neutrality without Regulation,” Cato Policy Analysis #626 (Nov. 12, 2008) https://www.cato.org/pub_display.php?pub_id=9775.

[21] Testimony of Ms. Cita Furlani, Director, Information Technology Laboratory, National Institute of Standards and Technology (NIST), to a hearing entitled “Agency Response to Cyberspace Policy Review,” Subcommittee on Technology & Innovation, Committee on Science and Technology, United States House of Representatives, p. 4 (June 16, 2009) http://democrats.science.house.gov/Media/file/Commdocs/hearings/2009/Tech/16jun/Furlani_Testimony.pdf.

[22] Testimony of Dr. Fred B. Schneider, Samuel B. Eckert Professor of Computer Science, Cornell University, to a hearing entitled “Cyber Security R&D,” Subcommittee on Technology & Innovation, Committee on Science and Technology, United States House of Representatives, p. 4 (June 10, 2009) http://democrats.science.house.gov/Media/file/Commdocs/hearings/2009/Research/10jun/Scheider_Testimony.pdf.
