Dealing with Cyberattacks

May 20, 2020
By Martin C. Libicki

Among the many types of threats that Americans worry about today,1 cyberattacks rank near the top.2 In March 2013, James Clapper, the director of national intelligence, named cyberattacks — with the potential for states or nonstate actors to manipulate U.S. computer systems and thereby corrupt, disrupt, or destroy critical infrastructure — as the greatest short-term threat to U.S. national security.3 Earlier in the year, the secretary of homeland security announced that she believed a “cyber 9/11” could happen “imminently.”4 The year before, Robert Mueller, then director of the Federal Bureau of Investigation, mused that the cyberthreat might replace the terrorist threat as the nation’s top security priority.5 A year before that, Admiral Michael Mullen, head of the Joint Chiefs of Staff, was quoted as saying, “I really believe cyber is one of two existential threats that are out there, the other being nuclear weapons.”6 The fear was extant outside the Beltway as well; a survey of security professionals (admittedly, a self-interested source) found that “79 percent believe that there will be [in 2013] some sort of large-scale attack on the information technology powering some element of the United States’s infrastructure — and utilities and financial institutions were the most likely targets.”7

It is not as though there have been no incidents, even serious ones. Six years ago, a flood of Internet traffic unleashed by Russia (or at least Russians) knocked many of Estonia’s public and private services offline, a nontrivial event in a country that was a world leader in putting a wide variety of transactions online (e.g., paying for parking). By that point, Chinese cyberespionage had penetrated multiple military, defense industrial, civilian, private‐​sector, and nongovernmental organizations within the United States to extract reams of sensitive data, such as the blueprints for the F-35 fighter.8 In late 2009 and early 2010, the Stuxnet cyberattacks (of likely U.S. and Israeli provenance) managed to destroy as much as 10 percent of Iran’s nuclear enrichment centrifuge capacity by inserting instructions into the centrifuges that caused them to spin at erratic rates, thereby inducing their mechanical failure.

Although the risks of cyberespionage and cyberattack are real, the perception of such risks may be greater than their reality. General Keith Alexander, commander of U.S. Cyber Command, has characterized cyberexploitation of U.S. computer systems as the “greatest transfer of wealth in world history.” A January 2013 Defense Science Board report noted that cybersecurity risks should be managed with improved defenses and deterrence, including “up to a nuclear response in the most extreme case.” However, nobody has ever died from a cyberattack, and only one (disputed) cyberattack has crippled a piece of critical infrastructure.9

Exacerbating the potential for exaggeration, from the perspective of crisis management, is the fact that sources and consequences of cyberattacks are often ambiguous, particularly during the initial period of information gathering following detection of an attack.10 Cyberattacks have neither fingerprints nor the smell of gunpowder. Hackers disguise their intrusions as legitimate traffic (lest they be filtered out by firewalls and other security devices) and by bouncing their data packets from servers in various countries. Victims of advanced persistent threats (extended intrusions into organization networks for the purpose of espionage) are often unaware for months or even years that they have been penetrated. By way of further example, it is unclear that the Iranians knew why their centrifuges were breaking down prematurely before they read about the covert cybersabotage campaign against them in the New York Times. In determining the source of an attack, political leaders are forced to rely on their intelligence agencies, which may have difficulty translating into a persuasive argument the sources and methods that helped them arrive at their conclusions, especially when dealing with such a highly technical issue.

This chapter discusses a hypothetical major cyberattack on U.S. critical infrastructure. It examines its potential motivation, onset, and consequences and then turns to how the United States might or might not respond to minimize the cost and likelihood of such attacks. The amount of damage inflicted could be serious but is unlikely to be catastrophic — though the damage could be magnified considerably by hasty and ill‐​considered overreaction.

Suppose the United States Suffered a Cyberattack

What should the United States do if it discovered it was, in fact, the target of a major cyberattack? To illustrate the conundrums associated with doing anything at all, imagine a scenario in which Iran carries out a major cyberattack in the context of the ongoing standoff (as of late 2013) between the United States and Iran over the latter’s nuclear program.

As of this writing, Iran maintains a stockpile of low‐​enriched uranium and centrifuges and refuses to address International Atomic Energy Agency concerns about the possible military dimensions of its nuclear program.11 The United States and Israel have stated that Iran will be prevented from obtaining a nuclear weapon. That international opposition to Iran’s unwillingness to verify the peaceful nature of its nuclear program has given rise to increasing diplomatic tensions and to a tightening sanctions regime. The closer Iran inches toward Israel’s unstated redline for action (believed to be 240 kilograms of low‐​enriched uranium), the greater the likelihood of a military response to thwart Iran’s development or to set it back many years. The United States may attempt to compel Iranian cooperation by bolstering its naval or air forces near Iran, by enhancing its readiness in visible ways, or by acquiescing to Israel’s stated intent to carry out such a strike.

During such a crisis, therefore, suppose a major cyberattack takes place against U.S. critical infrastructure, and, according to U.S. officials, it has all the marks of having been carried out by Iran. Essentially, a cyberattack uses deliberately corrupted streams of bytes to infiltrate computers or computerized systems to damage, disrupt, or gain access to them, often causing the infected computer or systems to stop working correctly (if at all). In an era when very little is not computer controlled, there is very little that could not go wrong. Electric power and natural gas could stop flowing, industrial accidents could be induced, bank deposits could disappear (more plausibly, be illicitly transferred), government records might be scrambled, military equipment could fail on the battlefield, and personal computer hard drives could be converted into gibberish — in theory. In practice, very little of that mayhem has actually taken place, and certainly nothing on the kind of scale that would warrant comparison with the destructive attacks of a war or even something along the lines of what happened on 9/11.

By now, after more than 20 years of concern and a half dozen years of presidential‐​level emphasis, the sectors whose systems are at greatest risk to a cyberattack are also those that have taken it on themselves, or have been directed, to tighten their security. What makes a cyberattack possible, however, is that the level of protection is inevitably uneven and cyberwarfare itself is inherently full of surprises.

Nevertheless, the presumption that the Iranians might soon carry out a cyberattack is not a priori outrageous. Among the world’s potential confrontations, one between the United States and Iran has the greatest potential for a significant cybercomponent. The Iranians have not forgotten Stuxnet, the attack on their nuclear centrifuge facility at Natanz, and they may still want revenge. Beginning in late 2012, cyberattackers, whom the U.S. intelligence community linked to Iran, penetrated the network of the Saudi Arabian national oil and gas company (Aramco) and effectively trashed 30,000 of its computers.12 The machines had to be thrown out. RasGas, a Qatari corporation, received similar treatment.13 In late 2012, a group referring to itself as Izz ad-Din al-Qassam Cyber Fighters, linked to Iran by anonymous U.S. intelligence officials, carried out distributed denial-of-service attacks that disabled user access to the electronic-banking websites of large American banks for several hours.14 Finally, in May 2013, U.S. officials claimed, according to the Wall Street Journal,15 that “Iranian hackers were able to gain access to control-system software that could allow them to manipulate oil or gas pipelines.” As of this writing, the hackers have yet to demonstrate such control or to use their supposed access to wreak actual harm. Iranian officials, incidentally, denied responsibility for all of those alleged attacks.

Iran has displayed a persistent willingness to violate international norms and law by using security forces and proxies to conduct terror attacks abroad, and past instances demonstrate that Iran has the talent and motive to successfully attack the United States. Thus, it would not be surprising, in the midst of escalating tensions over Iran’s nuclear program, for a major cyberattack to be launched against the United States. Given that Iran is a middling cyberspace power, it would most likely engage in a broad set of simultaneous attacks against U.S. critical infrastructure. Most critical systems are well protected against sophisticated attackers — but because the United States is vast, heterogeneous, and complex, there are lagging sheep in the flock: systems whose owners have not paid adequate attention to cybersecurity, those whose security was undermined by trusted but feckless organizations (e.g., a supplier), or those that diverted their attention from the security of their networks long enough for vulnerabilities to get in. Hundreds of thousands of computers may have their hard drives trashed in such an attack, forcing each to be tediously reformatted or discarded, even as irreplaceable files are destroyed. Emergency 911 dispatch services may be disrupted by a distributed denial‐​of‐​service attack against the Internet switches of a phone exchange, inadvertently resulting in deaths from delayed medical attention.

Because a significant cyberattack on U.S. critical infrastructure would draw a large share of news headlines, what would the Iranians hope to gain? They may want to signal that whoever considers attacking Iran (whether by air strike or cyberstrike) cannot do so with impunity. Such an attack may also be used to disrupt military preparations. Perhaps Iran cannot prevent the timely completion of military prerequisites to an air strike (operational military systems are more hardened against attack than are civilian systems). Yet it could distract U.S. political leadership and give pause to U.S. allies that are then contemplating their support of a military attack against Iran. A variant on such a motive may be that Iranian officials hope to divert any potential air strike from going all out against nuclear sites and toward physical sites associated with the cyberattacks (e.g., key nodes, intelligence agencies).

Although the fact of a crisis with Iran over its nuclear program increases the possibility that it will unleash an attack it has held in reserve (sophisticated cyberattacks take months, sometimes years, to prepare), other evidence would be needed to determine which systems are going to be attacked and, more important, how they would be attacked. Warning is a difficult concept in cyberspace. An attack of the sort described here has two primary elements: (a) penetration into a system coupled with the insertion of malware and then (b) the attack itself. If alert system administrators knew which vulnerability the attackers were using to worm their way into an organization’s system, such a vulnerability would likely be closed or routed around, ending that particular threat entirely (or forcing the attackers to find another route in). If they knew other aspects of the attack’s signature (e.g., the IP address it would appear to originate from, certain byte strings in the proposed malware, or telltale characteristics of a phishing e-mail), they could set their incoming packet filters accordingly. Although the systems penetrated may number in the thousands (consider how few major systems have not been penetrated over the course of their life), the number of cyberattacks (as opposed to cyberespionage penetrations) has been too small to establish an empirical link between indicators and onset. Little in the relationship between the preparations for an attack and the attack itself points, by itself, to the attack’s anticipated timing. The discovery of an attacker’s penetration into a system might mean that an attack is in the planning process, or it might mean that the system’s files are being copied for subsequent inspection and analysis (aka espionage).
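The signature-based filtering just described is mechanically simple once the signature is known, which is why attackers guard theirs so carefully. The following is a minimal, purely illustrative sketch; every indicator value and name in it is invented:

```python
# Minimal sketch of signature-based filtering: flag traffic matching known
# indicators of compromise. All indicator values here are hypothetical,
# drawn from documentation-reserved address ranges.

KNOWN_BAD_IPS = {"198.51.100.23", "203.0.113.77"}        # originating addresses
KNOWN_BAD_BYTES = [b"\xde\xad\xbe\xef", b"EvilPayload"]  # byte strings seen in malware

def is_suspicious(source_ip: str, payload: bytes) -> bool:
    """Return True if a packet matches any known attack signature."""
    if source_ip in KNOWN_BAD_IPS:
        return True
    return any(sig in payload for sig in KNOWN_BAD_BYTES)

# A packet from an unknown address with a benign payload passes...
print(is_suspicious("192.0.2.1", b"hello"))            # False
# ...while one carrying a known byte string is flagged.
print(is_suspicious("192.0.2.1", b"xxEvilPayloadxx"))  # True
```

A real filter would sit inline on network hardware and match against continuously updated threat feeds; the point is only that a disclosed signature is trivial to block, which is exactly why warning in cyberspace is so hard to come by.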

A major part of the warning process may occur within the U.S. intelligence community if evidence of an impending attack emerges during its own penetration of foreign networks. In contrast to a conventional military warning that can be safely passed to commanders who prepare their operational units appropriately, many, perhaps most, of the people who need to be warned in this scenario are in the private sector. Unless processes exist to distribute the warning efficiently, what the intelligence community knows may not matter.

The Problematic Nature of Attribution

Knowing who carried out an attack is not an absolute prerequisite to doing something about it once it happens. Even if the identity of the attacker may provide a clue to the modus operandi of an attack — thus its etiology, and thus what steps would more efficiently root it out of the system — most of the knowledge required to clean and restore a system is discoverable within the system itself. Attribution is somewhat — but only somewhat — more important if the aim is to dismantle the infrastructure that carried out such an attack (as a way of stopping the attack and inhibiting its repetition). However, if the United States believes that the only way to prevent a repetition — either by the attacker or by anyone else with similar ideas — is to retaliate for such an attack, then it would be really helpful to know who did it. Retaliation based on incorrect attribution is unwarranted aggression. Retaliation without having taken the time and trouble to prove that the target of retaliation was the attacker, or at least was responsible for the attack, will look like unwarranted aggression to much of the rest of the world.

That an attack occurred during a burgeoning U.S.-Iran crisis does not itself mean it was ordered by the Iranian government. It could have been carried out by a group that considered the attack a favor to Iran, such as Hezbollah. In that case, some of the attack might be subcontracted to nonstate perpetrators, in particular the Russian cyber-Mafia. Alternatively, it could have been carried out by a rogue faction within Iran in an effort to exacerbate tensions between the United States and Iran, perhaps out of fear that Iran would otherwise yield to Western-imposed constraints on its nuclear program. An aggressive U.S. response, such attackers might calculate, could make it politically impossible for Iran to cede to U.S. demands or could raise the stature of hawks vis-à-vis doves in Tehran. Alternatively, the cyberattack could have been carried out by another state or nonstate entity and made to look of Iranian origin in order to focus U.S. attention on Iran, either because the real attackers are foes of Iran (e.g., representing Sunni interests) or to distract U.S. leadership from what may be early signs of crisis elsewhere. Finally, the cyberattack may have been a gratuitous act motivated by hostility, either Iran’s or another actor’s (believing the United States would respond against Iran regardless).

The likelihood of being able to correctly and confidently attribute such an attack generally rises in the days and weeks that follow it, much as it does for domestic criminal cases. If the United States is lucky enough to be watching inside Iranian networks at the moment of attack (and Iran actually did it), then attribution will come more quickly, perhaps nearly instantly. If not, but the attack shares a modus operandi (e.g., similar software, the same originating IP addresses) with previously attributed attacks, then attribution is likely to be hastened. But attribution would be difficult if the attackers worked to avoid getting caught (rather than merely hiding behind the smokescreen of plausible deniability) and, for instance, isolated themselves from the Internet when building tools or took care to use approaches they had never used before. Technical attribution can go only so far, and if it still leaves matters uncertain, resolving the identity of the perpetrator for policymakers could take weeks or months.
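To make concrete how a shared modus operandi hastens attribution, here is a minimal sketch that scores a new attack's observed indicators against those of previously attributed attacks. The actors, the indicators, and the scoring choice (Jaccard similarity over indicator sets) are all invented for illustration:

```python
# Sketch: score a new attack's indicators against past attributed attacks.
# Overlap is measured with Jaccard similarity; all data are invented.

def jaccard(a: set, b: set) -> float:
    """Similarity between two indicator sets (0.0 = disjoint, 1.0 = identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical indicator sets from previously attributed attacks.
past_attacks = {
    "actor_A": {"ip:203.0.113.77", "tool:wiper_v2", "lure:phish_docx"},
    "actor_B": {"ip:198.51.100.9", "tool:ddos_botnet", "lure:none"},
}

# Indicators observed in the new, unattributed attack.
new_attack = {"ip:203.0.113.77", "tool:wiper_v2", "lure:phish_pdf"}

scores = {actor: jaccard(new_attack, seen) for actor, seen in past_attacks.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))  # actor_A 0.5
```

Real attribution weighs far more than indicator overlap (intelligence sources, geopolitical context, attackers' operational errors), which is precisely why technical matching alone, as noted above, can go only so far.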

Attributing a false-flag attack would feature similar timelines. A false-flag attacker must have the same intelligence as the United States regarding the modus operandi of the framed attacker (in this case, Iran). Yet if it copies the known modus operandi of the framed attacker too well, the attack itself may arouse suspicion, particularly if contradictory evidence (e.g., Iran’s behavior elsewhere) suggests the attribution was planted. There has yet to be a known serious cyberattack that was later revealed as a false-flag operation — or a goading attack. Those with the capacity to carry out such an attack are likely counting on a strong reaction from the target. Success at attribution may depend on the operational security of the attackers — or, more specifically, whatever pains the Iranians (if it was Iranians) take to hide their tracks. American officials and proponents of a hard-line stance on cyberattacks have argued that attribution is getting quite good. In late 2012, then defense secretary Leon Panetta observed: “Over the last two years, [the Department of Defense] has made significant investments in forensics to address this problem of attribution, and we’re seeing the returns on that investment. Potential aggressors should be aware that the United States has the capacity to locate them and to hold them accountable for their actions that may try to harm America.”16 Stewart Baker, former assistant secretary of homeland security, claims that there has been an “attribution revolution” that puts paid to the notion that cyberattacks can be carried out with impunity.17 Furthermore, if Iran’s point is to deter attacks on itself (rather than, for instance, to dissuade the United States from stationing forces in the Middle East), an attitude of “who, me?” hardly helps drive the point home.

Apart from declaring as much, it is unclear how the United States would demonstrate to potential attackers that determining attribution has, in fact, improved. Furthermore, proving attribution to oneself differs from proving it to others. The fact that al Qaeda carried out the 9/11 attacks is accepted in the United States (“truthers” aside), but that attribution has been the minority view in Islamic countries. Unless the attacker wants others to know it is responsible, it has every incentive to deny accusations, even, and especially, after reprisals.

Recent events suggest that even incidents such as Syria’s deadly use of chemical agents are difficult to attribute convincingly, particularly to those who oppose U.S. retaliation against Syria for carrying out chemical attacks. In Syria’s case, those who accused Syria had the preponderance of evidence on their side (vis-à-vis claims that Syrian rebels carried out the attack); the area of attack was (albeit begrudgingly and with delays) opened to United Nations inspection. By comparison, it is harder to imagine neutral inspectors being allowed to examine (let alone in sufficient detail) the systems that hackers had struck. Hampering the efforts to attribute the attack was the U.S. insistence on classifying the incriminating details, which members of Congress were shown but which no one without a U.S. security clearance (and the need to know) could see — and even the members of Congress were not shown everything.18 Given the great secrecy associated with anything having to do with cyberwarfare, coupled with the understandable desire not to reveal proprietary features of the systems that had been attacked, it is highly unlikely that attribution of a cyberattack would be more transparent — or convincing.

Cyberattacks as a Foreseeable and Somewhat Preventable Contingency

An attack as large as posited would be unprecedented. No comparable major cyberattack has occurred since the Internet became accessible to the world’s public 20 years ago. Although prior absence is no proof that it will never happen, it may be premature to declare a major attack inevitable. All the trend lines — good and bad — are rising at the same time: (a) the sophistication of attackers and defenders; (b) the salience of cyberattack as a weapon, but also the rising sensitivity to the prospect that such attacks are possible and must be countered; (c) the bandwidth available for organizing a flooding attack, but also to ward it off; and (d) the complexity of operational software (which increases the number of places where vulnerabilities can be found), but also the complexity of security software and systems (which deepens the number of levels an attack must overcome to succeed).

The unalloyed bad news is that Iran is starting to see cyberwarfare as deniable terrorism. The good news is that software companies are rethinking the architectural features of software systems that permit malware to exist. Unfortunately, the bad news is becoming reality faster than the good. Alarmist sentiment suggests a predisposition to treat such attacks as strategically motivated (e.g., by the attacker’s desire to defeat the United States) and worthy of response (if only to signal strategic resolve).

Technical and political measures may affect the likelihood that such an attack will take place or, if it does, that it will have serious consequences. The details of a major cyberattack will shape and limit the tools available to the United States to mitigate the resulting crisis. To the extent that the effects of a cyberattack resemble the effects of natural disasters or industrial accidents (e.g., power outages), government officials can take advantage of the institutions and experiences developed since the 9/11 attacks. Yet the Department of Homeland Security’s National Cyber Incident Response Plan is not so much a set of instructions as a set of guidelines to ensure that whatever actions take place are coordinated through that department.

Steps to encourage U.S. infrastructure owners to be more resilient and to ensure that their systems can degrade gracefully are beneficial. That matter is different from reducing vulnerabilities (which is also important). Eliminating vulnerabilities reduces the likelihood and severity of a cyberattack. Resiliency allows the overall system (e.g., the national electric grid) to function without great disruption even when an individual system (e.g., a power plant) goes down. Yet it is unclear whether the government can tell infrastructure owners anything they do not already know about resiliency, and an attempt to force them into resilience by specifying performance standards (or after‐​the‐​fact penalties) is likely to meet pushback from institutions opposed to yet more federal regulations. Furthermore, such an effort is likely to generate a fixed set of rules (see the Federal Information Security Management Act) rather than actions tailored to the exigencies of each infrastructure at hand.

Technical measures to reduce the likelihood of an attack include reducing vulnerabilities in software, encouraging better systems management (e.g., enforcing least-privilege access to system controls, critical nodes, databases, and machine controls), and encouraging the use of security tools that can detect and thwart attacks in progress. Because almost all of the critical systems in this country are in private hands (and all critical software is developed privately, albeit some by nonprofit groups), the policy options usually involve research and development and other funding, a mix of incentives and penalties, and politically sensitive regulations encouraging the adoption of better security practices. Sharing intelligence with potential victims is useful, but not nearly as vital as sharing information about vulnerabilities with those who write and maintain the software that can be exploited in an attack.

Resilience is another feature of systems that limits the effects of being attacked — over and above whatever features exist to keep systems from being attacked in the first place. One particularly useful form of resilience is the ability to absorb damage without passing it on to peers: for example, features of an electric power grid that prevent the kind of cascading effects seen in the regional power outage of August 2003, or features of the banking system that prevent catastrophe at one bank from leading to catastrophes in associated banks.

One can only guess how much the development of better attribution techniques could inhibit cyberattacks. Attribution is not simply a technical problem but an intelligence problem (much as crime solving usually entails more than collecting on-scene evidence). Taking useful advantage of good attribution is a political problem. If the purported attacker lives in the United States or a like-minded country, well-understood criminal sanctions can be used. But if the attacker lives under the protection of a not-so-like-minded country (and avoids foreign travel19), the question is less criminal than political. Effective action pivots on questions such as how much certainty one needs to act on what one knows and what happens if the purported attacker (or its protectors) counterretaliates.

If the United States labels the attack as a crime and seeks prosecution, it would entail, in this scenario, seeking evidence from the purported attacker’s country and, if perpetrators can be identified, seeking extradition. If, as is likely, the attacker’s country declines cooperation, then the basis for further hostile action by the United States would not be the apparent crime, but the continued attempts to conceal and deny, the nature of which could be made manifest.

How Bad Would It Be?

The immediate and direct damage from a major cyberattack could range from zero to tens of billions of dollars (e.g., from a broad outage of electric power). Direct casualties would likely be few, and indirect casualties may have to be inferred by guessing what would have happened if, say, emergency 911 service had not been taken down. In this chapter’s scenario, total damage would likely be less than $1 billion.

Indirect effects may be larger if a cyberattack causes a great loss of confidence — in the banking system, for example, which could trigger a recession. But it is a stretch to argue that even a cyberattack that stopped the banking system completely (much less the sort that merely prevented 24–7 access to a bank’s website) would damage customers’ confidence that their bank accounts would maintain their integrity. NASDAQ’s three-hour shutdown on August 22, 2013, for example, did not spark a wave of selling.20 It would take data corruption (e.g., depositors’ accounts being zeroed out), rather than temporary disruption, before an attack would likely cause depositors to question whether their deposits are safe.

Is corruption that easy to carry out, however? To put the question another way, what kind of technique would allow hackers to reduce a depositor’s account balance without allowing them to increase the balance of another depositor — such as themselves? If that transfer were possible, why don’t more such state‐​sponsored hackers go into business for themselves?

So although one hesitates to say that a major cyberattack can never ever be as catastrophic as the 9/11 attacks (or natural events such as Hurricane Katrina or Superstorm Sandy for that matter), the world has been living with the threat from cyberspace for nearly a quarter century, and nothing remotely close to such destruction has taken place.

The ultimate damage, however, may well depend on the extent of the reaction from the United States. A major cyberattack on the United States could conceivably lead to restrictions on the use of the Internet, which could limit the development of e-commerce or suppress innovation by leading information technology firms (just as export restrictions imposed for national security reasons harmed the leadership position of the U.S. satellite industry). Or a reaction could sharply reduce the amount of privacy that U.S. citizens (think they) enjoy on the Internet. Therein lies the key question. A minor cyberattack is more like a crime than a national security event. Should even a major cyberattack be considered a national security event, one capable of calling forth an enormous many-tentacled national security creature as a response? After World War II, American politics exhibited a profound tendency to react more energetically to challenges that could be subsumed within the national security umbrella. Note the 1956 passage of the National Interstate and Defense Highways Act, or the way the Soviet Union’s 1957 Sputnik launch intensified federal interest in local education (and led to the adoption of new math). That tendency waxes and wanes: President Carter’s attempt to persuade Congress to put the nation’s energy affairs in order by calling the effort the “moral equivalent of war” fared less well.

Then came 9/11, which was immediately considered an act of war rather than an enormous crime. The United States reacted to the deaths of 3,000 and damage of roughly $100 billion by waging two wars, which killed more than 6,000 Americans, wounded tens of thousands more, and cost at least $1.5 trillion (depending on how postconflict costs are counted). In part, the disproportionate response reflects the failure to make the distinction between national security and personal security. The only way that jihadist terrorists could conceivably have threatened the sovereignty or the Constitution of the United States was to have sparked state‐​ending sectarian violence between enraged opponents and defensive supporters — something that Iraq suffered in 2006–2007 and may possibly have restarted in 2013. There was no prospect of that in the United States. Rather, the government assumed responsibility for preventing the deaths of Americans from further attacks (reflecting a very high ratio of funds spent to lives saved). Clearly, vengeance was another impetus for action, but if one puts that all‐​too‐​human motive aside, one must, therefore, ask whether going to war twice in the Islamic world was the most cost‐​effective way of preventing future incidents of large‐​scale death (particularly once U.S. airlines finished installing hardened cabin doors, thereby making it nearly impossible to take over a plane and crash it into a building).21

Compared with terrorism involving conventional explosives, the death and destruction from cyberattacks is likely to be several orders of magnitude lower; in that respect, 9/11 was an outlier among terrorist attacks, with the March 11, 2004, Madrid attacks or the July 7, 2005, London attacks being more typical. It is by no means clear what the worst plausible disaster emanating from cyberspace might be (it is far clearer that it would not come from Iran, whose skills at cyberwarfare likely pale in comparison with China’s, much less Russia’s). Doomsayers argue that a coordinated attack on the national power grid that resulted in the loss of electric power for years would lead to widespread death from disease (absent refrigeration of medications) and starvation (the preelectrified farm sector was far less productive than today’s). But even if their characterization of the importance of electricity were not exaggerated (it is), killing electric power for that long requires breaking equipment with lengthy repair times (e.g., transformers, few of which are made in the United States).

Can cyberattacks have physical effects? Stuxnet would suggest they can; but more than three years have elapsed since it was revealed, and there has yet to be a Stuxnet II. It seems more and more to have been an exceptional event. In essence, if the United States and Israel carried out the attack, then Iran, a country without a great depth of experience in running complex industrial operations, was up against two first-rate cybersavvy countries. One can see what a tempting target Natanz was: crippling it could increase the security of both Israel and the United States. It is hard to think of many other targets anywhere in the world whose destruction would so clearly benefit the national security of a potential cyberattacker.

Stuxnet featured four zero-day exploits: exploits that take advantage of vulnerabilities in software that the software provider has not patched, frequently because the software provider is unaware they exist (whereas the hackers are quite aware they exist). Four zero days is three more than even the most sophisticated cyberattacks have. Iran was getting almost no help in operating and maintaining Natanz (against such hazards as cyberattacks, for instance). Natanz had no active Internet connection (and thus no obvious way to become infected), and no one had yet broken anything with a real-world cyberattack.22 The Iranians could have easily ascribed the self-destruction of their centrifuges to many causes other than a cyberattack (such as the fact that Iran was getting parts of questionable reliability from channels of questionable legality), thereby allowing the malware to do its job over and over. As it is, Stuxnet destroyed few if any centrifuges that were already on the floor and completely programmed when the malware appeared.23 It affected only those centrifuges that were being newly programmed (or perhaps reprogrammed) before entering (or perhaps reentering) their cascades. Today, the four zero-day vulnerabilities have long been patched, the possibility that cyberattacks could destroy machinery has been established, and the notion that air gapping suffices to protect against all cyberattacks has been refuted — to that extent, machine-control networks have been immunized against future attacks. The extrapolation from Stuxnet to the destruction of the North American grid remains a much longer leap.

If, therefore, the death count is zero, or at least low by terrorism standards, there will be few dramatic videos, and the popular cry for vengeance is likely to be muted. Some will direct their ire at the institutions whose fecklessness about cybersecurity allowed such an attack to happen. One would hope that some share of the policy community will frame the response question as this: what is the best way of preventing future attacks? If so, it is by no means clear that retaliation would come out on top, particularly if policymakers recognize that retaliation creates pressure for counterretaliation, as history suggests it might.

First, consider retaliation limited solely to cyberspace. Such retaliation could easily lead to a series of escalated cyberattacks on both sides.24 It is unclear who might cry uncle first. The United States — its economy as well as its society — is far more dependent on the unimpeded use of networks than Iran is. Indeed, Iran may well use the risk of hostile activity on the Internet as the clinching argument in its campaign to create a separate national (referred to as a “halal”) Internet — which might have little effect on the U.S. ability to insert malware into Iran’s electronic networks, but might have a substantial effect on the U.S. ability to insert uncomfortable memes into Iran’s social networks.

Conversely, the U.S. capacity for cyberwarfare is unmatched, and the dependence of other countries on U.S. companies for software, systems administration, and cybersecurity assistance is even greater. All told, the unknowns exceed the knowns in making any such predictions. Were the attackers from North Korea (which has essentially no connectivity to the outside world from its networks) or were they nonstate actors without infrastructure, then the ability of the United States to impress them using retaliatory cyberattacks would be considerably reduced.

However, the retaliation may catch the attention of the supposed attacker, and not necessarily in a good way, particularly if the supposed attacker is not the real attacker. In some ways, the reaction cycle in response to a cyberattack may be more destabilizing than it is for a kinetic attack. Bear in mind that because combat in cyberspace is presumed to take place in nanoseconds, few pause to ask whether speed is, in fact, necessary. Now, couple that with the current notion of “active defense” (admittedly a term made popular because there is little agreement about what it means). At one end are actions that can be deemed fairly legitimate: for example, honeypots to capture the actions of the attacker, files with misleading information to fool those who steal them, and the use of intelligence tools to anticipate what attackers do.

At the other end are far more questionable actions, notably attacks on the networks through which cyberattacks are commanded and controlled. We would like to believe that such cyberwarfare command‐​and‐​control networks can be clearly distinguished from afar from command‐​and‐​control networks for kinetic combat or from similar networks that operate civilian infrastructures. Similarly, we would like to believe that attribution is always correct and that what looks in the first few seconds like a cyberattack (rather than, say, a software glitch or penetration for the purposes of espionage) is, in fact, a cyberattack. But would both always be true? And if they may be false, how stabilizing would it be for the determination of such matters and thus the choice of response mechanisms — which would not necessarily write a conclusion to the confrontation — to be in the hands of an organization with a vested interest in the importance of cyberwarfare, whether offensively or defensively?

What Starts in Cyberspace May Not Stay in Cyberspace

The U.S. reaction might, therefore, not be confined to cyberspace — and that’s where things get even more interesting and not necessarily in a good way. The military has adopted cyberspace as a fifth domain of warfare — a medium subject to military operations25 in much the same way that warfare can take place in the other media: land, sea, air, and outer space (in theory). Just as no serious military officer believes that the United States is not allowed to respond, say, to a ground invasion by using air power, most officers believe that the United States should be allowed to respond to a sufficiently damaging invasion of cyberspace by using kinetic combat. As colorfully described in mid‐​2011 by an anonymous military officer, “If you shut down our power grid, maybe we will put a missile down one of your smokestacks.“26 The International Strategy for Cyberspace declares (albeit not without the expected caveats): “When warranted, the United States will respond to hostile acts in cyberspace as we would to any other threat to our country. All states possess an inherent right to self‐​defense. We reserve the right to use all necessary means: diplomatic, informational, military, and economic.“27

It is the firm and oft-repeated position of the U.S. government that the laws of armed conflict apply to cyberspace just as they do to the physical world. If the thrust of the argument is that what is not allowed in the physical world should not be allowed in cyberspace, then the overall effect (if such laws are universally obeyed) is to restrict cyberwarfare rather than enable it. But that which is not specifically enjoined is allowed. That is particularly the case when so many response options are legally problematic, making less optimal or unwise options the only survivors of the policy process. Thus, that formulation has a dark side: countries are allowed to respond to events in cyberspace as if they were similarly damaging events taking place in the kinetic world.

The most thorough analysis of how the laws of armed conflict might apply in cyberspace has been provided by The Tallinn Manual on the International Law Applicable to Cyber Warfare.28 According to the manual, distinctions must be made between military and dual-use targets (e.g., power grids that supply military bases) on the one hand and purely civilian targets on the other. An attack on one does not justify a response by attacking the other. However, an attack of comparable consequence using cybermeans may be answered with an attack of comparable consequence using kinetic means. Using dollars as a simple-minded measure of consequence, we might, therefore, conclude that a cyberattack that scrambles the logistics information that one military depends on, leaves it sufficiently unready for war, and costs, say, a billion dollars to repair may be answered by a kinetic attack on the power plant of another country that levies similar costs (assume, for the sake of simplicity, no casualties). If we set aside the moral quandaries of such an exchange, important and troubling strategic ramifications are at issue.

The most important justification for thinking about Las Vegas rules for cyberspace — what starts there stays there — is that the risks of escalation from cyberwarfare are much lower than the risks of escalation from kinetic (violent) warfare. In the short run, an unprepared country can suffer significant losses of information from a cyberwar. Against a sufficiently sadistic and determined adversary, it is conceivable that every infected personal computer can have the information on its hard drive erased (as noted with Aramco and echoed in South Korea). But a short‐​run response to a withering cyberattack that focuses on backing up information and capabilities while eliminating unessential connectivity (electric grids need not be connected to the Internet; they ran well enough before the Internet was invented) could preserve most of a country’s functionality at the cost of annoyance.

In the longer run, cyberattacks are enabled by vulnerabilities in software and architectural features of computer design that allow instruction sets to be altered. By contrast, the instruction sets of most equipment are (or, until very recently, were) fixed when the equipment leaves the factory. A computer that burned all of its instructions — operating systems, office automation, Web browsing — into its hardware would be hardened against malware. Although malware does not account for all security breaches (e.g., South Carolina's exposure of all its tax records to hackers),29 it is very much harder to cause serious damage without it. Ultimately, systems are only as vulnerable as we want them to be; more accurately, only up to the level where the inconvenience of restraining their malleability and accessibility matches the risks of retaining that very malleability and accessibility. For that reason, a tit for tat in cyberspace can escalate to very high levels without creating unlimited damage. The difficulty of finding obvious firebreaks in cyberspace — the point beyond which no cyberattack on either side would go — is unfortunate but not necessarily fatal. To wit, an all-out cyberwar can be contained by the nature of cyberspace itself.

The same insouciance does not work very well when applied to kinetic warfare. There, millions can be killed even if a firebreak between conventional weapons and nuclear weapons is observed. There, the paucity of obvious firebreaks between, say, a state-sponsored terrorist attack or a cross-border raid on the one hand and total urban devastation on the other is a far more serious matter. Thus, if the United States responds to a cyberattack using physical force and the target of kinetic retaliation itself wants to counterretaliate using physical force, there may be no obvious place at which mutual damage can be limited by mutual restraint. Granted, Iran's capacity to hurt the U.S. homeland, even with its terrorist friends, is well below the U.S. ability to hurt Iran's homeland, but recent wars in the Middle East have not proved costless to the United States. And the precedent of responding with violence to something in cyberspace may not necessarily be restricted to the Middle East. Perhaps needless to add, escalating from a cyberattack to a kinetic attack makes the transition to nuclear warfare more, not less, likely.

Defeat the Attacker’s Strategy and Minimize the Likelihood of Future Attacks

There are two framing principles that U.S. officials should bear in mind once they recognize a major cyberattack.

First, the United States must defeat the attacker's strategy, insofar as it can determine what that strategy is — yet another reason to proceed deliberately in the wake of a cyberattack. If the attacker wants to deter the United States, then the duty of the United States is not to be deterred from acting. If the attacker, conversely, wants to goad the United States into a foolish reaction, then the duty of the United States is to avoid that temptation. Both seemingly contradictory aims are met if the United States downplays the incident, thereby signaling that it would take a much larger attack to deflect U.S. policy. That approach means abjuring opportunities to give teeth to a deterrence policy (which is a good reason not to enunciate one in the first place), as well as allowing opportunities to make minor concessions that might forestall the attacker's costly attacks.

Second, steps should be taken to minimize the likelihood of future attacks, whether they are already loaded (such as malware awaiting orders) or need to be engineered afresh, and whether they are being prepared by the original attacker or by a third party watching the U.S. reaction. Hitting back — by generating blackouts, dropping communications, or penetrating into and revealing the files of the attacker's government — may achieve just that, but it also requires a high degree of certainty about who the attacker is and from whom the attacker was receiving direction. If a retaliatory strike by the United States were acknowledged by third parties as having correctly identified the targets and as being proportional, that acknowledgment should dissuade further attacks by the original attacker and signal to third parties that cyberattacks on the United States are unwise. Conversely, if the target of retaliation wants to maintain its innocence, it cannot concede that the retaliation was justified. That stance creates the motive on its part to treat U.S. retaliation as unwarranted aggression, which then merits counterretaliation to restore the moral equilibrium. Others may convince themselves that the attacker was hit not because of its guilt, but because of its opposition to the United States. With the United States portrayed as a bully, the lesson that cyberattacks against the country are unwise will be significantly diluted (if the innocent suffer, why bother being innocent?).

Determining whether a cyberattack is an act of war is not drawing a conclusion but making a decision. Cyberwars are wars of choice. Victims of a cyberattack do not face imminent destruction. They have the opportunity to ask this question: what is the most cost‐​effective way to minimize such future suffering? Depending on circumstances, the way may or may not be war. Thus, it is important to understand what the United States hopes to gain from making the attackers cease their attack. Is it worth risking physical war with Iran in order to reduce the odds that Iran could, from time to time, repeat the attacks of this scenario?

Recognizing that a cyber 9/11 cannot be ruled out, the next set of recommendations aims to ensure that the 9/12 that follows does not lead to a worse outcome.

  • The United States should avoid reacting too hastily out of fear that hesitation will lead to disaster. Computers may work in nanoseconds, but the target of any response is not the computer — in large part because, even if the attacker's computer is destroyed, a substitute may be close at hand. The true target of a response is those who command cyberwarriors. People, however, do not work in nanoseconds; persuasion and dissuasion in cyberwar, or any other form of war, take roughly the same amount of time. Among the factors worth considering are (a) the ease with which the evidence of attribution can be conveyed, (b) what else is going on at the time (especially if someone is using force), (c) how the attack and the response, or lack thereof, are being viewed overseas, (d) the odds that the vulnerability that allowed the class of attacks can be addressed, and (e) the odds that private reaction to such an attack will be harmful rather than helpful.
  • The U.S. government ought not to take possession of the crisis unnecessarily — or at least do so only on its own terms. Otherwise, the federal government risks backing itself into a corner where it has no choice but to respond, regardless of whether doing so is wise.
  • If the crisis does not resolve itself quickly, the urge toward escalation should be tempered. On the one hand, the victim will react to what was done to it (or what it thinks was done to it). On the other, using tit‐​for‐​tat measures in cyberspace to modulate the other side’s escalation can have uncertain results.

Turning every misfortune that foreigners can inflict on the United States into a national security problem is both lazy and dangerous, particularly so for a means of attack that gives no reasonable prospect of being as damaging as the 9/11 attacks.

Notes

1 This essay is an expansion of material published by the Council on Foreign Relations, http://www.foreignaffairs.com/articles/139819/martin-c-libicki/dont-buy…. Reprinted and adapted by permission of Foreign Affairs. Copyright 2013 by the Council on Foreign Relations Inc., http://www.foreignaffairs.com.

2 For example, a Washington Post poll taken in May 2012 found that 51 percent of respondents were very or fairly concerned that U.S. government computers could be targeted by a major cyberattack. “Public Concern over Cyber‐​Attacks,” Washington Post, June 6, 2012.

3 James R. Clapper, “Worldwide Threat Assessment of the US Intelligence Community,” Statement before the Senate Select Committee on Intelligence, 113th Cong., 1st sess., March 12, 2013, http://www.dni.gov/files/documents/Intelligence%20Reports/2013%20ATA%20….

4 Dara Kerr, “‘Cyber 9/11’ May Be on Horizon, Homeland Security Chief Warns,” CNET, January 24, 2013, http://news.cnet.com/8301-1009_3-57565763-83/cyber-9-11-may-be-on-horiz….

5 Alicia Budich, “FBI: Cyber Threat Might Surpass Terror Threat,” CBS News, Face the Nation, February 2, 2012, http://www.cbsnews.com/8301-3460_162-57370682/fbi-cyber-threat-might-su….

6 Viola Gienger, “U.S. Military Aid to Overseas Allies May Face Cuts, Mullen Says,” Bloomberg News, June 14, 2011, http://www.bloomberg.com/news/2011-06-13/pentagon-aid-to-foreign-milita….

7 Sean Gallagher, “Security Pros Predict ‘Major’ Cyber Terror Attack This Year,” Ars Technica, January 4, 2013, http://arstechnica.com/security/2013/01/security-pros-predict-major-cyb….

8 Siobhan Gorman, August Cole, and Yochi Dreazen, “Computer Spies Breach Fighter-Jet Project,” Wall Street Journal, April 21, 2009.

9 Hackers purportedly knocked the electrical system of a southern Brazilian city offline for several days. That claim (without details) was made by the Central Intelligence Agency's Tom Donahue, “CIA Admits Cyberattacks Blacked Out Cities,” InformationWeek, January 18, 2008, http://www.informationweek.com/cia-admits-cyberattacksblackedoutciti/20…, and was broadcast (with details) by the CBS news show 60 Minutes, http://www.cbsnews.com/8301-18560_162-5555565.html. Brazilian investigators refuted that claim; they blamed sooty insulators instead. Marcelo Soares, “Brazilian Blackout Traced to Sooty Insulators, Not Hackers,” Wired, November 9, 2009, http://www.wired.com/threatlevel/2009/11/brazil_blackout/.

10 Security analysts often argue that the number of incidents is grossly understated. Many penetrations are never discovered (particularly if the hacker is interested only in stealing data). Furthermore, businesses, they claim, underreport cyberattacks to preserve their reputation of being secure.

11 An interim agreement with Iran in late 2013 suggests that Iran is now taking those concerns more seriously, thereby reducing the odds (or at least the intensity) of a confrontation between Iran and the United States. Hence, the danger of cyberattacks as described in this essay is putatively lower. However, the broader lessons of this essay about the treatment of cyberattacks remain no less valid.

12 Nicole Perlroth, “In Cyberattack on Saudi Firm, U.S. Sees Iran Firing Back,” New York Times, October 23, 2012.

13 “US Officials: Cyberattacks on Aramco, RasGas May Have Come from Iran,” Doha News, October 14, 2012, http://dohanews.co/post/33562748342/us-officials-cyberattacks-on-aramco….

14 Quentin Hardy, “Bank Hacking Was the Work of Iranians, Officials Say,” New York Times, January 8, 2013.

15 Siobhan Gorman and Danny Yadron, “Iran Hacks Energy Firms, U.S. Says,” Wall Street Journal, May 23, 2013.

16 Leon E. Panetta, “Remarks by Secretary Panetta on Cybersecurity to the Business Executives for National Security, New York City,” October 11, 2012, U.S. Department of Defense, http://www.defense.gov/transcripts/transcript.aspx?transcriptid=5136.

17 Stewart A. Baker, Partner, Steptoe & Johnson LLP, “The Attribution Revolution: Raising the Costs for Hackers and Their Customers,” Statement before the Subcommittee on Crime and Terrorism of the Senate Judiciary Committee, 113th Cong., 1st sess., May 8, 2013, http://www.judiciary.senate.gov/pdf/5-8-13BakerTestimony.pdf.

18 Alan Grayson, “On Syria Vote, Trust, but Verify,” New York Times, September 6, 2013.

19 In September 2013, Vladimir Putin issued a travel warning advising Russians whom the United States might indict (for hacking, among other crimes) to avoid countries from which the United States could extradite them. Mark Johanson, “Russia Issues Travel Warning about US, Citing Threat of ‘Kidnapping,’” International Business Times, September 3, 2013, http://www.ibtimes.com/russia-issues-travel-warning-about-us-citing-thr….

20 Matt Hunter, “NASDAQ: ‘Connectivity Issue’ Led to Three-Hour Shutdown,” CNBC, August 22, 2013, http://www.cnbc.com/id/100968086.

21 See, for instance, Benjamin H. Friedman, Jim Harper, and Christopher A. Preble, eds., Terrorizing Ourselves: Why U.S. Counterterrorism Policy Is Failing and How to Fix It (Washington: Cato Institute, 2010); John Mueller, Overblown: How Politicians and the Terrorism Industry Inflate National Security Threats, and Why We Believe Them (New York: Free Press, 2006); and many writings of Bruce Schneier, Schneier on Security (blog), http://www.schneier.com.

22 In early 2007, the Idaho National Laboratory demonstrated that a set of malformed instructions could cause an untended electric generator (similar to those running the trans-Alaska pipeline) to destroy itself. Jeanne Meserve, “Sources: Staged Cyber Attack Reveals Vulnerability in Power Grid,” CNN, September 26, 2007, http://www.cnn.com/2007/US/09/26/power.at.risk/.

23 That inference was drawn from Jon R. Lindsay, “Stuxnet and the Limits of Cyber Warfare,” Security Studies 22, no. 3 (2013): 365–404. Furthermore, International Atomic Energy Agency inspections report no change in status to the most productive cascades at Natanz (module A24), while the 1,000 centrifuges observed disconnected were from cascades under construction (modules A26 and A28), running under vacuum but not filled with uranium hexafluoride gas (p. 390).

24 Note the heightened activity in cyberspace ascribed to the Syrian Electronic Army as the West contemplated a response to Syria’s chemical attack.

25 Department of Defense Strategy for Operating in Cyberspace, July 2011, http://www.defense.gov/news/d20110714cyber.pdf.

26 Julian Barnes and Siobhan Gorman, “Cyber Combat: Act of War,” Wall Street Journal, May 31, 2011.

27 International Strategy for Cyberspace: Prosperity, Security, and Openness in a Networked World (Washington: White House, 2011), p. 14.

28 Under the sponsorship of the NATO Cooperative Cyber Defence Centre of Excellence, http://www.ccdcoe.org/249.html.

29 Robbie Brown, “South Carolina Offers Details of Data Theft and Warns It Could Happen Elsewhere,” New York Times, November 20, 2012.