
John Mueller Joins Cato

I am pleased to announce that John Mueller, a leading scholar in the fields of political science, international relations, and national security, has joined the Cato Institute as a senior fellow.

All of us at Cato are very excited to have John as a colleague. Over the last decade as a professor of political science and as the Woody Hayes Chair of National Security Studies at the Ohio State University’s Mershon Center for International Security Studies, John has taken on the conventional wisdom in the national security arena with a rare combination of accessible, breezy prose and meticulous cost-benefit analysis. In particular, he has focused on how policymakers inflate national security threats at home and abroad.

His newest book, Terror, Security, and Money, which he presented at a recent Cato forum, examines whether the gains in security over the past decade were worth the funds expended. For the vast majority of U.S. homeland security and counterterrorism policies, John and his co-author, Mark Stewart, resoundingly conclude “no.”

As a member of the Cato Institute, John will contribute to our multitude of programs and publications while furthering his work on the subjects of security, defense, and U.S. foreign policy. Cato is fortunate to have such a brilliant scholar join its staff.

For more Cato Institute work on foreign policy and national security, go here.

Bathtubs, Terrorists, and Overreaction

I dislike our national obsession with anniversaries and tendency to convert solemn occasions into maudlin ones; to fetishize perceived collective victimization rather than simply recognizing real victims. That kept me from joining in the outpouring of September 11 reflection, now mercifully receding. But I have reflections on the reflections.

The anniversary commentary has, happily, included widespread consideration of the notion that we overreacted to the attacks and did al Qaeda a favor by overestimating its power and making it easier for the group to terrorize. Even the Wall Street Journal allowed some of the bigwigs it invited to weigh in on whether we overreacted to say, “yes, sort of.”

Unsurprisingly, however, the Journal’s contributors, like almost every other commentator out there, did not define overreaction. It’s easy and correct to say we’ve wasted dollars and lives in response to September 11 but harder to answer the question of how much counterterrorism is too much. So this post explains how to do that, and then considers common objections to the answer.

That answer has to start with cost-benefit analysis. As I put it in my essay in Terrorizing Ourselves, a government overreaction to danger is a policy that fails cost-benefit analysis and thus does more harm than good. But when we speak of harm and good, we have to leave room for goods, like our sense of justice, that are harder to quantify.

Cost-benefit analysis of counterterrorism policies requires first knowing what a policy costs, then estimating how many people terrorists would kill absent that policy, which can involve historical and cross-national comparisons, and finally converting those costs and benefits into a common metric, usually money. Having done that analysis, you have a cost per life saved for each policy, which can be thought of as the value the policy assigns to a statistical life—the price we have decided to pay to save a life from the harm the policy aims to prevent.

Then you need to know if that price is too high. One way to do so, preferred by economists, is to compare the policy’s life value to the value that the target population uses in their life choices (insurance purchases, salary for hazardous work, and so on). These days, in the United States, a standard range for the value of a statistical life is four to eleven million dollars. If a policy costs more per life saved than that, the market value of a statistical life, then the government could probably produce more longevity by changing or ending the policy. A related concept is risk-risk or health-health analysis, which says that at some cost, a policy will cost more lives than it saves by destroying wealth used for health care and other welfare-enhancing activities. One calculation of that cost, from 2000, is $15 million.
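The test described in the last two paragraphs reduces to simple arithmetic. Here is a minimal sketch; the policy figures are hypothetical placeholders, and only the $4 million to $11 million statistical-life range comes from the text:

```python
# Illustrative sketch of the cost-per-life-saved test described above.
# The policy's cost and lives-saved figures are hypothetical.

def cost_per_life_saved(annual_cost, lives_saved_per_year):
    """Dollars spent per statistical life the policy saves."""
    return annual_cost / lives_saved_per_year

# Standard U.S. range for the value of a statistical life (VSL), per the text.
VSL_LOW, VSL_HIGH = 4e6, 11e6

# Hypothetical policy: $5 billion a year, estimated to save 50 lives a year.
cpl = cost_per_life_saved(5e9, 50)  # $100 million per life saved

if cpl > VSL_HIGH:
    verdict = "fails cost-benefit analysis (pays far above market VSL)"
elif cpl < VSL_LOW:
    verdict = "passes comfortably"
else:
    verdict = "within the standard VSL range"

print(f"${cpl:,.0f} per life saved -> {verdict}")
```

On these invented numbers the policy pays roughly ten times the high end of the market value of a statistical life, which is the kind of result that signals overreaction.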

In a new book, Terror, Security, and Money: Balancing the Risks, Benefits, and Costs of Homeland Security,* John Mueller and Mark Stewart use this approach to analyze U.S. counterterrorism’s cost-effectiveness, generating a range of estimates for lives saved for various counterterrorism activities. I haven’t yet read the published book, but in articles that form its basis, they found that most counterterrorism policies, and homeland security spending overall, cost vastly more per life saved than what regulatory scholars consider cost-effective.

That is a strong indication that we are overreacting to terrorism. It is not the end of the necessary analysis, however, since it leaves open the possibility that counterterrorism has benefits beyond safety that justify its costs. More on that below.

Objections to this mode of analysis have four varieties. First, people have a visceral objection to valuing human life in dollars. But as I just tried to explain, policies themselves make such valuations, trading lives lost in one way for lives lost in another. So this objection amounts to an unconvincing plea to keep such tradeoffs secret and make policy in the dark.

Second, people challenge the benefit side of the ledger by arguing that terrorists are actually far more dangerous than the data suggest. Analysts say that weapons of mass destruction mean that future terrorists will kill far more than past ones. One response is that you should be suspicious anytime someone tells you that history is no guide to the present. It tends to be the best guide we have, for terrorism and everything else. Our analysis of terrorists’ danger should acknowledge that the last ten years included no mass terrorism, contrary to so many predictions. Another response is that one can, as Mueller and Stewart have, include high-end guesses of possible lives saved to show the upper bound of what counterterrorism must accomplish to be worthwhile. The results tend to be so far-fetched that they demonstrate how excessive these policies are.

A third objection is to claim that some counterterrorism costs are actually terrorism’s costs. Government should spend heavily to avoid terrorism, this logic says, because our reaction to the attacks we would otherwise fail to prevent will cost far more. In other words, if an expensive overreaction is inevitable, it helps justify the seemingly excessive up-front cost of defenses.

One problem with this objection is that it approaches tautology by treating a policy’s cost as its own justification. See, for example, Atlantic writer Jeffrey Goldberg’s recent response to John Mueller’s observation in the Los Angeles Times that more people die annually worldwide from bathtub drowning than terrorism and the article’s suggestion that we might therefore be overreacting to the latter. Goldberg argues, essentially, that we have to overreact to terrorism lest we overreact to terrorism. Then, after his colleague James Fallows points out the logical trouble, Goldberg, without admitting error, switches to argument two above, while failing to acknowledge, let alone respond to, Mueller’s several books and small library of articles shooting that argument down.

Another problem with the inevitable-overreaction argument is that overreaction may occur only after rare, shocking events like September 11. Future attacks might be accepted without strong demand for more expensive defenses. Moreover, the defenses might do little to prevent either the attacks or the overreaction that follows them.

The best objection to Mueller and Stewart’s brand of analysis is to point out counterterrorism’s non-safety benefits. The claim here is that terrorism is not just a source of mortality or economic harm, like carcinogens or storms, but political coercion that offends our values and implicates government’s most traditional function. Defenses against human, political dangers provide deterrence and a sense of justice. These benefits may be impossible to quantify, and they may justify otherwise excessive counterterrorism costs.

I suspect that Mueller and Stewart would agree that this argument is right except for the last sentence. Its logic serves any policy said to combat terrorism, no matter how expansive and misguided. We may want to pay a premium for our senses of justice and security, but we need cost-benefit analysis to tell us how large that premium now is. Nor should we assume that policies justified by moral or psychological ends actually deliver the goods. Were it the case that our counterterrorism policies greatly reduced public fear and blunted terrorists’ political strategy, they might indeed be worthwhile. But something closer to the opposite appears to be true. Al Qaeda wants overreaction—bragging of bankrupting the United States—and our counterterrorism policies seem as likely to cause alarm as to prevent it.

*Mueller and Stewart will discuss their book at a Cato book forum on October 24. Stay tuned for signup information.

(Cross-posted from TNI’s The Skeptics.)

Making Sense of New TSA Procedures

Since they were announced recently, I’ve been working to make sense of new security procedures that TSA is applying to flights coming into the U.S.

“These new measures utilize real-time, threat-based intelligence along with multiple, random layers of security, both seen and unseen, to more effectively mitigate evolving terrorist threats,” says Secretary Napolitano.

That reveals essentially nothing of what they are, of course. Indeed, “For security reasons, the specific details of the directives are not public.”

But we in the public aren’t so many potted plants. We need to know what they are, both because our freedoms are at stake and because our tax money will be spent on these measures.

Let’s start at the beginning, with identity-based screening and watch-listing in general. A recent report in the New York Times sums it up nicely:

The watch list is actually a succession of lists, beginning with the Terrorist Identities Datamart Environment, or TIDE, a centralized database of potential suspects.  … [A]bout 10,000 names come in daily through intelligence reports, but … a large percentage are dismissed because they are based on “some combination of circular reporting, poison pens, mistaken identities, lies and so forth.”

Analysts at the counterterrorism center then work with the Terrorist Screening Center of the F.B.I. to add names to what is called the consolidated watch list, which may have any number of consequences for those on it, like questioning by the police during a traffic stop or additional screening crossing the border. That list, in turn, has various subsets, including the no-fly list and the selectee list, which requires passengers to undergo extra screening.

The consolidated list has the names of more than 400,000 people, about 97 percent of them foreigners, while the no-fly and selectee lists have about 6,000 and 20,000, respectively.

After the December 25, 2009 attempted bombing of a Northwest Airlines flight from Amsterdam into Detroit, TSA quickly established, then quickly lifted, an oppressive set of rules for travelers, including bans on blankets and on moving about the cabin during the latter stages of flights. In the day or two after a new attempt, security excesses of this kind are forgivable.

But TSA also established identity-based security rules of similar provenance and greater persistence, subjecting people from fourteen countries, most of them Muslim-majority, to special security screening. This was a ham-handed reaction, increasing security against the unlikely follow-on attacker by a tiny margin while driving wedges between the U.S. and people well positioned to help our security efforts.

Former DHS official Stewart Baker recently discussed the change to this policy on the Volokh Conspiracy blog:

The 14-country approach wasn’t a long-term solution.  So some time in January or February, with little fanfare, TSA seems to have begun doing something much more significant.  It borrowed a page from the Customs and Border Protection playbook, looking at all passengers on a flight, running intelligence checks on all of them, and then telling the airlines to give extra screening to the ones that looked risky.

Mark Ambinder lauded the new policy on the Atlantic blog, describing it thusly:

The new policy, for the first time, makes use of actual, vetted intelligence. In addition to the existing names on the “No Fly” and “Selectee” lists, the government will now provide unclassified descriptive information to domestic and international airlines and to foreign governments on a near-real time basis.

The change likely is, or closely resembles, the application of a Customs and Border Protection program called ATS-P (Automated Targeting System - Passenger) to air travel screening.

“[ATS-P] compares Passenger Name Record (PNR) and information in [various] databases against lookouts and patterns of suspicious activity identified by analysts based upon past investigations and intelligence,” says this Congressional Research Service report.

“It was through application of the ATS-P that CBP officers at the National Targeting Center selected Umar Farouk Abdulmutallab, who attempted to detonate an explosive device on board Northwest Flight 253 on December 25, 2009, for further questioning upon his arrival at the Detroit Metropolitan Wayne County Airport.”

Is using ATS-P or something like it an improvement in the way airline security is being done? It probably is.

A watch list works by comparing the names of travelers to the names of people that intelligence has deemed concerning. To simplify, the logic looks something like this:

If first name = "Cat" (or variants) and last name = "Stevens", then *flag!*

Using intelligence directly just broadens the identifiers you use, so the comparison (again simplified) might look something like this:

If biography contains "traveled in Yemen" or "Nigerian student" or "consorted with extremists", then *flag!*

The ability to flag a potential terrorist with identifiers beyond name is a potential improvement. Such a screening system would be more flexible than one that used purely name-based matching. But using more identifiers isn’t automatically better.
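To make the contrast concrete, here is a toy Python sketch of the two matching styles; the field names and rules are invented for illustration and are not drawn from any actual screening system:

```python
# Toy contrast between name-based and attribute-based flagging.
# Field names and rules are invented for illustration only.

def name_match(traveler):
    # Watch-list style: exact comparison against listed names.
    return traveler.get("first") == "Cat" and traveler.get("last") == "Stevens"

def attribute_match(traveler):
    # Intelligence-derived style: match on broader identifiers.
    suspect_traits = {"traveled in Yemen", "Nigerian student",
                      "consorted with extremists"}
    return bool(suspect_traits & set(traveler.get("biography", [])))

traveler = {"first": "John", "last": "Doe",
            "biography": ["traveled in Yemen"]}
print(name_match(traveler), attribute_match(traveler))  # False True
```

The second style catches travelers the first misses, which is exactly why it also sweeps in many more innocents: each added identifier widens the net in both directions.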

The goal—unchanged—is to minimize both false positives and false negatives—that is, people flagged as potential terrorists who are not terrorists, and people not flagged as terrorists who are terrorists. A certain number of false positives are acceptable if that avoids false negatives, but a huge number of false positives will just waste resources relative to the margin of security the screening system creates. Given the overall paucity of terrorists—which is otherwise a good thing—it’s very easy to waste resources on screening.

I used the name “Cat Stevens” above because it’s one of several well known examples of logic that caused false positives. Utterly simplistic identifiers like “traveled in Yemen” will also increase false positives dramatically. More subtle combinations of identifiers and logic can do better. The questions are how far they increase false positives, and whether the logic is built on enough information to produce true positives.
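A base-rate sketch shows how the paucity of terrorists drives the false-positive problem; every number here is hypothetical, chosen only to illustrate the arithmetic:

```python
# Base-rate sketch: even an unrealistically accurate screen produces
# almost entirely false positives when true targets are vanishingly rare.
# All numbers are hypothetical.

travelers = 700_000_000       # annual screenings, say
terrorists = 10               # actual attackers in the pool
hit_rate = 0.99               # P(flagged | terrorist)
false_positive_rate = 0.001   # P(flagged | non-terrorist): very optimistic

true_flags = terrorists * hit_rate
false_flags = (travelers - terrorists) * false_positive_rate

# Share of flagged travelers who are actually terrorists.
precision = true_flags / (true_flags + false_flags)
print(f"{false_flags:,.0f} innocents flagged; "
      f"P(terrorist | flagged) = {precision:.2e}")
```

Even with a one-in-a-thousand false-positive rate, hundreds of thousands of innocents are flagged and essentially none of the flagged travelers is a terrorist, which is the resource-wasting dynamic the paragraph above describes.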

So far as we know, ATS-P never flagged a terrorist before it flagged the underwear bomber. DHS officials once tried to spin up a case in which ATS-P flagged someone who was later involved in an Iraq car bombing after being excluded from the U.S. However, I believe, as I suggested two years ago, that ATS-P flagged him as a likely visa overstayer and not as a terror threat. He may not even have been a terror threat when flagged, as some reports indicate that he turned to terrorism only after being excluded from the U.S. That makes the incident at best an example of luck rather than skill. So far as I know, nobody with knowledge of the incident has ever disputed my theory, which I think they would have done if they could.

The fact that ATS-P flagged one terrorist is poor evidence that it will “work” going forward. The program “working” in this case means that it finds true terrorists without flagging an unacceptable/overly costly number of non-terrorists.

Of course, different people are willing to accept different levels of cost to exclude terrorists from airplanes. I think I have come up with a good way to measure the benefits of screening systems like this so that costs and benefits can be compared, and the conversation can be focused.

Assume a motivated attacker that would eventually succeed. By approximating the amount of damage the attack might do and how long it would take to defeat the security measure, one can roughly estimate its value.

Say, for example, that a particular attack might cause one million dollars in damage. Delaying it for a year is worth $50,000 at a 5% interest rate. Delaying for a month an attack that would cause $10 billion in damage is worth about $42 million.
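This arithmetic is simple interest on the deferred loss. A small sketch, where the formula is my reading of the post’s examples rather than anything more rigorous:

```python
# Time-value of delaying an attack, per the post's arithmetic:
# delaying damage D for t years is worth roughly D * r * t,
# i.e., simple interest at rate r on the deferred loss.

def delay_value(damage, years, rate=0.05):
    return damage * rate * years

print(delay_value(1e6, 1))        # $1M attack delayed a year -> 50000.0
print(delay_value(10e9, 1 / 12))  # $10B delayed a month -> ~$42 million
print(delay_value(15e9, 2 / 12))  # $15B delayed two months -> $125 million
```

The same formula reproduces the screening figures later in the post: a $15 billion attack delayed two months is worth $125 million, and three months is worth $187.5 million.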

(I think it is fair to assume that any major attack will happen only once, as it will produce responses that prevent it happening twice. The devastating “commandeering” attack on air travel and infrastructure is instructive. The 9/11 attacks permanently changed the posture of air passengers toward hijackers, and subsequent hardening of cockpit doors has brought the chance of another commandeering attack very close to nil.)

A significant weakness of identity-based screening (which “intelligence-based” screening—if there’s a difference—shares) is that it is testable. A person may learn if he or she is flagged for extra scrutiny simply by traveling a few times. A person who passes through airport security six times in a two-month period and does not receive extra scrutiny can be confident enough on the seventh trip that he or she will not be specially screened. If a person does receive special scrutiny on test runs, that’s notice of being in a suspect category, so someone else should carry out a planned attack.

(“We’ll make traveling often a ground for suspicion!” might go the answer. “False positives,” my rejoinder.)

Assuming that it takes two months more than it otherwise would to recruit and clear a clean-skin terrorist, as Al Qaeda and Al Qaeda franchises have done, the dollar value of screening is $125 million. That is the amount saved (at a 5% interest rate) by delaying for one month an attack costing $15 billion (a RAND corporation estimate of the total cost of a downed airliner, public reactions included).

Let’s say that the agility of having non-name identifiers does improve screening and causes it to take three months rather than two to find a candidate who can pass through the screen. Ignoring the costs of additional false positives (though they could be very high), the value of screening rises to $187.5 million.

(There is plenty of room to push and pull on all these assumptions. I welcome comments on both the assumptions and the logic of using the time-value of delayed attacks to quantify the benefits of security programs.)

A January 2009 study entitled “Just How Much Does That Cost, Anyway? An Analysis of the Financial Costs and Benefits of the ‘No-Fly’ List” put the amount expended on “no-fly” listing up to that time at between $300 million and $966 million, with a mid-range estimate of $536 million. The study estimated yearly costs at between $51 million and $161 million, with a mid-range estimate of $89 million.

The new screening procedures, whose contours are largely speculative, may improve air security by some margin. Their additional costs are probably unknown to anyone yet as false positive rates have yet to be determined, and the system has yet to be calibrated. Under the generous assumption that this change makes it 50% harder to defeat the screening system, the value of screening rises, mitigating the ongoing loss that identity-based screening appears to bring to our overall welfare.

Hey, if you’ve read this far, you’ll probably go one or two more paragraphs…

It’s worth noting how the practice of “security by obscurity” degrades the capacity of outside critics to contribute to the improvement of homeland security programs. Keeping the contours of this system secret requires people like me to guess at what it is and how it works, so my assessment of its strengths and weaknesses is necessarily degraded. As usual, Bruce Schneier has smart things to say on security by obscurity, building on security principles generated over more than 125 years in the field of cryptography.

DHS could tell the public a great deal more about what it is doing. There is no good reason for the security bureaucracy to insist on going mano a mano against terrorism, turning away the many resources of the broader society. The margin of information the United States’ enemies might access would be more than made up for by the strength our security programs would gain from independent criticism and testing.

Obama’s Other Massachusetts Problem

Even if Democrat Martha Coakley wins 50 percent of the vote in the race to fill the late Sen. Ted Kennedy’s (ahem) term, there are other numbers emanating from Massachusetts that present a problem for President Obama’s health plan.

On Wednesday, the Cato Institute will release “The Massachusetts Health Plan: Much Pain, Little Gain,” authored by Cato adjunct scholar Aaron Yelowitz and yours truly. Our study evaluates Massachusetts’ 2006 health law, which bears a “remarkable resemblance” to the president’s plan. We use the same methodology as previous work by the Urban Institute, but ours is the first study to evaluate the effects of the Massachusetts law using Current Population Survey data for 2008 (i.e., from the 2009 March supplement).  Since I’m sure that supporters of the Massachusetts law and the Obama plan will dismiss anything from Cato as ideologically motivated hackery: Yelowitz’s empirical work is frequently cited by the Congressional Budget Office, and includes one article co-authored with MIT health economist (and Obama administration consultant) Jonathan Gruber, under whom Yelowitz studied.

Among our findings:

  • Official estimates overstate the coverage gains under the Massachusetts law by roughly 50 percent.
  • The actual coverage gains may be lower still, because uninsured Massachusetts residents appear to be concealing their lack of insurance rather than admit to breaking the law.
  • Public programs crowded out private insurance among low-income children and adults.
  • Self-reported health improved for some, but fell for others.
  • Young adults appear to be avoiding Massachusetts as a result of the law.
  • Leading estimates understate the cost of the Massachusetts law by at least one third.

When Obama campaigns for Martha Coakley, he is really campaigning for his health plan, which means he is really campaigning for the Massachusetts health plan.

He and Coakley should explain why they’re pursuing a health plan that’s not only increasingly unpopular, but also appears to have a rather high cost-benefit ratio.

(Cross-posted at Politico’s Health Care Arena.)

What Is Seen and What Is Not Seen

Two items in Tuesday’s newspapers remind us of the often unseen costs of regulation and also of the often unseen benefits of market processes. In the Wall Street Journal, Prof. Todd Zywicki examines the likely consequences of a law to limit credit card interest rates and the fees they charge to merchants:

Card issuers might also reduce the quantity and quality of credit cards by restricting credit availability and cutting back on product innovation or ancillary card benefits. This is exactly what happened when Australian regulators imposed price controls on interchange fees in 2003: Annual fees increased an average of 22% on standard credit cards and annual fees for rewards cards increased by 47%-77%. Card issuers also reduced the generosity of their reward programs by 23%. Innovation, especially in terms of improved security and identity-theft protection, was stalled. Card issuers also increased their efforts to attract higher-risk customers who generate interest and penalty fees to offset lower interchange revenues from lower-risk transactional users.

Those are the kinds of unseen costs that most of us wouldn’t anticipate (that’s why economists talk about the “unanticipated” or “unintended” consequences of action). Only after the fact were economists able to identify the specific costs of the regulation. It seemed like a good idea – limit the cost of something that consumers (voters) want. Did anyone predict the consequences? People probably predicted that annual fees would rise to compensate for the lost revenue from interchange fees. But did they anticipate a slowdown in innovation in security and identity-theft protection? Did they anticipate that card issuers would work harder to attract higher-risk customers? Such regulation impedes the workings of market processes in ways that are rarely foreseen, and so delivers results no one intended.

Meanwhile, we often observe conditions in the marketplace that don’t seem to make sense to us. So we assume something is wrong, maybe even corrupt. An article in the Washington Post, written in a sober yet hysterical style, raised the problem of “medical salesmen in the operating room.” The article was full of anecdotes about a salesman who “began his career selling hot dogs” hanging out in operating rooms and about doctors who expressed outrage. If only the reporter had thought to ask a surgeon in distant Arlington, Virginia. In a letter to the Post, Dr. Mark Domanski explains why it makes sense to have medical salesmen in the operating room:

I found David S. Hilzenrath’s Dec. 27 Business article, “The salesman in the operating room,” to be one-sided.

Of course, medical sales representatives work alongside doctors in operating rooms. As a surgeon, I always want a company rep in the operating room.

So, if you were having surgery that involved a complicated piece of equipment, wouldn’t you like somebody from the manufacturer to be there? I know I would.

Here’s why:

Remember when you tried to assemble that desk you bought from a furniture store? We all know how to use a screwdriver, but when something is off, it’s nice to know there is a number to call. What if you needed to put that desk together quickly because you needed it for something important? It would be nice if the company sent someone to make sure all the parts were there and in good order. That’s what a good rep does.

As the surgeon, I make the diagnosis and decide the treatment. No company representative tells me how to use a knife. But many products in the operating room are complex and change almost every year; they are getting better that fast.

When I am using a complex product, such as a plating system for fixing a jaw fracture, having the rep in the room ensures that the system is functional. I know all the parts will be there. I know that the right screw and plate will be handed to me at the right time.

Sometimes we call in the rep for an operation, and it turns out that the fracture does not need to be plated. No rep has ever suggested that I plate a fracture that didn’t need to be plated.

Members of Congress and activists are constantly reading articles about apparent problems and rushing off to propose legislation. These examples and countless more should remind us to think carefully before we coercively interfere in the decisions that millions, even billions, of people make every day.