
Public Information and Public Choice

One of the high points of last week’s Gov 2.0 Summit was transparency champion Carl Malamud’s speech on the history of public access to government information – ending with a clarion call for government documents, data, and deliberation to be made more freely available online. The argument is a clear slam-dunk on simple grounds of fairness and democratic accountability. If we’re going to be bound by the decisions made by regulatory agencies and courts, surely at a bare minimum we’re all entitled to know what those decisions are and how they were arrived at. But as many of the participants at the conference stressed, it’s not enough for the data to be available – it’s important that it be free, and in a machine-readable form. Here’s one example of why, involving the PACER system for court records:

The fees for bulk legal data are a significant barrier to free enterprise, but an insurmountable barrier for the public interest. Scholars, nonprofit groups, journalists, students, and just plain citizens wishing to analyze the functioning of our courts are shut out. Organizations such as the ACLU and EFF and scholars at law schools have long complained that research across all court filings in the federal judiciary is impossible, because an eight cent per page charge applied to tens of millions of pages makes it prohibitive to identify systematic discrimination, privacy violations, or other structural deficiencies in our courts.

If you’re thinking in terms of individual cases – even those involving hundreds or thousands of pages of documents – eight cents per page might not sound like a very serious barrier. If you’re trying to do a meta-analysis that looks for patterns and trends across the body of cases as a whole, not only is the formal fee going to be prohibitive in the aggregate, but even free access won’t be much help unless the documents are in a format that can be easily read and processed by computers, given the much higher cost of human CPU cycles. That goes double if you want to be able to look for relationships across multiple different types of documents and data sets.
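To make the aggregate cost concrete, here is a minimal back-of-the-envelope sketch in Python; the corpus sizes are hypothetical round numbers chosen for illustration, not official PACER statistics:

```python
# Cost of bulk access at PACER's eight-cents-per-page fee.
# The page counts below are illustrative assumptions, not official figures.
FEE_PER_PAGE = 0.08  # dollars

for pages in (10_000_000, 50_000_000, 100_000_000):
    print(f"{pages:>12,} pages -> ${pages * FEE_PER_PAGE:>12,.0f}")
# At "tens of millions of pages," the fee alone runs into the millions of
# dollars -- before any of the real work of analysis has begun.
```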

All familiar enough to transparency boosters. Is there a reason proponents of limited government ought to be especially concerned with this, beyond a general fondness for openness? Here’s one reason. Public choice theorists often point to the problem of diffuse costs and concentrated benefits as a source of bad policy. In brief, a program that inefficiently transfers a million dollars from millions of taxpayers to a few beneficiaries will create a million-dollar incentive for the beneficiaries to lobby on its behalf, while no individual taxpayer has much motivation to expend effort on recovering his tiny share of the benefit of axing the program. And political actors have similarly strong incentives to create identifiable constituencies who benefit from such programs and kick back those benefits in the form of either donations or public support. What Malamud and others point out is that one thing those concentrated beneficiaries end up doing is expending resources to remain fairly well informed about what government is doing – what regulations and expenditures are being contemplated – in order to be able to act for or against them in a timely fashion.
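A rough illustration of that asymmetry in per-person stakes (every figure here is a hypothetical assumption):

```python
# Diffuse costs vs. concentrated benefits, reduced to per-person stakes.
# All inputs are illustrative assumptions.
transfer = 1_000_000        # dollars transferred by the program each year
beneficiaries = 100         # members of the concentrated interest group
taxpayers = 150_000_000     # people footing the bill

print(f"Stake per beneficiary: ${transfer / beneficiaries:,.2f}")  # $10,000.00
print(f"Stake per taxpayer:    ${transfer / taxpayers:.4f}")       # ~$0.0067
```

A $10,000 stake easily justifies hiring a lobbyist; two-thirds of a cent does not justify so much as a phone call.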

Now, as the costs of organizing dispersed people get lower thanks to new technologies, we’re seeing increasing opportunities to form ad hoc coalitions supporting and opposing policy changes with more dispersed costs and benefits – which is good, and works to erode the asymmetry that generates a lot of bad policy. But incumbent constituencies have the advantage of already being organized and able to invest resources in identifying policy changes that implicate their interests. If ten complex regulations are under consideration, and one creates a large benefit to an incumbent constituency while imposing smaller costs on a much larger group of people, it’s a great advantage if the incumbent is aware of the range of options in advance, and can push for its favored option, while the dispersed losers only become cognizant of it when the papers report on the passage of a specific rule and slowly begin teasing out its implications.

Put somewhat more briefly: Technology that lowers organizing costs can radically upset a truly pernicious public choice dynamic, but only if the information necessary to catalyze the formation of a blocking coalition is out there in a form that allows it to be sifted and analyzed by crowdsourced methods first. Transparency matters less when organizing costs are high, because the fight is ultimately going to be decided by a punch-up between large, concentrated interest groups for whom the cost of hiring experts to learn about and analyze the implications of potential policy changes is relatively trivial. As transaction costs fall, and there’s potential for spontaneous, self-identifying coalitions to form, those information costs loom much larger. The timely availability – and aggregability – of information about the process of policy formation and its likely consequences then suddenly becomes a key determinant of the power of incumbent constituencies to control policy and extract rents.

Thomas Friedman’s New Math of Democracy

Thomas Friedman’s New York Times column today would be astonishing in its incoherence if only Friedman hadn’t long ago sapped us of our ability to be astonished by his incoherence. Like many capital-‘D’ Democrats, Friedman has soured on democracy for failing to deliver on his policy wish list.

Watching both the health care and climate/energy debates in Congress, it is hard not to draw the following conclusion: There is only one thing worse than one-party autocracy, and that is one-party democracy, which is what we have in America today.

Why does Friedman say the United States has one-party democracy? Because the Republican Party is effectively opposing the Democratic Party’s agenda! Not even kidding. Get this:

The fact is, on both the energy/climate legislation and health care legislation, only the Democrats are really playing. With a few notable exceptions, the Republican Party is standing, arms folded and saying “no.” Many of them just want President Obama to fail. Such a waste. Mr. Obama is not a socialist; he’s a centrist. But if he’s forced to depend entirely on his own party to pass legislation, he will be whipsawed by its different factions.

Only the Democrats are really playing! You might think that would mean they can do whatever they darn well please. But no! The Democrats can’t do anything! Because the other party’s opposition is so effective! So it’s exactly as if there’s just one party: nothing gets done!

My hunch is that the Times’ editors see Friedman aiming the gun at his foot, but watching a man stupid enough to actually pull the trigger is so fun they hate to intervene. That or they’re trying to explode the myth of American meritocracy.

So where were we? Oh, yes: one-party democracy is aggravating because sometimes one party can’t do what it wants because the other party gets in the way. Sooo frustrating!!! Why have democracy at all when all you end up with is a single party stymied by the other one! And so it is that Friedman comes to wax romantic about communist central planning:

One-party autocracy certainly has its drawbacks. But when it is led by a reasonably enlightened group of people, as China is today, it can also have great advantages. That one party can just impose the politically difficult but critically important policies needed to move a society forward in the 21st century. It is not an accident that China is committed to overtaking us in electric cars, solar power, energy efficiency, batteries, nuclear power and wind power.

Nikita Khrushchev, the enlightened leader of a now-defunct one-party autocracy, was also committed to overtaking the United States in technology and so much more. “We will bury you” is how he put it. At the time, more than a few left-leaning American opinion makers suspected he was right. After all, how can inefficiently squabbling democracies possibly keep pace with undivided regimes wholly devoted to scientifically centrally planning their way into the brighter, better future? And that, children, is why we speak Russian today.

The Future of DNA as an Identifier …

… is not in doubt. But as technology advances, it will not be as strong an identifier as it has been up to now. Scientists have demonstrated that they can fabricate it.

I wrote about the qualities of identifiers - fixity, distinctiveness, and permanence - in my book Identity Crisis. The ability to fabricate DNA renders it slightly less distinctive.

Cherry Picking Climate Catastrophes: Response to Conor Clarke, Part II

Conor Clarke at The Atlantic blog raised several issues with my study, “What to Do About Climate Change,” which Cato published last year.

One of Conor Clarke’s comments was that my analysis did not extend beyond the 21st century. He found this problematic because, as Conor put it, climate change would extend beyond 2100, and even if GDP is higher in 2100 with unfettered global warming than without, it’s not obvious that this GDP would continue to be higher “in the year 2200 or 2300 or 3758”. I addressed this portion of his argument in Part I of my response. Here I will address the second part of this argument, that “the possibility of ‘catastrophic’ climate change events — those with low probability but extremely high cost — becomes real after 2100.”

The examples of potentially catastrophic events that could be caused by anthropogenic greenhouse gas induced global warming (AGW) that have been offered to date (e.g., melting of the Greenland or West Antarctic Ice Sheets, or the shutdown of the thermohaline circulation) contain a few drops of plausibility submerged in oceans of speculation. There are no scientifically justified estimates of the probability of their occurrence by any given date. Nor are there scientifically justified estimates of the magnitude of damages such events might cause, not just in biophysical terms but also in socioeconomic terms. Therefore, to call these events “low probability” — as Mr. Clarke does — is a misnomer. They are more appropriately termed plausible but highly speculative events.

Consider, for example, the potential collapse of the Greenland Ice Sheet (GIS). According to the IPCC’s WG I Summary for Policy Makers (p. 17), “If a negative surface mass balance were sustained for millennia, that would lead to virtually complete elimination of the Greenland Ice Sheet and a resulting contribution to sea level rise of about 7 m” (emphasis added). Presumably the same applies to the West Antarctic Ice Sheet.

But what is the probability that a negative surface mass balance can, in fact, be sustained for millennia, particularly after considering the amount of fossil fuels that can be economically extracted and the likelihood that other energy sources will not displace fossil fuels in the interim? [Remember we are told that peak oil is nigh, that renewables are almost competitive with fossil fuels, and that wind, solar and biofuels will soon pay for themselves.]

Second, for an event to be classified as a catastrophe, it should occur relatively quickly, precluding efforts by man or nature to adapt or otherwise deal with it. But if it occurs over millennia, as the IPCC says, or even centuries, that gives humanity ample time to adjust, albeit at a socioeconomic cost. And it need not be prohibitively dangerous to life, limb or property if: (1) the total amount of sea level rise (SLR) and, perhaps more importantly, the rate of SLR can be predicted with some confidence, as seems likely in the next few decades considering the resources being expended on such research; (2) the rate of SLR is slow relative to how fast populations can strengthen coastal defenses and/or relocate; and (3) there are no insurmountable barriers to migration.

This would be true even if the so-called “tipping point” had already been passed and the ultimate disintegration of the ice sheet were inevitable, so long as it takes millennia for the disintegration to be realized. In other words, the issue isn’t just whether the tipping point is reached, but how long the tipping actually takes. Consider, for example, a hand grenade tossed into a crowded room. Whether this results in tragedy — and the magnitude of that tragedy — depends upon how much time it takes for the grenade to go off, the reaction time of the occupants, and their ability to respond.

Lowe et al. (2006, pp. 32-33), based on a “pessimistic, but plausible, scenario in which atmospheric carbon dioxide concentrations were stabilised at four times pre-industrial levels,” estimated that a collapse of the Greenland Ice Sheet would raise sea level by 2.3 meters over the next 1,000 years (with a peak rate of 0.5 cm/yr). If one were to arbitrarily double that to account for potential melting of the West Antarctic Ice Sheet, that implies an SLR of ~5 meters in 1,000 years, with a peak rate (assuming the peaks coincide) of 1 meter per century.

Such a rise would not be unprecedented. Sea level has risen 120 meters in the past 18,000 years — an average of 0.67 meters/century — and by as much as 4 meters/century during the meltwater pulse 1A episode 14,600 years ago (Weaver et al. 2003; subscription required). Neither humanity nor, from the perspective of millennial time scales (per the above quote from the IPCC), the rest of nature seems the worse for it. Coral reefs, for example, evolved and their compositions changed over millennia as new reefs grew while older ones were submerged in deeper water (e.g., Cabioch et al. 2008). So while there have been ecological changes, it is unknown whether the changes were for better or worse. For a melting of the GIS (or WAIS) to qualify as a catastrophe, one has to show, rather than assume, that the ecological consequences would, in fact, be for the worse.
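A quick sanity check of the rates involved, using only the figures quoted above (a sketch; the doubling for West Antarctica is the arbitrary assumption described in the text):

```python
# Sea level rise rates implied by the figures cited above.
# Lowe et al. (2006): 2.3 m over 1,000 years from a Greenland collapse,
# arbitrarily doubled (per the text) to allow for West Antarctica.
rise_m, years = 2 * 2.3, 1_000
print(f"Projected: ~{rise_m:.1f} m over {years:,} years "
      f"= {rise_m / (years / 100):.2f} m/century on average")   # ~0.46

# Peak rate, doubling Lowe et al.'s 0.5 cm/yr and assuming the peaks coincide:
print(f"Projected peak: {2 * 0.5 / 100 * 100:.1f} m/century")    # 1.0

# Historical comparison: 120 m of rise over the past 18,000 years.
print(f"Post-glacial average: {120 / 180:.2f} m/century")        # ~0.67
```

On these numbers, even the doubled projection’s average rate sits below the post-glacial average, and its peak is a quarter of the meltwater pulse 1A rate.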

Human beings can certainly cope with sea level rise of such magnitudes if they have centuries or millennia to do so. In fact, if necessary they could probably get out of the way in a matter of decades, if not years.

Can a relocation of such a magnitude be accomplished?

Consider that the global population increased from 2.5 billion in 1950 to 6.8 billion this year. Among other things, this meant creating the infrastructure for an extra 4.3 billion people in the intervening 59 years (as well as improving the infrastructure for the 2.5 billion counted in the baseline, many of whom barely had any infrastructure whatsoever in 1950). These improvements occurred at a time when everyone was significantly poorer. (Global per capita income is more than 3.5 times greater today than it was in 1950.) Therefore, while relocation will be costly, tomorrow’s much wealthier world ought, in theory, to be able to relocate billions of people to higher ground over the next few centuries, if need be. In fact, once a decision is made to relocate, the cost differential of relocating, say, 10 meters higher rather than a meter higher is probably marginal. It should also be noted that over millennia the world’s infrastructure will have to be renewed or replaced dozens of times – and the world will be better for it. [For example, the ancient city of Troy, once on the coast but now a few kilometers inland, was built and rebuilt at least 9 times in 3 millennia.]

Also, so long as we are concerned about potential geological catastrophes whose probability of occurrence and impacts have yet to be scientifically estimated, we should also consider equally low or higher probability events that might negate their impacts. Specifically, it is quite possible — in fact probable — that somewhere between now and 2100 or 2200, technologies will become available that will deal with climate change much more economically than currently available technologies for reducing GHG emissions. Such technologies may include ocean fertilization, carbon sequestration, geo-engineering options (e.g., deploying mirrors in space) or more efficient solar or photovoltaic technologies. Similarly, there is a finite, non-zero probability that new and improved adaptation technologies will become available that will substantially reduce the net adverse impacts of climate change.

The historical record shows that this has occurred over the past century for virtually every climate-sensitive sector that has been studied. For example, from 1900-1970, U.S. death rates due to various climate-sensitive water-related diseases — dysentery, typhoid, paratyphoid, other gastrointestinal disease, and malaria — declined by 99.6 to 100.0 percent. Similarly, poor agricultural productivity exacerbated by drought contributed to famines in India and China off and on through the 19th and 20th centuries, killing millions of people, but such famines haven’t recurred since the 1970s, despite any climate change that has occurred and despite populations several-fold higher today. And by the early 2000s, deaths and death rates due to extreme weather events had dropped worldwide by over 95% from their earlier 20th-century peaks (Goklany 2006).

With respect to another global warming bogeyman — the shutdown of the thermohaline circulation (AKA the meridional overturning circulation), the basis for the deep freeze depicted in the movie, The Day After Tomorrow — the IPCC WG I SPM notes (p. 16), “Based on current model simulations, it is very likely that the meridional overturning circulation (MOC) of the Atlantic Ocean will slow down during the 21st century. The multi-model average reduction by 2100 is 25% (range from zero to about 50%) for SRES emission scenario A1B. Temperatures in the Atlantic region are projected to increase despite such changes due to the much larger warming associated with projected increases in greenhouse gases. It is very unlikely that the MOC will undergo a large abrupt transition during the 21st century. Longer-term changes in the MOC cannot be assessed with confidence.”

Not much has changed since then. A shutdown of the MOC doesn’t look any more likely now than it did then. See here, here, and here (pp. 316-317).

If one wants to develop rational policies to address speculative catastrophic events that could conceivably occur over the next few centuries or millennia, as a start one should consider the universe of potential catastrophes and then develop criteria as to which should be addressed and which not. Rational analysis must necessarily be based on systematic analysis, and not on cherry picking one’s favorite catastrophes.

Just as one may speculate on global warming induced catastrophes, one may just as plausibly speculate on catastrophes that may result absent global warming. Consider, for example, the possibility that absent global warming, the Little Ice Age might return. The consequences of another ice age, Little or not, could range from the severely negative to the positive (if it would buffer the negative consequences of warming). That such a recurrence is not unlikely is evident from the fact that the earth entered a Little Ice Age and retreated from it only a century and a half ago, and that history may indeed repeat itself over centuries or millennia.

Yet another potential catastrophe, this one a possible consequence of greenhouse gas controls themselves, stems from the fact that CO2 not only contributes to warming but is also the key building block of life as we know it. All vegetation is created by the photosynthesis of CO2 in the atmosphere. In fact, according to the IPCC WG I report (2007, p. 106), net primary productivity of the global biosphere has increased in recent decades, partly due to greater warming, higher CO2 concentrations and nitrogen deposition. Thus there is a finite probability that reducing CO2 emissions would reduce the net primary productivity of the terrestrial biosphere, with potentially severe negative consequences for the amount and diversity of wildlife it could support, as well as for agricultural and forest productivity, with adverse knock-on effects on hunger and health.

There is also a finite probability that the costs of GHG reductions could reduce economic growth worldwide. Even if only industrialized countries sign up for emission reductions, the negative consequences could show up in developing countries, because they derive a substantial share of their income from aid, trade, tourism, and remittances from the rest of the world. See, for example, Tol (2005), which examines this possibility, although the extent to which that study fully considered these factors (i.e., aid, trade, tourism, and remittances) is unclear.

Finally, one of the problems with the argument that society should address low-probability, high-impact events (assuming a probability could be estimated rather than assumed or guessed) is that it necessarily means there is a high probability that resources expended on addressing such catastrophic events will have been squandered. This wouldn’t be a problem but for the opportunity costs associated with that spending.

According to the 2007 IPCC Science Assessment’s Summary for Policy Makers (p. 10), “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.” In plain language, this means that the IPCC believes there is at least a 90% likelihood that anthropogenic greenhouse gas emissions (AGHG) are responsible for 50-100% of the global warming since 1950. In other words, there is an up to 10% chance that anthropogenic GHGs are not responsible for most of that warming.

This means there is an up to 10% chance that resources expended in limiting climate change would have been squandered. Since any effort to significantly reduce climate change will cost trillions of dollars (see Nordhaus 2008, p. 82), that would be an unqualified disaster, particularly since those very resources could be devoted to reducing urgent problems humanity faces here and now (e.g., hunger, malaria, safer water and sanitation) — problems we know exist for sure unlike the bogeymen that we can’t be certain about.
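One way to frame the opportunity-cost worry is as a crude expected-value calculation. A sketch: the probability comes from the paragraph above, while the cost figure is a placeholder assumption standing in for the “trillions” mentioned in the text, not a number taken from Nordhaus:

```python
# Expected resources squandered if mitigation turns out to be unnecessary.
# Both inputs are illustrative assumptions made for the arithmetic only.
p_unnecessary = 0.10                   # the "up to 10% chance" above
mitigation_cost = 2_000_000_000_000    # assume $2 trillion of mitigation spending

expected_waste = p_unnecessary * mitigation_cost
print(f"Expected squandered resources: ${expected_waste:,.0f}")  # $200,000,000,000
```

Even on these placeholder inputs, the expected waste is large enough to give a sense of the scale of the here-and-now problems that money could address instead.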

Spending money on speculative, even if plausible, catastrophes instead of problems we know exist for sure is like a starving man giving up a fat, juicy bird in hand in the hope of catching several other birds sometime in the next few centuries, even though he knows those birds don’t exist today and may never exist in the future.

Assessing the Claim that CDT Opposes a National ID

It was good of Ari Schwartz to respond last week to my recent post querying whether the Center for Democracy and Technology outright opposes a national ID or simply “does not support” one.

Ari says CDT does oppose a national ID, and I believe that he honestly believes that. But it’s worth taking a look at whether the group’s actions are consistent with opposition to a national ID. I believe CDT’s actions – most recently its support of the PASS ID Act – support the creation of a national ID.

(The title of his post and some of his commentary suggest I have engaged in rhetorical excess and mischaracterized his views. Please do judge for yourself whether I’m being shrill or unfair, which is not my intention.)

First I want to address an unusual claim of Ari’s – that we already have a national ID system. If that is true, his support for PASS ID is more sensible because it is an opportunity to inject federal privacy protections into the existing system (putting aside whether it is a federal responsibility to manage a state system or systems).

Do We Already Have a National ID?

I have heard a few people suggest that we have a national ID in the form of the Social Security Number. I believe the SSN is a national identifier, but it fails the test of a national identification card or system because it is not used for identification. As we know well from the scourge of identity fraud, there is no definitive way to tie an SSN to a person. The SSN is not used for identification (at least not reliably and not alone), which is the third part of my national ID definition. (Senator Schumer might like the SSN to form the basis of a national ID system, of course.)

But Ari says something different. He does not claim any definition of “national ID” or “national ID system.” Instead, he appeals to the authority of a 2003 report from a National Academy of Sciences group entitled “Who Goes There?: Authentication Through the Lens of Privacy.” That report indeed says, “State-issued driver’s licenses are a de facto nationwide identity system” – on the second-to-last substantive page of its second-to-last substantive chapter.

But this is a highly selective use of quotation. The year before, that same group issued a report called “IDs – Not That Easy: Questions About Nationwide Identity Systems.” From the beginning and throughout, that report discussed the many issues around proposals to create a “nationwide” identity system. If the NAS panel had already concluded that we have a national ID system, it would not have issued an entire report critiquing that prospect. It would have discussed the existing one as such. Ari’s one quote doesn’t do much to support the notion that we already have a national ID.

What’s more, CDT’s own public comments on the proposed REAL ID Act regulations in May 2007 said that its data-intensive “one person – one license/ID card – one record” policy would “create a national identification system.”

If a national ID system already existed, the new policy wouldn’t create one. This is another authority at odds with the idea that we have a national ID system already.

Support of PASS ID might be forgiven if we had a national ID system and if PASS ID would improve it. But the claim we already have one is weak.

“Political Reality” and Its Manufacture

But the heart of Ari’s claim is that supporting PASS ID reflects good judgment in light of political reality.

Despite the fact that there are no federal politicians, no governors and no appointed officials from any party publicly supporting repeal of REAL ID today, CDT still says that repeal is an acceptable option. However, PASS ID would get to the same outcome, or better, in practice and has the added benefit of actually being a political possibility… . I realize that Harper has invested a lot of time fighting for the word “repeal,” but at some point we have to look at the political reality.

A “Dear Colleague” letter inviting support for a bill to repeal REAL ID circulated on the Hill last week. How many legislators will hesitate to sign on to the bill because they have heard that the PASS ID Act, and not repeal of REAL ID, is CDT’s preferred way forward?

The phrase “political reality” is more often used by advocates to craft the political reality they prefer than to describe anything truly real. Like the observer effect in experimental research, statements about “political reality” change political reality.  Convince enough people that a thing is “political reality” and the sought-after political reality becomes, simply, reality.

I wrote here before about how the National Governors Association, sensing profit, has worked diligently to make REAL ID a “political reality.” And it has certainly made some headway (though not enough). In the last Congress, the only legislation aimed at resolving the REAL ID impasse consisted of bills to repeal REAL ID. Since then, the political reality is that Barack Obama was elected president and an administration far less friendly to a national ID took office. Democrats – who are on average less friendly to a national ID – made gains in both the House and Senate.

But how are political realities crafted? The process has often been described as getting people onto a bus. To pass a bill, you change it until more people get on the bus than get off.

The REAL ID bus was missing some important riders. It had security hawks, the Department of Homeland Security, anti-immigrant groups, DMV bureaucrats, public safety advocates, and the Bush Administration. But it didn’t have: state legislators and governors, privacy and civil liberties groups, and certain religious communities, among others.

PASS ID is for the most part an effort to bring on state legislators and governors. The NGA is hoping to broker the sale of state power to the federal government, locking in its own institutional role as a supplicant in Washington, D.C. for state political leaders.

But look who else was hanging around the bus station looking for rides! – CDT, the nominal civil liberties group. It jumped on the bus alone, communicating to others less familiar with the issues that PASS ID represented a good way forward.

Happily, few have heeded this signal. The authors of PASS ID were unable to escape the name “REAL ID,” which is a far more powerful beacon – flashing “national ID” and all the ills that entails – than CDT’s signal to the contrary.

This is not the first time that CDT’s penchant for compromise has assisted the national ID effort, though.

Compromising Toward National ID

The current push for a national ID has a short history that I summarized three years ago in a righteously titled post on the TechLiberationFront blog: “The Markle Foundation: Font of Evil II.”

Briefly, in December 2003, a group called the Markle Foundation Task Force on National Security in the Information Age recommended “both near-term measures and a longer-term research agenda to increase the reliability of identification while protecting privacy.” (Never mind that false identification was not a modus operandi of the 9/11 attacks.)

The 9/11 Commission, citing Markle, found that “[t]he federal government should set standards for the issuance of birth certificates and sources of identification, such as drivers licenses.” In December 2004, Congress passed the Intelligence Reform and Terrorism Prevention Act, implementing the recommendations of the 9/11 Commission, including national standards for drivers’ licenses and identification cards, the national ID system recommended by the Markle Task Force. And in May 2005, Congress passed a strengthened national ID system in the REAL ID Act.

An earlier post, “The Markle Foundation: Font of Evil,” has more – and the text of a PoliTech debate between Stewart Baker and me. Security hawk Baker was a participant in the Markle Foundation group, as was national ID advocate Amitai Etzioni. So was the Center for Democracy and Technology’s Jim Dempsey.

I had many reservations about the Markle Foundation Task Force and its work product, and in an April 2005 meeting of the DHS Privacy Committee, I asked Dempsey about what qualified people to serve on that task force, whether people were invited, and what might exclude them. A month before REAL ID passed, he said:

I think the Markle Task Force at least sought balance. And people came to the table committed to dialogue. And those who came with a particular point of view, I think, were all committed to listening. And I think people’s minds were changed… . What we were committed to in the Markle Task Force was changing our minds and trying to find a common ground and to try to understand each other. And we spent the time at it. And that, I think, is reflected in the product of the task force.

There isn’t a nicer, more genuine person working in public policy than Jim Dempsey. He is the consummate honest broker, and this statement of his intentions for the Markle Foundation I believe to be characteristically truthful and earnest.

But consider the possibility that others participating on the Markle Foundation Task Force did not share Jim’s predilection for honest dialogue and compromise. It is even possible that they mouthed these ideals while working intently to advance their goals, including creation of a national ID.

Stewart Baker, whom I personally like, is canny and wily, and he wants to win. I see no evidence that Amitai Etzioni changed his mind about having a national ID when he authored the recommendation in the Markle report that ultimately produced REAL ID.

Other Markle participants I have talked to were unaware of what the report said about identity-based security, national identity standards, or a national ID. They don’t even know (or didn’t at the time) that lending your name to a report also lends it your credibility. Whatever privacy or civil liberties advocates were involved with the Markle Task Force got rolled – big-time – by the pro-national-ID team.

CDT is a sophisticated Washington, D.C. operation. It is supposed to understand these dynamics. I can’t give it the pass that outsiders to Washington might get. By committing to compromise rather than any principle, and by lending its name to the Markle Foundation Task Force report, CDT gave credibility to a bad idea – the creation of a national ID.

CDT helped produce the REAL ID Act, which has taken years of struggle to beat back. And now the group is at it again with “pragmatic” support for PASS ID.

CDT has been consistently compromising on national ID issues while proponents of a national ID have been doggedly and persistently pursuing their interests. This is not the behavior of a civil liberties organization. It’s why I asked in the post that precipitated this debate whether there is anything that would cause CDT to push back from the table and say No.

Despite words to the contrary, I don’t see evidence that CDT opposes having a national ID. It certainly works around the edges to improve privacy in the context of having a national ID – reducing the wetness of the water, as it were – but at key junctures, CDT’s actions have tended to support having a U.S. national ID. I remain open to seeing contrary evidence.

Would PASS ID Really Save States Money?

The proposed PASS ID Act is a national ID just like REAL ID, and it threatens privacy just as much. Some argue that a national ID under PASS ID should be palatable, though, because it reduces costs to states.

But savings to states under PASS ID are not at all clear. Let’s take a look at the costs of creating a U.S. national ID.

The REAL ID Act, passed in May 2005, required states to begin implementing a national ID system within three years. In regulations it proposed in March 2007, the Department of Homeland Security extended that draconian deadline. States would have five years, starting in May 2008, to move all driver’s license and ID card holders into REAL ID-compliant cards.

The Department of Homeland Security estimated the costs for this project at $17.2 billion (net present value, 7% discount). Costs to individuals came in at nearly $6 billion – mostly in wasted time. Americans would spend more than 250 million hours filling out forms, finding birth certificates and Social Security cards, and waiting in line at the DMV.

The bulk of the costs fell on state governments, though: nearly $11 billion. The top three expenditures were $5.25 billion for customer service at DMVs, $4 billion for card production, and $1.1 billion for data systems and IT. Getting hundreds of millions of people through DMVs and issuing them new cards in such a short time accounted for the bulk of the cost.
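Summing the itemized components (a sketch; figures in billions of dollars, exactly as quoted above):

```python
# DHS's proposed-rule estimate of the top state cost items, per the text.
state_costs_bn = {
    "DMV customer service": 5.25,
    "card production":      4.00,
    "data systems and IT":  1.10,
}
total = sum(state_costs_bn.values())
print(f"Top three items: ${total:.2f} billion of the ~$11 billion state total")
```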

To drive down the cost estimate, DHS pushed the implementation schedule way back. In its final rule of January 2008, it allowed states a deadline extension to December 31, 2009 just for the asking, and a second extension to May 2011 for meeting certain milestones. Then states would have until the end of 2017 to replace all cards with the national ID card. That’s just under ten years.

Then the DHS decided to assume that only 75% of people would actually get the national ID. (Never mind that whatever benefits from having a national ID drop to near zero if it is not actually “national.”)

The result was a total cost estimate of about $6.85 billion (net present value, 7% discount). Individual citizens would still spend $5.2 billion worth of their time (in undiscounted dollars) on paperwork and waiting at the DMV. But states would spend just $1.5 billion on data and interconnectivity systems; $970 million on customer service; and $953 million on card production and issuance—a total of about $2.4 billion. (All undiscounted—DHS didn’t publish estimates for the final rule the same way it published its estimates for the proposed rule.)

Maybe these cost estimates were still too high. Maybe they weren’t believable. Or maybe Americans’ love of privacy and hatred of a national ID explains what happened next: the lower cost estimate did not slow the “REAL ID Rebellion.” Given the costs, the complexity, the privacy consequences, and the dubious benefits, states rejected REAL ID.

Enter PASS ID, which supposedly alleviates the costs to states of REAL ID. But would it?

At a Senate hearing last week, not one, but two representatives of the National Governors Association testified in favor of PASS ID, citing their internal estimate that implementing PASS ID would cost states just $2 billion.

But there is reason to doubt that figure. PASS ID is a lot more like REAL ID – the original REAL ID – in the way that most affects costs: the implementation schedule.

Under PASS ID, the DHS would have to come up with regulations in just nine months. States would then have just one year to begin complying. All drivers’ licenses would have to be replaced in the five years after that. That’s a total of six years to review the documents of every driver and ID holder, and issue them new cards.

How did the NGA come up with $2 billion? Maybe they took the extended, watered-down, 75%-over-ten-years estimate and subtracted some for reduced IT costs. (The NGA is free to publish its methodology, of course.)

But the costs of implementing PASS ID to states are more likely to be closer to $11 billion than to the $2 billion figure that the NGA puts forward. In just six years, PASS ID would send some 245 million people into DMV offices around the country demanding new cards. States will have to hire and train new employees to handle the workload. They will have to acquire new computer systems, document scanners, data storage facilities, and so on.
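The workload those numbers imply (a rough sketch; the 245 million figure and the six-year window come from the text, the rest is arithmetic):

```python
# Rough DMV re-enrollment workload under PASS ID's schedule.
holders = 245_000_000   # drivers and ID holders, per the text
years = 6               # implementation window described above

per_year = holders / years
print(f"~{per_year / 1e6:.0f} million in-person re-enrollments per year")
print(f"~{per_year / 52:,.0f} per week, spread across the nation's DMVs")
```

That is roughly 41 million extra counter visits a year, every year, for six years.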

There is another source for cost estimates that draws the $2 billion figure into question: the National Governors Association itself. In September 2006, it issued a report with the National Conference of State Legislatures and the American Association of Motor Vehicle Administrators finding that re-enrolling drivers and ID holders over a 5-year period would cost states $8.45 billion (not discounted).

Just as with REAL ID, re-enrollment under PASS ID would undo the cost-savings and convenience that states have gained by allowing online re-issuance for good drivers and long-time residents. As the NGA said:

Efficiencies from alternative renewal processes such as Internet and mail will be lost during the re-enrollment period, and states will face increased costs from the need to hire more employees and expand business hours to meet the five year re-enrollment deadline.

Angry citizens will ask their representatives why they are being investigated like criminals just so they can exercise their right to drive.

PASS ID does reduce some of the information technology costs of REAL ID, such as requirements to use systems that still do not exist, and requirements to pay for driver background checks through the Systematic Alien Verification for Entitlements system and the Social Security Online Verification system.

But PASS ID still requires states to “[e]stablish an effective procedure to confirm that a person [applying] for a driver’s license or identification card is terminating or has terminated any driver’s license or identification card” issued under PASS ID by any other state. How do you do that? By sharing driver information. The language requiring states to provide all other states electronic access to their databases is gone, but the need to share that information is still there.

A last hope for states is that the federal government will come up with money to handle all this. But the federal government is in even tougher financial straits than many states. The federal deficit for this fiscal year is projected to reach $1.84 trillion.

Experienced state leaders recognize that the promise of federal money may not be fulfilled. The weakly funded PASS ID mandate will likely become a fully unfunded mandate.

So, does PASS ID really save states money? I wouldn’t put any money on it … .

Is Buying an iPod Un-American?

We own three iPods at my house, including a recently purchased iPod Touch. Since many of the iPod parts are made abroad, is my family guilty of allowing our consumer spending to “leak” abroad, depriving the American economy of the consumer stimulus we are told it so desperately needs? If you believe the “Buy American” lectures and legislation coming out of Washington, the answer must be yes.

Our friends at ReasonTV have just posted a brilliant video short, “Is Your iPod Unpatriotic?” With government requiring its contractors to buy American-made steel, iron, and manufactured products, is it only a matter of time before the iPod—“Assembled in China,” of all places—comes under scrutiny? You can view the video here.

In my upcoming Cato book, Mad about Trade: Why Main Street America Should Embrace Globalization, I talk about how American companies are moving to the upper regions of the “smiley curve.” The smiley curve is a way of thinking about global supply chains where Americans reap the most value at the beginning and the end of the production process while China and other low-wage countries perform the low-value assembly in the middle. In the book, I hold up our family’s iPods as an example of the unappreciated benefits of a more globalized American economy:

The lesson of the smiley curve was brought home to me after a recent Christmas when I was admiring my two teen-age sons’ new iPod Nanos. Inscribed on the back was the telling label, “Designed by Apple in California. Assembled in China.” To the skeptics of trade, an imported Nano only adds to our disturbingly large bilateral trade deficit with China in “advanced technology products,” but here in the palm of a teenager’s hand was a perfect symbol of the win-win nature of our trade with China.

Assembling iPods obviously creates jobs for Chinese workers, jobs that probably pay higher-than-average wages in that country even though they labor in the lowest regions of the smiley curve. But Americans benefit even more from the deal. A team of economists from the Paul Merage School of Business at the University of California-Irvine applied the smiley curve to a typical $299 iPod and found just what you might suspect: Americans reap most of the value from its production. Although assembled in China, an American company supplies the processing chips, a Korean company the memory chip, and Japanese companies the hard drive and display screen. According to the authors, “The value added to the product through assembly in China is probably a few dollars at most.”

The biggest winner? Apple and its distributors. Standing atop the value chain, Apple reaps $80 in profit for each unit sold—an amount higher than the cost of any single component. Its distributors, on the opposite high end of the smiley curve, make another $75. And of course, American owners of the more than 100 million iPods sold since 2001—my teen-age sons included—pocket far more enjoyment from the devices than the Chinese workers who assembled them.
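The division of the sticker price, using only the figures cited above (a sketch; the assembly figure is an assumption standing in for “a few dollars at most,” and the remainder lumps together components and everything else):

```python
# Who captures the value of a $299 iPod, per the figures quoted above.
retail_price = 299.0
shares = {
    "Apple (profit)":       80.0,
    "Distributors":         75.0,
    "Chinese assembly":      4.0,  # "a few dollars at most" -- assumed
}
shares["Components and other"] = retail_price - sum(shares.values())

for label, value in shares.items():
    print(f"{label:<22} ${value:>6.2f}  ({value / retail_price:.1%})")
```

The two ends of the smiley curve, both American, capture over half the value; the assembly step in the middle captures about one percent.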

To learn a whole lot more about how American middle-class families benefit from trade and globalization, you can now pre-order the book at Amazon.com.