Tag: technology

Privatize the FAA

Bloomberg is reporting more bad news for the nation’s air traffic control system, which is run by the Federal Aviation Administration. The FAA is $500 million over budget and six years behind schedule on a $2.1 billion technology upgrade project.

The FAA has a long history of mismanaged technology projects, and so the latest screw-ups are nothing new. Yet the nation needs high-tech advances in air traffic control more than ever to ease our increasingly congested airspaces.

There is a better way to run air traffic control—a private sector way, as Canada has been demonstrating. In 1996, Canada converted its government air traffic control system to a private nonprofit corporation. Nav Canada has been a smashing success, providing an excellent model for possible U.S. reforms.

A December 24 story in the Financial Post describes how Nav Canada is a world leader in efficiency, safety, and technology under private management. “A once troubled government asset, the country’s civil air traffic controller was privatized 14 years ago and is now a shining example of how to create a global technology leader out of a hulking government bureaucracy.” It really is an impressive story of pro-market reform.  

Canada’s system recently won an award from the International Air Transport Association. The IATA said that “Nav Canada is a global leader in the efficient implementation and reliable delivery of air traffic control procedures and technologies.”

We should have that type of efficient air traffic control system in this country. Privatizing the FAA should be a high priority for the next Congress.

See here for a discussion on privatizing air traffic control.

The Current Wisdom

NOTE:  This is the first in a series of monthly posts in which Senior Fellow Patrick J. Michaels reviews interesting items on global warming in the scientific literature that may not have received the media attention that they deserved, or have been misinterpreted in the popular press.

The Current Wisdom only comments on science appearing in the refereed, peer-reviewed literature, or that has been peer-screened prior to presentation at a scientific congress.

The Iceman Goeth:  Good News from Greenland and Antarctica

How many of us have heard that global sea level will be about a meter—more than three feet—higher in 2100 than it was in the year 2000?  There are even scarier stories, circulated by NASA’s James E. Hansen, that the rise may approach 6 meters, altering shorelines and inundating major cities and millions of coastal inhabitants worldwide.

Figure 1. Model of Lower Manhattan from a traveling climate change exhibit (currently installed at the Field Museum of Natural History in Chicago), showing what 5 meters (16 feet) of sea level rise would look like.

In fact, a major exhibition now at the prestigious Chicago Field Museum includes a 3-D model of Lower Manhattan under 16 feet of water—this despite the general warning from James Titus, who has been the EPA’s sea-level authority for decades:

Researchers and the media need to stop suggesting that Manhattan or even Miami will be lost to a rising sea. That’s not realistic; it promotes denial and panic, not a reasoned consideration of the future.

Titus was commenting upon his 2009 publication on sea-level rise in the journal Environmental Research Letters.

The number one rule of grabbing attention for global warming is to never let the facts stand in the way of a good horror story, so advice like Titus’s is usually ignored.

The catastrophic sea level rise proposition is built upon the idea that large parts of the ice fields that lie atop Greenland and Antarctica will rapidly melt and slip into the sea as temperatures there rise. Proponents of this idea claim that the United Nations’ Intergovernmental Panel on Climate Change (IPCC), in its most recent (2007) Assessment Report, was far too conservative in its projections of future sea level rise—the mean value of which is a rise by the year 2100 of about 15 inches.

In fact, contrary to virtually all news coverage, the IPCC actually anticipates that Antarctica will gain ice mass (and lower sea level) as the climate warms, since the temperature there is too low to produce much melting even if it warms up several degrees, while the warmer air holds more moisture and therefore precipitates more snow. The IPCC projects Greenland to contribute a couple of inches of sea level rise as ice melts around its periphery.

Alarmist critics claim that the IPCC’s projections are based only on direct melt estimates rather than “dynamic” responses of the glaciers and ice fields to rising temperatures.

These include Al Gore’s favorite explanation—that melt water from the surface percolates down to the bottom of the glacier and lubricates its base, increasing flow and ultimately ice discharge. Alarmists like Gore and Hansen claim that Greenland and Antarctica’s glaciers will then “surge” into the sea, dumping an ever-increasing volume of ice and raising water levels worldwide.

The IPCC did not include this mechanism because it is highly speculative and not well understood. Indeed, new science suggests that the IPCC’s minuscule projections of sea level rise from these two great ice masses are being confirmed.

About a year ago, several different research teams reported that while glaciers may surge from time to time and increase ice discharge rates, these surges are not long-lived and basal lubrication is not a major factor in them. One research group, led by Faezeh Nick, reported that “our modeling does not support enhanced basal lubrication as the governing process for the observed changes.” Nick and colleagues go on to find that short-term rapid increases in discharge rates are not stable and that “extreme mass loss cannot be dynamically maintained in the long term,” ultimately concluding that “[o]ur results imply that the recent rates of mass loss in Greenland’s outlet glaciers are transient and should not be extrapolated into the future.”

But this is actually old news. The new news is that the commonly-reported (and commonly hyped) satellite estimates of mass loss from both Greenland and Antarctica were a result of improper calibration, overestimating ice loss by  some 50%.

As with any new technology, it takes a while to get all the kinks worked out. In the case of the Gravity Recovery and Climate Experiment (GRACE) satellite-borne instrumentation, one of the major problems is interpreting just what exactly the satellites are measuring. When trying to ascertain mass changes (for instance, from ice loss) from changes in the earth’s gravity field, you first have to know how the actual land under the ice is vertically moving (in many places it is still slowly adjusting from the removal of the glacial ice load from the last ice age).

The latest research by a team led by Xiaoping Wu from Caltech’s Jet Propulsion Laboratory concludes that the adjustment models that were being used by previous researchers working with the GRACE data didn’t do that great of a job. Wu and colleagues enhanced the existing models by incorporating land movements from a network of GPS sensors, and employing more sophisticated statistics. What they found has been turning heads.

Using the GRACE measurements and the improved model, the new estimates of the rates of ice loss from Greenland and Antarctica  are only about half as much as the old ones.
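To make that correction step concrete, here is a minimal illustrative sketch in Python. It assumes only the simple bookkeeping described above: the trend GRACE measures mixes real ice loss with the modeled solid-earth signal from glacial isostatic adjustment (GIA), so the inferred ice loss depends on which GIA model is subtracted. Every number is invented for illustration; only the before-and-after figures of roughly 230 and 104 Gt/yr echo the estimates discussed in the next paragraph.

```python
# Illustrative sketch only -- invented numbers, not the actual Wu et al. analysis.
# GRACE measures a total mass trend; the ice trend is what remains after
# subtracting the modeled glacial isostatic adjustment (GIA) term.

def inferred_ice_trend(observed_gt_per_yr: float, gia_gt_per_yr: float) -> float:
    """Ice-mass trend = gravity-derived total trend minus the modeled GIA term."""
    return observed_gt_per_yr - gia_gt_per_yr

observed = -60.0       # hypothetical total GRACE trend over an ice sheet, Gt/yr
old_gia_model = 170.0  # hypothetical apparent mass gain from rebounding land, Gt/yr
new_gia_model = 44.0   # hypothetical smaller correction after folding in GPS data

print(inferred_ice_trend(observed, old_gia_model))  # -230.0 Gt/yr, old-style estimate
print(inferred_ice_trend(observed, new_gia_model))  # -104.0 Gt/yr, revised estimate
```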

Instead of Greenland losing ~230 gigatons of ice each year since 2002, the new estimate is 104 Gt/yr. And for Antarctica, the old estimate of ~150 Gt/yr has been modified to be about 87 Gt/yr.

 How does this translate into sea level rise?

It takes about 37.4 gigatons of ice loss to raise the global sea level 0.1 millimeter—about four thousandths of an inch. In other words, ice loss from Greenland is currently contributing just over one-fourth of a millimeter of sea level rise per year, or about one-hundredth of an inch. Antarctica’s contribution is just under one-fourth of a millimeter per year. So together, these two regions—which contain 99% of all the land ice on earth—are losing ice at a rate that raises global sea level by about half a millimeter per year, or roughly two hundredths of an inch. If this continues for the next 90 years, the total sea level rise contributed by Greenland and Antarctica by the year 2100 will amount to less than 2 inches.

Couple this with maybe 6-8 inches from thermal expansion as the ocean warms, and another 2-3 inches from melting of other land-based ice, and you get a sum total of about one foot of additional rise by century’s end.

This is about one-third of the 1-meter estimates and one-twentieth of the 6-meter estimates.
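For anyone who wants to check the arithmetic in the last few paragraphs, here is a short Python sketch. It uses only the figures quoted above (37.4 gigatons per 0.1 mm, the revised 104 and 87 Gt/yr loss rates, 6-8 inches of thermal expansion, and 2-3 inches from other land ice); the rest is straightforward unit conversion.

```python
# Back-of-the-envelope check of the sea-level arithmetic quoted above.
GT_PER_MM = 374.0        # ~37.4 Gt of ice loss ~= 0.1 mm of global sea level rise
MM_PER_INCH = 25.4
YEARS = 90               # roughly 2010 through 2100

greenland_gt_per_yr = 104.0   # revised GRACE-based estimate
antarctica_gt_per_yr = 87.0   # revised GRACE-based estimate

ice_mm_per_yr = (greenland_gt_per_yr + antarctica_gt_per_yr) / GT_PER_MM
ice_inches_by_2100 = ice_mm_per_yr * YEARS / MM_PER_INCH
print(f"ice sheets: {ice_mm_per_yr:.2f} mm/yr -> {ice_inches_by_2100:.1f} inches by 2100")
# ~0.51 mm/yr -> ~1.8 inches, i.e. less than 2 inches

# Add thermal expansion (6-8 in) and other land ice (2-3 in) from the post:
total_low = ice_inches_by_2100 + 6 + 2
total_high = ice_inches_by_2100 + 8 + 3
print(f"total: roughly {total_low:.0f}-{total_high:.0f} inches, about one foot")
# Compare with the 1 m (~39 in) and 6 m (~236 in) scenarios: roughly 1/3 and 1/20.
```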

Things had better get cooking in a hurry if the real world is going to approach these popular estimates. And there are no signs that such a move is underway.

So far, the 21st century has been pretty much of a downer for global warming alarmists. Not only has the earth been warming at a rate considerably less than the average rate projected by climate models, but now the sea level rise is suffering a similar fate.

Little wonder that political schemes purporting to save us from these projected (non)calamities are also similarly failing to take hold.

References:

Nick, F. M., et al., 2009. Large-scale changes in Greenland outlet glacier dynamics triggered at the terminus. Nature Geoscience, published online January 11, 2009, doi:10.1038.

Titus, J. G., et al., 2009. State and Local Governments Plan for Development of Most Land Vulnerable to Rising Sea Level along the U.S. Atlantic Coast. Environmental Research Letters, 4, 044008, doi:10.1088/1748-9326/4/4/044008.

Wu, X., et al., 2010. Simultaneous estimation of global present-day water transport and glacial isostatic adjustment. Nature Geoscience, published online August 15, 2010, doi:10.1038/ngeo938.

Speier (D-Silicon Valley) Sows Techno-panic

“Techno-panics” are public and political crusades against the use of new media or technologies, particularly driven by the desire to protect children. As the moniker suggests, they’re not rational: techno-panics are about imagined or trumped-up threats, often with a tenuous, coincidental, or potential relationship to the Internet. Adam Thierer and Berin Szoka of the Progress & Freedom Foundation have written extensively about techno-panics on the TechLiberationFront blog.

Talking about techno-panic does not deny the existence of serious problems. It merely identifies when policymakers and advocates lose their sense of proportion and react in ways that fail to address the genuine issues—such as censoring a web site because some few among a community of tens of millions of users conspire to break the law on it.

You’d think that a congressional representative from the heart of Silicon Valley would not sow techno-panic, but here’s Jackie Speier (D-Calif.) on the Craigslist censorship issue:

“We can’t forget the victims, we can’t rest easy. Child-sex trafficking continues, and lawmakers need to fight future machinations of Internet-driven sites that peddle children.”

Of all representatives in Congress, Speier should know that Craigslist has been making it easier for law enforcement to locate and bring charges against perpetrators of crimes against children. Pushing those perpetrators to rogue sites does law enforcement no good. Censoring Craigslist only masks the problem, which may be in the interest of politicians, but definitely not of children.

GPS Tracking and a ‘Mosaic Theory’ of Government Searches

The Electronic Frontier Foundation trumpets a surprising privacy win last week in the U.S. Court of Appeals for the D.C. Circuit. In U.S. v. Maynard (PDF), the court held that the use of a GPS tracking device to monitor the public movements of a vehicle—something the Supreme Court had held not to constitute a Fourth Amendment search in U.S. v. Knotts—could nevertheless become a search when conducted over an extended period. The Court in Knotts had considered only tracking that encompassed a single journey on a particular day, reasoning that the target of surveillance could have no “reasonable expectation of privacy” in the fact of a trip that any member of the public might easily observe. But the Knotts Court explicitly reserved judgment on potential uses of the technology with broader scope, such as “dragnet” tracking that subjected large numbers of people to “continuous 24-hour surveillance.” Here, the D.C. Circuit determined that continuous tracking for a period of over a month did violate a reasonable expectation of privacy—and therefore constituted a Fourth Amendment search requiring a judicial warrant—because such intensive, secretive tracking by means of public observation is so costly and risky that no reasonable person expects to be subject to such comprehensive surveillance.

Perhaps ironically, the court’s logic here rests on the so-called “mosaic theory” of privacy, which the government has relied on when resisting Freedom of Information Act requests. The theory holds that pieces of information that are not in themselves sensitive or potentially injurious to national security can nevertheless be withheld, because in combination (with each other or with other public facts) they permit the inference of facts that are sensitive or secret. The “mosaic,” in other words, may be far more than the sum of the individual tiles that constitute it. Leaving aside for the moment the validity of the government’s invocation of this idea in FOIA cases, there’s an obvious intuitive appeal to the idea, and indeed, we see that it fits our real-world expectations about privacy much better than the cruder theory that assumes the sum of “public” facts must always itself be a public fact.

Consider an illustrative hypothetical. Alice and Bob are having a romantic affair that, for whatever reason, they prefer to keep secret. One evening before a planned date, Bob stops by the corner pharmacy and—in full view of a shop full of strangers—buys some condoms. He then drives to a restaurant where, again in full view of the other patrons, they have dinner together. They later drive in separate cars back to Alice’s house, where the neighbors (if they care to take note) can observe from the presence of the car in the driveway that Alice has an evening guest for several hours. It being a weeknight, Bob then returns home, again by public roads. Now, the point of this little story is not, of course, that a judicial warrant should be required before an investigator can physically trail Bob or Alice for an evening. It’s simply that in ordinary life, we often reasonably suppose the privacy or secrecy of certain facts—that Bob and Alice are having an affair—that could in principle be inferred from the combination of other facts that are (severally) clearly public, because it would be highly unusual for all of them to be observed by the same public. Even more so when, as in Maynard, we’re talking not about the “public” events of a single evening, but comprehensive observation over a period of weeks or months. One must reasonably expect that “anyone” might witness any of such a series of events; it does not follow that one cannot reasonably expect that no particular person or group would be privy to all of them. Sometimes, of course, even our reasonable expectations are frustrated without anyone’s rights being violated: a neighbor of Alice’s might by chance have been at the pharmacy and then at the restaurant. But as the Supreme Court held in Kyllo v. U.S., even when some information might in principle be possible to obtain by public observation, the use of technological means not in general public use to learn the same facts may nevertheless qualify as a Fourth Amendment search, especially when the effect of technology is to render easy a degree of monitoring that would otherwise be so laborious and costly as to normally be infeasible.

Now, as Orin Kerr argues at the Volokh Conspiracy, significant as the particular result in this case might be, it’s the approach to Fourth Amendment privacy embedded here that’s the really big story. Orin, however, thinks it a hopelessly misguided one—and the objections he offers are all quite forceful.  Still, I think on net—especially as technology makes such aggregative monitoring more of a live concern—some kind of shift to a “mosaic” view of privacy is going to be necessary to preserve the practical guarantees of the Fourth Amendment, just as in the 20th century a shift from a wholly property-centric to a more expectations-based theory was needed to prevent remote sensing technologies from gutting its protections. But let’s look more closely at Orin’s objections.

First, there’s the question of novelty. Under the mosaic theory, he writes:

[W]hether government conduct is a search is measured not by whether a particular individual act is a search, but rather whether an entire course of conduct, viewed collectively, amounts to a search. That is, individual acts that on their own are not searches, when committed in some particular combinations, become searches. Thus in Maynard, the court does not look at individual recordings of data from the GPS device and ask whether they are searches. Instead, the court looks at the entirety of surveillance over a one-month period and views it as one single “thing.” Off the top of my head, I don’t think I have ever seen that approach adopted in any Fourth Amendment case.

I can’t think of one that explicitly adopts that argument. But consider again the Kyllo case mentioned above. Without a warrant, police used thermal imaging technology to detect the presence of marijuana-growing lamps within a private home from a vantage point on a public street. In a majority opinion penned by Justice Scalia, the Court balked at this: the scan, though it involved no physical intrusion, violated the sanctity and privacy of the home by revealing details of its interior that could not otherwise have been learned without entering it. But stop and think for a moment about how thermal imaging technology works, and try to pinpoint where exactly the Fourth Amendment “search” occurs. The thermal radiation emanating from the home was, well… emanating from the home, and passing through or being absorbed by various nearby people and objects. It beggars belief to think that picking up the radiation could in itself be a search—you can’t help but do that!

When the radiation is actually measured, then? More promising, but then any use of an infrared thermometer within the vicinity of a home might seem to qualify, whether or not the purpose of the user was to gather information about the home, and indeed, whether or not the thermometer was precise enough to reveal any useful information about internal temperature variations within the home.  The real privacy violation here—the disclosure of private facts about the interior of the home—occurs only when a series of very many precise measurements of emitted radiation are processed into a thermographic image.  To be sure, it is counterintuitive to describe this as a “course of conduct” because the aggregation and analysis are done quite quickly within the processor of the thermal camera, which makes it natural to describe the search as a single act: Creating a thermal image.  But if we zoom in, we find that what the Court deemed an unconstitutional invasion of privacy was ultimately the upshot of a series of “public” facts about ambient radiation levels, combined and analyzed in a particular way.  The thermal image is, in a rather literal sense, a mosaic.
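A toy Python sketch, with entirely invented numbers, may make the point vivid: any single ambient-radiation reading is an unremarkable “public” quantity, but assembling dozens of them into a grid—the literal mosaic—is what discloses the private fact about the home’s interior.

```python
# Toy illustration with invented numbers: individual readings are innocuous,
# but the assembled grid (the "mosaic") reveals a hot spot inside the home.
import numpy as np

rng = np.random.default_rng(0)
readings = 20.0 + rng.normal(0.0, 0.3, size=(8, 8))  # ambient exterior readings
readings[2:4, 5:7] += 3.0                             # heat leaking from lamps inside

one_reading = readings[0, 0]   # on its own, this number discloses essentially nothing
hot_pixels = np.argwhere(readings > readings.mean() + 2 * readings.std())
print(f"single reading: {one_reading:.1f}")
print(f"hot pixels (the inferred private fact): {hot_pixels.tolist()}")
```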

The same could be said about long-distance  spy microphones: Vibrating air is public; conversations are private. Or again, consider location tracking, which is unambiguously a “search” when it extends to private places: It might be that what is directly measured is only the “public” fact about the strength of a particular radio signal at a set of receiver sites; the “private” facts about location could be described as a mere inference, based on triangulation analysis (say), from the observable public facts.
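The same structure shows up in a minimal trilateration sketch, again with invented positions and an assumed path-loss model: each receiver records only a “public” signal-strength number, yet combining three of them recovers the “private” location.

```python
# Minimal trilateration sketch: invented receiver sites and an assumed path-loss model.
import numpy as np

receivers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # known sites, km
TX_POWER, PATH_LOSS_EXP = -30.0, 2.0                          # assumed model constants

true_location = np.array([7.0, 5.0])                     # the "private" fact
d_true = np.linalg.norm(receivers - true_location, axis=1)
rssi = TX_POWER - 10 * PATH_LOSS_EXP * np.log10(d_true)  # the "public" measurements

# Invert the path-loss model to get ranges, then solve the linearized system.
d = 10 ** ((TX_POWER - rssi) / (10 * PATH_LOSS_EXP))
A = 2 * (receivers[1:] - receivers[0])
b = (d[0] ** 2 - d[1:] ** 2) + np.sum(receivers[1:] ** 2, axis=1) - np.sum(receivers[0] ** 2)
inferred, *_ = np.linalg.lstsq(A, b, rcond=None)
print("inferred location (km):", inferred)  # recovers ~(7.0, 5.0) from public signals
```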

There’s also a scope problem. When, precisely, do individual instances of permissible monitoring become a search requiring judicial approval? That’s certainly a thorny question, but it arises just as urgently in the other type of hypothetical case alluded to in Knotts, involving “dragnet” surveillance of large numbers of individuals over time. Here, too, there’s an obvious component of duration: nobody imagines that taking a single photograph revealing the public locations of perhaps hundreds of people at a given instant constitutes a Fourth Amendment search. And just as there’s no precise number of grains of sand that constitutes a “heap,” there’s no obvious way to say exactly what number of people, observed for how long, are required to distinguish individualized tracking from “dragnet” surveillance. But if we anchor ourselves in the practical concerns motivating the adoption of the Fourth Amendment, it seems clear enough that an interpretation that detected no constitutional problem with continuous monitoring of every public movement of every citizen would mock its purpose. If we accept that much, a line has to be drawn somewhere. As I recall, come to think of it, Orin has himself proposed a procedural dichotomy between electronic searches that are “person-focused” and those that are “data-focused.” This approach has much to recommend it, but is likely to present very similar boundary-drawing problems.

Orin also suggests that the court improperly relies upon a “probabilistic” model of the Fourth Amendment here (looking to what expectations about monitoring are empirically reasonable) whereas the Court has traditionally relied on a “private facts” model to deal with cases involving new technologies (looking to which types of information it is reasonable to consider private by their nature). Without recapitulating the very insightful paper linked above, the boundaries between models in Orin’s highly useful schema do not strike me as quite so bright. The ruling in Kyllo, after all, turned in part on the fact that infrared imaging devices are not in “general public use,” suggesting that the identification of “private facts” itself has an empirical and probabilistic component.  The analyses aren’t really separate. What’s crucial to bear in mind is that there are always multiple layers of facts involved with even a relatively simple search: Facts about the strength of a particular radio signal, facts about a location in a public or private place at a particular instant, facts about Alice and Bob’s affair. In cases involving new technologies, the problem—though seldom stated explicitly—is often precisely which domain of facts to treat as the “target” of the search. The point of the expectations analysis in Maynard is precisely to establish that there is a domain of facts about macro-level behavioral patterns distinct from the unambiguously public facts about specific public movements at particular times, and that we have different attitudes about these domains.

Sorting all this out going forward is likely to be every bit as big a headache as Orin suggests. But if the Fourth Amendment has a point—if it enjoins us to preserve a particular balance between state power and individual autonomy—then as technology changes, its rules of application may need to get more complicated to track that purpose, as they did when the Court ruled that an admirably simple property rule was no longer an adequate criterion for identifying a “search.”  Otherwise we make Fourth Amendment law into a cargo cult, a set of rituals whose elegance of form is cold consolation for their abandonment of function.

Busting the Myth that Web Sites ‘Sell Your Data’

On TLF, Berin Szoka comes up just shy of ranting, but it’s a good rant against the myth that Web sites like Facebook sell or give your data to advertisers.

In targeted online advertising, the business model is generally to sell advertisers access to people based on their demographics. It is not to sell individuals’ personal and contact info. Doing the latter would undercut the advertising business model and the profitability of the web sites carrying the advertising.
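A purely hypothetical sketch of that arrangement may help (every name and field below is invented for illustration): the advertiser specifies a targeting segment, the site does the matching internally, and the user’s identity and contact information never leave the site.

```python
# Hypothetical sketch: the site matches ads to users internally; no profile
# fields are ever handed to the advertiser.
from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: str        # internal identifier, never shared with advertisers
    email: str          # contact info, never shared with advertisers
    age: int
    interests: set

@dataclass
class Campaign:
    advertiser: str
    ad_markup: str      # the creative the advertiser supplies
    min_age: int
    target_interests: set

def select_ad(user: UserProfile, campaigns: list[Campaign]) -> str | None:
    """Return the creative for the first matching campaign. The advertiser learns
    only that someone in its segment saw the ad, not who that person is."""
    for c in campaigns:
        if user.age >= c.min_age and user.interests & c.target_interests:
            return c.ad_markup
    return None

user = UserProfile("u123", "alice@example.com", 34, {"travel", "photography"})
campaigns = [Campaign("AcmeCameras", "<img src='acme_ad.png'>", 18, {"photography"})]
print(select_ad(user, campaigns))   # serves the ad; no personal data leaves the site
```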

I did some myth-busting of my own last year when the Wall Street Journal published erroneous information about a health-interest site called RealAge.com, which does not give or sell visitors’ data to drug companies.

Understanding how technologies and business models work is job one for crafting good public policies, but, as I noted yesterday, D.C. doesn’t get tech.

Nor Does Tech Get D.C. …

Politico has a pretty thorough article on D.C.’s thorough ignorance of things tech.

Take a 2008 hearing before the Senate Commerce Committee about privacy and online behavior-based advertising. The discussion fell apart when Sens. Tom Carper (D-Del.), Bill Nelson (D-Fla.), and others seemed not to understand the term “cookies.”

Cookies. That’s the (utterly rudimentary) technology that was an issue a decade ago. Washington, D.C. naturally overreacted, but luckily only harmed itself. The White House recently revamped the cookie policy for federal government web sites.

It’s worth noting Tech’s thorough misapprehension of Washington, D.C. as well. Judging by how they act, most tech executives have all the insight they could pick up from Schoolhouse Rock. It seems cool and helpful to come to Washington and give money, so they do, encouraging the bears to rip open their cars looking for peanut butter.