
April 29, 2016 4:12PM

WSJ Reports on Canadian Air Traffic Control

In today’s Wall Street Journal, Scott McCartney reports on the superior air traffic control (ATC) system north of the border. American aviation is suffering from a bureaucratic government-run ATC, while Canada’s privatized system is moving ahead with new technologies that reduce delays and congestion.

Showing leadership and boldness, House Transportation Committee chairman Bill Shuster managed to get reforms along Canadian lines passed out of his committee. Unfortunately, Senate Republicans have thus far been too timid to move ahead with restructuring. The flying public may have to wait until a reform-minded president can push an overhaul through Congress.

Here’s some of McCartney’s reporting:   

Flying over the U.S.-Canadian border is like time travel for pilots. Going north to south, you leave a modern air-traffic control system run by a company and enter one run by the government struggling to catch up.

The model is Nav Canada, the world’s second-largest air-traffic control agency, after the U.S. Canada handles a huge volume of traffic between the U.S. and both Asia and Europe. Airlines praise its advanced technology that results in shorter and smoother flights with less fuel burn.

In Canada, pilots and controllers send text messages back and forth, reducing errors from misunderstood radio transmissions. Requests for altitude changes are automatically checked for conflicts before they even pop up on controllers’ screens. Computers look 20 minutes ahead for any planes potentially getting too close to each other. Flights are monitored by a system more accurate than radar, allowing them to be safely spaced closer together to add capacity and reduce delays.

And when flights enter U.S. airspace, pilots switch back to the old way of doing things.

The key, Nav Canada says, is its nongovernmental structure. Technology, critical to efficient airspace use these days, gets developed faster than if a government agency were trying to do it, officials say. Critics say slow technology development has been the FAA’s Achilles’ heel.

… Another innovation adopted around the world is electronic flight strips—critical information about each flight that gets changed on touch screens and passed from one controller to another electronically. Nav Canada has used them for more than 13 years. Many U.S. air controllers still use paper printouts placed in plastic carriers about the size of a 6-inch ruler that controllers scribble on.

For more on ATC, see here.

April 29, 2016 4:10PM

You Ought to Have a Look: Our Energy Future, Science Regress, and a Greening Earth

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger.  While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic.  Here we post a few of the best in recent days, along with our color commentary.

---

We’ll jump right into this week by highlighting an appearance by Manhattan Institute senior fellow Mark Mills on The Federalist’s Radio Hour. During his time on the show, Mills explains how the foreseeable future is going to play out when it comes to global energy production and why he says that even if you were concerned about climate change, “there really isn’t anything you can do about it.” 

Mills is one of the leading thinkers and analysts on energy systems, energy markets, and energy policy, bringing often overlooked and deeply buried information to the forefront.

During his nearly hour-long radio segment, Mills discusses topics ranging from climate change and the world’s future energy mix to the role of technological advances and energy policy, as well as giving his opinions on both Bill Gates’ and Pope Francis’ takes on all of the above. It is an entertaining and informative interview.

As a taste, here’s a transcript of a small segment:

In the life we live, and the world we live in, we have to do two things, one is deal with reality [current understanding of physics] and the moral consequences of that, and we can have aspirations. If the aspiration, which Bill Gates’ is, is to use fewer hydrocarbons, we need to support basic research.

We don’t subsidize stuff. The reason we don't subsidize stuff and make energy more expensive, is because, for me, it is morally bankrupt to increase the cost of energy for most people in most of the world. Energy should be cheaper, not more expensive. We use energy to make our lives better. We use energy to make our lives safer. We use energy to make our lives more enjoyable. Everything that we care about in the world, safety, convenience, freedom, costs energy. [emphasis added]

Mark Mills’ sentiment closely matches that which Alex Epstein explained to Congress a few weeks back and that we highlighted in our last edition. 

If you can find any time to listen to a little or a lot of Mills’ full interview, you’ll probably find that what he says makes a lot of sense. Funny, though, how much of it seems to have escaped some folks.

Next up is an article in the current issue of First Things authored by William A. Wilson titled “Scientific Regress.” If you think the title is provocative, you ought to have a look at the rest of the piece, beginning with the first line: “The problem with science is that so much of it simply isn’t.” Much of what is published as science, Wilson argues, instead reflects the results of a gamed system driven by preconceived ideas often emanating from the science/political establishment.

In describing the current sad state of science, Wilson reflects our concerns here at the Center for the Study of Science, that external factors (e.g., influence, power, money) act to negatively shape the course of science—a negative influence that is hard to shake.

Wilson writes:

[O]nce an entire field has been created—with careers, funding, appointments, and prestige all premised upon an experimental result which was utterly false due either to fraud or to plain bad luck—pointing this fact out is not likely to be very popular. Peer review switches from merely useless to actively harmful. It may be ineffective at keeping papers with analytic or methodological flaws from being published, but it can be deadly effective at suppressing criticism of a dominant research paradigm. Even if a critic is able to get his work published, pointing out that the house you’ve built together is situated over a chasm will not endear him to his colleagues or, more importantly, to his mentors and patrons.

Hear! Hear!

Wilson leaves us with this warning:

At its best, science is a human enterprise with a superhuman aim: the discovery of regularities in the order of nature, and the discerning of the consequences of those regularities. We’ve seen example after example of how the human element of this enterprise harms and damages its progress, through incompetence, fraud, selfishness, prejudice, or the simple combination of an honest oversight or slip with plain bad luck. These failings need not hobble the scientific enterprise broadly conceived, but only if scientists are hyper-aware of and endlessly vigilant about the errors of their colleagues…and of themselves. When cultural trends attempt to render science a sort of religion-less clericalism, scientists are apt to forget that they are made of the same crooked timber as the rest of humanity and will necessarily imperil the work that they do. The greatest friends of the Cult of Science are the worst enemies of science’s actual practice.

The whole piece, including the examples included therein, is an eye-opener. You ought to have a look!

And finally, we’ll leave you with a Friday Funny coming from Cartoons by Josh (a frequent commenter on the often ridiculous goings-on surrounding global warming). In this strip, Josh neatly sums up the reaction when climate gloomsayers are confronted with good news—in this case, new results showing that carbon dioxide emissions from human activities, such as the burning of fossil fuels to produce energy, are leading to a greener, more productive earth (a story that we covered in our piece “A Greening (in a Good Way) Earth”): 

[Cartoon by Josh: earthspeaks.png]

April 29, 2016 3:20PM

Feinstein-Burr: The Bill That Bans Your Browser

Last week, I criticized the confused rhetorical framework that the Feinstein-Burr encryption backdoor proposal tries to impose on the ongoing Crypto Wars 2.0 debate.  In this post, I want to try to explain why technical experts have so overwhelmingly and vehemently condemned the substance of the proposal.

The first thing to note is how extraordinarily sweeping the bill is in scope.  Its mandate applies to:

device manufacturers, software manufacturers, electronic communication services, remote communication services, providers of wire or electronic communication services, providers of remote communication services, or any person who provides a product or method to facilitate a communication or to process or store data.  [emphasis added]

Any of these “covered entities,” upon receipt of a court order, must be able to either provide the government with the unencrypted “plaintext” of any data encrypted by their product or service, or provide “technical assistance” sufficient to allow the government to retrieve that plaintext or otherwise accomplish the purpose of the court order. Penalties aren’t specified, leaving judges with the implicit discretion to slap non-compliant providers and developers with contempt of court. Moreover, “distributors of software licenses”—app stores and other software repositories—are obligated to ensure that all the software they host is capable of complying with such orders.

Some types of encrypted communications services either already comply or could comply in some reasonably obvious way with these requirements. Others, not so much. Because of the incredible breadth of the proposal, it’s not possible to detail in a blog post all the varied challenges such a mandate would present to diverse types of software. But let’s begin by considering one type of software everyone reading this post uses daily: Your Web browser. To the best of my knowledge, every modern Web browser is non-compliant with Feinstein-Burr, and would have to be pulled from every app store in U.S. jurisdiction if the bill were to become law. Let’s explore why.

While ordinary users probably don't think of their Web browser as a piece of encryption software, odds are you rely on your browser to engage in encrypted communications nearly every day.  Whenever you connect to a Web-based e-mail provider like Gmail, or log in to an online banking account, or provide your credit card information to an e-commerce site, you're opening a securely encrypted HTTPS session using a global standard protocol known as Transport Layer Security or TLS (often still referred to as SSL or Secure Sockets Layer, an earlier version of the protocol).  Even sites that don't traffic in obviously sensitive data are increasingly adopting HTTPS as a default.  Any time you see a little padlock icon next to the address bar on your browser, you are using HTTPS encryption.
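If you want to see this in action, here is a minimal sketch, using only Python’s standard library, that opens one of those TLS-protected sessions and prints what was negotiated (the hostname is just a stand-in for any HTTPS-enabled site):

```python
# Minimal sketch: open an HTTPS-style TLS session and inspect the result.
# Uses only the Python standard library.
import socket
import ssl

hostname = "www.example.com"  # stand-in for any HTTPS-enabled site

context = ssl.create_default_context()  # verifies the server's certificate

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())  # e.g., "TLSv1.2"
        print(tls.cipher())   # the negotiated cipher suite
```

Every session opened this way ends with the client and the server holding keys that no one else, including the software’s author, ever sees.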

This is absolutely essential for two big reasons.  First, people routinely connect to the Web via WiFi access points they don't control—such as a hotel, coffee shop, or airport.  Without encryption, an unscrupulous employee of one of these businesses—or a hacker, or anyone who sets up a phony "LocalCafeWiFi" hotspot to snare the unwary—could easily vacuum up people's sensitive data.  Second, the Internet is a packet switched network that operates very differently from traditional centralized phone networks.  That means even when you're connecting to the Internet from a trusted access point, like your home or office, your data is passed like a relay race baton across perhaps dozens of different networks you don't control between your computer and the destination.  (You can use a program called Traceroute to see all the intermediary points your data passes through on the way from your computer to any given Web site.)  Without encryption, you'd have to trust this plethora of unknown intermediaries, foreign and domestic, not only to refrain from snarfing up your sensitive data, but to secure their systems against hackers looking to snarf up your data.  Which, needless to say, is impossible: You'd be a fool to undertake a commercial transaction or send a private message under those circumstances.  So it's no exaggeration to say that the Internet as we now know it, with the spectacular variety of services it supports, simply wouldn't be possible without the security provided by cryptographic protocols like TLS.

So how does TLS work?  If you're a masochist, you can wade through the technical guidelines published by the National Institute of Standards & Technology, but here's a somewhat oversimplified version.  Bear with me—it's necessary to wade into the weeds a bit here to understand exactly why a Feinstein-Burr style mandate is so untenable.

[Figure: Diffie-Hellman key exchange, illustrated]

When you open a secure HTTPS session with a Web server—which may, of course, be abroad and so beyond the jurisdiction of U.S. courts—your Web browser agrees with the server on a specific set of cryptographic algorithms that both support, and then engages in a "handshake" process to authenticate the server's identity and negotiate a shared set of cryptographic keys for the session.  One of the most common handshake methods is a bit of mathematical sorcery called Diffie-Hellman key exchange. This allows your computer and the Web server to agree on a shared secret that even an eavesdropper monitoring the entire handshake process would be unable to determine, which is then used to derive the ephemeral cryptographic session keys that encrypt the subsequent communications between the machines.  (Click the link above for the gory mathematical details, or see the image on the right for a liberal-arts-major-friendly allegorical version.)
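For the curious, here is a toy version of the exchange in Python. The numbers are comically small stand-ins for the enormous values real implementations use; this illustrates the math, not the actual TLS handshake:

```python
# Toy Diffie-Hellman key exchange, for illustration only.
# Real TLS uses ~2048-bit groups or elliptic curves, not 32-bit numbers.
import secrets

p = 4294967291  # a small public prime modulus (2**32 - 5)
g = 5           # a public generator

# Each side picks an ephemeral secret and publishes g**secret mod p.
a = secrets.randbelow(p - 2) + 1  # the browser's secret, never transmitted
b = secrets.randbelow(p - 2) + 1  # the server's secret, never transmitted
A = pow(g, a, p)  # sent in the clear
B = pow(g, b, p)  # sent in the clear

# Each side combines its own secret with the other's public value.
# An eavesdropper who sees p, g, A, and B cannot feasibly compute this.
shared_browser = pow(B, a, p)
shared_server = pow(A, b, p)
assert shared_browser == shared_server  # the same session secret on both ends
```

The crucial point is that the secrets a and b never leave each machine and are discarded when the session ends, which is also the root of the “forward secrecy” property discussed next.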

A few features of this process are worth teasing out.  One is that a properly configured implementation of TLS gives you a property known as "forward secrecy": Because a unique, unpredictable, and ephemeral key is generated for each session, old communications remain securely encrypted even if a server's long-term cryptographic keys—which remain the same over longer periods for purposes like verifying the server's identity—are later compromised.  In economic terms, that means the "return on investment" for an attacker who manages to worm their way into a server is limited: They might be able to read the communications that occur while they remain in the system undiscovered, but they don't then get to retroactively unlock any historical communications they've previously vacuumed up.  This both mitigates the downside consequences of a successful attack and, perhaps more critically, makes it less rational for sophisticated attackers to expend extraordinary resources on compromising any given set of keys.  A recent paper by a who's-who of security experts, incidentally, pointed to forward secrecy as a feature that is both increasingly important to make standard in an escalating threat environment and particularly difficult to square with a government backdoor mandate.

Some of the reasons for that become clear when we consider another rather obvious feature of how TLS functions: The developer of your browser has nothing to do with the process after the software is released.  When I log into an e-commerce site using Firefox, Mozilla plays no role in the transaction.  The seed numbers used to negotiate session keys for each TLS-encrypted communication are generated randomly on my computer, and the traffic between my computer and the server isn't routed through any network or system controlled by Mozilla.  A user anywhere in the world with a copy of Firefox installed can use it to make secure connections without ever having anything further to do with Mozilla.  And TLS isn't designed this way because of Edward Snowden or as some insidious effort to make it impossible to execute search warrants.  It's designed that way because a lot of very bright people determined it was the best way to do secure communications between enormous numbers of arbitrary endpoints on a global packet-switched network.

Now, there are any number of ways a government agency might be able to get the contents of a targeted user's TLS-encrypted HTTPS communications.  The easiest is simply to demand them from the server side—but that will only work in cases where the server is subject to U.S. jurisdiction (or that of an ally willing to pass on the data), and the server may not itself log everything the government wants.  Advanced intelligence agencies may be able to mount various types of "active" attacks that involve interposing themselves into the communication in real time, but developers are constantly striving to make this more difficult, to prevent criminal hackers from attempting the same trick—and while NSA may be able to manage this sort of thing, they're understandably reluctant to share their methods with local law enforcement agencies.  In any event, those "active" attacks are no help if you're trying to decrypt intercepted HTTPS traffic after the fact.

Now an obvious, if inconvenient, question arises. Suppose a law enforcement agency comes to Mozilla or Apple or Google and says: We intercepted a bunch of traffic from a suspect who uses your browser, but it turns out a lot of it is encrypted, and we want you to decipher it for us.  What happens?  Well, as ought to be clear from the description above, they simply can't—nor can any other modern browser developer.  They're not party to the communications, and they don't have the cryptographic keys the browser generated for any session.  Which means that under Feinstein-Burr, no modern Web browsers can be hosted by an app store (or other distributor of software licenses), at least in their current forms.  

How, then, should developers redesign all these Web browsers to comply with the law?  The text of Feinstein-Burr doesn't say: Nerd harder, clever nerds! Love will find a way! You'll figure something out!  But as soon as you start thinking about classes of possible "solutions," it becomes clear there are pretty catastrophic problems with all of them.

One approach would be to make the key material generated by the browser not-so-random: Have the session keys generated by a process that looks random, but is predictable given some piece of secret information known to the developer.  The problem with this ought to be obvious:  The developer now has to safeguard what is effectively a skeleton key to every "secure" Internet communication carried out via their software.  Because the value of such a master key would be truly staggering—effectively the sum of the value of all the interceptable information transmitted via that software—every criminal organization and intelligence agency on the planet is going to have an enormous incentive to expend whatever resources are needed to either steal or independently derive that information.
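To make that concrete, here is a deliberately simplified, purely hypothetical sketch of what “looks random but is predictable to the developer” would amount to; every name and detail here is invented for illustration:

```python
# Hypothetical sketch of a "predictable randomness" backdoor.
# No real browser works this way; this is what the approach would amount to.
import hashlib

DEVELOPER_MASTER_SECRET = b"known-only-to-the-vendor"  # the skeleton key

def backdoored_session_key(session_id: bytes) -> bytes:
    """Output looks random to outside observers, but anyone holding the
    master secret can re-derive the key for ANY past or future session."""
    return hashlib.sha256(DEVELOPER_MASTER_SECRET + session_id).digest()

# The vendor (or whoever steals the master secret) recovers the same key
# the browser used, given only the public session identifier:
key = backdoored_session_key(b"session-42")
```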

Another approach might be to redesign the browser so that the developer (or some other designated entity) becomes an effective intermediary to every HTTPS session, keeping a repository of billions of keys just in case law enforcement comes knocking.  This would, needless to say, be massively inefficient and cumbersome: The unique flexibility and resilience of the Internet comes precisely from the fact that it doesn't depend on these sorts of centralized bottlenecks, which means my Chrome browser doesn't suddenly become useless if Google's servers go down, or become unreachable from my location for any reason.  I don't have to go through Mountain View, California, just to open a secure connection between my home in DC and my bank in Georgia.  And, of course, it has the same problem as the previous approach: It creates a single point of catastrophic failure.  An attacker who breaches the master key repository—or is able to successfully impersonate the repository—has hit the ultimate jackpot, and will invest whatever resources are necessary to do so.
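Again purely as illustration, the repository approach amounts to something like the following hypothetical sketch, with a Python dict standing in for what would in reality be a massive, heavily defended key database:

```python
# Hypothetical sketch of a centralized key-escrow repository.
# One breach of key_repository exposes every session ever recorded.
import os

key_repository = {}  # billions of entries in practice; a single point of failure

def register_session(session_id: bytes) -> bytes:
    """Generate a session key, but retain a copy for later access."""
    key = os.urandom(32)
    key_repository[session_id] = key
    return key

key = register_session(b"session-42")
# A court order, or an attacker who breaches (or impersonates) the
# repository, can now recover the key for any past session:
assert key_repository[b"session-42"] == key
```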

Yet another option—perhaps the simplest—is to have the browser encrypt each session key with the same public key, either the developer's or that of some government agency, and transmit it along with the communication.  Then a law enforcement agency wanting to decrypt an intercepted HTTPS session goes to the developer or government entity whose public key was used to encrypt the session key, asks them to use their corresponding private key to unlock the session key, and uses that to decipher the actual communication.  You can already guess the problem with this, right?  That private key becomes a skeleton key that has to be kept secure against attackers, sacrificing the security advantages of forward secrecy. It has the further problem of requiring a lot of anti-circumvention bells and whistles to prevent the user from trivially re-securing their session, since anyone who doesn't want the new eavesdropping "feature" just has to install a plugin or some third-party application that catches the packet with the encrypted session key before it leaves the user's machine.
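Here is a sketch of that third approach, using the third-party Python cryptography package (my choice of library and names; the bill itself specifies no mechanism). Note how the escrow agent’s private key becomes the skeleton key:

```python
# Hypothetical sketch of wrapping session keys under an escrow public key.
# Requires the third-party package: pip install cryptography
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The escrow agent's long-term key pair; the private half must now be
# guarded forever against every attacker on the planet.
escrow_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_public = escrow_private.public_key()

session_key = os.urandom(32)  # the ephemeral key protecting one session

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The browser would transmit this alongside the encrypted traffic,
# sacrificing forward secrecy for every session it protects.
wrapped = escrow_public.encrypt(session_key, oaep)

# Later, under court order (or after a theft of the private key),
# the session key, and thus the session, is recoverable:
assert escrow_private.decrypt(wrapped, oaep) == session_key
```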

There's a further wrinkle: All complex software has vulnerabilities—like the Heartbleed bug in the widely-used OpenSSL library, which that made headlines last year as it exposing millions of users to the risk of having their secure communications compromised and spurred a frantic global patching effort.  Modern software is never really finished: New vulnerabilities are routinely discovered, need to be patched, updates must be widely disseminated and installed, and then the cycle starts all over again.  That's hard enough as it is—an exercise in dancing madly on the lip of a volcano, as John Oliver memorably put it.  Now there's the added problem of ensuring that a new update to fix an unintentional vulnerability doesn't simultaneously either break the intentional vulnerability introduced to provide law enforcement access, or interact unpredictably with any of the myriad approaches to guaranteeing government access in a way that creates a new vulnerability.  People who don't actually have to do this for a living are awfully confident this must be possible if the nerds only nerd hard enough.  The actual nerds mostly seem to agree that it isn't.

So far so awful. But now consider what this implies for the broader ecosystem—which doesn't just consist of huge corporations like Apple or Google, but individual coders, small startups, and the open source communities that collaboratively produce software like Firefox.  In principle, a lone computer science student, or a small team contributing to an open source project, can today write their own Web browser (or any other app implementing TLS) using open code libraries like OpenSSL, and release it online.   We owe much of the spectacular innovation we've seen over the past few decades to the fact that software can be produced this way: You don't even need a revenue model or a business plan or a corporate charter, just the knowledge to write code and the motivation to put it to use.  Maybe the software makes you rich and launches the next Silicon Valley behemoth, or maybe the authors release it and forget about it, or move on and pass the torch to another generation of open source contributors.  

If Feinstein-Burr becomes law, say goodbye to all that — at least when it comes to software that supports encryption of files or communications. Those existing open-code libraries don’t support government backdoors, so the developer will have to customize the code to support their chosen government access mechanism (with the attendant risk of introducing vulnerabilities in the process) and then be prepared to secure the master key material for the effective life of the software—that or run a server farm to act as a key repository and secure that. As a practical matter, the (lawful) production of secure software in the United States becomes the exclusive domain of corporate entities large and rich enough to support (and at least attempt to secure) some kind of key-escrow and law enforcement compliance infrastructure.

The probable upshot of this proposal, then, isn't just that we all become less secure as big companies choose from a menu of terrible options that will enable them to comply with decryption orders—though it's important to keep pointing that out, since legislators seem somehow convinced the experts are all lying about this.  It's that smaller developers and open source projects will look at the legal compliance burdens of incorporating encryption in their software and decide it isn't worth the trouble.  Implementing crypto correctly is already hard enough; add the burden of designing and maintaining a Feinstein-Burr compliance strategy, and many of them will simply leave encryption out.

In an environment of dire and growing cybersecurity threats, in other words, legislators seem determined to dissuade software developers from adopting better security practices.  That would be a catastrophically bad idea, and urging developers to "nerd harder" doesn't make it any less irresponsible.

April 29, 2016 2:47PM

Napoleon and Trump, Advancing on the Capital

It is said, perhaps not reliably, that the following headlines appeared in a Paris newspaper, perhaps Le Moniteur Universel, in 1815 as Napoleon escaped from exile on Elba and advanced through France:

March 9

THE ANTHROPOPHAGUS HAS QUITTED HIS DEN

March 10

THE CORSICAN OGRE HAS LANDED AT CAPE JUAN

March 11

THE TIGER HAS ARRIVED AT GAP

March 12

THE MONSTER SLEPT AT GRENOBLE

March 13

THE TYRANT HAS PASSED THROUGH LYONS

March 14

THE USURPER IS DIRECTING HIS STEPS TOWARDS DIJON

March 18

BONAPARTE IS ONLY SIXTY LEAGUES FROM THE CAPITAL

He has been fortunate enough to escape his pursuers

March 19

BONAPARTE IS ADVANCING WITH RAPID STEPS, BUT HE WILL NEVER ENTER PARIS

March 20

NAPOLEON WILL, TOMORROW, BE UNDER OUR RAMPARTS

March 21

THE EMPEROR IS AT FONTAINEBLEAU

March 22

HIS IMPERIAL AND ROYAL MAJESTY arrived yesterday evening at the Tuileries, amid the joyful acclamation of his devoted and faithful subjects

And I think about that story whenever I see articles like this one in this morning's Washington Post:



GOP elites are now resigned to Donald Trump as their nominee

Philip Rucker writes:

An aura of inevitability is now forming around the controversial mogul. Trump smothered his opponents in six straight primaries in the Northeast and vacuumed up more delegates than even the most generous predictions foresaw. He is gaining high-profile endorsements by the day — a legendary Indiana basketball coach Wednesday, two House committee chairmen Thursday.

Which is not exactly the rush of support that any normal frontrunner would be getting by this point. But the article is full of Republican leaders saying things like “People are realizing that he’s the likely nominee,” and "More and more people hope he wins that nomination on the first ballot because they do not want to see a convention that explodes into total chaos." Not exactly profiles in courage, these leaders. As Dan McLaughlin tweeted last night:

20 years from now - maybe 2 years from now - everyone in the GOP will want to say they were against Trump now.

But the stories are everywhere today: Republicans coming to accept their conquest by Trump. For a brief explanation of why they should not, I recommend Jay Cost's tweets as captured on Storify and my own contribution to a National Review symposium in January:

From a libertarian point of view—and I think serious conservatives and liberals would share this view—Trump’s greatest offenses against American tradition and our founding principles are his nativism and his promise of one-man rule.

Not since George Wallace has there been a presidential candidate who made racial and religious scapegoating so central to his campaign. Trump launched his campaign talking about Mexican rapists and has gone on to rant about mass deportation, bans on Muslim immigration, shutting down mosques, and building a wall around America. America is an exceptional nation in large part because we’ve aspired to rise above such prejudices and guarantee life, liberty, and the pursuit of happiness to everyone. Equally troubling is his idea of the presidency—his promise that he’s the guy, the man on a white horse, who can ride into Washington, fire the stupid people, hire the best people, and fix everything. He doesn’t talk about policy or working with Congress. He’s effectively vowing to be an American Mussolini, concentrating power in the Trump White House and governing by fiat. It’s a vision to make the last 16 years of executive abuse of power seem modest.

This is no brief for any other current presidential candidate. The major-party candidates seem as tragically un-libertarian to me as any group of candidates ever. But Trump seems dangerously uninformed, unmoored, erratic, threatening, and megalomaniacal in a way that transcends mere ideology.

Republicans like to praise the "greatest generation." Nobody's ever going to call the Republicans who rolled over for Donald Trump the greatest generation. Nor do they seem to be emulating their hero, Winston Churchill, who famously said:

Let us therefore brace ourselves to our duties, and so bear ourselves, that if the British Empire and its Commonwealth last for a thousand years, men will still say, This was their finest hour.

As Dan McLaughlin suggests, Republicans should be asking themselves, What will I say when my son asks, What did you do when Donald Trump knocked on the Republican party's door, Daddy?

April 29, 2016 2:41PM

Projecting the Impacts of Rising CO2 on Future Crop Yields in Germany

Noting that the influence of atmospheric CO2 on crop growth is “still a matter of debate,” and that “to date, no comprehensive approach exists that would represent all related aspects and interactions [of elevated CO2 and climate change on crop yields] within a single modeling environment,” Degener (2015) set out to accomplish just that by estimating the influence of elevated CO2 on the biomass yields of ten different crops in the area of Niedersachsen, Germany over the course of the 21st century.

To accomplish this lofty objective the German researcher combined soil and projected future climate data (temperature and precipitation) into the BIOSTAR crop model and examined the annual difference in yield outputs for each of the ten crops (winter wheat, barley, rye, triticale, three maize varieties, sunflower, sorghum and spring wheat) under a constant CO2 regime of 390 ppm and a second scenario in which atmospheric CO2 increased annually through the year 2100 according to the IPCC’s SRES A1B scenario. Degener then calculated the difference between the two model runs so as to estimate the quantitative influence of elevated CO2 on projected future crop yields. And what did that difference reveal?

As shown in the figure below, Degener reports that “rising [CO2] concentrations will play a central role in keeping future yields of all crops above or around today’s level.” Such a central, overall finding is significant considering Degener notes that future temperatures and precipitation within the model both changed in a way that was “detrimental to the growth of crops” (higher temperatures and less precipitation). Yet despite an increasingly hostile growing environment, according to the German researcher, not only was the “negative climatic effect balanced out, it [was] reversed by a rise in CO2” (emphasis added), leading to yield increases on the order of 25 to 60 percent.

Figure 1. Biomass yield difference (percent change) between model runs of constant and changing atmospheric CO2 concentration. A value of +20% indicates biomass yields are 20% higher when modeled using increasing CO2 values with time (according to the SRES A1B scenario of the IPCC) instead of a fixed 390 ppm for the entire run.


The results of this model-based study fall in line with the previous work of Idso (2013), who calculated similar CO2-induced benefits to global crop production by mid-century based on real-world experimental data. Both studies reveal that policy prescriptions designed to limit the upward trajectory of atmospheric CO2 concentrations can have very real, and potentially serious, repercussions for global food security.

 

References

Degener, J.F. 2015. Atmospheric CO2 fertilization effects on biomass yields of 10 crops in northern Germany. Frontiers in Environmental Science 3: 48, doi: 10.3389/fenvs.2015.00048.

Idso, C.D. 2013. The Positive Externalities of Carbon Dioxide: Estimating the Monetary Benefits of Rising Atmospheric CO2 Concentrations on Global Food Production. Center for the Study of Carbon Dioxide and Global Change, Tempe, AZ.

April 29, 2016 1:37PM

The Challenges of Restraint in U.S. Grand Strategy

Seeking to calm fears of a rising China’s new assertiveness in the most recent issue of Foreign Affairs, professors Stephen G. Brooks and William G. Wohlforth argue that the United States has less to worry about than most believe. China is extremely unlikely to become a superpower peer anytime in the next few decades. The real test for the United States, they say, will be adapting to a “world of lasting U.S. military preeminence and declining U.S. economic dominance.”

As proponents of the “deep engagement” camp in the roiling debate over American grand strategy, Brooks and Wohlforth have long opposed arguments for a more restrained foreign policy. It is surprising, then, that a long section of their essay is devoted to the importance of exercising restraint, as is their conclusion that the “chief threat to the world’s preeminent power arguably lies within.”

Brooks and Wohlforth discuss four different challenges to exercising the appropriate restraint in the years ahead:

  1. The temptation to bully or exploit allies.
  2. Overreacting when other states such as China exercise their growing clout on the international stage.
  3. Intervening in places where its core national interests are not at stake.
  4. Adopting overly aggressive military postures in the face of challenges to its interests around the world.

Each of these challenges is real and important. But rather than problems that the United States will begin facing over the next several decades, these issues are exactly the ones that have plagued the United States since the end of the Cold War. All one needs to do is read the daily news for plentiful examples of how the United States already struggles to cope with what Christopher Preble has called the “power problem.”

In truth, the fact that Brooks and Wohlforth feel obligated to discuss the need for restraint at such length reinforces two critical arguments that we at Cato have been making for a long time.

First, the United States’ strategic situation is so secure thanks to geography and its nuclear triad that even China’s incredible economic rise and increasing military assertiveness can do little to threaten U.S. national security. In fact, contrary to the news headlines, the United States faces a less dangerous world than at any time in memory. Other “threats” to American security, like Russia, Iran, or North Korea, are primarily threats to those nations’ neighbors, not the United States. Engaging those countries simply risks escalating conflicts that add nothing to American national security. Terrorism, while a real threat, is a threat to American lives and property, not to national security.

Second, U.S. preeminence creates temptations to act in ways that are both unnecessary for national security and counterproductive. The ability to project massive amounts of military power led the United States, in the wake of 9/11, to spend trillions of dollars and thousands of lives chasing imaginary threats in the Middle East. Interventions in Afghanistan, Iraq, and Libya have destroyed societies and unleashed chaos. Despite these warnings, presidential candidates continue to call for indiscriminate exercise of American military power abroad in a vain effort to bring the world under control.

Brooks and Wohlforth’s warning about the challenges of restraint is timely. China’s rise, Russia’s saber rattling, the scourge of Islamist terrorism, and unrest and upheaval in the Middle East are just a few of the temptations calling out to American interventionists today. New temptations to shape and control the world will follow as surely as the sun rises. Now would not be too soon to organize plans for restrained responses to current and future concerns.

Unfortunately, the prospects for restraint look very poor. In the absence of any serious external checks on its behavior, the United States must rely on internal sources of restraint. Sadly, the United States lacks the internal checks and balances that would help prevent foreign policy adventurism. Waging unending war is unthinkably expensive, but the American economy is large enough to sustain foolish foreign policies. The American public is tired of war after more than a decade of making a mess in the Middle East, but neither polls nor elections provide a sufficient bulwark against the elite consensus. Even though polls show that the public has little appetite for international activism, both the Republican and Democratic foreign policy establishments remain deeply committed to interventionist strategies of various flavors. And even when Congress does raise objections to presidential maneuvers, it matters little. Congress long ago ceded most of its meaningful authority on foreign policy to the executive branch. Moreover, the president’s advantages with respect to information and the news media make winning arguments extremely difficult for the opposition.

Thus the challenges Brooks and Wohlforth identify will eventually be a list of the failures of American foreign policy over the next several decades. As has always been the case, these failures will hurt other nations and peoples more than they will hurt the United States, which will mostly spend money that its citizens could have used for more productive purposes. It is sobering nonetheless to think how much better everyone would be if the United States could manage to exercise greater restraint.

April 29, 2016 11:49AM

When Should Courts Defer to White-Collar Prosecution Settlements?

Deferred prosecution agreements and their close relatives, non-prosecution agreements (DPAs/NPAs), have become a major tool of white-collar prosecution in recent years. Typically, a business defendant, in exchange for escape from the costs and perils of trial, agrees to some combination of cash payment, non-monetary steps such as a shakeup of its board or manager training, and submission to future oversight by DoJ or other monitors. Not unlike plea bargains in more conventional criminal prosecution, these deals dispense with the high cost of a trial; they also dispense with the need for the government to prove its allegations in the first place. DPAs may also pledge a defendant to future behavior that a court would never have ordered, or conversely fail to include remedies that a court would probably have ordered. And they may be drawn up with the aim of shielding from harm — or, in some other cases, undermining — the interests of third parties, such as customers, employees, or business associates of the targeted defendant, or foreign governments.

So there was a flurry of interest last year when federal district judge Richard Leon in Washington, D.C., declined to approve a waiver, necessary under the Speedy Trial Act, for a DPA settling charges that Fokker Services, a Dutch aerospace company, sold U.S.-origin aircraft systems to foreign governments on the U.S. sanctions list, including Iran, Sudan, and Burma. While acknowledging that under principles of prosecutorial discretion the Department of Justice did not have to charge Fokker at all, Judge Leon said that, given that it had, the judiciary could appropriately scrutinize whether the penalties were too low.

Now a three-judge panel of the D.C. Circuit has unanimously overruled Judge Leon. It pointed out that under well-settled law, charging decisions are entrusted to the DoJ or other executive branch prosecutors, not the judiciary, and that judges may not intervene to insist that additional or more stringent charges be filed. In the appeals panel’s view, that is what the pattern in this case amounted to.

So far so good, you might think. But the language of the appellate ruling in places might be read to suggest that courts should simply defer to the Justice Department’s judgment and green-light the DPAs it may negotiate, period. And that would be disturbing, since over-lenience is only one of the possible problems with these devices. Noting the rule-of-law concerns that scholars have voiced about DPAs, Michael Greve writes that the new Fokker Services decision “in sharp contrast, oozes with ‘trust your friendly prosecutor’ language” and speaks of dispensing with “seeking a conviction that the prosecution may believe would be difficult to obtain or would have undesirable collateral consequences.” Greve adds: “Inquiring minds might want to know whether the conviction would be ‘difficult to obtain’ for practical reasons — or because the charges are preposterous and brought for reasons bordering on extortion. …No judicial scrutiny means more than boundless prosecutorial discretion. It means mobilizing the courts to create a due process façade for highly suspect bargains.” Let's hope the ruling isn't read that way.