Can regulators carry out two separate and sometimes conflicting missions at the same time? More importantly, should politicians want them to do both, or should they leave bureaucrats dedicated to a single mission? Scholars have written tomes about the causes and consequences of regulatory failure, but few have taken a rigorous quantitative approach to explaining how organizational design and political pressure can make or break regulatory performance.

In Structured to Fail, George Washington University public policy professor Christopher Carrigan surveys three well-known regulatory failures: the Minerals Management Service’s (MMS) role in the 2010 Deepwater Horizon explosion and oil spill, the Federal Reserve’s dual mandate to fight inflation and unemployment, and Japan’s Nuclear and Industrial Safety Agency (NISA), which was tasked with both promoting and overseeing that country’s nuclear power industry.

All three of those agencies had multiple and sometimes conflicting duties, a condition referred to as “goal ambiguity.” How could the Federal Reserve promote economic growth, limit inflation, and regulate financial institutions during the Great Recession? Many would argue it failed in at least two of those duties. How could the MMS regulate safety, promote energy development, and collect revenue effectively? After the Deepwater Horizon spill, politicians who had only recently praised the agency decided to split it into three, concluding that its distinct and conflicting missions contributed to the worst oil spill in U.S. history. Similar criticism was leveled against NISA following the 2011 Fukushima disaster.

Goal ambiguity already has an expansive literature. Carrigan adds to it a deep statistical understanding of what drives successful agencies and the realization that goal ambiguity alone does not explain regulatory failure.

Quantitative approach / There is a general consensus that “multipurpose” agencies perform worse and produce inferior outcomes compared to agencies that solely regulate or simply do not regulate at all. In the words of former health and human services secretary Donna Shalala, “If you try to do everything, you’ll accomplish nothing.”

This might be an apt aphorism for government, but proving it beyond the occasional anecdote is difficult. Thankfully, a legacy of the George W. Bush administration and Rob Portman’s tenure as director of the Office of Management and Budget was the creation of a government-wide rating tool designed to measure agency performance. The Program Assessment Rating Tool (PART) assessed the performance of nearly every federal program from 2002 to 2008.

To use the PART data, Carrigan laboriously matched each federal program to its agency and, using descriptive statistics, statistical tests, and regression analyses, examined the performance of multipurpose regulators, more traditional regulators, and agencies not responsible for regulation. Some critics might argue that a heavy-handed regulator that does an efficient job of destroying an industry, raising prices for consumers, or protecting industry incumbents at the expense of competition could still perform well on PART scores. That may be true, but it is not Carrigan’s aim to judge the merit of regulatory output or the stringency of various rules. Instead, he is interested in the relative performance of agencies across the federal government, even if those agencies’ activities make devotees of limited government squeamish.

Skeptics might note that PART scores are hardly perfect. After all, it’s the executive branch rating the performance of its own regulators and programs. Confirmation bias and conflicts of interest abound in this model. Carrigan concedes the point but stresses that the PART scores show no obvious biases and were computed for almost all government programs, making them nearly universal.

What’s amazing about the book is the sheer amount of information Carrigan compiled: a dataset of 144 federal agencies and 1,062 programs across six years. He then cleverly produced his own rating system: an average PART score from 0 to 100 for each agency. As previewed, multipurpose regulators fared worse than other regulators and nonregulators. On average, they scored 32% lower than other regulators and 17% lower than nonregulators. But why? Is goal ambiguity the sole cause? Is it simply the fault of politicians for designing agencies with conflicting mandates?
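To make that comparison concrete, here is a minimal sketch, in Python, of how program-level PART scores could be averaged into a 0-to-100 agency rating and then compared across agency types. The table, column names, and scores below are hypothetical; the book does not publish Carrigan’s code or data layout.

```python
import pandas as pd

# Hypothetical program-level data: one row per program, with the agency it
# belongs to, the agency's type, and the program's PART score (0-100).
programs = pd.DataFrame({
    "agency":      ["MMS", "MMS", "AgencyA", "AgencyB", "AgencyB"],
    "agency_type": ["multipurpose", "multipurpose", "regulator",
                    "nonregulator", "nonregulator"],
    "part_score":  [55.0, 60.0, 80.0, 72.0, 70.0],
})

# Roll program scores up into an average PART score per agency.
agency_scores = (
    programs
    .groupby(["agency", "agency_type"], as_index=False)["part_score"]
    .mean()
)

# Compare mean agency ratings across the three agency types.
print(agency_scores.groupby("agency_type")["part_score"].mean())
```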

The answers to the questions above might be yes, but Carrigan goes several steps further to consider agency design, organization, and operation. There is no perfect agency or regulatory program, as much as some politicians might like to take credit for one. As Brian Mannix of the George Washington University Regulatory Studies Center has described, every program suffers from the “Planner’s Paradox.” (See “The Planner’s Paradox,” Summer 2003.) That is, the regulator or agency tends to assume that planned solutions are superior to unplanned market answers. The planner might employ reams of data, analysis, and expertise, but the planner also brings biases to the decision and often cannot see beyond the four corners of the regulation. To the planner, the solution is elegant, maybe even perfect. When the program fails five years later, the paradox is finally revealed.

For the MMS, failure took much longer than five years. The agency was carved out of the U.S. Geological Survey, primarily because of the Survey’s failure to balance royalty collection with its science mission. When then–interior secretary James Watt created the MMS through a series of orders in 1982, the move was greeted with widespread approval from politicians, the Government Accountability Office, and even Time. Leading up to the Deepwater Horizon explosion, even President Barack Obama and members of Congress generally praised the MMS, only to condemn it and break it into three after the explosion, blaming its design for the oil spill.

Explaining the MMS’s failure was easy for most politicians: they had tasked the agency with conflicting roles. But as with most large catastrophes and complex problems, there is rarely one simple explanation. The politicians’ account might play well in a 30-second campaign ad, but Carrigan notes there were a host of reasons why the MMS failed and why other regulators perform poorly.

No one cause / Beyond the obvious yet incomplete explanation of goal ambiguity, other factors contribute to regulatory failure. Political pressure is high on the list. A little over a decade after it was created, Congress almost scrapped the MMS. To shore up the agency’s standing, its leaders, in step with a congressional push to increase domestic oil production, wanted it to assume a larger role in bringing in revenue for the federal government. After the Deepwater Royalty Relief Act passed in 1995, the MMS had a mission to increase leasing and deliver tangible results (money) back to Congress. Barring a major setback (a massive oil spill), its role as a revenue generator for Congress would help secure its future. Revenues continued to increase, and Congress and the industry were satisfied, until 2010.

Many ways to fail / The two takeaways from the book are that there is no single answer that explains all regulatory failure and there is no perfect way to design or reengineer a federal agency. Carrigan cautions that even after a supposedly deficient agency is broken up, there is no guarantee the reorganized agencies will excel. Congress sometimes intends for one agency to perform multiple tasks, and multipurpose agencies will not always fail, although on average they do perform worse than their bureaucratic brethren.

Structured to Fail provides several contemporary examples. The Consumer Financial Protection Bureau (CFPB) was established during a chaotic political climate, given vast powers, and cut off from the traditional congressional appropriations process. During its existence, it has been criticized for creating a hostile workplace, repeatedly overstepping its statutory bounds, and operating outside any political or structural checks on its authority.

As a consequence of its controversial birth, members of Congress continue to push for fundamental reform, or even abolition, of the agency. In the fall of 2017, Congress took the rare step of using the Congressional Review Act to rescind the CFPB’s arbitration rule. Despite the agency’s relative independence, it appears Congress will continue to chip away at the CFPB’s authority until lawmakers can, at minimum, install a new director. Did Democrats intend for this when they created the agency? Will fundamentally altering the CFPB remedy the defects Republicans routinely highlight? The Planner’s Paradox suggests history must be the judge.

In sum, Carrigan’s book is a fascinating tour of agency design, oversight, and regulatory failure. Although he discusses the Federal Reserve and NISA, the book is focused almost solely on the MMS and its history. Rather than concluding, as previous scholars have, that goal ambiguity was the sole cause of the Deepwater Horizon tragedy, Carrigan goes several steps further, showing that multipurpose agencies do suffer in performance relative to their peers but that goal ambiguity alone did not cause the disaster. In the future, we can only hope a member of Congress or an informed White House staffer comes across Carrigan’s research when contemplating the design of yet another federal agency.