Libertarians are no fans of the administrative state. It consists of agencies with the power to generate rules that are binding on citizens. Congress, the branch of government that our founders anticipated would “necessarily predominate” in a republican form of government, first arrogated to itself vast powers beyond their contemplation, and then delegated these powers to the executive branch. The courts have, through a series of key cases, abided this abdication of responsibility. Moreover, the courts have been derelict in their own duty to rule dispositively on the acceptable interpretations of an agency’s authorizing statute, instead deferring to the agency under a doctrine known as Chevron deference. So too have courts allowed agencies to interpret their own formal rules, a doctrine known as Auer deference. While this latter practice dates to a 1945 case (Bowles v. Seminole Rock), it was not explicitly condoned until the Auer case in 1997. (For a very good summary of Auer and a compelling argument as to why it should be overturned, please see my Cato colleagues’ amicus brief.)
These two doctrines allow administrative agencies to exercise considerable discretion within the ambiguity – intentional or otherwise – of a statutory or regulatory text. An agency has maximum interpretive latitude when it is acting pursuant to a vaguely worded statute, but it is up to Congress to give it this long leash. Yet an agency does control the specificity of the rules that it promulgates via APA-mandated notice-and-comment procedures. In order to maximize their subsequent room to maneuver, agencies might seek to craft rules that capaciously allow for creative construal down the road. But their willingness to do so is constrained by their expectation that their interpretation will be challenged, and potentially overturned, in court. Auer, allege its critics, gives agencies a green light to “self-delegate” by promulgating vague rules with the foreknowledge that subsequent interpretations will not be overturned by the courts.
In comes Daniel Walters, a Regulatory Fellow at the University of Pennsylvania law school. His most recent law review article empirically tests the above hypothesis: that agencies will promulgate vaguer rules in the aftermath of the Auer case than before. He finds that this is not the case, and – using an empirical approach to the study of law that ought to be much more popular – cannot reject the null hypothesis that there is no change in the measured vagueness of federal regulations before and after Auer. In this blog post, I would like to highlight a major shortcoming of Professor Walters’s otherwise commendable methodological effort.
Walters assembles a dataset of all 1,218 “economically significant” rules promulgated by 28 federal agencies and reviewed by OIRA. He then constructs four distinct measures of a given rule’s “vagueness”: legal vagueness, laxity, cognitive sophistication, and polysemy. Operationalizing the dependent variable in multiple ways is an important safeguard against spurious inferences from a model that proves not to be robust to alternative specifications. Professor Walters is to be commended for his careful and thorough approach to this test.
Armed with these measurements of vagueness, Walters then employs an interrupted time series model to detect differences in trend pre- and post-Auer, finding either insignificant or inconsistent results across the board when measured in the aggregate. He acknowledges, however, that an aggregate measure of the vagueness across all rules in the dataset might elide important differences between agencies. In a footnote, he appreciates that agencies may differ from each other in important ways, but fails to fully flesh out the implication of such differences for his model.
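For readers unfamiliar with the design, here is a minimal sketch of an interrupted time series specification – simulated data and hypothetical variable names, not Walters’s actual dataset. The regression includes a level-shift term and a trend-change term at 1997 (the year of Auer), estimated by ordinary least squares; with no true break in the simulated series, both estimates should come out near zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical yearly mean vagueness scores, 1987-2007, with Auer in 1997.
# We simulate a world with NO post-Auer break: a flat level plus noise.
years = np.arange(1987, 2008)
vagueness = 0.5 + rng.normal(0.0, 0.02, size=years.size)

# Interrupted time series design matrix:
# intercept, pre-existing time trend, post-Auer level shift, post-Auer trend change.
t = years - years.min()
post = (years >= 1997).astype(float)
X = np.column_stack([np.ones_like(t, dtype=float), t, post, post * (years - 1997)])

# Ordinary least squares fit.
beta, *_ = np.linalg.lstsq(X, vagueness, rcond=None)
intercept, trend, level_shift, trend_change = beta
print(level_shift, trend_change)  # with no true break, both stay near zero
```

A significant `level_shift` or `trend_change` coefficient is what Auer’s critics would predict; Walters’s null result corresponds to estimates statistically indistinguishable from zero.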
Footnote 219 reads:
Some agencies may deal with subject matter that is inherently more vague compared to the matters others deal with (imagine, for instance, the difference between technical air pollution regulations and antidiscrimination regulations), and some variation might also exist because of the level of vagueness in the governing statutes that the regulations seek to implement.
Indeed. Moreover, each agency differs as to the size of its relative contribution to the aggregate dataset. The largest agency may account for 10% of all observations, while a smaller agency might account for less than 1%. These two facts, that agencies differ both in their average vagueness level and in their relative share of the total dataset, lead to the following warning: any inferences concerning pre/post trend differences in aggregate measures of vagueness must assume constant inter-agency proportions in the overall dataset. Otherwise, we may be tricked by a Simpson’s Paradox. It could plausibly be the case that all agencies see an increase in vagueness post-Auer, but that the agencies with lower baseline levels of vagueness increase their proportion of total rulemaking activity over time, dragging the aggregate measure down even as every agency grows vaguer.
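The warning can be made concrete with a toy simulation (the agencies, scores, and rule counts are entirely hypothetical, not drawn from Walters’s data): both agencies become vaguer after Auer, yet the aggregate mean falls because the less-vague agency’s share of rulemaking grows:

```python
import numpy as np

# Hypothetical vagueness scores in [0, 1]; higher = vaguer.
# Agency A writes inherently vague rules; Agency B writes technical ones.
pre_scores  = np.concatenate([np.full(90, 0.80),   # A: 90 rules pre-Auer
                              np.full(10, 0.20)])  # B: 10 rules pre-Auer
post_scores = np.concatenate([np.full(10, 0.85),   # A: vaguer post-Auer, but only 10 rules
                              np.full(90, 0.25)])  # B: vaguer post-Auer, now 90 rules

# Within each agency, vagueness rises post-Auer (0.80 -> 0.85, 0.20 -> 0.25),
# yet the aggregate mean moves the other way because B's share ballooned.
print(pre_scores.mean(), post_scores.mean())  # roughly 0.74 pre vs. 0.31 post
```

An aggregate time series built from these observations would show vagueness plummeting after Auer, even though the paradox-free, within-agency story is the exact opposite.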
To (partially) account for this, Walters includes fixed effects for the six agencies with the largest contributions to the aggregate dataset. That is, he looks at the change in rule vagueness pre/post Auer within a specific agency. He indeed finds considerable inter-agency heterogeneity in terms of vagueness, but little intra-agency movement in a consistently vaguer direction post-Auer, as Auer’s critics would expect.
Yet the Simpson’s Paradox can be fractally applied to intra-agency change over time as well. It seems reasonable to assume that any given agency has a portfolio that consists of multiple “sub-topics”. The Department of the Interior might issue rules pertaining to land use, water rights, and Native American reservations. Some of these sub-topics will tend to be inherently more technical (less vague) than others, as will the specificity of the multiple, distinct statutes which grant an agency authority to do X, Y, and Z. So, just as one must assume that the relative proportions of agency contributions to the aggregate metric remain constant pre/post Auer in order to reject the alternative hypothesis, so too must one assume a constant intra-agency sub-topic composition over time. If there happens to be a secular trend across all agencies to spend an increasing proportion of their total rulemaking activity on the inherently more technical sub-topics in their portfolio, this could occur right alongside a secular trend toward increasing vagueness across all sub-topics and yet get masked in the overall agency figures, not to mention the aggregate dataset.
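The same arithmetic applies one level down. A toy example for a single hypothetical agency (the sub-topics, scores, and rule counts are invented for illustration): each sub-topic grows vaguer post-Auer, but a compositional shift toward the more technical sub-topic drags the agency-level mean downward, so even an agency fixed effect would not reveal the within-sub-topic trend:

```python
import numpy as np

# One hypothetical agency with two sub-topics; scores in [0, 1], higher = vaguer.
# Suppose "land use" rules are inherently vague and "air pollution" rules technical.
pre  = np.concatenate([np.full(80, 0.70),   # land use: 80 rules pre-Auer
                       np.full(20, 0.30)])  # air pollution: 20 rules pre-Auer
post = np.concatenate([np.full(20, 0.75),   # land use: vaguer, but fewer rules
                       np.full(80, 0.35)])  # air pollution: vaguer, and far more rules

# Both sub-topics rise (0.70 -> 0.75, 0.30 -> 0.35), yet the agency-level
# mean falls because the technical sub-topic now dominates the portfolio.
print(pre.mean(), post.mean())  # roughly 0.62 pre vs. 0.43 post
```

Detecting this would require sub-topic-level fixed effects (or controls for portfolio composition), which is precisely the refinement the agency-level model cannot provide.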