John Lott’s book More Guns, Less Crime, first released in 1998, has completely changed the debate about the effect on crime of individuals carrying concealed weapons. Instead of the old assumption that concealed guns mean more violent crime, firearms policymaking now asks how much of the decrease in violent crime is due to concealed carriage.

Although Lott’s conclusions are very controversial, it is not controversial to say that he created and analyzed a massive data set of crime and socioeconomic statistics covering all counties in the United States for several decades. He used the data set to examine the effect of the passage of right-to-carry (RTC) concealed weapons laws on categories of violent crime such as rape, murder, and robbery. Lott has made his data set readily available to all interested researchers, which is not the usual behavior of someone trying to hide shoddy analysis.

Unfortunately, making one’s data available to others is not a common trait of economists. A few attempts by researchers to gauge the degree to which econometric analyses in economic articles can be replicated (meaning merely that the results are double-checked using the same data and identical statistical technique) indicated that many published articles in good journals could not be successfully replicated. (See the writings of B. D. McCullough.) Yet replicability is a low hurdle. If a study’s results can be replicated, it merely means that the authors are not fabricating or misreporting their results — it does not mean that their results are reliable. Nevertheless, embarrassed by the poor results from the few attempted replications, an increasing number of leading economic journals have adopted policies requiring authors to make their data and regression code publicly available, in the hope that economists would be more careful if cross-checking of results by other economists were possible.

Lott’s results have passed the replicability test with flying colors.

The more important question is whether the results reported by researchers are robust to other reasonable choices for analyzing the data. Lott provides a large array of results in an attempt to demonstrate that his findings are robust, i.e., that he has not tortured the data to get his results. Because of the controversial nature of the topic and Lott’s willingness to share data, his work has undergone a great deal of critical scrutiny checking the robustness of his findings.

This new (third) edition of More Guns, Less Crime includes much new material representing Lott’s response to his various critics. This debate is the focus of my review.

Criticism of Lott | As this edition of the book reveals, Lott’s initial research was immediately attacked with a level of vitriol that says more about some of his critics than it does about the quality of his work. Spokesmen for several gun control groups, to their shame, repeated allegations about Lott’s research that were obviously false, such as the claim that his research was funded by the gun industry because Lott was an Olin Fellow at the University of Chicago (equivalent to claiming that recipients of Rockefeller Foundation grants are in the pocket of the oil industry) or that his original journal publication on this topic was not peer reviewed. Some critics created a malicious website purporting to be his that featured their fictional version of him making ridiculous statements and answering questions in a manner that would discredit him in the eyes of anyone unaware that the site was a fake. Some critics created a brouhaha about the validity of a survey conducted by Lott even though the survey played virtually no role in his analysis. Enormous attention was given to the fact that Lott was caught in an anonymous web posting praising his teaching abilities while pretending to be someone other than himself. Although his action was foolish, it is no more relevant to this debate than is his brand of underwear.

Although academic discussions of Lott’s work have been more heated than normal, they do not appear to have gone off the deep end. There are a number of academic studies supporting Lott’s thesis and a number that are strongly critical. Are these studies carefully scientific, as we would hope, or are the authors torturing the data?

First, it is useful to understand some of the ways that the data can be sliced and diced. The analyst has to decide how to measure the effect of RTC laws on crime. Should county level data or state level data be used? Should all counties (or states) be given equal weight? What control variables should be included in the regression? What violent crime categories should be used? How should counties that have zero crimes in a category, such as murder, be treated? How much time after passage of a law is enough to determine the effect of RTC laws? What is the appropriate time period for the analysis? Although this gives a flavor of the choices, there are many more decisions than just these, and the number of possible choices for the econometric analysis is astronomical.
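To make these choices concrete, the basic framework that Lott and his critics all start from can be sketched as a population-weighted panel regression of county crime rates on an RTC dummy plus county and year fixed effects. The sketch below uses synthetic data; the sample sizes, effect size, and variable names are illustrative assumptions of mine, not Lott’s actual data or specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic county-year panel (all numbers are illustrative assumptions,
# not Lott's actual data or specification).
n_counties, n_years = 50, 10
pop = rng.uniform(1e4, 1e6, n_counties)       # county populations
rtc_year = rng.integers(3, 8, n_counties)     # year each RTC law takes effect
county_fe = rng.normal(5.0, 0.5, n_counties)  # county fixed effects

rows = []
for c in range(n_counties):
    for t in range(n_years):
        rtc = 1.0 if t >= rtc_year[c] else 0.0
        # Assume RTC lowers the log violent-crime rate by 0.05 (illustrative).
        y = county_fe[c] - 0.05 * rtc + rng.normal(0, 0.1)
        rows.append((c, t, rtc, y, pop[c]))

c_idx, t_idx, rtc, y, w = map(np.array, zip(*rows))

# Design matrix: county dummies, year dummies (year 0 omitted), RTC dummy.
X = np.hstack([
    (c_idx[:, None] == np.arange(n_counties)).astype(float),
    (t_idx[:, None] == np.arange(1, n_years)).astype(float),
    rtc[:, None],
])

# Population-weighted least squares: scale each row by sqrt(weight),
# so large counties dominate and small counties barely matter.
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
print(f"estimated RTC effect on log crime rate: {beta[-1]:.3f}")
```

Every choice listed above corresponds to a knob in this sketch: the weighting scheme, the set of control columns in X, which county-years enter the sample, and how many post-law years are observed.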

Unfortunately, although Lott’s book attempts to refute his critics, it is not organized in a way that makes such a refutation easy to follow. The book is arranged by topic, which means Lott discusses a single critic in several different places. Some of his arguments appear in the main text, some in appendices, and some in notes at the back of the book. I can understand that, for traditional readers of his book, it might make sense to tell a coherent story about the effect of RTC on crime, proceeding topic by topic, but I would have preferred to see him discuss all the salient aspects of a particular critique in one place. I would suggest that a website doing this would be useful. Of course, a careful reader would also need to examine the writings of the critics to make sure that Lott was fairly representing their claims.

My reading of his book and the attendant literature indicates that there are three main attacks on his work. I will discuss each of these attacks in turn.

Black and Nagin | The earliest of the criticisms, by Dan Black and Daniel Nagin, performs several alterations to the most basic of Lott’s multiple statistical approaches. First, because small counties are likely to have less reliable data, Black and Nagin remove all counties with a population of less than 100,000 (reducing the sample of counties by approximately 75 percent). Lott counters that he used weighted regressions (i.e., bigger counties are given more weight), so that small counties had only a small effect on his results. Lott also had, among his many regressions, restricted his sample to counties above various minimum population sizes without changing his results. So it is little surprise that this change hardly affects his results.

Black and Nagin then calculate, for the larger counties, the effect of the passage of RTC for individual states. They find wildly varying coefficients. This very well could indicate a problem if the chosen classification — states — were meaningful in this context. The problem is that some of their “states” are left with almost no counties when small counties are removed (three of the ten “states” with RTC changes had only one remaining county, and five states had fewer than four counties). Given this, a wide variation across states is not surprising.

Black and Nagin also discovered that removing Florida weakens Lott’s results. Lott retorts that almost 25 percent of the counties that experienced changes in RTC laws were in Florida. There are few empirical results in economics that would hold up if critics could choose which 25 percent of observations to remove. More importantly, Florida was not considered sui generis until it provided a way to refute Lott. Ex post, Black and Nagin can only point to the Mariel boatlift as a possible reason to remove Florida, but Lott notes that the boatlift occurred seven years prior to Florida’s passage of its RTC law and that Florida crime rates had subsided to their old levels before its RTC was changed.

Finally, Black and Nagin look at individual years before and after implementation of the law to see if there is a trend. They claim that examining the results in this way shows that Lott’s conclusions disappear. Lott had performed what appeared to be an identical analysis and expressed surprise that his results differed from theirs. Black and Nagin imply that they are merely altering Lott’s base specification, but the number of observations in their regressions appears to indicate that they are also removing Florida from the analysis, although Black and Nagin do not tell the reader this. If so, this is not a new result, just a repetition of the Florida results.

One final item should be noted about Black and Nagin. Early in their paper, they falsely claim that Lott had ignored many of these issues in his paper. This claim is so obviously false that it somewhat discredits them as impartial analysts.

Duggan | An article by Mark Duggan, published in the very highly regarded Journal of Political Economy, disputed Lott’s results both directly and indirectly. Duggan’s indirect criticism was based on looking at whether changes in the readership (by state) of the magazine Guns and Ammo (proxying for gun ownership) are associated with changes in homicides. One immediate problem is that gun ownership is not the same thing as the carriage of a concealed weapon, so both Duggan’s and Lott’s results could be correct and yet differ from one another. Also, criminals presumably do not carry guns based on RTC laws, although law-abiding citizens do. Lott’s analysis, therefore, focuses on the impact of arming law-abiding citizens, while Duggan’s analysis, using a proxy for gun ownership, includes both law-abiding citizens and criminals. Therefore, it would not be surprising if Duggan’s analysis found a less benign impact than Lott’s analysis.

Indeed, Duggan finds that increased gun magazine readership is associated with increased murder rates, although he uses state data with fewer control variables than Lott used. Lott, however, criticizes the magazine data used by Duggan. Lott claims that the publisher of Guns and Ammo had stated that approximately ten percent of the magazines were given away each year in states where crime rates were increasing. If that is true, it would bias Duggan’s results in favor of finding a positive impact of magazine subscriptions on crime even if none existed. Lott further claims that when other gun magazines are used in a similar analysis, they show a much weaker relationship with the murder rate. If Lott’s information and data are correct, these are powerful critiques of Duggan’s results.

Duggan’s direct tests consist of alterations to the basic Lott analysis. First he performs a technical correction for the measured standard error, which lowers our confidence in the results but not the size of the result. Nevertheless, four out of five violent crime categories remain statistically significant, although a typo in Duggan’s published table incorrectly indicates that only two of the five remain statistically significant. Then Duggan makes an arguable adjustment to the dates that state RTC laws took effect, which reduces somewhat the size of RTC’s impact on violent crime — but murders, rapes, and assaults still appear to have important and statistically significant reductions.
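Corrections of the kind Duggan applies are familiar in panel settings: when errors are correlated within a state, classical formulas overstate precision for a policy variable that varies only at the state level, and a cluster-robust (“sandwich”) correction widens the standard error while leaving the point estimate unchanged. The sketch below, on synthetic data with made-up numbers, is my illustration of that general point, not a reconstruction of Duggan’s actual computation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 20 "states", 25 county-year observations each (illustrative).
n_clusters, per = 20, 25
g = np.repeat(np.arange(n_clusters), per)        # state id for each observation
x = np.repeat(rng.normal(size=n_clusters), per)  # state-level regressor (like an RTC dummy)
e = rng.normal(size=n_clusters * per) + np.repeat(rng.normal(size=n_clusters), per)
y = 1.0 + 0.5 * x + e                            # true slope 0.5 (assumed)

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)         # OLS point estimate
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

# Classical (iid) standard errors.
se_iid = np.sqrt(np.diag(XtX_inv) * (resid @ resid) / (len(y) - 2))

# Cluster-robust sandwich: sum per-cluster score outer products.
meat = sum(
    np.outer(X[g == c].T @ resid[g == c], X[g == c].T @ resid[g == c])
    for c in range(n_clusters)
)
se_cluster = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

print(f"slope {beta[1]:.3f}: iid SE {se_iid[1]:.3f}, cluster SE {se_cluster[1]:.3f}")
```

The point estimate is identical under both formulas; only the measured uncertainty grows, which is why such a correction lowers statistical significance without changing the size of the effect.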

Next, Duggan makes increasingly questionable changes to Lott’s estimation procedures that, while eventually overturning Lott’s results, do so in a rather dubious manner. In all, he makes five sets of changes that seem cumulative, although the text leaves this unclear. If they are cumulative, then as he moves from one dubious change to another, Duggan is compounding the flimsiness rather than merely presenting a new questionable alteration.

One change Duggan makes is to remove all the demographic and socioeconomic variables, such as arrest rates, poverty rates, racial makeup, unemployment rates, and so forth, that tend to explain rates of violent crime. This seems like a misguided exercise, counter to normal econometric logic and common sense. Finally, Duggan includes all counties, even those so small that they had zero crimes (for a crime type) in a given year (contrast this with Black and Nagin, who claim that Lott includes too many counties). He can do this because he has thrown out the arrest rate as an explanatory variable, which is undefined for zero-crime county-years and so forced their removal in Lott’s examination. In Duggan’s regression, very small counties are given the same weight as large counties, and they are likely to have either very high (e.g., if someone was killed) or very low (e.g., if no one was killed) crime rates. Duggan does overturn Lott’s results, but I find his method very unconvincing.
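The mechanics of why zero-crime counties drop out of a specification that controls for the arrest rate are simple: the arrest rate is arrests divided by crimes, which is undefined (0/0) in a county-year with no crimes of the given type. A minimal sketch with made-up numbers:

```python
import numpy as np

# Illustrative county-year records of (crimes, arrests); numbers are made up.
records = [(120, 40), (0, 0), (3, 1), (0, 0), (57, 19)]

crimes = np.array([c for c, _ in records], dtype=float)
arrests = np.array([a for _, a in records], dtype=float)

# Arrest rate = arrests / crimes is undefined (0/0 -> NaN) where crimes == 0,
# so those county-years cannot enter a regression that controls for it.
with np.errstate(invalid="ignore", divide="ignore"):
    arrest_rate = arrests / crimes

usable = np.isfinite(arrest_rate)
print(f"usable county-years: {usable.sum()} of {len(records)}")
```

Dropping the arrest-rate control thus changes the sample as well as the specification, which is why it lets Duggan pull in the very small counties that Lott’s regressions excluded.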

Ayres and Donohue | The final set of critics is Ian Ayres and John J. Donohue III, who provide what I believe to be the most persuasive criticism of Lott’s results. They originally wrote a 1998 book review of his first edition in which they seemed open-minded toward Lott’s work. Although their review pointed out numerous possible problems (including those from Black and Nagin), they seemed to have two main concerns. First, they worried that the crack epidemic might have biased Lott’s analysis because he did not take it into account. Second, they argued that the impact of RTC on robbery is the most direct test of the reasoning explaining why RTC might lower crime (since criminal and victim come into contact and the crime is economic, not emotional), yet they believe that Lott’s measured impact of RTC on robbery is weaker than the impacts on other violent crimes, calling his results into question. These seem like reasonable concerns, although Lott responds to the robbery claim by pointing out that there are many forms of robbery besides street robbery (e.g., robbery of small retailers), and thus robbery might not be more strongly related to RTC than are other forms of violent crime.

In a far more critical 2003 critique of Lott, Ayres and Donohue raise a different set of possible problems. They used data that extended further in time and claimed that the newer data not only eliminate Lott’s results but actually indicate that RTC increased violent crime. However, Lott also had extended the data and found that the overall results did not change.

Using statewide data, Ayres and Donohue reported that Lott’s results disappeared when some of his seemingly unimportant demographic variables were eliminated in an intuitively plausible manner. They also reported that Lott’s overall results were affected by a small number of states that had the longest histories after the passage of RTC laws, and that when the analysis was conducted with a more consistent set of states, Lott’s results disappeared or reversed. I found this last discussion particularly compelling. Lott’s response to their results based upon the removal of seemingly duplicative (collinear) demographic information was not very convincing, and he seems to have largely ignored the question of unbalanced panels over time.

But these results were for states, not counties. As I read the Ayres and Donohue critique, I waited for the same analysis to be applied to Lott’s main county-based methodology, but it never came. I could only conclude that Ayres and Donohue were unable to discredit his county-based results with those arguments. Instead, they reverted to a variant of Black and Nagin’s criticism, in which results based on counties were lumped into states and the state results compared to each other. That does not seem particularly compelling given that the major variation in crime occurs across counties.

The original respect between Ayres/​Donohue and Lott also seems to have evaporated. Lott tries to diminish most of Ayres and Donohue’s work in this area by stating that it is not peer reviewed. Although that is true (it is published in law reviews), it is largely irrelevant since Ayres and Donohue are very competent analysts. Lott also is disingenuous in his explanation of the Stanford Law Review article by Florenz Plassmann and John Whitley, from which he removed his name due to a dispute with the editors; Lott treats the paper as if he had little or nothing to do with it. Ayres and Donohue, for their part, after stressing how important a defect it was to leave crack cocaine out of the original analysis, appear happy to ignore the subsequent results on crack cocaine presented by Lott and, more specifically, by Carlisle Moody and Thomas Marvell, presumably because those results do not support their hypothesis. Ayres and Donohue also claim that Lott does not discuss the theoretical possibility that guns might increase crime, when in fact he had used the very story that they provide to illustrate the point. And they dismiss Lott’s claim that robbery might not be the crime most affected by RTC by ridiculing it with reference to the small number of bank robberies, as if bank robbery were the main alternative to street robbery. Lott’s claim deserves a serious answer.

Conclusion | What then are we left with? First, no one has, in my opinion, credibly shown a positive relationship between RTC and violent crime. The few positive coefficients that the critics have presented do not appear to be at all robust. The critics have been better able to show particular circumstances under which Lott’s negative results partially disappear. Are those showings sufficient to say that Lott’s results are not robust? That is a hard question. Because of the prominence of this issue and given the resources and efforts that appear to have been expended by Lott’s critics, it seems like they have not gotten a lot of bang for their efforts.

My reading is that RTC has probably lowered violent crime somewhat, but not in a terribly consistent manner. That really is enough of a result to conclude that Lott’s analysis has largely held up under these criticisms. There are not many policy studies that would hold up as well under such a sustained attack.

Readings

  • “Confirming ‘More Guns, Less Crime,’ ” by Florenz Plassmann and John Whitley. Stanford Law Review, Vol. 55, No. 4 (April 2003).
  • “Crime, Deterrence, and Right-to-Carry Concealed Handguns,” by John R. Lott and David B. Mustard. Journal of Legal Studies, Vol. 26, No. 1 (January 1997).
  • “Do Right-to-Carry Laws Deter Violent Crime?” by Dan A. Black and Daniel Nagin. Journal of Legal Studies, Vol. 27 (1998).
  • “More Guns, More Crime,” by Mark Duggan. Journal of Political Economy, Vol. 109, No. 5 (2001).
  • “Nondiscretionary Concealed Weapon Laws: A Case Study of Statistics, Standard of Proof, and Public Policy,” by Ian Ayres and John J. Donohue III. American Law and Economics Review, Vol. 1 (1999).
  • “Shooting Down the More Guns, Less Crime Hypothesis,” by Ian Ayres and John J. Donohue III. Stanford Law Review, Vol. 55 (2003).
  • “The Concealed-Handgun Debate,” by John R. Lott Jr. Journal of Legal Studies, Vol. 27, No. 1 (January 1998).
  • “The Debate on Shall-Issue Laws,” by Carlisle E. Moody and Thomas B. Marvell. Econ Journal Watch, Vol. 5, No. 3 (September 2008).