Mass Tragedy Boilerplate and Rebuttal

On the road last week, and allergic to getting too heavily involved in the issue de l’heure, I only today saw Holman Jenkins’ Wall Street Journal commentary: “Can Data Mining Stop the Killing?”

“After the Aurora theater massacre, it might be fair to ask what kinds of things the NSA has programmed its algorithms to look for. Did it, or could it have, picked up on Mr. Holmes’s activities? And if not, what exactly are we getting for the money we spend on data mining?”

Short of collecting them in a great mass along with data about all of us, the NSA could not have “picked up on” Mr. Holmes’s activities. As I wrote earlier this year about data mining’s potential for averting school shootings:

“[D]ata mining doesn’t have the capacity to predict rare events like terrorism or school shootings. The precursors of such events are not consistent the way, say, credit card fraud is. Data mining for campus violence would produce many false leads while missing real events. The costs in dollars and privacy would not be rewarded by gains in security and safety.”

Jeff Jonas and I wrote about this in our 2006 Cato Policy Analysis, “Effective Counterterrorism and the Limited Role of Predictive Data Mining.”

If the NSA has data about the pathetic loser Mr. Holmes, and if it were to let us know about it, all that would do is provide lenses for some pundit’s 20/20 hindsight. In retrospect, data about past events always point to the outcome that occurred. But there is not enough commonality among rare, sporadic mass shootings to use their characteristics as predictors of future ones.
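The arithmetic behind that claim is the familiar base-rate problem: when the event being predicted is vanishingly rare, even a very accurate screen drowns its true hits in false alarms. A minimal sketch with made-up numbers (the population size, attacker count, and accuracy figures below are all assumptions chosen for illustration, not statistics about any real program):

```python
# Back-of-the-envelope illustration of the base-rate problem.
# Every number here is a hypothetical assumption for illustration,
# not a real figure about any surveillance program.

population = 300_000_000          # people whose data gets scanned
attackers = 10                    # assumed would-be attackers among them
sensitivity = 0.99                # assumed: the screen flags 99% of attackers
false_positive_rate = 0.01        # assumed: it also flags 1% of innocents

true_hits = attackers * sensitivity
false_alarms = (population - attackers) * false_positive_rate

# Probability that a flagged person is actually an attacker
# (the positive predictive value).
positive_predictive_value = true_hits / (true_hits + false_alarms)

print(f"True hits:    {true_hits:.1f}")
print(f"False alarms: {false_alarms:,.0f}")
print(f"Chance a flagged person is a real threat: "
      f"{positive_predictive_value:.6%}")
```

Even granting the screen an implausibly good accuracy, roughly three million innocent people are flagged for every handful of real threats, so the chance that any given lead is genuine is a few in a million. That is the false-leads problem in the passage quoted above.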

Jenkins doesn’t drive hard toward concluding that data mining would have helped, but his inquiry is mass tragedy boilerplate. I and others have rebutted it many times.