If two data points are enough to draw a trend line, the trend I’ve spotted is government seeking to use data mining where it doesn’t work.
A comment in the Chronicle of Higher Education recently argued that universities should start mining data about student behavior in order to thwart incipient on-campus violence.
"Existing technology … offers universities an opportunity to gaze into their own crystal balls in an effort to prevent large-scale acts of violence on campus. To that end, universities must be prepared to use data mining to identify and mitigate the potential for tragedy."
No, it doesn’t. And no, they shouldn’t.
Jeff Jonas and I wrote in our 2006 Cato Policy Analysis, “Effective Counterterrorism and the Limited Role of Predictive Data Mining,” that data mining doesn’t have the capacity to predict rare events like terrorism or school shootings. The precursors of such events are not consistent the way, say, credit card fraud is.
Data mining for campus violence would produce many false leads while missing real events. The costs in dollars and privacy would not be rewarded by gains in security and safety.
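The arithmetic behind that claim is worth seeing. Here is a minimal back-of-the-envelope sketch; every number in it is an assumption chosen for illustration, not a figure from any study, and the "accuracy" granted to the hypothetical classifier is far more generous than anything real:

```python
# Base-rate arithmetic for a rare-event detector.
# All numbers below are illustrative assumptions, not real statistics.

students = 20_000_000        # assumed: rough order of U.S. college enrollment
true_threats = 10            # assumed: genuinely dangerous individuals
sensitivity = 0.99           # assumed: the system flags 99% of real threats
false_positive_rate = 0.01   # assumed: it wrongly flags 1% of harmless students

flagged_real = true_threats * sensitivity
false_alarms = (students - true_threats) * false_positive_rate

# Of all students flagged, what fraction are actual threats?
precision = flagged_real / (flagged_real + false_alarms)

print(f"false alarms: {false_alarms:,.0f}")
print(f"chance a flagged student is a real threat: {precision:.5%}")
```

Even under these charitable assumptions, the system generates roughly two hundred thousand false alarms to catch ten people, and fewer than one flag in twenty thousand points at a real threat. When the event is rare enough, no plausible accuracy rescues the scheme.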
The same is true of foreign uprisings. They have gross commonality—people rising up against their governments—but there will be no pattern in data from past events in, say, Egypt, that would predict how events will unfold in, say, China.
But an AP story on Military.com reports that various U.S. security and law enforcement agencies want to mine publicly available social media for evidence of forthcoming terror attacks and uprisings. The story is called “US Seeks to Mine Social Media to Predict Future.”
Gathering social media content together has privacy costs, even if each bit of data was posted publicly. The dollar costs could be quite substantial, too. But the benefits would be slim indeed.
I’m with the critics who worry about overreliance on technology rather than trained and experienced human analysts. Is it too much to think that the U.S. might have to respond to events carefully and thoughtfully as they unfold? People with cultural, historical, and linguistic knowledge seem far better suited to predicting and responding to events in their regions of focus than any algorithm.
There’s a dream, I suppose, that data mining can eliminate risk or make the future knowable. It can’t, and in that one sense, at least, the future is knowable: it won’t.