Bruce Schneier has a typically interesting post about right and wrong ways to generate suspicion. In “Recognizing ‘Hinky’ vs. Citizen Informants,” he makes the case that asking amateurs for tips about suspicious behavior will have lots of wasteful and harmful results, like racial and ethnic discrimination, angry neighbors turning each other in, and so on. But people with expertise — even in very limited domains — can discover suspicious circumstances almost automatically, when they find things “hinky.”
As an example, a Rochester Institute of Technology student was recently discovered illegally possessing assault weapons (whether such possession should be illegal is a separate policy question):
The discovery of the weapons was made only by chance. A conference center worker who served in the military was walking past Hackenburg’s dorm room. The door was shut, but the worker heard the all-too-familiar racking sound of a weapon. …
Schneier explains this in terms of “hinky”:
Each of us has some expertise in some topic, and will occasionally recognize that something is wrong even though we can’t fully explain what or why. An architect might feel that way about a particular structure; an artist might feel that way about a particular painting. I might look at a cryptographic system and intuitively know something is wrong with it, well before I figure out exactly what. Those are all examples of a subliminal recognition that something is hinky — in our particular domain of expertise.
This straddles an important line. Is it something we “can’t fully explain,” or something that feels wrong “before [one can] figure out exactly what”? My preference is that the thing should be explainable — not necessarily at the moment suspicion arises, but at some point.
I’m reminded of the Supreme Court formulation “reasonable suspicion based on articulable fact,” hammered into my brain in law school. It never satisfied me, because the inquiry shouldn’t turn on whether the facts are merely articulable but on whether, subsequently, they were actually articulated. “The hunch of an experienced officer” is an abdication that courts have indulged far too long.
I hear fairly often of “machine learning” that might be able to generate suspicion about terrorists. The clincher is that the system is supposedly so complicated we “can’t know” exactly what caused it to find a particular person, place, or thing worthy of suspicion. Given their superior memories, I think machines especially should be held to the standard of articulating the actual facts considered and the inferences drawn, reasonably, to justify whatever investigation follows.
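The standard argued for here is not technically exotic. As a minimal sketch (every rule and field name below is hypothetical, chosen purely for illustration), a suspicion scorer can be built to return not just a verdict but the plain-English facts and inferences that produced it:

```python
# Sketch of an "articulable" scorer: each rule carries the human-readable
# fact it checks, so every score arrives with its supporting reasons.

def score_with_reasons(record, rules):
    """Apply each (predicate, weight, reason) rule to the record.

    Returns the total score and the list of reasons for rules that fired,
    so the basis for any resulting suspicion can be stated, not just felt.
    """
    total = 0
    reasons = []
    for predicate, weight, reason in rules:
        if predicate(record):
            total += weight
            reasons.append(reason)
    return total, reasons

# Hypothetical rules for illustration only.
RULES = [
    (lambda r: r.get("one_way_ticket"), 2, "purchased a one-way ticket"),
    (lambda r: r.get("paid_cash"), 1, "paid in cash"),
    (lambda r: r.get("no_luggage"), 1, "traveled with no checked luggage"),
]

score, reasons = score_with_reasons(
    {"one_way_ticket": True, "paid_cash": True, "no_luggage": False},
    RULES,
)
print(score)    # 3
print(reasons)  # ['purchased a one-way ticket', 'paid in cash']
```

The point of the sketch is the return value: whatever model sits underneath, a system designed this way can always answer “why this person?” with the actual facts considered, which is exactly what “articulated” (not merely articulable) suspicion requires.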