Topic: Telecom, Internet & Information Policy

I’d Be OK with Hinky, Given Post Hoc Articulation

Bruce Schneier has a typically interesting post about right and wrong ways to generate suspicion. In “Recognizing ‘Hinky’ vs. Citizen Informants,” he makes the case that asking amateurs for tips about suspicious behavior will have lots of wasteful and harmful results, like racial and ethnic discrimination, angry neighbors turning each other in, and so on. But people with expertise — even in very limited domains — can discover suspicious circumstances almost automatically, when they find things “hinky.”

As an example, a Rochester Institute of Technology student was recently discovered illegally possessing assault weapons (whether that prohibition is good policy is a separate question):

The discovery of the weapons was made only by chance. A conference center worker who served in the military was walking past Hackenburg’s dorm room. The door was shut, but the worker heard the all-too-familiar racking sound of a weapon … .

Schneier explains this in terms of “hinky”:

Each of us has some expertise in some topic, and will occasionally recognize that something is wrong even though we can’t fully explain what or why. An architect might feel that way about a particular structure; an artist might feel that way about a particular painting. I might look at a cryptographic system and intuitively know something is wrong with it, well before I figure out exactly what. Those are all examples of a subliminal recognition that something is hinky — in our particular domain of expertise.

This straddles an important line. Is it something we “can’t fully explain,” or something that feels wrong “before [one can] figure out exactly what”? My preference is that the thing should be explainable — not necessarily at the moment suspicion arises, but at some point.

I’m reminded of the Supreme Court formulation “reasonable suspicion based on articulable fact,” which was hammered into my brain in law school. It never satisfied me, because the inquiry shouldn’t end at whether the facts are “articulable” but at whether, subsequently, they were actually articulated. “The hunch of an experienced officer” is an abdication that courts have indulged far too long.

I hear fairly often of “machine learning” that might be able to generate suspicion about terrorists. The clincher is that it’s supposedly so complicated we “can’t know” exactly what caused the machine to find a particular person, place, or thing worthy of suspicion. Given their superior memories, I think machines especially should be held to the standard of articulating the actual facts considered and the inferences reasonably drawn from them to justify whatever investigation follows.
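To make that standard concrete, here is a minimal sketch of what an articulable scoring system might look like. The rules, facts, and weights are all hypothetical, invented for illustration rather than drawn from any real screening system; the point is simply that a machine can be built to record the specific facts it considered and the inferences it drew from each one.

```python
# A minimal sketch. The rules and weights below are hypothetical,
# invented for illustration only; the design point is that every
# inference the system draws is tied to a recorded, articulable fact.

from dataclasses import dataclass, field

@dataclass
class Finding:
    fact: str        # the observed fact that was considered
    inference: str   # the inference drawn from that fact
    weight: float    # how much it contributed to the score

@dataclass
class SuspicionReport:
    subject: str
    findings: list = field(default_factory=list)

    def score(self) -> float:
        return sum(f.weight for f in self.findings)

    def articulate(self) -> str:
        """Spell out, after the fact, exactly why the score is what it is."""
        lines = [f"Subject {self.subject}: score {self.score():.1f}"]
        for f in self.findings:
            lines.append(f"  fact: {f.fact} -> inference: {f.inference} (+{f.weight})")
        return "\n".join(lines)

def evaluate(subject: str, facts: dict) -> SuspicionReport:
    """Apply transparent rules; every rule that fires leaves a record."""
    report = SuspicionReport(subject)
    if facts.get("one_way_ticket") and facts.get("paid_cash"):
        report.findings.append(Finding(
            "one-way ticket purchased with cash",
            "traveler may be avoiding a financial paper trail", 1.5))
    if facts.get("watchlist_phone_contact"):
        report.findings.append(Finding(
            "repeated calls to a number tied to an open investigation",
            "possible association with a subject of that investigation", 3.0))
    return report

print(evaluate("traveler-123", {"one_way_ticket": True, "paid_cash": True}).articulate())
```

A system designed this way can always answer, after the fact, the question courts ought to be asking: what, exactly, made this person suspicious?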

Against Software Patents

Over at The American, I’ve got an article on what I regard as one of the biggest threats to the long-term vitality of the software industry: the patentability of software. Last year, we saw a company with no products of its own extort $612 million from Research in Motion, maker of the wildly popular BlackBerry mobile devices. Last month, Vonage, a company that pioneered Internet telephony, was ordered to pay $58 million to Verizon and enjoined from signing up new customers. Vonage is appearing in court today to appeal the decision. Given that Vonage has yet to turn a profit, if the injunction is upheld it’s likely to be a death sentence for the company.

The really frustrating thing about both cases—and numerous other software patent cases in recent years—is that there was no allegation that the defendants’ products were in any way based on the plaintiffs’ technologies. It’s universally agreed that RIM and Vonage developed their technologies independently. Rather, the problem is that the patents in question cover extremely broad concepts: essentially “wireless email” in NTP’s case, and “translating between Internet addresses and phone numbers” in Verizon’s. It’s simply impossible to develop a mobile device that doesn’t check email wirelessly, or an Internet telephony application that doesn’t translate between IP addresses and phone numbers.
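To see just how unavoidable the second “invention” is, consider what translating between phone numbers and Internet addresses actually involves. The sketch below is a toy illustration with made-up numbers and addresses, not Vonage’s or Verizon’s actual technology; at bottom, the patented concept is a table lookup.

```python
# Toy illustration only (invented phone numbers; IP addresses drawn from
# the RFC 5737 documentation range). The "translation" covered by the
# patents is, at its core, a lookup that maps a dialed phone number to
# the Internet address where the call should be delivered.

ROUTES = {
    "+12025550123": "192.0.2.10",
    "+13125550187": "192.0.2.27",
}

def resolve(phone_number: str) -> str:
    """Return the IP address that should receive a call to this number."""
    if phone_number not in ROUTES:
        raise LookupError(f"no route for {phone_number}")
    return ROUTES[phone_number]

print(resolve("+12025550123"))  # -> 192.0.2.10
```

Any VoIP system must perform some version of this step, which is why a patent on the concept, as opposed to a particular implementation of it, leaves competitors nowhere to go.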

It seems to me that these sorts of problems are almost inevitable when you allow patents on software, because software is built out of a very large number of modular components. (A typical software product might have 100,000 lines of code, and just a handful of lines could conceivably be considered an “invention.”) If you allow a significant number of those components to be patented, it becomes prohibitively expensive for software companies even to find, much less license, all of the patents that might be relevant to their particular software. And indeed, most software companies don’t even try. Many deliberately avoid doing patent searches, because “willful” infringement can carry heightened penalties.

Software patents are a relatively new judicial innovation, and one that has never been explicitly ratified by the Supreme Court. Traditionally, the Supreme Court has held that software is essentially the description of a mathematical algorithm, and that mathematical algorithms are not eligible for patent protection. The Supreme Court opened the door to software patents a crack in 1981 when it held that a machine for curing rubber was not rendered ineligible for patent protection merely because one component of the machine was implemented through software. However, it emphasized that software per se is not eligible for patent protection.

The Court of Appeals for the Federal Circuit, which was created by Congress in 1982 and given jurisdiction over patent appeals, turned this principle on its head. In 1998, it ruled that a software patent merely had to produce “a useful, concrete and tangible result” to avoid the prohibition on patenting mathematical algorithms. Because no one would bother to patent software that didn’t produce a useful result, this effectively gave the patent office carte blanche to approve software patents.

And approve them it has. The patent office set a new record last year by issuing about 40,000 software patents. That represents hundreds of millions of dollars of patent lawyers’ and software engineers’ time that could otherwise have been spent producing useful software rather than filing paperwork about it.

Luckily, the Supreme Court has an opportunity to bring this madness to an end in the case of Microsoft v. AT&T. Although the case is primarily about whether companies can be held liable for copies of their software distributed overseas, the Software Freedom Law Center has filed an amicus brief urging the Court to rule that software companies are not liable for overseas distribution because software isn’t patentable in the first place. I think this argument is a bit of a long shot, since most of the briefs in the case did not focus on the patentability of software. However, several justices did specifically ask about the question during oral argument, and the decision could open the door to a subsequent case addressing it directly.

A La Carte Cable and the Economics of Abundance

Ars Technica reports that FCC chairman Kevin Martin is once again pledging to force cable providers to offer “a la carte” cable programming. I’ve found discussing this issue frustrating because people have surprisingly strong intuitions about it. Indeed, with the possible exception of “independence from foreign oil,” I can’t think of a single policy idea that is simultaneously so wrong-headed and so popular across the political spectrum.

But it is wrong-headed. People have this intuition that when they sign up for cable, they’re “forced” to pay for MTV to get Nickelodeon. Or conversely, that they’re “forced” to pay for Nickelodeon to get MTV. They seem to imagine that if they could just pick and choose cable channels individually, they’d be able to get the content they want and lower their overall cable bill.

The problem with this line of reasoning is that almost none of the cost of providing cable service to you depends on the number of channels you take. In economics jargon, cable channels have close to zero marginal cost. Once the content has been produced and the coax has been laid, it costs little or nothing to give every customer access to every channel in the bundle rather than only certain channels. Dropping Nickelodeon from your lineup doesn’t reduce the total cost of providing you with your service, so why would you expect a price break?
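To put rough numbers on it (the figures below are illustrative, not actual cable industry costs), here is the arithmetic in miniature: when nearly all of the cost is fixed, shrinking the bundle shrinks the provider’s cost by almost nothing.

```python
# Illustrative figures only, not real cable-industry costs: once content
# and infrastructure are paid for, the per-subscriber cost of a bundle
# barely depends on how many channels it includes.

FIXED_COST = 40.00     # per subscriber: network, billing, set-top box, support
MARGINAL_COST = 0.001  # per channel: nearly zero once the coax is laid

def monthly_cost(channels: int) -> float:
    """Provider's cost to serve one subscriber with this many channels."""
    return FIXED_COST + MARGINAL_COST * channels

full_bundle = monthly_cost(100)  # the whole lineup
a_la_carte = monthly_cost(10)    # only the channels you "really want"
print(f"100 channels: ${full_bundle:.2f}")               # $40.10
print(f" 10 channels: ${a_la_carte:.2f}")                # $40.01
print(f"   'savings': ${full_bundle - a_la_carte:.2f}")  # $0.09
```

Ninety channels vanish from the lineup and nine cents vanish from the provider’s costs; it’s hard to see where a meaningful price cut would come from.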

Indeed, there are lots and lots of examples of bundled products and services that no one in his or her right mind would demand be unbundled. For example, why am I forced to buy the sports section with the business section in my morning paper? Why am I forced to buy evening and weekend minutes with my cellular phone plan? Why was I forced to buy a variety of software products with my new laptop? Why am I forced to take an all-you-can-eat Internet connection rather than paying for only the minutes I need?

These add-on products all have near-zero marginal cost, so it doesn’t cost the company anything extra to provide them to all customers. Indeed, in some cases it would actually cost more to provide them on an a la carte basis. Imagine the nightmare of being a paperboy if each customer got to decide which sections of the paper he would take.

I think that’s a pretty straightforward argument, but people still seem to find it deeply counterintuitive. It occurs to me that this is an example of a point that Mike Masnick over at TechDirt has been making for quite a while now: people find reasoning about goods with zero marginal cost deeply counterintuitive. People seem to have a strong intuition that anything that has value must also have a cost, so even if something appears to be free, you must really be paying for it somehow. But with information goods, which can be duplicated an unlimited number of times, that’s not true. Duplication really does cost close to nothing, and so it’s socially efficient to make such goods as widely available as possible.

So the right way to think about cable bundling, I think, is that you get several channels you don’t particularly want for free along with the ones you do want. Requiring a la carte programming simply takes away those free channels. People have an intuitive sense that those channels aren’t really free — that they’re really paying for them somehow. But that intuition is wrong. The extra channels really are free. And prohibiting cable channels from giving them to you really is a bad policy.

Mike has a long series of interesting posts on the economics of abundance here.

Link Analysis and 9/11

In our paper Effective Counterterrorism and the Limited Role of Predictive Data Mining, Jeff Jonas and I pointed out the uselessness of data mining for finding terrorists. The paper was featured in a Senate Judiciary Committee hearing earlier this year, and a data mining disclosure bill discussed at that hearing was recently marked up by the committee.

On his blog, Jeff has posted some further thinking about 9/11 and searching for terrorists. He attacks a widespread presumption about that task forthrightly:

The whole point of my 9/11 analysis was that the government did not need mounds of data, did not need new technology, and in fact did not need any new laws to unravel this event!

He links to a presentation about finding the 9/11 terrorists and how it could have been done by simply following one lead to another.

Jeff feels strongly that Monday morning quarterbacking is unfair, and I agree with him. Nobody in our national security infrastructure knew the full scope of what would happen on 9/11, and so they aren’t blameworthy. Yet we should not shrink from the point that diligent seeking after the 9/11 terrorists, using traditional methods and the legal authorities existing at the time, would have found them.

Another “Piggybacking” Story

CNN reports on another example of police hysteria over “wireless theft.” Stories like this seem to pop up every few months: somebody parks his car on a residential street, opens up his laptop, and uses it to access a wireless network that’s not protected by a password. Then the police come along and arrest him. In the two cases reported in this story, both of which occurred in the UK, the police let the men off with a warning. But in 2005, a man was fined 500 pounds and placed on probation for a year for “stealing” Internet access.

As I argued in an op-ed last year, this is silly. Accessing someone else’s wireless network, especially for casual activities like checking your email, is the very definition of a victimless crime. I’ve done the same thing on numerous occasions, and I deliberately leave my wireless network open in the hopes that it will prove useful to my neighbors.

The only concrete harm opponents of “piggybacking” can come up with is that the piggybacker might use your connection to commit a crime, such as downloading pirated content or child pornography. But remember that there are now thousands of coffee shops, hotels, and other commercial locations offering free WiFi access, and most of them make no effort to verify identities or monitor usage. Someone who wants untraceable Internet access can go to any of those establishments just as easily as he can park outside your house.

Which isn’t to say that people have no reason to keep their network connections to themselves. If sharing your Internet access creeps you out, by all means set a password. And there’s almost certainly work to be done educating users, so that people are fully informed of the risks and know how to lock down their networks if they want to.

But arresting people for logging into an open network is completely counterproductive. Ubiquitous Internet access is socially useful, and the vast majority of “piggybackers” aren’t doing anything wrong. If you see someone parked on the street outside your home using your wireless network, you shouldn’t pick up the phone and call the cops. Instead, call your geeky nephew and ask him to set a password for your network. Or, even better, do nothing and consider it your good deed for the day.

The Border … Is You

Tomorrow, the House Homeland Security Committee is hosting a “Border Security Tech Fair.”

Vendors scheduled to participate include: Sightlogix, Scantech, Wattre, Hirsch, Bioscrypt, Cogent Systems, Cross Match, L1 Identity, Sagem Morpho, Motorola, L3 Communication, Authentec, Privaris, Mobilisa, and Lumidigm.

I don’t know all of these companies, so I made some educated guesses about the links (and I may have gotten the wrong division of Motorola), but it appears that fully 11 of the 15 participants are in the biometrics industry.

If you think for a minute that this is about the boundary line dividing the United States from its neighbors, I have a bridge to sell you. No, wait: I have a “biometric solution” to sell you. Mobilisa’s technology, for example, is being used to run background checks on the citizens of Clermont County, Ohio.

Participants in the Homeland Security Committee’s lunch briefing are all in the biometrics industry. One of them, James Ziglar, wrote an op-ed in favor of a national ID in Monday’s New York Times. He claims it’s not a national ID, but then, he’s got a biometric solution to sell you.