Topic: Telecom, Internet & Information Policy

REAL ID Comment Campaign

The comment period on the Department of Homeland Security regulations implementing the REAL ID Act ends early next week. A broad coalition of groups has put together a Web page urging people to submit comments. The page includes instructions for commenting, which is quite helpful given how arcane the regulatory process is.

Feel free to comment – good, bad, or indifferent – on the regs. My views are known, but the Department of Homeland Security doesn’t know yours.

How to Reform E-Voting

On Friday, I made the case for scrapping computerized voting. Today I’m going to look at the leading legislative proposal to accomplish that goal, Rush Holt’s Voter Confidence and Increased Accessibility Act. As I wrote in a recent article, the proposal would do several things:

It bans the use of computerized voting machines that lack a voter-verified paper trail. It mandates that the paper records be the authoritative source in any recounts, and requires prominent notices reminding voters to double-check the paper record before leaving the polling place. It mandates automatic audits of at least three percent of all votes cast to detect discrepancies between the paper and electronic records. It bans voting machines that contain wireless networking hardware and prohibits connecting voting machines to the Internet. Finally, it requires that the source code for e-voting machines be made publicly available.

All of these seem to me to be big steps in the right direction. Requiring source code disclosure gives security experts like Ed Felten and Avi Rubin the opportunity to study e-voting systems and alert the authorities if major security problems are discovered. Banning Internet connections and wireless networking hardware closes off two major avenues hackers could use to compromise the machines. Perhaps most importantly, by requiring that machines produce paper records, that those records be the official record, and that the records be randomly audited, the legislation would provide a relatively high degree of certainty that even if a voting machine were hacked, we would be able to detect it and recover by using the paper records.
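
To make the audit provision concrete, here is a minimal sketch of how a random audit of the kind the bill mandates might work in principle. It is purely illustrative: the three percent figure comes from the bill, but the precinct-by-precinct selection procedure, the data, and the function names are my own assumptions, not anything the legislation specifies.

```python
import random

def random_audit(precincts, sample_rate=0.03, seed=None):
    """Illustrative only: pick precincts at random until at least
    `sample_rate` of all votes cast are covered, then compare hand
    counts of the paper records against the electronic tallies."""
    rng = random.Random(seed)
    total_votes = sum(p["electronic"] for p in precincts)
    target = sample_rate * total_votes

    shuffled = list(precincts)
    rng.shuffle(shuffled)

    covered, discrepancies = 0, []
    for p in shuffled:
        if covered >= target:
            break
        covered += p["electronic"]
        if p["paper"] != p["electronic"]:
            discrepancies.append((p["name"], p["electronic"], p["paper"]))
    return discrepancies

# Hypothetical data: Precinct 3's electronic tally doesn't match its paper trail.
precincts = [
    {"name": "Precinct 1", "electronic": 950, "paper": 950},
    {"name": "Precinct 2", "electronic": 1200, "paper": 1200},
    {"name": "Precinct 3", "electronic": 800, "paper": 793},
]
# Any single 3% audit may or may not land on the bad precinct; the deterrent
# is that a would-be attacker can't know in advance which machines get checked.
print(random_audit(precincts, seed=1))
```

Some election-audit proposals scale the sample up when the reported margin is close, but the basic mechanism is the same: random selection plus an authoritative paper trail makes undetected tampering much harder.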

All in all, this seems like a good idea to me. But the legislation is not without its critics. I’ll consider two major criticisms below the fold.

One set of objections comes from state election officials, some of whom argue that the legislation’s requirements would impose an unreasonable burden on them. I’m a strong proponent of federalism, so those concerns are worth taking seriously. But the Holt proposal appears to do a reasonably good job of respecting states’ autonomy in designing their own election procedures. For example, if a state already has an auditing procedure that differs from the one mandated in the Holt bill, it may continue using its own procedure so long as the National Institute of Standards and Technology certifies that it will be no less effective. Other tweaks may be appropriate to avoid stepping on the toes of state election officials, but on the whole, the Holt legislation seems to me to strike a good balance between local autonomy and the need to ensure that federal elections are secure and transparent.

The most vocal critics of the legislation are activists who feel it does not go far enough. They believe that nothing less than an outright ban on computerized voting is acceptable. And they have some good arguments. They point out that the add-on printers now on the market are slow and unreliable, that technical glitches can lead to long lines that drive away voters, and that many voters don’t bother checking the paper record of their vote anyway, which reduces its value as a safeguard.

These are all good reasons to prefer old-fashioned paper ballots over e-voting machines with a printer bolted on. Fortunately, the Holt bill does not require any state to use computerized voting machines. That decision is left up to the states, and activists are free to lobby state legislatures to use only paper ballots.

The activists may be right that an outright ban on computerized voting would be a simpler and more elegant solution to the problem. It would certainly make the legislation a lot shorter, since most of the bill is designed to address the defects of computerized voting machines. However, there does not appear to be much appetite for an outright e-voting ban this Congress, and I don’t think we can afford to run another election on the current crop of buggy and insecure voting machines. The Holt bill may not be perfect, but it seems like a big step in the direction of more secure and transparent elections.

The Case Against E-Voting

Ars Technica has an article about problems created by e-voting machines in Sunday’s French elections. Technical problems apparently led to long lines, and some voters were turned away from the polls.

France’s problems are not an isolated incident. In November’s U.S. election, one county in Florida (ironically, in the district Katherine Harris was vacating) seems to have lost about 10,000 votes, a figure larger than the margin of victory between the candidates. And there were numerous smaller e-voting problems all over the United States in the 2006 elections.

Those incidents by themselves would be a good argument for scrapping computerized voting. But the most important argument is more fundamental: e-voting is not, and never can be, transparent. The most important goal of any election system is that the voting process be reliable and resistant to manipulation. Transparency is a critical part of that. Transparency makes it more likely that any tampering with the election process will be detected before it can do any damage.

With e-voting, the process of recording, tabulating, and counting votes is opaque, if not completely secret. Indeed, in most cases, the source code to the e-voting machines is a trade secret, not available for public inspection. Even if the source code were available, there would still be no way to ensure that the software on a voting machine wasn’t tampered with after it was installed. This means that if someone did install malicious software onto a voting machine, there would likely be no way for us to find out until it was too late.

This isn’t just a theoretical concern. Last fall, Princeton computer science professor Ed Felten obtained an e-voting machine (a Diebold AccuVote-TS, one of the most widely used models in the United States) and created a virus that could be used to steal an election. The virus would spread from machine to machine through the memory cards that are inserted into the machines to install software upgrades. (Of course, Felten didn’t use his virus on any real voting machines or release the software to the public.)

Although it may be possible to close the specific security vulnerabilities Felten discovered, there’s no way to be sure that others wouldn’t be found in the future. Indeed, after being infected with Felten’s virus, a voting machine would behave exactly like a normal voting machine, except that it would inaccurately record some of the votes. Moreover, the virus could easily be designed to evade pre-election testing procedures. For example, it could be programmed to steal votes only on a particular date, or only after a hundred votes have been cast. It would be very difficult — probably impossible — to design a testing regime that ensures a voting machine has not been compromised.
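
To see why testing is such a weak defense, consider this toy sketch of the kind of trigger condition just described. It is not Felten’s code and not a real exploit; the date, threshold, and names are hypothetical. The point is only that a machine can be programmed to behave honestly under exactly the conditions testers are likely to create.

```python
from datetime import date

ELECTION_DAY = date(2008, 11, 4)   # hypothetical trigger date
QUIET_PERIOD = 100                 # behave honestly for the first hundred votes

def record_vote(tallies, choice, votes_cast_so_far, today=None):
    """Toy illustration of a trigger condition, not real malware.
    Pre-election tests are short and happen before election day, so the
    dishonest branch below never runs while anyone is watching."""
    today = today or date.today()
    cheating = today >= ELECTION_DAY and votes_cast_so_far >= QUIET_PERIOD
    if cheating and choice == "Candidate A" and votes_cast_so_far % 10 == 0:
        choice = "Candidate B"     # silently shift every tenth vote
    tallies[choice] = tallies.get(choice, 0) + 1
    return tallies

# A logic-and-accuracy test run a week early records everything faithfully...
tallies = {}
for i in range(20):
    record_vote(tallies, "Candidate A", i, today=date(2008, 10, 28))
print(tallies)   # {'Candidate A': 20}
```

A short test deck cast before election day would see nothing but correct behavior, which is exactly why paper records and post-election audits matter more than pre-election testing.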

Therefore, the safest course of action is to stop using e-voting machines entirely, and to return to a tried-and-true technology: paper ballots. There are a variety of technologies available to count paper ballots, but probably the best choice is optical-scan machines. These have a proven track record and many state election officials have decades of experience working with them.

E-voting supporters point out that paper ballots have their own flaws. And it’s true: paper is far from perfect. But there’s an important difference between paper ballots and e-voting: stealing a paper-ballot election is extremely labor-intensive. Stealing even a relatively minor race would require stationing people at multiple precincts. Except in extremely close races, stealing a major race for Congress, governor, or president would require dozens, if not hundreds, of people. It’s very difficult to keep such a large conspiracy secret. As a result, fraud with paper ballots will almost always be small-scale. Occasionally, someone might get away with stealing an election for city council or state representative, but races higher up the ticket won’t be affected.

In contrast, just one person can steal a computerized election if he’s in the right place. For example, a technician who serviced Diebold AccuVote-TS machines in the months before the 2006 elections could easily have developed a version of Felten’s virus and discreetly installed it on all the voting machines in his service territory, which could have encompassed a county or perhaps even an entire state. In fact, he wouldn’t even have needed access to all the machines; simply by putting the software on one memory card in 2005 or early 2006, he could have started the virus spreading from machine to machine as other technicians transferred memory cards among them.

The idea of a single person being able to steal a major election is much more frightening than the prospect of a larger group of people being able to steal a minor election. Of course, we should do whatever we can to prevent either scenario, but the risks posed by e-voting are clearly much more worrisome.

Fortunately, Congress is on the case, and may pass legislation restricting the use of e-voting machines during this session. In my next post, I’ll take a look at that legislation and consider some arguments for and against it.

I’d Be OK with Hinky, Given Post Hoc Articulation

Bruce Schneier has a typically interesting post about right and wrong ways to generate suspicion. In “Recognizing ‘Hinky’ vs. Citizen Informants,” he makes the case that asking amateurs for tips about suspicious behavior will have lots of wasteful and harmful results, like racial and ethnic discrimination, angry neighbors turning each other in, and so on. But people with expertise — even in very limited domains — can discover suspicious circumstances almost automatically, when they find things “hinky.”

As an example, a Rochester Institute of Technology student was recently discovered illegally possessing assault weapons (whether that ban is good policy is a separate question):

The discovery of the weapons was made only by chance. A conference center worker who served in the military was walking past Hackenburg’s dorm room. The door was shut, but the worker heard the all-too-familiar racking sound of a weapon … .

Schneier explains this in terms of “hinky”:

Each of us has some expertise in some topic, and will occasionally recognize that something is wrong even though we can’t fully explain what or why. An architect might feel that way about a particular structure; an artist might feel that way about a particular painting. I might look at a cryptographic system and intuitively know something is wrong with it, well before I figure out exactly what. Those are all examples of a subliminal recognition that something is hinky — in our particular domain of expertise.

This straddles an important line. Is it something we “can’t fully explain,” or something that feels wrong “before [one can] figure out exactly what”? My preference is that the thing should be explainable — not necessarily at the moment suspicion arises, but at some point.

I’m reminded of the Supreme Court formulation “reasonable suspicion based on articulable fact” — it was hammered into my brain in law school. It never satisfied me, because the inquiry shouldn’t end with whether the facts are “articulable” but with whether, subsequently, they were actually articulated. “The hunch of an experienced officer” is an abdication that courts have indulged for far too long.

I hear fairly often of “machine learning” that might be able to generate suspicion about terrorists. The clincher is that it’s so complicated we allegedly “can’t know” exactly what caused the machine to find a particular person, place, or thing worthy of suspicion. Given their superior memories, I think machines especially should be held to the standard of articulating the actual facts considered and the inferences drawn, reasonably, to justify whatever investigation follows.

Against Software Patents

Over at The American, I’ve got an article on what I regard as one of the biggest threats to the long-term vitality of the software industry: the patentability of software. Last year, we saw a company with no products of its own extort $612 million from Research in Motion, maker of the wildly popular BlackBerry mobile devices. Last month, Vonage, a company that pioneered Internet telephony, was ordered to pay $58 million to Verizon and enjoined from signing up new customers. Vonage is in court today appealing the decision. Given that Vonage has yet to turn a profit, the injunction is likely to be a death sentence for the company if it is upheld.

The really frustrating thing about both cases—and numerous other software patent cases in recent years—is that there was no allegation that the defendants’ products were in any way based on the plaintiffs’ technologies. It’s universally agreed that RIM and Vonage developed their technologies independently. Rather, the problem is that the patents in question cover extremely broad concepts: essentially “wireless email” in NTP’s case, and “translating between Internet addresses and phone numbers” in Verizon’s. It’s simply impossible to develop a mobile device that doesn’t check email wirelessly, or an Internet telephony application that doesn’t translate between IP addresses and phone numbers.

It seems to me that these sorts of problems are almost inevitable when you allow patents on software, because software is built out of a very large number of modular components. (A typical software product might have 100,000 lines of code, and just a handful of those lines could conceivably be considered an “invention.”) If you allow a significant number of those components to be patented, it becomes prohibitively expensive for software companies to even find, much less license, all of the patents that might be relevant to their products. And indeed, most software companies don’t even try. Many deliberately avoid doing patent searches, because “willful” infringement can carry heightened penalties.

Software patents are a relatively recent judicial innovation, and one that has never been explicitly ratified by the Supreme Court. Traditionally, the Supreme Court has held that software is essentially the description of a mathematical algorithm, and that mathematical algorithms are not eligible for patent protection. The Court opened the door to software patents a crack in 1981, when it held that a process for curing rubber was not rendered ineligible for patent protection merely because one step of the process was carried out by software. However, it emphasized that software per se is not eligible for patent protection.

The Court of Appeals for the Federal Circuit, which was created by Congress in 1982 and given jurisdiction over patent appeals, turned this principle on its head. In 1998, it ruled that a software patent merely had to produce “a useful, concrete and tangible result” to avoid the prohibition on patenting mathematical algorithms. Because no one would bother to patent software that didn’t produce a useful result, this effectively gave the patent office carte blanche to approve software patents.

And approve them it has. The patent office set a new record last year, issuing about 40,000 software patents. That represents hundreds of millions of dollars of patent lawyers’ and software engineers’ time that could otherwise have been spent producing useful software rather than filing patents on it.

Luckily, the Supreme Court has an opportunity to bring this madness to an end in Microsoft v. AT&T. Although the case is primarily about whether companies can be held liable for copies of their software distributed overseas, the Software Freedom Law Center has filed an amicus brief urging the Court to rule that software companies are not liable for overseas distribution because software isn’t patentable in the first place. I think this argument is a bit of a long shot, since most of the briefs in the case did not focus on the patentability of software. However, several justices did specifically ask about the question at oral argument, and the decision could open the door to a subsequent case addressing it directly.

A La Carte Cable and the Economics of Abundance

Ars Technica reports that FCC chairman Kevin Martin is once again pledging to force cable providers to offer “a la carte” cable programming. I’ve found discussing this issue frustrating because people have surprisingly strong intuitions about it. Indeed, with the possible exception of “independence from foreign oil,” I can’t think of a single policy idea that is simultaneously so wrong-headed and so popular across the political spectrum.

But it is wrong-headed. People have this intuition that when they sign up for cable, they’re “forced” to pay for MTV to get Nickelodeon. Or conversely, that they’re “forced” to pay for Nickelodeon to get MTV. They seem to imagine that if they could just pick and choose cable channels individually, they’d be able to get the content they want and lower their overall cable bill.

The problem with this line of reasoning is that almost none of the cost of providing cable service to you depends on the number of channels you take. In economics jargon, cable channels have close to zero marginal cost. Once the content has been produced and the coax has been laid, it costs little or nothing to give every customer access to every channel in the bundle rather than only certain channels. If they stop sending you Nickelodeon, it doesn’t reduce the total cost of providing you with service, so why would you expect a price break?
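
A toy calculation may make this clearer. The numbers below are invented, not real cable-industry figures, and real operators’ programming costs are more complicated (some carriage fees are paid per subscriber); but under the zero-marginal-cost assumption described above, the arithmetic looks like this:

```python
# Invented numbers, purely illustrative.
subscribers = 100_000
network_and_billing = 3_000_000   # per month: plant, maintenance, customer support
programming = 1_650_000           # per month: cost of producing/carrying the full lineup

def monthly_price(channels_you_take):
    """Price needed to cover the operator's total cost, split evenly across
    subscribers. Note that `channels_you_take` never enters the calculation:
    dropping MTV from your lineup doesn't make the operator's costs go away."""
    total_cost = network_and_billing + programming
    return total_cost / subscribers

print(monthly_price(["Nickelodeon", "MTV", "ESPN"]))  # 46.5
print(monthly_price(["Nickelodeon"]))                 # still 46.5
```

Either way the bill has to cover the same total cost, which is why a la carte pricing by itself shouldn’t be expected to shrink it.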

Indeed, there are lots and lots of examples of bundled products and services that no one in his or her right mind would demand be unbundled. For example, why am I forced to buy the sports section with the business section in my morning paper? Why am I forced to buy evening and weekend minutes with my cellular phone plan? Why was I forced to buy a variety of software products with my new laptop? Why am I forced to take an all-you-can-eat Internet connection rather than paying for only the minutes I need?

These add-on products all have near-zero marginal cost, so it doesn’t cost the company anything extra to provide them to all customers. Indeed, in some cases, it would actually cost more to provide them on an a la carte basis. Imagine the nightmare of being a paper boy if each customer got to decide which sections of the paper he would take.

I think that’s a pretty straightforward argument, but people still seem to find it deeply counterintuitive. It occurs to me that this is an example of a point Mike Masnick over at TechDirt has been making for quite a while now: people have a hard time reasoning about goods with zero marginal cost. People seem to have a strong intuition that anything that has value must also have cost, so even if something appears to be free, you must really be paying for it somehow. But with information goods, which can be duplicated an unlimited number of times, that’s not true. Duplicating them really does cost close to nothing, and so it’s socially efficient to make them as widely available as possible.

So the right way to think about cable bundling, I think, is that you get several channels you don’t particularly want for free along with the ones you do want. Requiring a la carte programming simply takes away those free channels. People have an intuitive sense that those channels aren’t really free — that they’re really paying for them somehow. But that intuition is wrong. The extra channels really are free. And prohibiting cable channels from giving them to you really is a bad policy.

Mike has a long series of interesting posts on the economics of abundance here.