Topic: Telecom, Internet & Information Policy

Congress Backs Official Idiocy

Here’s Congress siding with Boston’s idiotic public officials. The Terrorist Hoax Improvements Act of 2007 would allow government officials to sue people who fail to promptly clear things up when those officials mistakenly think that they have stumbled over a terrorist plot.

There’s nothing in the bill allowing individuals or corporations to sue government officials when hare-brained overreactions interfere with their lives and business or destroy their property.

Digg, Hacking, and Civil Disobedience

Randy Picker asks when civil disobedience is acceptable, and concludes that posting HD-DVD encryption keys doesn’t cut it:

I wouldn’t think that not being able to play an encrypted high-definition DVD on your platform of choice would fall into that category. I understand fully that people disagree about whether digital rights management and the Digital Millennium Copyright Act are good copyright policy. I also understand that users can be frustrated by limitations imposed by DRM (I’ve run into those myself). But I think the DMCA (and the DRM that it makes possible) is a long, long way from the sorts of laws for which civil disobedience is an appropriate response. Simply not liking the law is not enough. There must be more, something that recognizes the nature of reasonable disagreement over law, and the range of possible legitimate variations about those laws.

Ed Felten points out some of the reasons that geeks feel so strongly about this case. Partly it’s geeks’ knee-jerk opposition to censorship, and partly it’s a protest against the DMCA.

There are a variety of reasons that the DMCA is bad public policy. I presented some of them in a paper I did for Cato last year. But instead of rehashing those arguments, let me quote an excellent essay by Paul Graham about America’s heritage of hacking. Prof. Picker dismissively characterizes this week’s incident as a dispute over “being able to play an encrypted high-definition DVD on your platform of choice,” but from the perspective of computer programmers it’s about something more fundamental than that:

Hacking predates computers. When he was working on the Manhattan Project, Richard Feynman used to amuse himself by breaking into safes containing secret documents. This tradition continues today. When I was in grad school, a hacker friend of mine who spent too much time around MIT had his own lock picking kit. (He now runs a hedge fund, a not unrelated enterprise.)

It is sometimes hard to explain to authorities why one would want to do such things. Another friend of mine once got in trouble with the government for breaking into computers. This had only recently been declared a crime, and the FBI found that their usual investigative technique didn’t work. Police investigation apparently begins with a motive. The usual motives are few: drugs, money, sex, revenge. Intellectual curiosity was not one of the motives on the FBI’s list. Indeed, the whole concept seemed foreign to them.

Those in authority tend to be annoyed by hackers’ general attitude of disobedience. But that disobedience is a byproduct of the qualities that make them good programmers. They may laugh at the CEO when he talks in generic corporate newspeech, but they also laugh at someone who tells them a certain problem can’t be solved. Suppress one, and you suppress the other…

It is by poking about inside current technology that hackers get ideas for the next generation. No thanks, intellectual homeowners may say, we don’t need any outside help. But they’re wrong. The next generation of computer technology has often — perhaps more often than not — been developed by outsiders.

In 1977 there was, no doubt, some group within IBM developing what they expected to be the next generation of business computer. They were mistaken. The next generation of business computer was being developed on entirely different lines by two long-haired guys called Steve in a garage in Los Altos. At about the same time, the powers that be were cooperating to develop the official next generation operating system, Multics. But two guys who thought Multics excessively complex went off and wrote their own. They gave it a name that was a joking reference to Multics: Unix.

The latest intellectual property laws impose unprecedented restrictions on the sort of poking around that leads to new ideas. In the past, a competitor might use patents to prevent you from selling a copy of something they made, but they couldn’t prevent you from taking one apart to see how it worked. The latest laws make this a crime. How are we to develop new technology if we can’t study current technology to figure out how to improve it?

Why are programmers so violently opposed to these laws? If I were a legislator, I’d be interested in this mystery — for the same reason that, if I were a farmer and suddenly heard a lot of squawking coming from my hen house one night, I’d want to go out and investigate. Hackers are not stupid, and unanimity is very rare in this world. So if they’re all squawking, perhaps there is something amiss.

Could it be that such laws, though intended to protect America, will actually harm it? Think about it. There is something very American about Feynman breaking into safes during the Manhattan Project. It’s hard to imagine the authorities having a sense of humor about such things over in Germany at that time. Maybe it’s not a coincidence.

Hackers are unruly. That is the essence of hacking. And it is also the essence of Americanness. It is no accident that Silicon Valley is in America, and not France, or Germany, or England, or Japan. In those countries, people color inside the lines.

Digging Piracy

Something rather astonishing happened on the Internet on Tuesday. Let me start with a bit of background: Hollywood has an encryption system called AACS that it uses to scramble the content on high-definition home video discs. Like every copy protection system before it, AACS survived only a few months before hackers found security flaws in it. In the process, they extracted a 16-byte key (basically, a very long number) that allows programmers to unlock the encrypted content.
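To make the background concrete, here’s a toy sketch of what it means for a 16-byte key to “unlock” scrambled content. This is not the actual AACS protocol, which chains several keys together; it just illustrates that AES-128, the cipher underlying AACS, turns a 16-byte number into the ability to decrypt. The key below is made up.

```python
# A toy sketch, NOT the real AACS scheme. It only shows that AES-128, the
# cipher AACS is built on, makes possession of a 16-byte number equivalent
# to the power to decrypt. Requires the third-party "cryptography" package.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = bytes.fromhex("000102030405060708090a0b0c0d0e0f")  # 16 bytes, 32 hex digits
iv = bytes(16)  # fixed IV: fine for a demo, never for real use

def scramble(plaintext: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(plaintext) + enc.finalize()

def unscramble(ciphertext: bytes) -> bytes:
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    return dec.update(ciphertext) + dec.finalize()

disc_data = scramble(b"sixteen byte msg")            # AES works on 16-byte blocks
assert unscramble(disc_data) == b"sixteen byte msg"  # the key alone suffices
```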

This key had been floating around various minor websites over the last couple of months. But last month, the organization that controls the AACS system began sending cease-and-desist letters to ISPs, demanding that websites displaying the key take it down. In response, people all over the web began posting copies of the key, which is just a 16-byte number, typically written out as 32 hexadecimal characters.

One of the sites where the key appeared was Digg. Digg is a news site on which all of the stories are chosen by the collective wisdom of readers. Anyone can submit a story, and other users can then vote for it (called “digging”) or against it (called “burying”). The stories that get the most votes are promoted to the front page, where they’re viewed by hundreds of thousands of people.
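For readers unfamiliar with the site, here’s a toy model of the idea. This is my own simplification; Digg’s actual promotion algorithm is more elaborate and isn’t public.

```python
# A toy model of Digg-style collective editing (my simplification, not
# Digg's real algorithm): every reader vote moves a story's score, and
# the front page is simply the highest-scoring submissions.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    diggs: int = 0   # votes for
    buries: int = 0  # votes against

    @property
    def score(self) -> int:
        return self.diggs - self.buries

def front_page(stories: list[Story], size: int = 10) -> list[Story]:
    # No human editor: the top of the site is just the highest net vote counts.
    return sorted(stories, key=lambda s: s.score, reverse=True)[:size]

stories = [Story("Obscure hardware review"), Story("AACS key revealed")]
stories[1].diggs += 15000  # a deluge of reader votes
print([s.title for s in front_page(stories)])  # the key story tops the page
```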

Somebody posted a story containing the AACS key, and Digg got a letter demanding that the story be removed. Digg complied. Princeton computer science professor Ed Felten describes what happened next:

Then Digg’s users revolted. As word got around about what Digg was doing, users launched a deluge of submissions to Digg, all mentioning or linking to the key. Digg’s administrators tried to keep up, but submissions showed up faster than the administrators could cancel them. For a while yesterday, the entire front page of Digg — the “hottest” pages according to Digg’s algorithms — consisted of links to the AACS key.

Last night, Digg capitulated to its users. Digg promised to stop removing links to the key, and Digg founder Kevin Rose even posted the key to the site himself.

Fred von Lohmann has a good rundown on the legal liability Digg could face from allowing the key to be posted on their site. But more interesting, I think, is the light the incident sheds on the broader debate over Internet piracy.

In a sense, Digg is a microcosm of the Internet at large. What makes the Internet so powerful is that we’re finding more and more clever ways to turn tasks that once required a human being over to machines. In the case of Digg, Kevin Rose found a way to automate the editing process. Instead of having a single human being read through all the stories and select the best ones, he created a system in which readers—who are on the site reading stories anyway—could quickly and easily choose stories for him. This has made the site extraordinarily successful.

But technology is amoral. A system that transmits news and information can just as easily be used to transmit pirated music, encryption keys, or even child pornography. Moreover, you can’t fine a computer algorithm or throw it in jail. That means that as we automate more and more of our information-distribution systems, there are fewer and fewer ways for legal authorities to control what kinds of information are transmitted.

In the early days of the Internet, people created special-purpose tools like Napster whose primary use was to swap illicit materials, and copyright holders got those tools shut down. But increasingly, illicit sharing is done with the same tools we use to share legal content. Napster was used mostly to trade copyrighted music; one of its successors, BitTorrent, is widely used to distribute legitimate content, including open source software, computer game updates, and authorized movie downloads. Given how many legitimate uses BitTorrent has, it would be unreasonable to outlaw it.

On Tuesday, Kevin Rose had only two options: he could allow the encryption key to appear on his site, or he could shut the site down. Shutting it down wasn’t a serious option (Digg is a multi-million-dollar business), so his only real choice was to let the content through.

As a society, we face precisely the same dilemma with the Internet as a whole. People use the Internet to transmit information most of us think they shouldn’t be transmitting, but our only alternatives are to cripple the Internet or to turn the country into a police state. Nobody wants to do either of those things, so we’re going to have to live with the fact that any information a significant number of people want to share will be shared. We’re going to have to find ways to adjust our copyright system to a world in which anyone willing to break the law can get most copyrighted content for free. As a supporter of copyright, I don’t find that conclusion a happy one, but there doesn’t seem to be anything we can do about it.

REAL ID Comment Campaign

The comment period on the Department of Homeland Security regulations implementing the REAL ID Act ends early next week. A broad coalition of groups has put together a Web page urging people to submit comments. The page includes instructions for commenting, which is quite helpful given how arcane the regulatory process is.

Feel free to comment – good, bad, or indifferent – on the regs. My views are known, but the Department of Homeland Security doesn’t know yours.

How to Reform E-Voting

On Friday, I made the case for scrapping computerized voting. Today I’m going to look at the leading legislative proposal to accomplish that goal, Rush Holt’s Voter Confidence and Increased Accessibility Act. As I wrote in a recent article, the proposal would do several things:

It bans the use of computerized voting machines that lack a voter-verified paper trail. It mandates that the paper records be the authoritative source in any recounts, and requires prominent notices reminding voters to double-check the paper record before leaving the polling place. It mandates automatic audits of at least three percent of all votes cast to detect discrepancies between the paper and electronic records. It bans voting machines that contain wireless networking hardware and prohibits connecting voting machines to the Internet. Finally, it requires that the source code for e-voting machines be made publicly available.

All of these seem to me to be big steps in the right direction. Requiring source code disclosure gives security experts like Ed Felten and Avi Rubin the opportunity to study e-voting systems and alert the authorities if major security problems are discovered. Banning Internet connections and wireless networking hardware closes off two major avenues hackers could use to compromise the machines. Perhaps most importantly, by requiring that machines produce paper records, that those records be the official record, and that the records be randomly audited, the legislation would provide a relatively high degree of certainty that even if a voting machine were hacked, we would be able to detect it and recover by using the paper records.
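A quick back-of-the-envelope calculation shows why even a small random audit has teeth. The model below is my own simplification, not the bill’s exact procedure: it assumes the audit samples whole precincts at random and that any hand count of a tampered precinct exposes the discrepancy.

```python
# Back-of-the-envelope model (my own, not the bill's procedure): if fraud
# requires tampering with m precincts, and a random 3% of precincts get
# their paper records hand-counted, what's the chance the audit hits at
# least one tampered precinct?
def detection_probability(tampered_precincts: int, audit_rate: float = 0.03) -> float:
    # P(audit misses every tampered precinct) ~= (1 - audit_rate)^m
    return 1 - (1 - audit_rate) ** tampered_precincts

for m in (5, 20, 50, 100):
    print(f"{m:>3} tampered precincts -> {detection_probability(m):.0%} chance of detection")
# 5 -> 14%, 20 -> 46%, 50 -> 78%, 100 -> 95%. Fraud big enough to swing a
# major race is very likely to trip the audit, and one hit invites scrutiny.
```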

All in all, this seems like a good idea to me. But the legislation is not without its critics. I’ll consider two major criticisms below the fold.

One set of objections comes from state election officials. Some officials argue that some of the legislation’s requirements would impose an unreasonable burden on them. I’m a strong proponent of federalism, so those concerns are worth taking seriously. But the Holt proposal appears to do a reasonably good job of respecting states’ autonomy in designing their own election procedures. For example, if a state already has an auditing procedure that differs from the one mandated in the Holt bill, it is permitted to keep using its own procedure so long as the National Institute of Standards and Technology certifies that it will be no less effective. Other tweaks may be appropriate to avoid stepping on the toes of state election officials, but on the whole, the Holt legislation strikes a good balance between local autonomy and the need to ensure that federal elections are secure and transparent.

The most vocal critics of the legislation are activists who feel it does not go far enough. They believe that nothing less than an outright ban on computerized voting is acceptable, and they have some good arguments. They point out that the add-on printers now on the market are slow and unreliable, that technical glitches can lead to long lines that drive away voters, and that many voters don’t bother to check the paper record of their vote anyway, which reduces the paper trail’s usefulness.

These are all good reasons to prefer old-fashioned paper ballots over e-voting machines with a printer bolted on. Fortunately, the Holt bill does not require any state to use computerized voting machines. That decision is left up to the states, and activists are free to lobby state legislatures to use only paper ballots.

The activists may be right that an outright ban on computerized voting would be a simpler and more elegant solution. It would certainly make the legislation a lot shorter, since most of the bill is devoted to addressing the defects of computerized voting machines. However, there does not appear to be much appetite in this Congress for an outright e-voting ban, and I don’t think we can afford to run another election on the current crop of buggy and insecure voting machines. The Holt bill may not be perfect, but it is a big step toward more secure and transparent elections.

The Case Against E-Voting

Ars Technica has an article about problems created by e-voting machines in Sunday’s French elections. Apparently, technical glitches caused long lines, and some voters were turned away from the polls.

France’s problems are not an isolated incident. In November’s U.S. election, one county in Florida (ironically, the one Katherine Harris was vacating) seems to have lost about 10,000 votes, a number larger than the margin of victory between the candidates. And there were numerous smaller e-voting problems all over the United States in the 2006 elections.

Those incidents by themselves would be a good argument for scrapping computerized voting. But the most important argument is more fundamental: e-voting is not, and never can be, transparent. The most important goal of any election system is that the voting process be reliable and resistant to manipulation. Transparency is a critical part of that. Transparency makes it more likely that any tampering with the election process will be detected before it can do any damage.

With e-voting, the process of recording, tabulating, and counting votes is opaque, if not completely secret. Indeed, in most cases, the source code to the e-voting machines is a trade secret, not available for public inspection. Even if the source code were available, there would still be no way to ensure that the software on a voting machine wasn’t tampered with after it was installed. This means that if someone did install malicious software onto a voting machine, there would likely be no way for us to find out until it was too late.

This isn’t just a theoretical concern. Last fall, Princeton computer science professor Ed Felten obtained an e-voting machine (a Diebold AccuVote-TS, one of the most widely used models in the United States) and created a virus that could be used to steal an election. The virus would spread from machine to machine via the memory cards that are inserted into the machines to install software upgrades. (Of course, Felten didn’t use his virus on any real voting machines or release the software to the public.)

Although it may be possible to close the specific security vulnerabilities Felten discovered, there’s no way to be sure that others won’t be found in the future. Indeed, after being infected with Felten’s virus, a voting machine would behave exactly like a normal voting machine, except that it would inaccurately record some of the votes. Moreover, the virus could easily be designed to evade pre-election testing procedures. For example, it could be programmed to steal votes only on a particular date and time, or only after a hundred votes had been cast. It would be very difficult — probably impossible — to design a testing regime that would ensure that a voting machine has not been compromised.
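To see why, consider a hypothetical trigger condition, which I’ve invented for illustration (it is not Felten’s actual code). Malicious software need only recognize conditions that look like a test and behave honestly under them:

```python
# A hypothetical trigger, invented for illustration. The malicious logic
# simply stays dormant whenever the environment looks like a test, so every
# pre-election check observes an honest machine.
from datetime import date, datetime

ELECTION_DAY = date(2008, 11, 4)  # a made-up target date

def stay_dormant(now: datetime, votes_cast_today: int) -> bool:
    if now.date() != ELECTION_DAY:
        return True   # logic-and-accuracy tests happen on other days
    if votes_cast_today < 100:
        return True   # test decks are short; real-precinct turnout isn't
    return False      # only now would miscounting switch on

# Any realistic pre-election test run sees perfectly honest behavior.
assert stay_dormant(datetime(2008, 10, 20, 9, 30), votes_cast_today=40)
```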

Therefore, the safest course of action is to stop using e-voting machines entirely, and to return to a tried-and-true technology: paper ballots. There are a variety of technologies available to count paper ballots, but probably the best choice is optical-scan machines. These have a proven track record and many state election officials have decades of experience working with them.

E-voting supporters point out that paper ballots have their own flaws, and it’s true: paper is far from perfect. But there’s an important difference between paper ballots and e-voting: stealing a paper-ballot election is extremely labor-intensive. Stealing even a relatively minor race would involve stationing people at multiple precincts. Except in extremely close races, stealing a major race, such as a congressional seat, a governorship, or the presidency, would require dozens, if not hundreds, of people, and it’s very difficult to keep a conspiracy that large secret. As a result, fraud with paper ballots will almost always be small-scale. Occasionally someone might get away with stealing an election for city council or state representative, but races higher up the ticket won’t be affected.

In contrast, just one person can steal a computerized election if he’s in the right place. For example, a technician who serviced Diebold AccuVote-TS machines in the months before the 2006 elections could easily have developed a version of Felten’s virus and discreetly installed it on all the voting machines in his service territory, which could have encompassed a county or perhaps even an entire state. In fact, he wouldn’t even have needed access to all the machines; simply by putting the software on one memory card in 2005 or early 2006, he could have started the virus spreading from machine to machine as other technicians transferred memory cards among them.

The idea of a single person being able to steal a major election is much more frightening than the prospect of a larger group of people being able to steal a minor election. Of course, we should do whatever we can to prevent either scenario, but the risks posed by e-voting are clearly much more worrisome.

Fortunately, Congress is on the case, and may pass legislation restricting the use of e-voting machines during this session. In my next post, I’ll take a look at that legislation and consider some arguments for and against it.

I’d Be OK with Hinky, Given Post Hoc Articulation

Bruce Schneier has a typically interesting post about right and wrong ways to generate suspicion. In “Recognizing ‘Hinky’ vs. Citizen Informants,” he makes the case that asking amateurs for tips about suspicious behavior will have lots of wasteful and harmful results, like racial and ethnic discrimination, angry neighbors turning each other in, and so on. But people with expertise — even in very limited domains — can discover suspicious circumstances almost automatically, when they find things “hinky.”

As an example, a Rochester Institute of Technology student was recently discovered illegally possessing assault weapons (though the wisdom of that prohibition is a separate question):

The discovery of the weapons was made only by chance. A conference center worker who served in the military was walking past Hackenburg’s dorm room. The door was shut, but the worker heard the all-too-familiar racking sound of a weapon … .

Schneier explains this in terms of “hinky”:

Each of us has some expertise in some topic, and will occasionally recognize that something is wrong even though we can’t fully explain what or why. An architect might feel that way about a particular structure; an artist might feel that way about a particular painting. I might look at a cryptographic system and intuitively know something is wrong with it, well before I figure out exactly what. Those are all examples of a subliminal recognition that something is hinky — in our particular domain of expertise.

This straddles an important line. Is it something we “can’t fully explain,” or something that feels wrong “before [one can] figure out exactly what”? My preference is that the thing should be explainable — not necessarily at the moment suspicion arises, but at some point.

I’m reminded of the Supreme Court formulation “reasonable suspicion based on articulable fact” — it was hammered into my brain in law school. It never satisfied me because the inquiry shouldn’t end at “articulable” but at whether, subsequently, the facts were actually articulated. “The hunch of an experienced officer” is an abdication that courts have indulged far too long.

I hear fairly often of “machine learning” that might be able to generate suspicion about terrorists. The clincher is that it’s so complicated we allegedly “can’t know” exactly what caused the machine to find a particular person, place, or thing worthy of suspicion. Given their superior memories, I think machines especially should be held to the standard of articulating the actual facts considered and the inferences drawn, reasonably, to justify whatever investigation follows.
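To make that concrete, here’s a sketch of what an articulation requirement might look like. The features and weights are invented for illustration, not taken from any real screening system; the point is only that a scoring system can log exactly which facts contributed to a suspicion score, creating a record a reviewing court could demand.

```python
# A sketch of an "articulated facts" standard for machine-generated suspicion.
# Features and weights are invented for illustration: alongside any score,
# the system records which facts contributed and by how much.
WEIGHTS = {
    "paid_cash_for_one_way_ticket": 1.2,
    "no_checked_luggage": 0.4,
    "name_matches_watchlist": 2.5,
}

def score_with_reasons(facts: dict[str, bool]) -> tuple[float, list[str]]:
    score, reasons = 0.0, []
    for fact, weight in WEIGHTS.items():
        if facts.get(fact):
            score += weight
            reasons.append(f"{fact} (+{weight})")
    return score, reasons

score, reasons = score_with_reasons({
    "paid_cash_for_one_way_ticket": True,
    "no_checked_luggage": True,
    "name_matches_watchlist": False,
})
print(score)    # 1.6
print(reasons)  # the articulated facts behind the score
```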