
Dynamic Marketplace, Nimble Legislature

Years ago, when I worked on Capitol Hill, a colleague invited me to attend a meeting with some university professors who had a new idea for regulation of the telecommunications sector.

“Bits,” they said. “All regulation should center on bits.”

With convergence on IP-based communications, the regulatory silos dominating telecommunications would soon be more than anachronistic. Indeed, they would be a burden on the telecom sector. Bits were the fundamental unit of measure for the coming telecommunications era, and regulation should be formed around that reality.

My colleague and I looked at each other, amused.

Figuring out the substance is 5% of the problem. The other 95% is pulling together a sufficient coalition and muting opposition to your reform. More than a decade after this meeting and with “convergence” a rather old and obvious idea, the telecom regulatory regime is unchanged.

Like these professors did with telecom, many people can imagine legislative solutions to problems in the privacy era. I often don’t agree that their solutions are good, but nonetheless the capacity to imagine a suitable regulation is only 5% of the problem. Whether a good idea can be reduced to legislative language, passed in the same form, and implemented in its original spirit—all these are reasons to be wary of the legislative enterprise. What happens if something goes wrong?

Take the example of the privacy notices that the Gramm-Leach-Bliley Act requires financial institutions to send to consumers each year. At the time it passed, I argued that it was an anti-marketing law much more than a privacy law. I haven’t seen anyone argue that financial privacy has flourished since it passed. I have also expressed doubts about notice and its utility for consumers many times, including in this long post, part of an abandoned debate with Cato colleague Julian Sanchez.

But putting aside these substantive issues, I don’t think anybody believed when Gramm-Leach-Bliley passed that consumers should get annual privacy notices from financial services providers that don’t share information in the ways the law was meant to affect.

But it did require those notices, and after the law passed in late 1999, those privacy notices started to go out:

“It’s 2000, and we don’t share information about you.”

“It’s 2001, and we’re still not sharing information about you.”

“It’s 2002—still not sharing information.”

“It’s 2003—we continue to not share information about you.”

“Hey, friend, here in 2004, we’re not sharing information about you!”

And so on, and so on, and so on—meaningless notices that could only confuse consumers.

So I was amused to see yesterday—more than ten years later—that the House of Representatives passed H.R. 3506, the “Eliminate Privacy Notice Confusion Act.” It would allow financial services providers that don’t share personal information in ways relevant to the GLB Act to stop sending those meaningless notices every year.

It took Congress ten years to correct a simple, obvious mistake—something nobody intended to put into the law. How many years would it take to correct privacy law on which opinion was divided?

Online privacy is more difficult and fast-changing than financial privacy. The weakness of artificial “privacy notices” in affecting consumer awareness and behavior is starting to dawn on people. But even if we did know the right answers, I would be wary of writing them into law.

A dynamic market needs a nimble legislature overseeing it. There’s just no such thing. Prefer the market.

Your Medical Records Aren’t Secure

I have one observation about, and one minor difference with, the very good—and very concerning—Wall Street Journal opinion piece by Deborah Peel of Patient Privacy Rights. The piece announces PPR’s “Do Not Disclose” campaign around health information, which will soon be pouring into promiscuous, government-designed “electronic medical records.”

In a January 2009 speech, President Barack Obama said that his administration wants every American to have an electronic health record by 2014, and last year’s stimulus bill allocated over $36 billion to build electronic record systems. Meanwhile, the Senate health-care bill just approved by the House of Representatives on Sunday [now signed into law] requires certain kinds of research and reporting to be done using electronic health records. Electronic records, Mr. Obama said in his 2009 speech, “will cut waste, eliminate red tape and reduce the need to repeat expensive medical tests [and] save lives by reducing the deadly but preventable medical errors that pervade our health-care system.” But electronic medical records won’t accomplish any of these goals if patients fear sharing information with doctors because they know it isn’t private…

Describing how the Health Insurance Portability and Accountability Act (HIPAA) undermined health privacy, Peel says, “In 2002, under President George W. Bush, the right of a patient to control his most sensitive personal data—from prescriptions to DNA—was eliminated by federal regulators…” Other than the quibble about whether federal law ever gave patients anything that could be genuinely called a right, this is correct and concerning.

What’s interesting is that the policy is routinely ascribed to President Bush (not only by Peel). My suspicion is that blaming President Bush props up the dream that privacy can be maintained in a system that centralizes control of health care—if only the right party is in power.

In fact, the passage of HIPAA in 1996 (under President Bill Clinton) set the course for this outcome. The fact that HIPAA privacy was undone during the Bush administration is a coincidence convenient for his ideological and political opponents. If I’m mistaken, the proof will be the reversal of the policy during the current administration. I’m not aware of any plan for that to happen.

“Electronic record systems that don’t put patients in control of data or have inadequate security create huge opportunities for the theft, misuse and sale of personal health information,” says Peel. I agree, but more importantly, I think, public policies that don’t put patients in control create the same—or at least parallel—problems.

Transferring control of health care to the federal government transfers control of health information to the federal government. The government has interests distinct from patients, and no matter how hard one fights to protect patients’ privacy interests, the government’s interests in cost control, social engineering, and such will ineluctably win out.

Public policies that restore power to patients will restore health privacy to patients. A decade or two of exploring alternatives to patient empowerment may drive the lesson home.

Patriot Act Update

It looks as though we’ll be getting a straight one-year reauthorization of the expiring provisions of the Patriot Act, without even the minimal added safeguards for privacy and civil liberties that had been proposed in the Senate’s watered-down bill. This is disappointing, but it was also eminently predictable: between health care and the economy, it was clear Congress wasn’t going to make time for any real debate on substantive reform of surveillance law. Still, the fact that the reauthorization is only for one year suggests that the reformers plan to give it another go—though, in all probability, we won’t see any action on this until after the midterm elections.

The silver lining here is that this creates a bit of breathing room, and means legislators may now have a chance to take account of the absolutely damning Inspector General’s report that found that the FBI repeatedly and systematically broke the law by exceeding its authorization to gather information about people’s telecommunications activities. It also means the debate need not be contaminated by the panic over the Fort Hood shootings or the failed Christmas bombing—neither of which has anything whatever to do with the specific provisions at issue here, but both of which would doubtless have been invoked ad nauseam anyway.

On Fourth Amendment Privacy: Everybody’s Wrong

Everybody’s wrong. That’s sort of the message I was putting out when I wrote my 2008 American University Law Review article entitled “Reforming Fourth Amendment Privacy Doctrine.”

A lot of people have poured a lot of effort into the “reasonable expectation of privacy” formulation Justice Harlan wrote about in his concurrence to the 1967 decision in Katz v. United States. But the Fourth Amendment isn’t about people’s expectations or the reasonableness of their expectations. It’s about whether, as a factual matter, they have concealed information from others—and whether the government is being reasonable in trying to discover that information.

The upshot of the “reasonable expectation of privacy” formulation is that the government can argue—straight-faced—that Americans don’t have a Fourth Amendment interest in their locations throughout the day and night because the data revealing them is produced by their mobile phones’ interactions with telecommunications providers, and the telecom companies hold that data.

I sat down with podcaster extraordinaire Caleb Brown the other day to talk about all this. He titled our conversation provocatively: “Should the Government Own Your GPS Location?”

Government-Mandated Spying on Bank Customers Undermines both Privacy and Law Enforcement

I recently publicized an interesting map showing that so-called tax havens are not hotbeds of dirty money. A more fundamental question is whether anti-money laundering laws are an effective way of fighting crime – particularly since they substantially undermine privacy.

In this new six-minute video, I ask whether it’s time to radically rethink a system that costs billions of dollars each year, forces banks to snoop on their customers, and misallocates law enforcement resources.

Big Teacher Is Watching

Researching government invasions of privacy all day, I come across my fair share of incredibly creepy stories, but this one may just take the cake.  A lawsuit alleges that the Lower Merion School District in suburban Pennsylvania used laptops issued to each student to spy on the kids at home by remotely and surreptitiously activating the webcam built into the bezel of each one. The horrified parents of one student apparently learned about this capability when their son was called in to the assistant principal’s office and accused of “inappropriate behavior while at home.” The evidence? A still photograph taken by the laptop camera in the student’s home.

I’ll admit, at first I was somewhat skeptical—if only because this kind of spying is in such flagrant violation of so many statutes that I thought surely one of the dozens of people involved in setting it up would have piped up and said: “You know, we could all go to jail for this.” But then one of the commenters over at Boing Boing reminded me that I’d seen something like this before, in a clip from a Frontline documentary about the use of technology in one Bronx school. Scroll ahead to 4:37 and you’ll see a school administrator explain how he can monitor what the kids are up to on their laptops in class. When he sees students using the built-in Photo Booth software to check their hair instead of paying attention, he remotely triggers it to snap a picture, then laughs as the kids realize they’re under observation and scurry back to approved activities.

I’ll admit, when I first saw that documentary—it aired this past summer—that scene didn’t especially jump out at me. The kids were, after all, in class, where we expect them to be under the teacher’s watchful eye most of the time anyway. The now obvious question, of course, is: What prevents someone from activating precisely the same monitoring software when the kids take the laptops home, provided they’re still connected to the Internet? Still more chilling: What use is being made of these capabilities by administrators who know better than to disclose their extracurricular surveillance to the students? Are we confident that none of these schools employs anyone who might succumb to the temptation to check in on teenagers getting out of the shower in the morning? How would we ever know?

I dwell on this because it’s a powerful illustration of a more general point that can’t be made often enough about surveillance: Architecture is everything. The monitoring software on these laptops was installed with an arguably legitimate educational purpose, but once the architecture of surveillance is in place, abuse becomes practically inevitable.  Imagine that, instead of being allowed to install a bug in someone’s home after obtaining a warrant, the government placed bugs in all homes—promising to activate them only pursuant to a judicial order.  Even if we assume the promise were always kept and the system were unhackable—both wildly implausible suppositions—the amount of surveillance would surely spike, because the ease of resorting to it would be much greater even if the formal legal prerequisites remained the same. And, of course, the existence of the mics would have a psychological effect of making surveillance seem like a default.

You can see this effect in law enforcement demands for data retention laws, which would require Internet Service Providers to keep at least customer transactional logs for a period of years. In face-to-face interactions, of course, our default assumption is that no record at all exists of the great majority of our conversations. Law enforcement accepts this as a fact of nature. But with digital communication, the default is that just about every activity creates a record of some sort, and so police come to see it as outrageous that a potentially useful piece of evidence might be deleted.

Unfortunately, we tend to discuss surveillance in myopically narrow terms. Should the government be able to listen in on the phone conversations of known terrorists? To pose the question is to answer it. What kind of technological architecture is required to reliably sweep up all the communications an intelligence agency might want—for perfectly legitimate reasons—and what kind of institutional incentives and inertia does that architecture create? That is a far more complicated question—and one likely to seem too abstract to bother about for legislators focused on the threat of the week.

The Government Has Your Baby’s DNA

My 2004 Cato Policy Analysis, “Understanding Privacy – and the Real Threats to It,” talks about how government programs intended to do good have unintended privacy costs. “The helping hand of government routinely strips away privacy before it goes to work,” I wrote.

There could be no better illustration of that than the recent CNN report on government collection and warehousing of American babies’ DNA. “Scientists have said the collection of DNA samples is a ‘gold mine’ for doing research,” notes a sidebar to the story.

I have no doubt that it is—and that government-mandated harvesting of this highly valuable personal data from children is an unjust enrichment of the beneficiaries.