Tag: technology

Speier (D-Silicon Valley) Sows Techno-panic

“Techno-Panics” are public and political crusades against the use of new media or technologies, particularly driven by the desire to protect children. As the moniker suggests, they’re not rational. Techno-panic is about imagined or trumped-up threats, often with a tenuous, coincidental, or potential relationship to the Internet. Adam Thierer and Berin Szoka of the Progress & Freedom Foundation have written extensively about techno-panics on the TechLiberationFront blog.

Talking about techno-panic does not deny the existence of serious problems. It merely identifies when policymakers and advocates lose their sense of proportion and react in ways that fail to address the genuine issues—such as censoring a web site because it reveals the fact that some few among a community of tens of millions of people will conspire to break the law.

You’d think that a congressional representative from the heart of Silicon Valley would not sow techno-panic, but here’s Jackie Speier (D-Calif.) on the Craigslist censorship issue:

“We can’t forget the victims, we can’t rest easy. Child-sex trafficking continues, and lawmakers need to fight future machinations of Internet-driven sites that peddle children.”

Of all representatives in Congress, Speier should know that Craigslist has been making it easier for law enforcement to locate and prosecute perpetrators of crimes against children. Pushing this activity to rogue sites does law enforcement no good. Censoring Craigslist only masks the problem, which may be in the interest of politicians, but definitely not of children.

GPS Tracking and a ‘Mosaic Theory’ of Government Searches

The Electronic Frontier Foundation trumpets a surprising privacy win last week in the U.S. Court of Appeals for the D.C. Circuit. In U.S. v. Maynard (PDF), the court held that the use of a GPS tracking device to monitor the public movements of a vehicle—something the Supreme Court had held not to constitute a Fourth Amendment search in U.S. v. Knotts—could nevertheless become a search when conducted over an extended period. The Court in Knotts had considered only tracking that encompassed a single journey on a particular day, reasoning that the target of surveillance could have no “reasonable expectation of privacy” in the fact of a trip that any member of the public might easily observe. But the Knotts Court explicitly reserved judgment on potential uses of the technology with broader scope, such as “dragnet” tracking that subjected large numbers of people to “continuous 24-hour surveillance.” Here, the D.C. court determined that continuous tracking for a period of over a month did violate a reasonable expectation of privacy—and therefore constituted a Fourth Amendment search requiring a judicial warrant—because such intensive, secretive tracking by means of public observation is so costly and risky that no reasonable person expects to be subject to such comprehensive surveillance.

Perhaps ironically, the court’s logic here rests on the so-called “mosaic theory” of privacy, which the government has relied on when resisting Freedom of Information Act requests. The theory holds that pieces of information that are not in themselves sensitive or potentially injurious to national security can nevertheless be withheld, because in combination (with each other or with other public facts) they permit the inference of facts that are sensitive or secret. The “mosaic,” in other words, may be far more than the sum of the individual tiles that constitute it. Leaving aside for the moment the validity of the government’s invocation of this idea in FOIA cases, there’s an obvious intuitive appeal to the idea, and indeed, it fits our real-world expectations about privacy much better than the cruder theory that assumes the sum of “public” facts must always itself be a public fact.
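The mosaic can be made concrete with a toy sketch in Python (all names and data here are hypothetical, and a real analysis would be far more sophisticated): each individual sighting is an unremarkable “public” fact, but simple aggregation over a few weeks supports inferences—a home address, a recurring clinic visit—that no single observer would ever make.

```python
from collections import Counter

def infer_mosaic(sightings):
    """Aggregate individually public (day, hour, place) sightings.

    Any one sighting is innocuous; the combination supports inferences
    (likely home address, recurring habits) that we treat as private.
    """
    night_places = Counter(place for _, hour, place in sightings
                           if hour >= 22 or hour <= 5)
    all_places = Counter(place for _, _, place in sightings)
    likely_home = night_places.most_common(1)[0][0] if night_places else None
    return {"likely_home": likely_home, "visit_counts": dict(all_places)}
```

Counting is the crudest possible aggregation, but it already illustrates the point: the output facts are of a different kind than the input facts.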

Consider an illustrative hypothetical.  Alice and Bob are having a romantic affair that, for whatever reason, they prefer to keep secret. One evening before a planned date, Bob stops by the corner pharmacy and—in full view of a shop full of strangers—buys some condoms.  He then drives to a restaurant where, again in full view of the other patrons, they have dinner together.  They later drive in separate cars back to Alice’s house, where the neighbors (if they care to take note) can observe from the presence of the car in the driveway that Alice has an evening guest for several hours. It being a weeknight, Bob then returns home, again by public roads. Now, the point of this little story is not, of course, that a judicial warrant should be required before an investigator can physically trail Bob or Alice for an evening.  It’s simply that in ordinary life, we often reasonably suppose the privacy or secrecy of certain facts—that Bob and Alice are having an affair—that could in principle be inferred from the combination of other facts that are (severally) clearly public, because it would be highly unusual for all of them to be observed by the same public.  Even more so when, as in Maynard, we’re talking not about the “public” events of a single evening, but comprehensive observation over a period of weeks or months.  One may reasonably expect that “anyone” might witness any one such event; it does not follow that one cannot reasonably expect that no particular person or group will be privy to all of them. Sometimes, of course, even our reasonable expectations are frustrated without anyone’s rights being violated: a neighbor of Alice’s might by chance have been at the pharmacy and then at the restaurant.
But as the Supreme Court held in Kyllo v. United States, even when some information might in principle be possible to obtain by public observation, the use of technological means not in general public use to learn the same facts may nevertheless qualify as a Fourth Amendment search, especially when the effect of the technology is to render easy a degree of monitoring that would otherwise be so laborious and costly as to be normally infeasible.

Now, as Orin Kerr argues at the Volokh Conspiracy, significant as the particular result in this case might be, it’s the approach to Fourth Amendment privacy embedded here that’s the really big story. Orin, however, thinks it a hopelessly misguided one—and the objections he offers are all quite forceful.  Still, I think on net—especially as technology makes such aggregative monitoring more of a live concern—some kind of shift to a “mosaic” view of privacy is going to be necessary to preserve the practical guarantees of the Fourth Amendment, just as in the 20th century a shift from a wholly property-centric to a more expectations-based theory was needed to prevent remote sensing technologies from gutting its protections. But let’s look more closely at Orin’s objections.

First, there’s the question of novelty. Under the mosaic theory, he writes:

[W]hether government conduct is a search is measured not by whether a particular individual act is a search, but rather whether an entire course of conduct, viewed collectively, amounts to a search. That is, individual acts that on their own are not searches, when committed in some particular combinations, become searches. Thus in Maynard, the court does not look at individual recordings of data from the GPS device and ask whether they are searches. Instead, the court looks at the entirety of surveillance over a one-month period and views it as one single “thing.” Off the top of my head, I don’t think I have ever seen that approach adopted in any Fourth Amendment case.

I can’t think of one that explicitly adopts that argument.  But consider again the Kyllo case mentioned above.  Without a warrant, police used thermal imaging technology to detect the presence of marijuana-growing lamps within a private home from a vantage point on a public street. In a majority opinion penned by Justice Scalia, the court balked at this: The scan violated the sanctity and privacy of the home, though it involved no physical intrusion, by revealing the kind of information that might trigger Fourth Amendment scrutiny. But stop and think for a moment about how thermal imaging technology works, and try to pinpoint where exactly the Fourth Amendment “search” occurs.  The thermal radiation emanating from the home was, well… emanating from the home, and passing through or being absorbed by various nearby people and objects. It beggars belief to think that picking up the radiation could in itself be a search—you can’t help but do that!

When the radiation is actually measured, then? More promising, but then any use of an infrared thermometer within the vicinity of a home might seem to qualify, whether or not the purpose of the user was to gather information about the home, and indeed, whether or not the thermometer was precise enough to reveal any useful information about internal temperature variations within the home.  The real privacy violation here—the disclosure of private facts about the interior of the home—occurs only when a series of very many precise measurements of emitted radiation are processed into a thermographic image.  To be sure, it is counterintuitive to describe this as a “course of conduct” because the aggregation and analysis are done quite quickly within the processor of the thermal camera, which makes it natural to describe the search as a single act: Creating a thermal image.  But if we zoom in, we find that what the Court deemed an unconstitutional invasion of privacy was ultimately the upshot of a series of “public” facts about ambient radiation levels, combined and analyzed in a particular way.  The thermal image is, in a rather literal sense, a mosaic.

The same could be said about long-distance spy microphones: Vibrating air is public; conversations are private. Or again, consider location tracking, which is unambiguously a “search” when it extends to private places: It might be that what is directly measured is only the “public” fact about the strength of a particular radio signal at a set of receiver sites; the “private” facts about location could be described as a mere inference, based on triangulation analysis (say), from the observable public facts.
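That inference chain can be sketched concretely (the path-loss constants below are illustrative assumptions, not any real system’s parameters): each receiver measures only a “public” signal strength, yet three such measurements, combined, pin down a location.

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    """Log-distance path-loss model: signal strength -> estimated range."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Solve for the point at distances d1, d2, d3 from three known
    receiver sites by linearizing the three circle equations."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a, b = 2 * (x2 - x1), 2 * (y2 - y1)
    c = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d, e = 2 * (x3 - x2), 2 * (y3 - y2)
    f = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    x = (c * e - f * b) / (e * a - b * d)
    y = (c * d - a * f) / (b * d - a * e)
    return x, y
```

Nothing in either function touches a “private” fact directly; the location emerges only from the combination.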

There’s also a scope problem. When, precisely, do individual instances of permissible monitoring become a search requiring judicial approval? That’s certainly a thorny question, but it arises as urgently in the other type of hypothetical case alluded to in Knotts, involving “dragnet” surveillance of large numbers of individuals over time. Here, too, there’s an obvious component of duration: Nobody imagines that taking a single photograph revealing the public locations of perhaps hundreds of people at a given instant constitutes a Fourth Amendment search. And just as there’s no precise number of grains of sand that constitutes a “heap,” there’s no obvious way to say exactly what number of people, observed for how long, are required to distinguish individualized tracking from “dragnet” surveillance.  But if we anchor ourselves in the practical concerns motivating the adoption of the Fourth Amendment, it seems clear enough that an interpretation that detected no constitutional problem with continuous monitoring of every public movement of every citizen would mock its purpose. If we accept that much, a line has to be drawn somewhere. As I recall, come to think of it, Orin has himself proposed a procedural dichotomy between electronic searches that are “person-focused” and those that are “data-focused.”  This approach has much to recommend it, but is likely to present very similar boundary-drawing problems.

Orin also suggests that the court improperly relies upon a “probabilistic” model of the Fourth Amendment here (looking to what expectations about monitoring are empirically reasonable) whereas the Court has traditionally relied on a “private facts” model to deal with cases involving new technologies (looking to which types of information it is reasonable to consider private by their nature). Without recapitulating the very insightful paper linked above, the boundaries between models in Orin’s highly useful schema do not strike me as quite so bright. The ruling in Kyllo, after all, turned in part on the fact that infrared imaging devices are not in “general public use,” suggesting that the identification of “private facts” itself has an empirical and probabilistic component.  The analyses aren’t really separate. What’s crucial to bear in mind is that there are always multiple layers of facts involved with even a relatively simple search: Facts about the strength of a particular radio signal, facts about a location in a public or private place at a particular instant, facts about Alice and Bob’s affair. In cases involving new technologies, the problem—though seldom stated explicitly—is often precisely which domain of facts to treat as the “target” of the search. The point of the expectations analysis in Maynard is precisely to establish that there is a domain of facts about macro-level behavioral patterns distinct from the unambiguously public facts about specific public movements at particular times, and that we have different attitudes about these domains.

Sorting all this out going forward is likely to be every bit as big a headache as Orin suggests. But if the Fourth Amendment has a point—if it enjoins us to preserve a particular balance between state power and individual autonomy—then as technology changes, its rules of application may need to get more complicated to track that purpose, as they did when the Court ruled that an admirably simple property rule was no longer an adequate criterion for identifying a “search.”  Otherwise we make Fourth Amendment law into a cargo cult, a set of rituals whose elegance of form is cold consolation for their abandonment of function.

Busting the Myth that Web Sites ‘Sell Your Data’

On TLF, Berin Szoka delivers something just shy of a rant—and a good one—against the myth that Web sites like Facebook sell or give your data to advertisers.

In targeted online advertising, the business model is generally to sell advertisers access to people based on their demographics. It is not to sell individuals’ personal and contact info. Doing the latter would undercut the advertising business model and the profitability of the web sites carrying the advertising.
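A minimal sketch of that model (a hypothetical class and fields, not any real ad platform’s API): the advertiser specifies an audience, the platform matches users internally, and only opaque impression tokens—never names or emails—come back.

```python
import hashlib

class AdPlatform:
    """Toy targeted-advertising platform: access to an audience is
    sold, but individual user records never leave the platform."""

    def __init__(self, users):
        # Internal store: {user_id: {"email": ..., "age": ..., "interests": {...}}}
        self._users = users

    def sell_impressions(self, interest, min_age):
        """Return one opaque impression token per matching user.

        The advertiser learns how many eyeballs it reached, not who
        they are; leaking the records would destroy the business.
        """
        return [hashlib.sha256(uid.encode()).hexdigest()[:12]
                for uid, u in self._users.items()
                if interest in u["interests"] and u["age"] >= min_age]
```

The design choice is the business model: the user database is the asset, so the platform has every incentive to keep it in-house.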

I did some myth-busting of my own last year when the Wall Street Journal published erroneous information about a health-interest site called RealAge.com, which does not give or sell visitors’ data to drug companies.

Understanding how technologies and business models work is job one for crafting good public policies, but as I noted yesterday…

Nor Does Tech Get D.C…

Politico has a pretty thorough article on D.C.’s thorough ignorance of things tech.

Take a 2008 hearing before the Senate Commerce Committee about privacy and online behavior-based advertising. The discussion seemed to fall apart when Sens. Tom Carper (D-Del.), Bill Nelson (D-Fla.) and others seemed not to understand the term “cookies.”

Cookies. That’s the (utterly rudimentary) technology that was an issue a decade ago. Washington, D.C. naturally overreacted, but luckily only harmed itself. The White House recently revamped the cookie policy for federal government web sites.

It’s worth noting Tech’s thorough misapprehension of Washington, D.C. as well. Judging by how they act, most tech executives have all the insight they could pick up from Schoolhouse Rock. It seems cool and helpful to come to Washington and give money, so they do, encouraging the bears to rip open their cars looking for peanut butter.

One From Silicon Valley: Leave Us Alone

A passionate plea from Michael Arrington of TechCrunch, the number three tech blog in the country and the number four blog overall, according to Technorati’s current rankings:

Silicon Valley has fueled much of the growth in our economy over the last few decades and has created amazing (and highly profitable) companies that are making the world a much better and more interesting place to live. All that happened while the government ignored us.

We don’t want handouts. We don’t want “public-private partnerships,” and we sure as hell don’t want legislation. Just let us do our thing and maybe say thanks to those companies that create jobs by the hundreds of thousands and send in those humongous corporate tax payments on profits. Because all you can do is screw up something beautiful. Really.

While maintaining his hugely popular site, Arrington has made himself something of a controversialist. His policy preferences aren’t strictly libertarian, but his instincts are that freedom produces innovation much better than any alternative public policies.

Collecting Dots and Connecting Dots

As Jeff Stein notes over at the Washington Post, the declassified summary of the Senate Intelligence Committee’s report on the Christmas underpants bomber ought to sound awfully familiar to anyone who thumbed through the 9/11 Commission’s massive analysis of intelligence failures. Of the 14 points of failure identified by the Senate, one pertains to a failure of surveillance acquisition: the understandably vague claim that NSA “did not pursue potential collection opportunities,” which it’s impossible to really evaluate without more information. (Marc Ambinder tries to fill in some of the gaps at The Atlantic.)  The other 13 echo that old refrain: Lots of data points, nobody managing to connect them. Problems included myopic analysis—folks looking at Yemen focused on regionally-directed threats—sluggish information dissemination, misconfigured computers, and simple failure to act on information already in hand.

Yet you’ll notice that in the wake of such failures, the political response tends to be heavily weighted toward finding ways to collect more dots.  We hear calls for more surveillance cameras in our cities, more wiretapping with fewer restrictions, fancier scanners in the airport, fewer due process protections for captured suspects. Sometimes you’ll also see efforts to address the actual causes of intelligence failure, but they certainly don’t get the bulk of the attention.  And little wonder! Structural problems internal to intelligence or law enforcement agencies, or failures of coordination between them, are a dry, wonky, and often secret business. The solutions are complicated, distinctly unsexy, and (crucially) don’t usually lend themselves to direct legislative amelioration—especially when Congress has already rolled out the big new coordinating entities that were supposed to solve these problems last time around.

But demands for more power and more collection and more visible gee-whiz technology?  Well, those are simple. Those are things you can trumpet in a 700-word op-ed and brag about in press releases to your constituents. Those are things pundits and anchors can debate without intimate knowledge of Miroesque DOJ org charts.  In short, we end up talking about the things that are easy to talk about.  We should not be under any illusions that this makes them good solutions to intel’s real problems. Hard as it is for pundits to sit silent or legislators to seem idle, sometimes the most vital reforms just don’t make for snazzy headlines.