Tag: Fourth Amendment

Wikileaks, Twitter, and Our Outdated Electronic Surveillance Laws

This weekend, we learned that the U.S. government last month demanded records associated with the Twitter accounts of several supporters of WikiLeaks—including American citizens and an elected member of Iceland’s parliament. As the New York Times observes, the only remarkable thing about the government’s request is that we’re learning about it, thanks to efforts by Twitter’s legal team to have the order unsealed. It seems a virtual certainty that companies like Facebook and Google have received similar demands.

Most news reports are misleadingly describing the order [PDF] as a “subpoena” when in actuality it’s a judicially authorized order under 18 U.S.C. § 2703(d), colloquially known (to electronic surveillance geeks) as a “D-order.” Computer security researcher Chris Soghoian has a helpful rundown on the section and what its invocation entails, while those who really want to explore the legal labyrinth that is the Stored Communications Act should consult legal scholar Orin Kerr’s excellent 2004 paper on the topic.

As the Times argues in a news analysis today, this is one more reminder that our federal electronic surveillance laws, which date from 1986, are in dire need of an update. Most people assume their online communications enjoy the same Fourth Amendment protection as traditional dead-tree-based correspondence, but the statutory language allows the contents of “electronic communications” to be obtained using those D-orders if they’re older than 180 days or have already been “opened” by the recipient. Unlike traditional search warrants, which require investigators to establish “probable cause,” D-orders are issued on the mere basis of “specific facts” demonstrating that the information sought is “relevant” to a legitimate investigation. Fortunately, an appellate court has recently ruled that part of the law unconstitutional—making it clear that the Fourth Amendment does indeed apply to email… a mere 24 years after the original passage of the law.

The D-order disclosed this weekend does not appear to seek communications content—though some thorny questions might well arise if it had. (Do messages posted to a private or closed Twitter account get the same protection as e-mail?) But the various records and communications “metadata” demanded here can still be incredibly revealing. Unless the user is employing anonymizing technology—which, as Soghoian notes, is fairly likely when we’re talking about such tech-savvy targets—logs of IP addresses used to access a service like Twitter may help reveal the identity of the person posting to an anonymous account, as well as an approximate physical location. The government may also wish to analyze targets’ communication patterns in order to build a “social graph” of WikiLeaks supporters and identify new targets for investigation. (The use of a D-order, as opposed to even less restrictive mechanisms that can be used to obtain basic records, suggests they’re interested in who is talking to whom on the targeted services.) Given the degree of harassment to which known WikiLeaks supporters have been subject, easy access to such records also threatens to chill what the courts have called “expressive association.” But unlike traditional wiretaps, D-order requests for data aren’t even subject to mandatory reporting requirements—which means surveillance geeks may be confident this sort of thing is fairly routine, but the general public lacks any real sense of just how pervasive it is. Whatever your take on WikiLeaks, then, this rare peek behind the curtain is one more reminder that our digital privacy laws are long overdue for an upgrade.
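To make the “social graph” point concrete, here is a minimal sketch in Python, using entirely invented records rather than anything from the actual order, of how an investigator could turn bare connection metadata (who messaged whom, and from what IP address) into a map of associations without reading a single message.

```python
from collections import Counter, defaultdict

# Hypothetical metadata records: (sender, recipient, source_ip).
# No message contents are needed for any of this analysis.
records = [
    ("@alice", "@wikileaks_fan", "198.51.100.7"),
    ("@bob", "@wikileaks_fan", "203.0.113.42"),
    ("@alice", "@bob", "198.51.100.7"),
    ("@carol", "@wikileaks_fan", "198.51.100.7"),
]

# Edge counts: who talks to whom, and how often.
edges = Counter((sender, recipient) for sender, recipient, _ in records)

# Accounts that connect from the same IP address may be the same person,
# or people sharing a household or office, which helps unmask pseudonyms.
by_ip = defaultdict(set)
for sender, _, ip in records:
    by_ip[ip].add(sender)

print("Most active ties:", edges.most_common(3))
print("Accounts sharing an IP:",
      {ip: users for ip, users in by_ip.items() if len(users) > 1})
```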

TSA’s Strip/Grope: Unconstitutional?

Writing in the Washington Post, George Washington University law professor Jeffrey Rosen carefully concludes, “there’s a strong argument that the TSA’s measures violate the Fourth Amendment, which prohibits unreasonable searches and seizures.” The strip/grope policy doesn’t carefully escalate through levels of intrusion the way a better-designed program using more privacy-protective technology could.

It’s a good constitutional technician’s analysis. But Professor Rosen doesn’t broach one of the most important likely determinants of Fourth Amendment reasonableness: the risk to air travel these searches are meant to reduce.

Writing in Politico last week, I pointed out that there have been 99 million domestic flights in the last decade, transporting seven billion passengers. Not one of these passengers snuck a bomb onto a plane and detonated it. Given that this period coincides with the zenith of Al Qaeda terrorism, this suggests a very low risk.
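As a rough back-of-the-envelope check of my own (not part of the Politico piece), the statisticians’ “rule of three” gives an upper bound on how large a per-flight risk could be and still be consistent with observing zero successful bombings:

```python
# Rule of three: with zero events observed in n independent trials,
# the ~95% upper confidence bound on the per-trial probability is 3/n.
flights = 99_000_000
passengers = 7_000_000_000

print(f"Upper bound per flight:    {3 / flights:.1e}")     # ~3.0e-08
print(f"Upper bound per passenger: {3 / passengers:.1e}")  # ~4.3e-10
```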

Proponents of the TSA’s regime point out that threats are very high, according to information they have. But that trump card—secret threat information—is beginning to fail with the public. It would take longer, but would eventually fail with courts, too.

But rather than relying on courts to untie these knots, Congress should subject TSA and the Department of Homeland Security to measures that will ultimately answer the open risk questions: Require any lasting security measures to be justified on the public record with documented risk management and cost-benefit analysis. Subject such analyses to a standard of review such as the Administrative Procedure Act’s “arbitrary and capricious” standard. Indeed, Congress might make TSA security measures APA notice-and-comment rules, with appropriate accommodation for (truly) temporary measures required by security exigency.

Claims to secrecy are claims to power. Congress should withdraw the power of secrecy from the TSA and DHS, subjecting these agencies to the rule of law.

Phone Numbers, E-Mail Addresses, and Metaphor Wars

The law normally advances by small and cautious steps—by the gradual extension of established precedents and rules to novel problems and fact patterns. Little wonder, then, that tricky questions of law often amount to conflicts between competing metaphors. Is a hard drive like a closed briefcase whose contents are all fair game for police once the “container” is legitimately opened? Or is it more like a warehouse containing hundreds or thousands of individual closed containers? If the latter, what are the “containers”? Directories? Individual files?

A similar metaphor war figures in the FBI’s effort to expand its authority to acquire information from Internet Service Providers using National Security Letters, which are issued by agents without judicial oversight, and typically forbid providers from disclosing anything about the demand for records. The Bureau had long assumed that the NSL statutes gave them broad authority to get “electronic communications transaction records”—information about your online communications, though not the contents of the communications themselves—as long as they certified that those records would be “relevant” to a national security investigation, a far lower standard than the Fourth Amendment’s “probable cause.” But in a 2008 opinion, the Bush administration’s Office of Legal Counsel rejected this interpretation, finding that NSLs could only be used to obtain the particular types of records specified in the statute, including “toll billing records.” For Internet accounts, this meant the FBI could only get “information parallel to… toll billing records for ordinary telephone service.”

The obvious question is what, exactly, constitutes information “parallel to” a toll billing record in the online context. The FBI would prefer to resolve the ambiguity by simply amending the law to give them blanket authority to acquire transaction records. In particular, according to The Washington Post, government lawyers think they can obtain “the addresses to which an Internet user sends e-mail; the times and dates e-mail was sent and received; and possibly a user’s browser history.” On its face, this sounds like a reasonable reading. An important 1979 Supreme Court case, Smith v. Maryland, held that the information contained in telephone “toll billing records”—the itemized list of calls placed and received you’d find on a standard phone bill—didn’t enjoy Fourth Amendment protection, and so unlike the contents of phone conversations themselves, could be obtained by the government without a full probable cause warrant. Surely the obvious equivalent in the online context is the list of e-mail addresses in an Internet user’s inbox and outbox? At a second glance, though, there are some problems with that metaphor, of two central kinds.

First, there’s a problem with the formal analogy. The Court in Smith supported its finding of a diminished privacy interest in toll billing records on numerous grounds. For one, the Court noted that because one’s itemized phone bill did contain these numbers, no reasonable person could be unaware that this information was “exposed” to employees of the phone company and retained as a matter of course among the company’s business records. Of course, it’s now increasingly common for phone companies to charge a flat rate rather than billing by individual calls, and so the legislative history of the NSL statutes makes clear that by “toll billing records” they mean information that could be used to assess a charge, even if a company happened not to charge that way.

The analogy gets pretty strained when we come to Internet services, though. At the time the laws in question here were written, ISPs almost universally charged people for the amount of time they were connected, not by the number of individual e-mails sent. Now it’s much more common to simply pay a flat monthly fee for broadband connection, though you also sometimes see plans where there’s a charge by the megabyte above a certain threshold of bandwidth usage. Your ISP, of course, has technical access to the list of e-mail addresses you’ve communicated with—just as it has the ability to access the e-mails themselves—but no major service, as far as I know, has ever actually kept this list as a separate billing record.

But maybe that’s not the right way to apply the metaphor. Maybe what’s important is whether those to/from e-mail records are substantively “parallel to” the kind of information you’d traditionally find in telephone toll billing records. As the Smith Court observed, a list of phone numbers was far less revealing and sensitive than the actual conversation—it revealed nothing of the “purport” of the communication itself, or even who was on the call. But as soon as we start to think more carefully about how we actually use e-mail in the real world, it becomes clear that the analogy is far from perfect.

One thing lots of people do with e-mail, after all, is participate in mailing lists and discussion groups. Records of this sort, then, are likely to reveal membership in potentially controversial social, political, or religious groups—and the Supreme Court has also found that such membership lists enjoy First Amendment protection as a component of freedom of association. But they’d also reveal much more than that. The closest telephone analogue to a mailing list discussion is probably a conference call. An investigator who obtained toll billing records for such a call would, at most, have learned that a certain number of people called in for a certain amount of time; they’d learn nothing about who spoke in response to whom, or how much, and who remained silent. Someone getting e-mail transaction records would have a much more detailed picture of who was vocal and who was silent, the order and frequency with which participants spoke, and so on. And more generally, people in practice do not use e-mail like traditional letters: They tend to have exchanges in which each individual e-mail is more like a piece of a longer conversation.

There are also many common uses of e-mail that don’t really have close analogies in the telephonic context. If I make a purchase from Amazon, win an eBay auction, make an OpenTable restaurant reservation, register for a conference at a local think tank, or place a Craigslist ad, that will typically generate an automatic confirmation e-mail from the site, and the e-mail address from which the confirmation comes will often reveal something about the nature of the transaction. (My inbox has messages from auto-confirm, order-update, ship-confirm, and store-news @amazon.com—inherently more revealing than the mere fact that I called some mail-order vendor.) It’s not a particularly big deal in those cases, but such e-mails could also reveal that I had opened or closed or modified an account at a particular politically, sexually, or religiously oriented Web site, or subscribed to a specific publication.

For an example of just how sensitive and revealing such task-specific e-mail addresses can be, consider Craigslist in particular. The site—which for those who haven’t used it is the vast online equivalent of the newspaper’s classified section—generates an individual anonymized e-mail address for each ad placed, so that users don’t have to expose their own contact information to the world. Yet while this provides anonymity against the general public, it also makes those mere e-mail addresses much more revealing to the government agent who obtains transaction records. That’s because each ad can be linked to a particular e-mail address, so if you’ve sent a message to pers-1234567-ABCD [at] craigslist [dot] com, the government may not know exactly who you’ve written, but they can determine why you’re writing: To respond to an ad offering a handgun for sale, say, or one soliciting a foot fetishist for a “casual encounter.”
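Here is a minimal sketch of why those per-ad addresses are so revealing. The ad database and address parsing below are invented for illustration, not anything Craigslist actually exposes; the point is simply that one lookup connects a captured address to the subject of the ad.

```python
import re

# Hypothetical mapping from ad IDs to ad listings; in practice the
# anonymized address was generated for a specific (often public) ad page.
ads = {
    "1234567": "personals > casual encounters",
    "7654321": "for sale > firearms",
}

def why_were_you_writing(recipient: str) -> str:
    """Infer the subject of an ad from a captured per-ad address.

    The address format follows the example quoted above; illustrative only.
    """
    match = re.match(r"pers-(\d+)-\w+@craigslist\.(?:com|org)", recipient)
    if not match:
        return "unknown"
    return ads.get(match.group(1), "ad not found")

print(why_were_you_writing("pers-1234567-ABCD@craigslist.com"))
```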

The point is not just that investigators shouldn’t be able to get e-mail transaction records without a probable cause warrant—though I happen to think that would be a reasonable standard. It’s that metaphors can mislead us: We need to look past the easy equivalencies between new technologies and more traditional forms of communication, and drill down to see the full range of privacy interests implicated given the real-world practices of ordinary people who use those technologies.

GPS Tracking and a ‘Mosaic Theory’ of Government Searches

The Electronic Frontier Foundation trumpets a surprising privacy win last week in the U.S. Court of Appeals for the D.C. Circuit. In U.S. v. Maynard (PDF), the court held that the use of a GPS tracking device to monitor the public movements of a vehicle—something the Supreme Court had held not to constitute a Fourth Amendment search in U.S. v. Knotts—could nevertheless become a search when conducted over an extended period. The Court in Knotts had considered only tracking that encompassed a single journey on a particular day, reasoning that the target of surveillance could have no “reasonable expectation of privacy” in the fact of a trip that any member of the public might easily observe. But the Knotts Court explicitly reserved judgment on potential uses of the technology with broader scope, such as “dragnet” tracking that subjected large numbers of people to “continuous 24-hour surveillance.” Here, the D.C. court determined that continuous tracking for a period of over a month did violate a reasonable expectation of privacy—and therefore constituted a Fourth Amendment search requiring a judicial warrant—because such intensive, secretive tracking by means of public observation is so costly and risky that no reasonable person expects to be subject to such comprehensive surveillance.

Perhaps ironically, the court’s logic here rests on the so-called “mosaic theory” of privacy, which the government has relied on when resisting Freedom of Information Act requests. The theory holds that pieces of information that are not in themselves sensitive or potentially injurious to national security can nevertheless be withheld, because in combination (with each other or with other public facts) they permit the inference of facts that are sensitive or secret. The “mosaic,” in other words, may be far more than the sum of the individual tiles that constitute it. Leaving aside for the moment the validity of the government’s invocation of this idea in FOIA cases, there’s an obvious intuitive appeal to the idea, and indeed, we see that it fits our real-world expectations about privacy much better than the cruder theory that assumes the sum of “public” facts must always be itself a public fact.

Consider an illustrative hypothetical. Alice and Bob are having a romantic affair that, for whatever reason, they prefer to keep secret. One evening before a planned date, Bob stops by the corner pharmacy and—in full view of a shop full of strangers—buys some condoms. He then drives to a restaurant where, again in full view of the other patrons, they have dinner together. They later drive in separate cars back to Alice’s house, where the neighbors (if they care to take note) can observe from the presence of the car in the driveway that Alice has an evening guest for several hours. It being a weeknight, Bob then returns home, again by public roads. Now, the point of this little story is not, of course, that a judicial warrant should be required before an investigator can physically trail Bob or Alice for an evening. It’s simply that in ordinary life, we often reasonably suppose the privacy or secrecy of certain facts—that Bob and Alice are having an affair—that could in principle be inferred from the combination of other facts that are (severally) clearly public, because it would be highly unusual for all of them to be observed by the same public. Even more so when, as in Maynard, we’re talking not about the “public” events of a single evening, but comprehensive observation over a period of weeks or months. One may reasonably expect that “anyone” might witness any single event in such a series; it does not follow that one cannot reasonably expect that no particular person or group would be privy to all of them. Sometimes, of course, even our reasonable expectations are frustrated without anyone’s rights being violated: A neighbor of Alice’s might by chance have been at the pharmacy and then at the restaurant. But as the Supreme Court held in Kyllo v. U.S., even when some information might in principle be obtained by public observation, the use of technological means not in general public use to learn the same facts may nevertheless qualify as a Fourth Amendment search, especially when the effect of technology is to render easy a degree of monitoring that would otherwise be so laborious and costly as to normally be infeasible.

Now, as Orin Kerr argues at the Volokh Conspiracy, significant as the particular result in this case might be, it’s the approach to Fourth Amendment privacy embedded here that’s the really big story. Orin, however, thinks it a hopelessly misguided one—and the objections he offers are all quite forceful.  Still, I think on net—especially as technology makes such aggregative monitoring more of a live concern—some kind of shift to a “mosaic” view of privacy is going to be necessary to preserve the practical guarantees of the Fourth Amendment, just as in the 20th century a shift from a wholly property-centric to a more expectations-based theory was needed to prevent remote sensing technologies from gutting its protections. But let’s look more closely at Orin’s objections.

First, there’s the question of novelty. Under the mosaic theory, he writes:

[W]hether government conduct is a search is measured not by whether a particular individual act is a search, but rather whether an entire course of conduct, viewed collectively, amounts to a search. That is, individual acts that on their own are not searches, when committed in some particular combinations, become searches. Thus in Maynard, the court does not look at individual recordings of data from the GPS device and ask whether they are searches. Instead, the court looks at the entirety of surveillance over a one-month period and views it as one single “thing.” Off the top of my head, I don’t think I have ever seen that approach adopted in any Fourth Amendment case.

I can’t think of one that explicitly adopts that argument.  But consider again the Kyllo case mentioned above.  Without a warrant, police used thermal imaging technology to detect the presence of marijuana-growing lamps within a private home from a vantage point on a public street. In a majority opinion penned by Justice Scalia, the court balked at this: The scan violated the sanctity and privacy of the home, though it involved no physical intrusion, by revealing the kind of information that might trigger Fourth Amendment scrutiny. But stop and think for a moment about how thermal imaging technology works, and try to pinpoint where exactly the Fourth Amendment “search” occurs.  The thermal radiation emanating from the home was, well… emanating from the home, and passing through or being absorbed by various nearby people and objects. It beggars belief to think that picking up the radiation could in itself be a search—you can’t help but do that!

When the radiation is actually measured, then? More promising, but then any use of an infrared thermometer within the vicinity of a home might seem to qualify, whether or not the purpose of the user was to gather information about the home, and indeed, whether or not the thermometer was precise enough to reveal any useful information about internal temperature variations within the home.  The real privacy violation here—the disclosure of private facts about the interior of the home—occurs only when a series of very many precise measurements of emitted radiation are processed into a thermographic image.  To be sure, it is counterintuitive to describe this as a “course of conduct” because the aggregation and analysis are done quite quickly within the processor of the thermal camera, which makes it natural to describe the search as a single act: Creating a thermal image.  But if we zoom in, we find that what the Court deemed an unconstitutional invasion of privacy was ultimately the upshot of a series of “public” facts about ambient radiation levels, combined and analyzed in a particular way.  The thermal image is, in a rather literal sense, a mosaic.
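To see the point in a literal way, here is a toy sketch with invented numbers (not a model of how any real thermal camera works): each cell is an innocuous ambient-radiation reading, and only the assembled grid reveals the telltale cluster of heat.

```python
import random

random.seed(0)
WIDTH, HEIGHT = 8, 6

# Each cell is one "public" measurement of emitted radiation.
# Individually, a single number reveals essentially nothing.
readings = [[20 + random.random() for _ in range(WIDTH)] for _ in range(HEIGHT)]

# A cluster of warm cells (the grow lamps) only stands out once the
# measurements are assembled and compared against their neighbors.
for y in (2, 3):
    for x in (5, 6):
        readings[y][x] += 15

average = sum(sum(row) for row in readings) / (WIDTH * HEIGHT)
hot_spots = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)
             if readings[y][x] > average + 10]
print("Hot cells revealed only by aggregation:", hot_spots)
```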

The same could be said about long-distance  spy microphones: Vibrating air is public; conversations are private. Or again, consider location tracking, which is unambiguously a “search” when it extends to private places: It might be that what is directly measured is only the “public” fact about the strength of a particular radio signal at a set of receiver sites; the “private” facts about location could be described as a mere inference, based on triangulation analysis (say), from the observable public facts.
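The same structure can be made concrete for signal-based tracking. In this hedged sketch, with invented receiver positions and range estimates (and an idealized geometry rather than any real carrier’s method), each receiver’s reading is a single “public” fact; combining three of them yields the “private” fact of location.

```python
# Three receivers at known ("public") positions, each reporting only an
# estimated distance to the transmitter (derived, say, from signal strength).
receivers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
distances = [5.0, 65.0 ** 0.5, 45.0 ** 0.5]  # invented readings

def trilaterate(rx, d):
    """Combine three individually uninformative range readings into a location."""
    (x1, y1), (x2, y2), (x3, y3) = rx
    d1, d2, d3 = d
    # Subtracting the first range equation from the others linearizes the system,
    # leaving two linear equations in (x, y) solved here by Cramer's rule.
    a1, b1 = 2 * (x1 - x2), 2 * (y1 - y2)
    c1 = (d2**2 - d1**2) - (x2**2 - x1**2) - (y2**2 - y1**2)
    a2, b2 = 2 * (x1 - x3), 2 * (y1 - y3)
    c2 = (d3**2 - d1**2) - (x3**2 - x1**2) - (y3**2 - y1**2)
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

print(trilaterate(receivers, distances))  # -> roughly (3.0, 4.0)
```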

There’s also a scope problem. When, precisely, do individual instances of permissible monitoring become a search requiring judicial approval? That’s certainly a thorny question, but it arises as urgently in the other type of hypothetical case alluded to in Knotts, involving “dragnet” surveillance of large numbers of individuals over time. Here, too, there’s an obvious component of duration: Nobody imagines that taking a single photograph revealing the public locations of perhaps hundreds of people at a given instant constitutes a Fourth Amendment search. And just as there’s no precise number of grains of sand that constitutes a “heap,” there’s no obvious way to say exactly what number of people, observed for how long, are required to distinguish individualized tracking from “dragnet” surveillance.  But if we anchor ourselves in the practical concerns motivating the adoption of the Fourth Amendment, it seems clear enough that an interpretation that detected no constitutional problem with continuous monitoring of every public movement of every citizen would mock its purpose. If we accept that much, a line has to be drawn somewhere. As I recall, come to think of it, Orin has himself proposed a procedural dichotomy between electronic searches that are “person-focused” and those that are “data-focused.”  This approach has much to recommend it, but is likely to present very similar boundary-drawing problems.

Orin also suggests that the court improperly relies upon a “probabilistic” model of the Fourth Amendment here (looking to what expectations about monitoring are empirically reasonable) whereas the Court has traditionally relied on a “private facts” model to deal with cases involving new technologies (looking to which types of information it is reasonable to consider private by their nature). Without recapitulating the very insightful paper linked above, the boundaries between models in Orin’s highly useful schema do not strike me as quite so bright. The ruling in Kyllo, after all, turned in part on the fact that infrared imaging devices are not in “general public use,” suggesting that the identification of “private facts” itself has an empirical and probabilistic component.  The analyses aren’t really separate. What’s crucial to bear in mind is that there are always multiple layers of facts involved with even a relatively simple search: Facts about the strength of a particular radio signal, facts about a location in a public or private place at a particular instant, facts about Alice and Bob’s affair. In cases involving new technologies, the problem—though seldom stated explicitly—is often precisely which domain of facts to treat as the “target” of the search. The point of the expectations analysis in Maynard is precisely to establish that there is a domain of facts about macro-level behavioral patterns distinct from the unambiguously public facts about specific public movements at particular times, and that we have different attitudes about these domains.

Sorting all this out going forward is likely to be every bit as big a headache as Orin suggests. But if the Fourth Amendment has a point—if it enjoins us to preserve a particular balance between state power and individual autonomy—then as technology changes, its rules of application may need to get more complicated to track that purpose, as they did when the Court ruled that an admirably simple property rule was no longer an adequate criterion for identifying a “search.”  Otherwise we make Fourth Amendment law into a cargo cult, a set of rituals whose elegance of form is cold consolation for their abandonment of function.

Compare and Contrast

Fourth Amendment:

“The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”

Supreme Court (Katz v. U.S.):

“[S]earches conducted outside the judicial process, without prior approval by judge or magistrate, are per se unreasonable under the Fourth Amendment—subject only to a few specifically established and well delineated exceptions.”

Washington Post:

“The Obama administration is seeking to make it easier for the FBI to compel companies to turn over records of an individual’s Internet activity without a court order if agents deem the information relevant to a terrorism or intelligence investigation.”

Internet Privacy Law Needs an Upgrade

Imagine for a moment that all your computing devices had to run on code that had been written in 1986. Your smartphone is, alas, entirely out of luck, but your laptop or desktop computer might be able to get online using a dial-up modem. But you’d better be happy with a command-line interface to services like e-mail, Usenet, and Telnet, because the only “Web browsers” anyone’s heard of in 1986 are entomologists. Cloud computing? Location based services? Social networking? No can do, though you can still get into a raging debate about the relative merits of Macs and PCs.

When it comes to federal privacy law, alas, we are running on code written in 1986: The Electronic Communications Privacy Act, a statute that’s not only ludicrously out of date, but so notoriously convoluted and unclear that even legal experts routinely lament the “mess” of electronic privacy law. Scholar Orin Kerr has called it “famously complex, if not entirely impenetrable.” Part of the problem, to be sure, lies with the courts. It is scandalous that in 2010, we don’t even have a definitive ruling on whether or when the Fourth Amendment requires the government to get a search warrant to read e-mails stored on a server. But the ECPA statute, meant to fill the gap left by the courts, reads like the rules of James T. Kirk’s fictional card game Fizzbin.

Suppose the police want to read your e-mail. To come into your home and look through your computer, of course, they’d need a full Fourth Amendment search warrant based on probable cause. If they want to intercept the e-mail in transit, they have to go still further and meet the “super-warrant” standards of the Wiretap Act. Once it lands on your Internet Service Provider’s server, a regular search warrant is once again the standard—assuming your ISP is providing access “to the public.” If it’s a more closed network like your work account, your employer is permitted to voluntarily hand it over. But if you read the e-mail, or leave it on the server for more than 180 days, then suddenly your ISP has become a “remote computing service” provider rather than an “electronic communications service provider” vis a vis that e-mail. So instead of a probable cause warrant, police can get a 2703(d) order based on “specific and articulable facts” showing the information is “relevant and material” to an investigation—a much lower standard—provided they notify you. Except they can ask a judge to delay notification if they think that would impede the investigation. Oh, unless your ISP is in the Ninth Circuit, where opened e-mails still get the higher level of protection until they’ve “expired in the normal course,” whatever that means.
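To underline the Fizzbin point, here is the paragraph above rendered as a rough Python sketch. It is my own simplification for illustration, not a statement of the law, and it still needs this many branches:

```python
def standard_to_read_email(in_transit, on_public_isp, opened,
                           age_days, ninth_circuit=False, expired=False):
    """A deliberately simplified sketch of the content rules described above."""
    if in_transit:
        return "Wiretap Act super-warrant"
    if not on_public_isp:
        return "provider may voluntarily disclose"
    if ninth_circuit and opened and not expired:
        return "probable cause warrant"  # until "expired in the normal course"
    if opened or age_days > 180:
        return "2703(d) order (with notice, which can be delayed)"
    return "probable cause warrant"

# Example: an already-opened message stored with a public ISP.
print(standard_to_read_email(in_transit=False, on_public_isp=True,
                             opened=True, age_days=30))
```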

That’s for e-mail contents.  But maybe they don’t actually need to read your e-mail; maybe they just want some “metadata”—the equivalent of scanning the envelopes of physical letters—to see if your online activity is suspicious enough to warrant a closer look.  Well, then they can get what’s called a pen/trap order based on a mere certification to a judge of “relevance” to capture that information in realtime, but without having to provide any of those “specific and articulable facts.” Unless it’s information that would reveal your location—maybe because you’re e-mailing from your smartphone—in which case, well, the law doesn’t really say, but the Justice Department thinks a pen/trap order plus one of those 2703(d) orders will do, unless it’s really specific location information, at which point they get a warrant. If they want to get those records after the fact, it’s one of those 2703(d) orders—again, unless a non-public provider like your school or employer wants to volunteer them. Oh, unless it’s a counterterror investigation, and the FBI thinks your records might be “relevant” somehow, in which case they can get them with a National Security letter, without getting a judge involved at all.
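And the metadata rules, with the same caveats (my own shorthand for the paragraph above, not legal advice):

```python
def standard_for_metadata(realtime, reveals_location, precise_location,
                          counterterror_nsl=False, nonpublic_provider=False):
    """A simplified sketch of the non-content rules described above."""
    if counterterror_nsl:
        return "National Security Letter (no judge involved)"
    if nonpublic_provider:
        return "provider may volunteer the records"
    if realtime:
        if precise_location:
            return "probable cause warrant (per DOJ practice)"
        if reveals_location:
            return "pen/trap order plus 2703(d) order (DOJ view)"
        return "pen/trap order (certification of relevance)"
    return "2703(d) order (specific and articulable facts)"

# Example: realtime capture that would reveal a rough location.
print(standard_for_metadata(realtime=True, reveals_location=True,
                            precise_location=False))
```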

Dizzy yet? Well, a movement launched today with the aim of dragging our electronic privacy law, kicking and screaming, into the 21st century: The Digital Due Process Coalition.  They’re pushing for a streamlined law that provides clear and consistent protection for sensitive information—the kind of common sense rules you’d have thought would already be in place.  If the government wants to read the contents of your letters, they should need a search warrant—regardless of the phase of the moon when an e-mail is acquired. If they want to track your location, they should need a warrant. And all that “metadata” can be pretty revealing in the digital age—maybe some stricter oversight is in order before they start vacuuming up all our IP logs.

Reforms like these are way overdue. You wouldn’t trust your most sensitive data to software code that hadn’t gone a few years without a security patch. Why would you trust it to legal code that hasn’t had a major patch in over two decades?

On Fourth Amendment Privacy: Everybody’s Wrong

Everybody’s wrong. That’s sort of the message I was putting out when I wrote my 2008 American University Law Review article entitled “Reforming Fourth Amendment Privacy Doctrine.”

A lot of people have poured a lot of effort into the “reasonable expectation of privacy” formulation Justice Harlan wrote about in his concurrence in the 1967 decision in Katz v. U.S. But the Fourth Amendment isn’t about people’s expectations or the reasonableness of their expectations. It’s about whether, as a factual matter, they have concealed information from others—and whether the government is being reasonable in trying to discover that information.

The upshot of the “reasonable expectation of privacy” formulation is that the government can argue—straight-faced—that Americans don’t have a Fourth Amendment interest in their locations throughout the day and night because data revealing it is produced by their mobile phones’ interactions with telecommunications providers, and the telecom companies have that data.

I sat down with podcaster extraordinaire Caleb Brown the other day to talk about all this. He titled our conversation provocatively: “Should the Government Own Your GPS Location?”