Tag: privacy

Feinstein-Burr 2.0: The Crypto Backdoor Bill Is Still Alive

When it was first released back in April, a “discussion draft” of the Compliance With Court Orders Act sponsored by Sens. Dianne Feinstein (D-CA) and Richard Burr (R-NC) met with near-universal derision from privacy advocates and security experts.  (Your humble author was among the critics.) In the wake of that chilly reception, press reports were declaring the bill effectively dead just weeks later, even as law enforcement and intelligence officials insisted they would continue pressing for a solution to the putative “going dark” problem that encryption creates for government eavesdroppers.  Feinstein and Burr, however, appear not to have given up on their baby: Their offices have been circulating a series of proposed changes to the bill, presumably in hopes of making it more palatable to stakeholders.  I recently got a look at some of those proposed changes. (NB: In an earlier version of this post, I referred to these as a “revised draft,” which probably suggested something relatively finalized and ready to introduce.  I’ve edited the post to more accurately characterize these as changes to the previously circulated draft that are under consideration.)

To protect my source’s anonymity, I won’t post any documents, but it’s easy enough to summarize the four main changes I saw (though I’m told others are being considered):

(1)  Narrower scope

The original discussion draft required a “covered entity” to render encrypted data “intelligible” to government agents bearing a court order if the data had been rendered unintelligible “by a feature, product, or service owned, controlled, created, or provided, by the covered entity or by a third party on behalf of the covered entity.” This revision would delete “owned,” “created,” and “provided”—so the primary mandate now applies only to a person or company that “controls” the encryption process.

(2)  Limitation to law enforcement

A second change would eliminate section (B) under the bill’s definition of “court order,” which obligated recipients to comply with decryption orders issued for investigations related to “foreign intelligence, espionage, and terrorism.”  The bill would then be strictly about law enforcement investigations into a variety of serious crimes, including federal drug crimes and their state equivalents.

(3)  Exclusion of critical infrastructure

A new subsection in the definition of the “covered entities” to whom the bill applies would specifically exclude “critical infrastructure,” adopting the definition of that term from 42 USC §5195c.

(4) Limitation on “technical assistance” obligations

The phrase “reasonable efforts” would be added to the definition of the “technical assistance” recipients can be required to provide. The original draft’s obligation to provide whatever technical assistance is needed to isolate requested data, decrypt it, and deliver it to law enforcement would be replaced by an obligation to make “reasonable efforts” to do these things.

It’s worth noting that I haven’t seen any suggestion they’re considering modifying the problematic mandate that distributors of software licenses, like app stores, ensure that the software they distribute is “capable of complying” with the law. (As I’ve argued previously, it is very hard to imagine how open-source code repositories like GitHub could effectively satisfy this requirement.) So what would these proposed changes amount to?  Let’s take them in order.

The first change would, on its face, be the most significant one by a wide margin, but it’s also the one I’m least confident I understand clearly.  If we interpret “control” of an encryption process in the ordinary-language sense—and in particular as something conceptually distinct from “ownership,” “provision,” or “creation”—then the law becomes radically narrower in scope, but also fails to cover most of the types of cases that are cited in discussions of the “going dark” problem.  When a user employs a device or application to encrypt data with a user-generated key, that process is not normally under the “control” of the entity that “created” the hardware or software in any intuitive sense.  On the other hand, when a company is in direct control of an encryption process—as when a cloud provider applies its own encryption to data uploaded by a user—then it would typically (though by no means necessarily) retain both the ability to decrypt and an obligation to do so under existing law.  So what’s going on here?

One obvious possibility, assuming that narrow reading of “controlled,” is that the idea is to very specifically target companies like Apple that are seeking to combine the strong security of end-to-end encryption with the convenience of cloud services. At the recent Black Hat security conference, Apple introduced their “Cloud Key Vault” system. The critical innovation there was finding a way to let users back up and synchronize across devices some of their most sensitive data—the passwords and authentication tokens that safeguard all their other sensitive data—without giving Apple itself access to the information.  The details are complex, but the basic idea, oversimplifying quite a bit, is that Apple’s backup systems will act like a giant iPhone: User data is protected with a combination of the user’s password and a strong encryption key that’s physically locked into a hardware module and can’t be easily extracted.  Like the iPhone, it will defend against “brute force” attacks to guess the user passcode component of the decryption key by limiting the number of permissible guesses.  The critical difference is that Apple has essentially destroyed their own ability to change or eliminate that guess limit.
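To make that mechanism a bit more concrete, here is a deliberately toy sketch of the general pattern described above: a secret recoverable only by combining the user's passcode with a key that never leaves the hardware, and only within a fixed number of guesses. This is not Apple's design; the class, names, and parameters are hypothetical, and the XOR step merely stands in for real encryption.

import hashlib
import hmac
import secrets

class GuessLimitedVault:
    """Toy model of a guess-limited key vault (illustrative only)."""

    MAX_ATTEMPTS = 10

    def __init__(self, passcode: str, payload: bytes):
        self._hsm_key = secrets.token_bytes(32)   # strong key; never leaves the "hardware"
        self._attempts_left = self.MAX_ATTEMPTS
        key = self._derive(passcode)
        self._wrapped = self._xor(payload, key)   # toy stand-in for real encryption
        self._verifier = hmac.new(key, payload, hashlib.sha256).digest()

    def _derive(self, passcode: str) -> bytes:
        # Slow key derivation over the passcode, salted with the hardware-bound secret.
        return hashlib.pbkdf2_hmac("sha256", passcode.encode(), self._hsm_key, 200_000)

    @staticmethod
    def _xor(data: bytes, key: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def unwrap(self, passcode: str) -> bytes:
        if self._attempts_left <= 0:
            raise RuntimeError("vault locked: guess limit exhausted")
        key = self._derive(passcode)
        payload = self._xor(self._wrapped, key)
        if hmac.compare_digest(hmac.new(key, payload, hashlib.sha256).digest(), self._verifier):
            self._attempts_left = self.MAX_ATTEMPTS
            return payload
        self._attempts_left -= 1
        raise ValueError("wrong passcode; " + str(self._attempts_left) + " attempts remain")

In the system the post describes, the analogue of the attempt counter lives in hardware, and Apple has given up the ability to raise or reset it; nothing in this sketch captures that hardening, which is the hard part.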

This may not sound like a big deal, but it addresses one of the big barriers to more widespread adoption of strong end-to-end encryption: convenience.  The encrypted messaging app Signal, for example, provides robust cryptographic security with a conspicuous downside: It’s tethered to a single device that holds a user’s cryptographic keys.  That’s because any process that involves exporting those keys so they can be synced across multiple devices—especially if they’re being exported into “the cloud”—represents an obvious and huge weak point in the security of the system as a whole.  The user wants to be able to access their cloud-stored keys from a new device, but if those keys are only protected by a weak human-memorable password, they’re highly vulnerable to brute force attacks by anyone who can obtain them from the cloud server.  That may be an acceptable risk for someone who’s backing up their Facebook password, but not so much for, say, authentication tokens used to control employee access to major corporate networks—the sort of stuff that’s likely to be a target for corporate espionage or foreign intelligence services.  Over the medium to long term, our overall cybersecurity is going to depend crucially on making security convenient and simple for ordinary users accustomed to seamlessly switching between many devices.  So we should hope and expect to see solutions like Apple’s more widely adopted.
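For a rough sense of the scale involved, here is some back-of-the-envelope arithmetic on the brute-force point above. The guess rate is an assumed figure for a well-resourced attacker who has already obtained the wrapped keys; it is there only to illustrate orders of magnitude.

# Why a human-memorable password alone is weak against offline guessing,
# and why a hardware guess limit changes the math. Illustrative numbers only.
keyspace = 26 ** 8        # e.g., an 8-character, lowercase-only password
offline_rate = 1e10       # assumed guesses per second once the data is exfiltrated
print(f"offline search exhausts the keyspace in about {keyspace / offline_rate:.0f} seconds")
print(f"behind a 10-guess hardware limit, the chance of success is about {10 / keyspace:.0e}")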

For intelligence and law enforcement, of course, better security is a mixed blessing.  For the time being, as my co-authors and I noted in the Berkman Center report Don’t Panic, the “going dark” problem is substantially mitigated by the fact that users like to back stuff up and they like the convenience of syncing across devices—and so however unbreakable the disk encryption on a user’s device might be, a lot of useful data is still going to be obtainable from those cloud servers.  Those agencies have got to be nervous about the prospect of a world where all that cloud data is effectively off the table, because it becomes practical to encrypt it with key material that’s securely syncable across devices but still inaccessible, even to an adversary who can run brute force attacks, without the user’s password.

If this interpretation of the idea behind the proposed narrowing is right, it’s particularly politically canny.  Declare you’re going to saddle every developer with a backdoor mandate, or break the mechanism everyone’s Web browser uses to make a secure connection, and you can expect a whole lot of pushback from both the tech community and the Internet citizenry.  Tell people you’re going to mess with technology their security already depends upon—take away something they have now—and folks get upset.  But, thanks to a well-known form of cognitive bias called “loss aversion,” they get a whole lot less upset if you prevent them from getting a benefit (here, a security improvement) most aren’t yet using.  And that will be true even if, in the neverending cybersecurity arms race, it’s an improvement that’s going to be necessary over the long run even to preserve current levels of overall security against increasingly sophisticated attacks.

That strikes me, at least for now, as the most plausible read on the proposed “controlled by” language.  But another possibility (entirely compatible with the first) is that courts and law enforcement will construe “controlled by” more broadly than I am.  If the FBI gives Apple custody of an iPhone, which is running gatekeeper software that Apple can modify, does it become a technology “controlled by” Apple at the time the request is made, even if it wasn’t under their control at the time the data was encrypted?  If the developer of an encrypted messaging app—which, let’s assume, technically retains ownership of the software while “licensing” it to the end user—pushes out regular automated updates and runs a directory server that mediates connections between users, is there some sense in which the entire process is “controlled by” them even if the key generation and encryption runs on the user’s device?  My instinct is “no,” but I can imagine a smart lawyer persuading a magistrate judge the answer is “yes.”   One final note here: It’s a huge question mark in my mind how the mandate on app stores to ensure compliance interacts with the narrowed scope.  Can they now permit un-backdoored applications as long as the encryption process isn’t “controlled by” the software developers? How do they figure out when that’s the case in advance of litigation?

Let’s move on to the other proposed changes, which mercifully we can deal with a lot more briefly.  The exclusion of intelligence investigations from the scope of the bill seems particularly odd given that the bill’s sponsors are, after all, the chairman and vice chairman of the Senate Intelligence Committee, with the intelligence angle providing the main jurisdictional hook for them to be taking point on the issue at all.  But it makes a bit more sense if you think of it as a kind of strategic concession in a recurring jurisdictional turf war with the judiciary committees.  The sponsors would effectively be saying: “Move our bill, and we’ll write it in a way that makes it clear you’ve got primary jurisdiction.”  Two other alternatives: The intelligence agencies, which have both intelligence-gathering and cybersecurity assurance responsibilities, have generally been a lot more lukewarm than law enforcement about the prospect of legislation mandating backdoors, so this may be a way of reducing their role in the debate over the bill.  Or it may be that, given the vast amount of collection intelligence agencies engage in compared with domestic law enforcement—remember, there are nearly 95,000 foreign “targets” of electronic surveillance just under §702 of the FISA Amendments Act—technology companies are a lot more skittish about being inundated with decryption and “technical assistance” requests from those agencies, while the larger ones, at least, might expect the compliance burden to be more manageable if the obligation extends only to law enforcement.

I don’t have much insight into the motive for the proposed critical infrastructure carve-out; if I had to guess, I’d hazard that some security experts were particularly worried about the security implications of mandating backdoors in software used in especially vital systems at the highest risk of coming under attack by state-level adversaries.  That’s an even bigger concern when you recall that the United States is contemplating bilateral agreements that would let foreign governments directly serve warrants on technology companies.  We may have a “special relationship” with the British, but perhaps not so special that we want them to have a backdoor into our electrical grid.  One huge and (I would have thought) obvious wrinkle here: Telecommunications systems are a canonical example of “critical infrastructure,” which seems like a pretty big potential loophole.

The final proposed change is the easiest to understand: Tech companies don’t want to be saddled with an unlimited set of obligations, and they sure don’t want to be strictly liable to a court for an outcome they can’t possibly guarantee is achievable in every instance.  With that added limitation, however, it would become less obvious whether a company is subject to sanction if they’ve designed their products so that a successful attack always requires unreasonable effort. “We’ll happily provide the required technical assistance,” they might say, “as soon as the FBI can think up an attack that requires only reasonable effort on our part.”  It’d be a little cheeky, but they might well be able to sell that to a court as technically compliant depending on the facts in a particular case.

So those are my first-pass thoughts.  Short version: Incorporating these changes—above all the first one—would yield something a good deal narrower than the original version of the bill, and therefore not subject to all the same objections the original met with. It would still be a pretty bad idea. This debate clearly isn’t going away, however, and we’re likely to see a good deal more evolution before anything is formally introduced.

Update: For the lawyers who’d rather rely on something more concrete than my summaries, take the original discussion draft and make the following amendments to see what they’re talking about altering:
Section 3, subsection (a)(2) would read:

(2) SCOPE OF REQUIREMENT.—A covered entity that receives a court order referred to in paragraph (1)(A) shall be responsible only for providing data in an intelligible format if such data has been made unintelligible by a feature, product, or service controlled by the covered entity or by a third party on behalf of the covered entity.

Section 4, subsection (3)(B) would be deleted.

Section 4, subsection (4) would read:

(4) COVERED ENTITY.—

(A) IN GENERAL.— Except as provided in subparagraph (B), the term “covered entity” means a device manufacturer, a software manufacturer, an electronic communication service, a remote computing service, a provider of wire or electronic communication service, a provider of a remote computing service, or any person who provides a product or method to facilitate a communication or the processing or storage of data.

(B) EXCLUSION.— The term “covered entity” does not include critical infrastructure (as defined in section 5195c of title 42, United States Code.)

(The material before the first comma in (A) above would be new, as would all of subparagraph (B).)

Section 4, subsection (12), would read:

(12) TECHNICAL ASSISTANCE.— The term “technical assistance”, with respect to a covered entity that receives a court order pursuant to a provision of law for information or data described in section 3(a)(1), includes reasonable efforts to—
(A) isolate such information or data;
(B) render such information or data in an intelligible format if the information or data has been made unintelligible by a feature, product, or service controlled by the covered entity or by a third party on behalf of the covered entity; and
(C) deliver such information or data—
(i) concurrently with its transmission; or
(ii) expeditiously, if stored by the covered entity or on a device.

Those are the changes I’ve seen floated, though again, probably not exhaustive of what’s being discussed.

The Weird World of Data (and Your Privacy)

In 2007, Judge Richard Posner found it “untenable” that attaching a tracking device to a car is a seizure. But in 2012 the Supreme Court ruled against warrantless attachment of a GPS device to a car, reasoning from the owner’s property rights. Putting a tracking device on a car makes use of it without the owner’s permission, and it deprives the owner of the right to exclude others from the car.

The weird world of data requires us to recognize seizures when government agents take any of our property rights, including the right to use and the right to exclude others. There’s more to property than the right to possession.

In an amicus brief filed with the U.S. Court of Appeals for the D.C. Circuit last week, we argued for Fourth Amendment protection of property rights in data. Recognition of such rights is essential if the protections of the Fourth Amendment are going to make it into the Information Age.

The case arises because the government seized data about the movements of a criminal suspect from his cell phone provider. The government argues that it can do so under the Stored Communications Act, which requires the government to provide “specific and articulable facts showing that there are reasonable grounds to believe that [data] are relevant and material to an ongoing criminal investigation.” That’s a lower standard than the probable cause standard of the Fourth Amendment.

As we all do, the defendant had a contract with his cell phone provider that required it to share data with others only based on “lawful” or “valid” legal processes. The better reading of that industry-standard contract language is that it gives telecom customers their full right to exclude others from data about them. If you want to take data about us that telecom companies hold for us under contract, you have to get a warrant.

Understanding U.S. v. Ackerman

The Supreme Court has eschewed the “reasonable expectation of privacy” test in its most important recent Fourth Amendment cases. It’s not certain that the trend away from the so-called “Katz test,” largely driven by Justice Scalia, will continue, and nobody knows what will replace it. But doctrinal shift is in the air. Courts are searching for new and better ways to administer the Fourth Amendment.

A good example is the Tenth Circuit’s decision last week in U.S. v. Ackerman. That court found that opening an email file was a Fourth Amendment “search,” both as a matter of reasonable expectations doctrine and under the “distinct line of authority” that is emerging from the Supreme Court’s 2012 decision in U.S. v. Jones.

Here are the facts: AOL scans outgoing emails for child porn by comparing hashes of files sent through its network to hashes of known child porn. When it becomes aware of child porn, it is required by law to report it to the National Center for Missing and Exploited Children. NCMEC is a governmental entity and agent. (That point takes up the bulk of the decision; Congress has made huge grants of governmental power to the organization.) NCMEC opened the file without a warrant.
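For readers unfamiliar with hash matching, a minimal sketch of the general technique appears below. The hash function and the single "known" entry are placeholders (the entry is just the SHA-256 of the string "test"); real screening systems rely on curated hash databases and often on specialized hashes rather than plain SHA-256.

import hashlib

# Hypothetical set of hashes of known contraband files. The single entry here
# is a placeholder: it is simply the SHA-256 of the string "test".
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_attachment(data: bytes) -> bool:
    # A match reveals that a specific known file was transmitted, without
    # any human reading the rest of the message.
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD_HASHES

print(flag_attachment(b"test"))          # True: matches the placeholder entry
print(flag_attachment(b"hello world"))   # False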

Economics Will Be Our Ruination II

Economics appears to be a neutral tool, but it often subtly embeds values that we are better off surfacing and discussing. In a recent post, henceforth to be known as “Economics Will Be Our Ruination I,” I pointed out how, by preferring to measure the movement of dollars, orthodox economics treats leisure as a bad thing and laments advances in technology-based entertainments.

This installment of EWBOR focuses on an interesting and insightful article recently published in the University of Pennsylvania Law Review, “An Economic Understanding of Search and Seizure Law.” In it, George Washington University Law School professor Orin Kerr shows that the Fourth Amendment helps increase the efficiency of law enforcement by accounting for external costs of investigations. Here is his model:

The net benefit of any particular investigative step can be described as P*V – Ci – Ce, where P represents the increase in probability that the crime will be solved and successfully prosecuted, V represents the net value of a successful prosecution resulting from deterrence and incapacitation, Ci represents the internal costs of the investigative step, and Ce represents its external costs.

Ci means things like the cost of training and equipping police officers and paying their salaries, as well as their own use of their time. Ce, external costs, “include privacy harms and property losses that result from an investigation that is imposed on a suspect. They also include the loss of autonomy and freedom imposed directly on the subject of the investigation (who may be guilty or innocent) as well as his family or associates.” Kerr rightly includes in Ce more diffuse burdens such as community hostility to law enforcement.
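To make the model concrete, here is a minimal worked example of Kerr's formula. All of the figures are invented for illustration; none comes from the article.

# Kerr's net benefit of an investigative step: P*V - Ci - Ce.
def net_benefit(p, v, c_internal, c_external):
    return p * v - c_internal - c_external

# A hypothetical search that raises the probability of a successful prosecution
# by 20 points, where a prosecution is worth $100,000 in deterrence and
# incapacitation, police time costs $5,000, and the privacy and property harms
# imposed on the suspect and community come to $30,000:
print(net_benefit(0.20, 100_000, 5_000, 30_000))   # -15000.0: a net social loss

On those made-up numbers the step is inefficient even though it helps solve the crime, which is precisely why the model counts Ce at all.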

Drones Are a Must For Trump’s Nativist Police State

Yesterday my colleague Alex Nowrasteh wrote an extensive list of reasons why Donald Trump, the presumptive Republican Party presidential nominee, is the nativist dream candidate. The list leaves little doubt that if Trump makes it to the White House he will seek to violate the Constitution, create a police state, put citizens’ privacy at risk, and build a border wall (despite its estimated $25 billion price tag) all in the name of reducing legal and illegal immigration to the United States.

Trump’s immigration plan ought to worry civil libertarians because, as Alex points out, he supports mandatory E-Verify, the ineffective employment eligibility verification program that puts privacy at risk. Trump’s disregard for effective policy and privacy rights can be seen not only in his views on E-Verify but also in his support for 24/7 border drones.

Last month Trump told Syracuse.com that he would order the 24/7 surveillance of the U.S. borders, adding, “I want surveillance for our borders, and the drone has great capabilities for surveillance.”

What Trump might not know is that drones on the U.S. border don’t have a great track record. At the end of 2014 the Department of Homeland Security’s Inspector General released an audit of Customs and Border Protection’s Unmanned Aircraft System Program. The program includes MQ-9 Predator B drones (also called “Reapers”), perhaps best known for their combat missions abroad, as well as the Guardian, the Predator B’s maritime variant. The program’s audit was unambiguous:

The program has also not achieved the expected results. Specifically, the unmanned aircraft are not meeting flight hour goals. Although CBP anticipated increased apprehensions of illegal border crossers, a reduction in border surveillance costs, and improvement in the U.S. Border Patrol’s efficiency, we found little or no evidence that CBP met those program expectations.

Unsurprisingly, cartels at the southern border are taking part in an arms race with CBP, using jamming devices against patrol drones. Almost a year after the inspector general’s audit, Timothy Bennett, a science-and-technology program manager at the Department of Homeland Security, explained how the cartels hinder CBP operations:

DHS was unable to say just how often smugglers tried to jam or spoof border-watching UAVs. But Bennett said the attacks are hindering law enforcement abilities to map drug routes. “You’re out there looking, trying to find out this path [they’re] going through with drugs, and we can’t get good coordinate systems on it because we’re getting spoofed. That screws up the whole thing. We got to fix that problem,” he said.

The ineffectiveness of drones on the border is not the only concern: CBP drones also raise privacy issues. Predator B drones carrying out combat missions abroad have been outfitted with Gorgon Stare, a wide-area surveillance technology that allows users to track objects within an area at least 10 square kilometers in size. Almost two years ago it was reported that once incorporated with the Autonomous Real-Time Ground Ubiquitous Surveillance Imaging System (ARGUS-IS), another wide-area surveillance tool, Gorgon Stare can monitor 100 square kilometers.

Yes, Michael, REAL ID Is a Nationwide Data-Sharing Mandate

Baton Rouge IT consultant Michael Hale is right to be concerned about the unfunded mandates in the REAL ID Act. The U.S. national ID law requires states to issue driver’s licenses and share driver data according to federal standards. States complying with REAL ID will find that the U.S. Department of Homeland Security (DHS) dictates their driver licensing policies and the expenditure of state funds in this area forevermore. But he raises that concern at the tail end of a letter to the editor of The New Orleans Advocate that broadly endorses the national ID law based on incorrect information. Here’s some information that Mr. Hale and every American concerned with our liberty and security should know.

Mr. Hale believes that state driver data “will continue to be maintained by each individual state, and each state will decide who gets access to this information.” This is not the case. The REAL ID Act requires states to share driver data across a nationwide network of databases. The DHS and other national ID advocates downplay and deny this, but they are not persuasive because the requirement is right there in the statute:

To meet the requirements of this section, a State shall adopt the following practices in the issuance of drivers’ licenses and identification cards: …
(12) Provide electronic access to all other States to information contained in the motor vehicle database of the State.
(13) Maintain a State motor vehicle database that contains, at a minimum–
(A) all data fields printed on drivers’ licenses and identification cards issued by the State; and
(B) motor vehicle drivers’ histories, including motor vehicle violations, suspensions, and points on licenses.

Mr. Hale says, “The Real ID Act allows states to either adopt the Real ID or to come up with their own version of secure ID that Homeland Security can approve.” This is not true. The option of issuing a non-federal license or ID does not waive the obligation to share driver data nationwide.

Unlike the Department of Homeland Security and its pro-national ID allies, Mr. Hale gamely tries to argue the security merits of having a national ID. “The purpose of all this is to create a trustworthy form of ID that can be used to ensure air travel security,” he says. “The first step in securing a flight is to make sure everyone on board is who they claim to be.”

That argument is intuitive. In daily life, knowing who people are permits you to find them and punish any bad behavior. But U.S. federal public policy with national security implications and billions of taxpayer dollars at stake requires more articulate calculation.

The costs or impediments a national ID system would impose on dedicated terrorists, criminal organizations, and people lacking impulse control are minimal. For billions of taxpayer dollars expended, millions of hours standing in DMV lines, and placement of all law-abiding Americans into a national tracking system, REAL ID might mildly inconvenience the bad guys. They can, for example, bribe a DMV employee, spend a few thousand dollars to manufacture a false identity, or acquire the license of someone who looks similar enough to them to fool lazy TSA agents. I analyzed all dimensions of identification and identity systems in my book, Identity Crisis: How Identification is Overused and Misunderstood.

There are other security measures where dollars and effort deliver more benefit. Or people might be left in control of their dollars and time to live as free Americans.

The Department of Homeland Security consistently downplays and obscures the true nature of the REAL ID Act’s national ID policy, and it never even tries to defend its security merits in any serious way. In the information technology community, the security demerits of having a national ID system backed by a web of databases as required by the law seem relatively clear.  People familiar with information technology tend to be more concerned, not less, with the power and peril of a national ID system.

The quest continues to make active citizens like Mr. Hale more aware of all dimensions of this issue.

Idaho May Implement REAL ID—by Mistake

Ten years ago, Idaho came out strongly against the REAL ID Act, a federal law that seeks to weave state driver licensing systems into a U.S. national ID. But Department of Homeland Security bureaucrats in Washington, D.C., have been working persistently to undermine state resistance. They may soon enjoy a small success. A bill the Idaho legislature sent to the governor Friday (HB 513) risks putting Idahoans all the way into the national ID system.

Idaho would be better off if the legislature and Governor Butch Otter (R) continued to refuse the national ID law outright.

Idaho’s government was clear about the federal REAL ID Act in 2008. The legislature and governor wrote into state law that the national ID law was “inimical to the security and well-being of the people of Idaho.” They ordered the Idaho Transportation Department to do nothing to implement REAL ID.

Since then, the DHS has threatened several times to prevent people living in non-compliant states from going through TSA checkpoints at the nation’s airports. The DHS has always backed down from these threats—the feds would get all the blame if DHS followed through—but the threats have done their work. Compliance legislation is on the move in a number of states.

One of those states is Idaho, where the bill now before Governor Otter would call for compliance with the REAL ID Act’s requirements “as such requirements existed on January 1, 2016.” That time limitation is meant to keep Idahoans out of the nationwide database system that the REAL ID Act requires. But the bill might put Idahoans into the national ID system by mistake.

When the original “REAL ID Rebellion” happened with Idaho at the forefront, DHS was under pressure to show progress on the national ID. DHS came up with a “material compliance checklist,” which is a pared-back version of the REAL ID law. Using this checklist, DHS has been claiming that more and more states are in compliance with the national ID law. It is a clever, if dishonest, gambit.

Practical state legislators in many states have believed what the DHS is telling them, and they think that they should get on board with the national ID law or else their state’s residents will be punished. DHS is successfully dividing and conquering, drawing more power to Washington, D.C.