Cato at Liberty
Feds Palling Around With Mexican Cartels
Two years ago the Washington Post reported that the Immigration and Customs Enforcement agency brought dangerous Mexican drug traffickers into the U.S. who, while continuing their criminal activities in Mexico and the U.S., also served as informants to federal authorities in the war on drugs.
In June, Operation Fast and Furious came to light, in which the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) allowed suspicious straw purchasers of firearms to buy weapons in the U.S. and smuggle them into Mexico. The purpose was to track the guns all the way to the ultimate buyer—a Mexican drug trafficking organization. Overall, the ATF facilitated the purchase of hundreds of guns by Mexican cartels. Many were later found at crime scenes in Mexico, including one where a U.S. Border Patrol agent was killed.
On Sunday, the New York Times reported that the Drug Enforcement Administration has been laundering millions of dollars for Mexican cartels. The goal of the undercover mission is to follow the money all the way up to the top ranks of the criminal organizations. However, as the NYT notes, “So far there are few signs that following the money has disrupted the cartels’ operations and little evidence that Mexican drug traffickers are feeling any serious financial pain.”
So there we have it: in the name of the war on drugs, the federal government has provided safe havens to Mexican drug traffickers, facilitated their purchase of powerful firearms, and has even laundered millions of dollars for the cartels.
After spending millions of dollars fighting the drug war in Mexico, the United States has little to show for its efforts, and Washington seems to grow more desperate each year to produce new leads and results. These three incidents display a stunning lack of foresight and border on the federal government aiding the Mexican drug cartels, with little to show in return. The unintended consequences of these programs aimed at dismantling the cartels would be laughable were it not for the thousands who have died in Mexico’s drug-related violence.
It is time for the United States to rethink the war on drugs and consider policies that will successfully undermine the Mexican drug cartels.
Big Brothers, PRODIGAL Sons, and Cybersecurity
I wrote on Monday that a cybersecurity bill overwhelmingly approved by the House Permanent Select Committee on Intelligence risks creating a significantly broader loophole in federal electronic surveillance law than its boosters expect or intend. Creating both legal leeway and a trusted environment for limited information sharing about cybersecurity threats—such as the identifying signatures of malware or automated attack patterns—is a good idea. Yet the wording of the proposed statute permits broad collection and disclosure of any information that would be relevant to protecting against “cyber threats,” broadly defined. For now, that mostly means monitoring the behavior of software; in the near future, it could as easily mean monitoring the behavior of people.
A recent Fox News article rather breathlessly describes a newly unveiled security system dubbed PRODIGAL, or Proactive Discovery of Insider Threats Using Graph Analysis and Learning, which “has been built to scan IMs, texts and emails … and can read approximately a quarter billion of them a day.” The article explains:
“Every time someone logs on or off, sends an email or text, touches a file or plugs in a USB key, these records are collected within the organization,” David Bader, a professor at the Georgia Tech School of Computational Science and Engineering and a principal investigator on the project, told FoxNews.com.
PRODIGAL scans those records for behavior — emails to unusual recipients, certain words cropping up, files transferred from unexpected servers — that changes over time as an employee “goes rogue.” The system was developed at Georgia Tech in conjunction with the Defense Advanced Research Projects Agency (DARPA), the Army’s secretive research arm that works on everything from flying cars to robotic exoskeletons.
Don’t panic just yet: This is strictly being deployed on the networks of government agencies and contractors that handle sensitive information—places where every employee is well aware that their use of the network is subject to close scrutiny, and with good reason. There’s not really anything to say in principle against the use of such systems in this context, or for that matter on closed business networks where users are on clear notice that such monitoring occurs.
It would, by contrast, be a clear and quite outrageous invasion of privacy for such large-scale behavioral monitoring to be conducted on the residential or mobile broadband networks Americans rely on to provide their personal Internet connectivity—a fortiori if the goal is to share the results with the government without a court order. As I read it, however, House Intel’s cybersecurity bill would at least arguably permit precisely that.
Under the current language, as long as an Internet provider had a credible good faith belief that it was collecting and sharing behavioral information for one of several broadly defined “cybersecurity purposes”—say, by creating behavioral profiles of potential hackers, disruptive cyberactivists, or “misappropriators” of intellectual property—they’d enjoy full civil and criminal immunity for such actions. That would make any contractual promises to abstain from such monitoring unenforceable—in the highly unlikely event that ordinary users were even able to determine reliably what sort of information was being shared. It would be, to put it as mildly as possible, extraordinarily poor civic hygiene to enable the construction of this kind of quasi-public/quasi-private monitoring and profiling architecture.
This is not, I believe, the sort of thing the bill’s own architects aspire to bring about. But the abstract language employed in pursuit of technological neutrality here avoids the risk of obsolescence only by sacrificing predictability. Courts have recently begun signalling that they’re belatedly inclined to start insisting on full Fourth Amendment search warrants whenever government seeks digitally stored private contents, closing down statutory loopholes that sometimes gave investigators easier access. And now, just as one backdoor closes, a new backchannel granting access to otherwise private and protected material without any judicial process opens up? It does not take a cynic to predict that there will be a potent and persistent incentive to stretch any such channel as wide as the elastic bonds of the English language will permit.
The cleanest way to foreclose this is not to paste in a bunch of after-the-fact usage controls, minimization protocols, or special reports to Congress—though those aren’t bad ideas either. It’s to admit that Congress lacks psychic powers, which may entail that statutes regulating protean areas of technology have to be (or ought to be) swapped for the newer model about as often as iPhones. The specific, narrow categories of sharing everyone thinks are important and unobjectionable from a privacy perspective can be specifically, narrowly authorized now. In a decade, when we’re beaming thoughts directly to each other via quantum-entangled biomechanical brain implants, we can decide what specific statutory language solves the novel security problems of that technology, in a manner consistent with the Fourth Amendment.
The Security Theater Cycle
“What we obtain too cheap,” Thomas Paine famously wrote, “we esteem too lightly”—and it turns out that the converse holds true as well. It’s a well-known and robustly confirmed finding of social psychology that people tend to ascribe greater value to things they had to pay a high cost to obtain. So, for instance, people who must endure some form of embarrassing or uncomfortable hazing process or initiation rite to join a group will report valuing their participation in that group much more highly than those admitted without any such requirement—which is one reason such rituals are all but ubiquitous in human societies as a way of creating commitment. Studies suggest that people are more likely to read automobile reviews after purchasing a new car than before—suggesting that people are sometimes less concerned with spending money in the most judicious fashion than with convincing themselves, after the fact, that they have done so. More morbidly, relatives of soldiers killed in action sometimes become much more fervent supporters of the war that cost them a loved one—because the thought that such a grave loss served no good purpose is too much to stomach.
I suspect that this phenomenon may help explain the dispiriting state of affairs described by an airline industry insider in an important Wired piece on airport security. The short version: we’ve spent some $56 billion on “enhancing” airport security over the past decade, with almost no actual security enhancement to show for it. We’re spending huge amounts of money and effort on burdensome passenger screening that doesn’t seem very effective, while neglecting other, far more vulnerable attack surfaces. It is, when you think about it, a somewhat strange priority given the abundance of highly vulnerable domestic targets. Reinforced cockpit doors and changed passenger behavior pretty much made a repeat of a 9/11-style suicide hijacking of a domestic flight infeasible—at negligible economic and privacy cost—long before we started installing Total Recall-style naked scanners, which makes explosives the real remaining risk. Yet the notable bombing attempts by passengers we’ve seen since 9/11 have (a) originated outside the United States, and (b) been foiled by alert passengers after the aspiring bomber slipped through the originating country’s formal screening process.
This shouldn’t be terribly surprising: when a terror group has already managed to get an operative into the United States, a domestic flight (that can’t be turned into a missile) would be one of the stupider, riskier targets to select, given the enormous array of much softer target options that would be available at that point, even assuming pre‑9/11 airport security protocols. As far as I’m aware, the last time a passenger successfully detonated a bomb on a U.S. domestic flight was in 1962. This presents something of a puzzle: Why have we focused so disproportionately on this specific attack vector, at such disproportionate cost, when the terrorists themselves have not? Why haven’t we reallocated scarce resources to security measures (such as better screening of airline employees) that would provide greater security benefit at the margins? One possibility is that, having accustomed ourselves to submitting to the hassle and indignity of ever more aggressive passenger screening, we become more disposed to believe that these measures are necessary.
It’s become commonplace to refer to many aspects of airport screening—the removal of shoes, the transparent plastic baggies for your small allotment of shampoo—as “security theater.” Security guru Bruce Schneier coined the term to refer to security measures whose ritualistic purpose is to make passengers feel safer, even though they do almost nothing to actually increase safety. But on reflection, this account seems incomplete. It probably holds true in the immediate aftermath of a high-profile attack or disaster. Once the initial heightened fear subsides, however, these visible and elaborate security measures probably do more to increase our perception of risk than to assuage our fears. It is, after all, something of a cliché that hyperprotective parents tend to end up raising children who see the world as a more dangerous place. Overreacting to childhood illnesses is one reliable way of producing adult hypochondriacs down the road.
Security theater, then, isn’t only—or even primarily—about making us feel safer. It’s about making us feel we wouldn’t be safe without it. The more we submit to intrusive monitoring, the more convinced we become that the intrusions are an absolute necessity. To think otherwise is to face the demeaning possibility that we have been stripped, probed, and made to jump through hoops all this time for no good reason at all. The longer we pay the costs—in time, privacy, and dignity no less than tax dollars—the more convinced we become that we must be buying something worth the price. Hence, the Security Theater Cycle: the longer the ritual persists, the more normal it comes to seem, the more it serves as psychological proof of its own necessity.
A Cybersecurity Exception to Wiretap Laws?
It’s gotten surprisingly little media attention thus far, but late last week the House Permanent Select Committee on Intelligence approved a bill to facilitate sharing and pooling of “cyber threat information” between private companies and government intelligence agencies—in particular, the übergeeks at the National Security Agency. It’s actually not a bad idea in principle. But the original draft was so broad that the White House felt compelled to express concerns about the lack of privacy safeguards—which should give you pause, considering how seamlessly President Obama has shifted from thundering against the Patriot Act to quietly embracing the ongoing kudzu growth of our surveillance state. A few encouraging tweaks were hastily added before the committee approved it, but the bill’s current incarnation still punches an enormous hole in the wiretapping laws that have, for decades, been a primary guarantor of our electronic privacy.
First, a bit of context. Whenever you send an e‑mail, start an IM chat, place a VoIP call, visit a web page, or download a file, your traffic passes through many intermediary networks, starting with your own broadband or wireless provider. While savvy users will protect their sensitive communications with encryption, our expectation of privacy when we use the Internet is also safeguarded by federal law, which generally prohibits network owners providing transit services to the general public from intercepting, using, or disclosing the contents of other people’s communications in any way beyond what’s needed to get the traffic from sender to recipient in the ordinary course of business. There are exceptions, of course: for law enforcement monitoring subject to a warrant, for emergencies, for consensual interceptions, and for monitoring that’s necessary to the protection of a provider’s own network. But the presumption against interception is strong and typically hard to overcome. (Non-public networks, like a corporation’s private intranet, are another story, of course.) Communications metadata—the information about who is talking to whom, and by what route—is less stringently regulated, but carriers are still barred from sharing that information with the government absent some form of legal process. The motivation for all of this is the understanding that heavily regulated carriers, which also often compete for lucrative government contracts, would be subject to government pressure to “voluntarily” share their customers’ data (especially if the sharing could be done secretly). Thus, the law ensures that the government will have to observe the niceties of judicial process before digging through citizens’ private communications, rather than relying on the “informal cooperation” of intermediaries.
This generally salutary arrangement does, however, create some difficulties in the cybersecurity context. Carriers and cybersecurity providers who have visibility on multiple private networks will often be in an optimal position to detect a wide array of attack patterns, involving both metadata (where are apparent attacks coming from? what timing patterns do they exhibit?) and contents (what characteristic “signatures” indicate the presence of viruses, malware, or mass phishing emails?). This information is highly valuable to share among providers—and, yes, with the government too—and it generally doesn’t implicate the kinds of privacy interests wiretap law is supposed to protect. But legislators (or rather, the staffers who actually draft these bills) are generally keen to craft “tech neutral” laws that aren’t bound too tightly to current technologies and vulnerabilities, and therefore won’t be obsolete in the face of new tech or new threats. Unfortunately, this often entails erring on the side of breadth, which in this case means creating a massive loophole to remove a minor obstruction—the legislative equivalent of blowing your nose with C‑4.
The bill provides that, “notwithstanding any other provision of law,” a company that provides cybersecurity services for its own networks or others may use “cybersecurity systems” to acquire “cyber threat information,” and share such information with any other entity, including the government. (One of the amendments introduced last week stipulates that the government may use and share that information only when one “significant purpose” of such use is the protection of national security or cybersecurity.) The crucial question, of course, is what counts as “cyber threat information.” That term is defined to encompass:
information directly pertaining to a vulnerability of, or threat to a system or network of a government or private entity, including information pertaining to the protection of a system or network from—
(A) efforts to degrade, disrupt, or destroy such system or network; or
(B) theft or misappropriation of private or government information, intellectual property, or personally identifiable information.
The intention here is to cover the sort of information I talked about earlier—intrusion patterns and malware fingerprints. On a literal reading, though, it might also include Julian Assange’s personal IM conversations (assuming he ever had an unencrypted one), or e‑mails between security researchers. Moreover, one important purpose of this information sharing is to be able to distinguish malicious from benign traffic—which may mean combing through a big chunk of traffic logs surrounding a suspected or confirmed penetration attempt (and comparing those logs to others) in order to extract the hostile “signal” from the background noise. That makes it extremely likely that a substantial amount of wholly innocent, and potentially sensitive, information about ordinary Americans’ Internet activities will end up in the sharing pool. Many attacks will appear to originate from computers conscripted into malicious botnets by malware, unbeknownst to owners whose legitimate personal traffic could easily be swept in and shared as “cyber threat information” as well. The current proposal doesn’t require minimization or anonymization of personal information unless the companies sharing the information impose such conditions themselves. Finally, “cybersecurity systems” is sufficiently vaguely defined that one could even imagine a sysadmin with a vigilante streak reading it to include aggressive countermeasures, like spyware targeting suspected attackers. After all, “notwithstanding any other provision of law” includes provisions of (say) the Computer Fraud and Abuse Act that would place such tactics out of bounds.
Intelligence agencies are also empowered to share classified cyberintelligence with designated companies—and heaven help the firm that’s starved of that security information while its competitors have access to it. Another of the amendments added last week expressly bars conditioning such intelligence sharing on any particular company’s level of “voluntary” cooperation, and clarifies that the intelligence agencies may not “task” private companies with obtaining specific types of information for them. Which is nice, but seems awfully hard to enforce in practice. What we’ve already seen, unfortunately, is that cozy, long-term collaborative relationships between carriers and intelligence agencies are breeding grounds for abuse, even when the law actually does prohibit the carriers from sharing information without legal process. It’s desirable to create legal space for limited cyberthreat information sharing—but it has to be done without creating a large and tempting backdoor through which the government might seek to use “voluntary information sharing” as a way to avoid getting a warrant or court order.
The Real Trouble With the Defense Authorization Bill
The Senate on Thursday passed the 2012 defense-authorization bill. It includes a controversial provision meant to put al-Qaeda suspects and their associates in military custody rather than prosecute them as criminals. The White House has rather weakly threatened a veto, complaining primarily that the bill undercuts its discretion in dealing with terrorists.
If the White House vetoes the bill, it will be for the wrong reasons. The trouble is not what the law mandates but what it affirms. It does not require the president to put any terrorists in military custody but rather to comply with a new bureaucratic process if he chooses not to do so. Even as we move toward the end of the wars in Iraq and Afghanistan, the law affirms a presidential power to detain anyone, including American citizens, in the name of fighting a nebulous and seemingly permanent terrorist menace. That is bad both for civil liberties and for our ability to think clearly about terrorism.
Most debate about the bill concerns section 1032. It says that the armed forces “shall hold” anyone who is part of al-Qaeda or an associated force and has participated in an attack on the United States or its coalition partners, for the duration of the hostilities authorized by Congress in 2001, and dispose of those suspects under the laws of war. American citizens are excluded. Thanks to a compromise negotiated by Armed Services Committee Chairman Carl Levin (D‑MI) and Ranking Member John McCain (R‑AZ), the section now allows the secretary of defense, after consulting with the secretary of state and the director of national intelligence, to keep a suspect in civilian courts by informing Congress that doing so serves national security.
The administration objects to 1032 largely because it undercuts its discretion. However, as Levin and McCain note in a recent op-ed, the administration still “determines whether a detainee meets the criteria for military custody.” The president could presumably just decline to label a detainee as someone fitting the requirements of military detention in the first place and try him in civilian court without getting a waiver from the secretary of defense.
The provision’s main relevance is as a talking point. Republicans already fond of castigating the president for allowing alleged terrorists to have their day in court can pretend that he is ignoring this law when he does so.
The real trouble with the bill is the preceding section, 1031. It “affirms” that the authorization of military force passed prior to the invasion of Afghanistan allows the president, through the military, to detain without trial al-Qaeda members, Taliban fighters, associated forces engaged in hostilities against the United States, and those who support those groups. Nothing excludes American citizens.
The section says that it does not expand presidential war powers, but that contradicts its other language and common sense. By explicitly endorsing constitutionally dubious powers that the president already claims, Congress makes those claims more likely to survive legal challenge.
The 2001 Authorization of Military Force allows the president to make war on “nations, organizations, or persons” that he determines to have been involved in or aided the September 11 attacks and those who harbored them. Effectively, that meant al-Qaeda and the Taliban. Our last two presidents have used that authority to claim the right to kill or indefinitely detain anyone, anywhere, whom they decide is associated with some arm of al-Qaeda. The courts have trimmed these powers in ways that remain uncertain, particularly as applied to U.S. citizens. In Hamdi v. Rumsfeld, the Supreme Court held that the U.S. military has the power to detain without trial Americans captured on foreign battlefields but that the detainee can challenge the detention in court. Contrary to Carl Levin’s assertions, the ruling did not say that people seized in the United States fit that category.
This defense bill’s expansive list of enemies strengthens the president’s claim that he can detain almost anyone without trial in the name of counterterrorism. Future White House lawyers will cite it to justify those powers. Courts may tell Americans who challenge their detention on constitutional grounds that Congress’s endorsement of the president’s claimed detention powers makes those claims sounder.
The bill may even strengthen the president’s case for using other war powers, like killing citizens with drone strikes. That interpretation is bolstered by the detainee language’s similarity to the reauthorization of force contained in the House’s defense bill. That legislation explicitly gives the president the power to make war on al-Qaeda, the Taliban and associated forces. By using nearly identical language to describe who the president can detain under his war powers, the Senate bill may stealthily achieve the same end.
Liberalism means minimizing the exercise of war powers. To say, as backers of this legislation do, that the Constitution allows our government to kill and detain people without trial is not an argument that we should do so often. Because those powers so offend liberalism, those who advocate them should bear the burden of explaining why they are necessary, even if they are constitutional.
Instead, advocates of these extraordinary powers take it as nearly self-evident that military detention is somehow safer than criminal trials. But criminal proceedings, because they are adversarial, produce better information than military interrogations. That information makes the public better consumers of counterterrorism policies. Public debate does not always make better public policy, but it often helps.
You can see how by looking at the footnotes of books about terrorism, like the 9/11 Commission report. Many of the sources are records of the criminal trials of terrorists. Had all those suspects been held without trial, their testimony and the government’s claims about them might have remained secret. What did become public would be less trustworthy because it would not have been vetted by an institutional adversary, as in court.
Take the case of Umar Farouk Abdulmutallab, the Underwear Bomber, and its connection to the killing of Anwar al-Awlaki, the jihadist propagandist killed earlier this year in Yemen. Both before and after receiving a Miranda warning, Abdulmutallab apparently told his FBI interrogators a great deal about his trip to Yemen to prepare the explosives he tried to detonate in a plane over Detroit. Had he not pleaded guilty on the first day of trial, prosecutors were set to argue that Awlaki had aided the plot. The government would have had to substantiate its claim that Awlaki, an American citizen, had graduated from being a propagandist to plotting attacks and had therefore become a combatant it could legally kill—something it still has not done. The trial would have shed light on how the White House decides which of its citizens it can kill in the name of counterterrorism. That information would at least inform debate.
Civil liberties are a sufficient reason to oppose handing the executive the power to detain more or less whomever it wants. But our system of government does not divide powers simply for fairness. Unilateral decisions are more likely to be foolish ones.
Revised DSM‑5 Could Open Up Wider Legal Claims
The American Psychiatric Association is revising its highly influential Diagnostic and Statistical Manual, currently known as DSM-IV (the fifth version will be “DSM‑V” or, since a switch to Arabic numbering is planned, “DSM‑5”). Nearly 8,000 people have signed a petition, sponsored by the Society for Humanistic Psychology, Division 32 of the American Psychological Association, that challenges the revision’s proposed widening of the definitions of mental disorder. The letter associated with the petition warns that the revision proposes to lower diagnostic thresholds for many categories of disorder without good reason and to introduce new constructs, such as “Internet Addiction Disorder,” that have “no basis in the empirical literature.” The expansion could lead to inappropriate medical treatment as well as other ill effects.
David Foley at Labor Related spells out some of the legal implications for the workplace:
Among others, the changes in the DSM‑V could impact Americans with Disabilities Act claims (is the plaintiff disabled, what is a reasonable accommodation, etc), Family Medical Leave Act claims (does plaintiff suffer from a serious illness) and workers compensation laws (does plaintiff have an illness and was it caused by work).
Introducing a new category of Mild Neurocognitive Disorder, for example, could entitle workers to begin claiming job-related accommodation for cognitive deficits often associated with advancing age — perhaps especially significant since federal law has made it unlawful for most private employers to set policies of automatic retirement at any particular age. As Foley notes, the task force is also planning to reduce the diagnostic threshold for two disabilities that generate many ADA claims already: Attention Deficit Disorder and Generalized Anxiety Disorder.
Employers already face serious legal risks under existing law if they decline to accommodate employees with mental and behavioral deficits (which may include substance abuse, at least if the worker has entered rehab). As I noted the other day at Overlawyered, a hotel chain has agreed to pay $132,500 for dismissing an autistic front desk clerk rather than working with a state-paid “job coach” to remedy his deficiencies. The EEOC sued an insurance company that rescinded a job offer as an agent to an applicant after he tested positive for methadone. An Iowa jury awarded $1.1 million against a university for failing to accommodate an employee’s request for a lighter work load and other changes after she was diagnosed with depression, post-traumatic stress disorder and anxiety. And HR lawyers have warned employers that administering personality tests to new workers could violate the law by improperly revealing protected conditions such as “paranoid personality disorder.”
Earlier posts on the ADA and mental/behavioral deficits here (trucking firm sued for avoiding drivers with drinking history), here and here.