Cato Institute
Regulation Magazine


Let the Punishment Fit the Crime

What factors should a judge consider when a convicted criminal stands before him to be sentenced? And what if the perpetrator is not a real person, but an artificial person, a corporation? These questions are complex, and case law has not produced consistent answers to them. Indeed, the wide disparity in sentences for comparable crimes prompted Congress in 1984 to create the U.S. Sentencing Commission and to charge it with drafting comprehensive federal sentencing guidelines.

The creation of the Sentencing Commission touched off a separation-of-powers controversy because it includes judges and non-judges in an agency nominally in the judicial branch, procedurally more akin to the executive branch, and engaged in an activity that is arguably legislative. The commission's guidelines for sentencing individuals convicted of federal crimes were issued in 1987, and already 250 federal judges have ruled on their constitutionality: 42 percent in favor; 58 percent against. The constitutional question is now before the Supreme Court.

Meanwhile a second controversy is brewing in the legal community over the commission's deliberations on guidelines for sentencing organizations, that is (almost always), corporations. A discussion draft of the guidelines was published in July, and the commission is presently holding a series of hearings on it. Some critics label the draft as soft on white-collar crime because it openly embraces an approach that seeks to avoid overdeterrence as well as underdeterrence, suggesting a benign tolerance of crime that has no place in the criminal justice system. Others argue that the penalties being discussed are so much more severe than current penalties that they will cause substantial economic damage and may go beyond the scope of the commission's charter.

Most participants in the debate agree that the financial penalty against a corporation should be commensurate with the offense. A central point of disagreement, however, is the metric for the severity of a crime: its benefit (to the perpetrator), or its cost (to society). Traditionalists stress that the appropriate goal for sentencing is to ensure that crime doesn't pay; economic pragmatists argue that the goal should be to ensure that crime doesn't cost. Although it is not immediately apparent, the economic approach is inherently more severe: the activities that we make criminal generally involve social costs much greater than the private gains, so that cost-based penalties make crime more than unrewarding.

The economic view is dominant in the draft guidelines. This approach, drawing on the law and economics literature of the past 20 years, holds that the goal of law enforcement is not to minimize crime per se, but to deter crime optimally and thereby maximize society's wealth. It argues that the expected penalty (the probability of punishment times the actual penalty) for any particular crime should equal the social cost, or harm.
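The optimal-penalty condition reduces to simple arithmetic: if punishment is uncertain, the nominal penalty must exceed the harm so that the *expected* penalty equals it. A brief sketch (with illustrative figures, not drawn from the guidelines):

```python
# Optimal-penalty rule: probability of punishment times the nominal penalty
# should equal the social cost (harm) of the offense.
def optimal_penalty(social_harm, p_punishment):
    """Nominal penalty needed so the expected penalty equals the harm."""
    return social_harm / p_punishment

# Hypothetical offense: $1,000,000 in social harm, a 1-in-4 chance of punishment.
penalty = optimal_penalty(1_000_000, 0.25)
print(penalty)          # 4000000.0, a penalty-to-harm multiple of 4
print(penalty * 0.25)   # 1000000.0, the expected penalty equals the harm
```

The lower the probability of punishment, the larger the multiple: this is the logic behind the penalty-to-harm ratios discussed below.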

The table below shows, for a sample of 159 cases decided between January 1, 1984 and December 31, 1987, the ratio of criminal penalties to the social harm that would be estimated using the guideline criteria. These ratios are shown both under current practice and under the proposed guidelines. (A penalty-to-harm ratio of 1 would be appropriate when the probability of punishment is 1; the lower the probability, the higher the appropriate penalty-to-harm ratio.) As illustrated, the mean ratio under current practice is 0.28. For the typical case of private fraud or property crime, the guidelines propose that the criminal penalty be raised to a multiple of 2 times the social cost, with a reduction to 1.5 if the particular crime is unusually likely to be detected, or an increase to 3.0 if the perpetrators acted to make detection more difficult. For the typical crime in all other categories, the multiple is set at 2.5, with a reduction to 2.0 or an increase to 3.5 depending on the ease of detection. Firms that voluntarily report their offenses prior to the commencement of an investigation would be subject to a multiple of 1.0.
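The schedule described above can be expressed as a simple lookup (a sketch of the draft's multiples as summarized in the text; the function and category names are mine, not the commission's):

```python
def guideline_multiple(category, detection="typical", voluntary_report=False):
    """Penalty-to-harm multiple under the July 1988 discussion draft, as
    summarized in the text. 'detection' is 'easy' (offense unusually likely
    to be detected), 'typical', or 'concealed' (perpetrators acted to make
    detection more difficult)."""
    if voluntary_report:   # offense reported before an investigation began
        return 1.0
    if category == "private fraud or property crime":
        return {"easy": 1.5, "typical": 2.0, "concealed": 3.0}[detection]
    # all other crime categories
    return {"easy": 2.0, "typical": 2.5, "concealed": 3.5}[detection]

social_cost = 500_000  # hypothetical calculated social cost
print(social_cost * guideline_multiple("private fraud or property crime"))
print(social_cost * guideline_multiple("environmental", "concealed"))
```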

Ratios of Criminal Penalties to Social Costs Under Current Practice and Sentencing Guidelines
(For Sample of 159 Cases Decided 1/1/84-12/31/87)

                      Increase in Penalty for Mean Case
Private Fraud         7.7 times
Government Fraud      6.4 times
All Cases             7.1-8.9 times

Derived from Mark A. Cohen, "Preliminary Assessment of the Impact of the July 1988 Discussion Draft on Corporate Monetary Penalties," a report to the U.S. Sentencing Commission.

The guidelines recognize that penalties should vary with the difficulty of detection. For instance, judges may increase the penalty if there have been active efforts to conceal an offense or its consequences, or reduce it if a firm has been unusually forthcoming. This allows for tailoring penalties to reflect the extent to which a firm's own strategic behavior increases or decreases the likelihood of detection.

The guidelines incorporate several other novel economic adjustments to the calculated penalties. For example, penalties are adjusted for the time that elapses between the imposition of the social costs and the actual payment of the penalties. Also, the guidelines do not permit judges to adjust the size of a penalty to the size of the firm, or to limit penalties on corporations to prevent insolvency; an optimal penalty structure will periodically (and appropriately) render corporations insolvent.

What causes many prosecutors to object to the guidelines is the implicit notion that a corporation should be permitted to engage in crime when it perceives the act to be in its interest. They invoke the image of a board of directors weighing the pros and cons of committing a crime. At the commission's October public hearing in New York, for example, Gary G. Lynch, Director of Enforcement at the Securities and Exchange Commission, argued:

To reach potential violators, prosecutors and judges have to convey a stronger message: that the penalties for detection and conviction will not only eliminate any gains to the violator, but will be perceived as sufficiently abhorrent that no reasonable person would even undertake an economic analysis of the risks and rewards of conduct.

This image is a red herring. Crimes are committed by agents of a corporation and, while corporations spend substantial resources to deter internal crime, they cannot be completely successful. It is impossible to be sure that nowhere in the organization will a safety standard be violated, an emissions limit exceeded, or a securities rule transgressed. Corporations will devote resources to deter crime to the extent it is in their (that is, the stockholders') interests to do so.

In effect, the economic approach enlists the corporation as a law enforcement agent. Indeed, this is the most compelling rationale for subjecting "artificial persons" to criminal liability. A corporation is far better situated to control its agents than is any other law-enforcement organization. It can detect and deter crime cost-effectively through its personnel practices, compensation policies, accounting systems, security procedures, and other means available only to the corporation. It is certainly socially advantageous to encourage the corporation, as a law enforcement agent, to act efficiently. Crime deterrence is unarguably a good, but it is also costly. Wasteful spending by corporations is just as socially undesirable as wasteful spending by a city police force or by the FBI.

The right image to have in mind, therefore, is not of a criminal performing a benefit-cost analysis to decide whether to commit a crime. Rather it is of a law enforcement agency performing a benefit-cost analysis to determine what resources to use in deterring crime and how best to deploy them. Part of the function of the cost-based penalties is to give a price signal to corporations that indicates the relative severity of various crimes. That way the corporation's efforts to deter crime will reflect the public's interest in deterrence.

Bringing economics to bear on criminal sentencing practices is an ambitious and important undertaking. The question that remains to be answered is whether the actual level and structure of penalties contained in the draft guidelines will promote efficient outcomes. Some supporters of the economic approach have expressed concern that the proposed penalties are too large and thus will serve as a costly overdeterrent. Excessive criminal penalties induce innocent firms to invest too many resources in ensuring that they are not falsely accused of having committed a crime; other firms are motivated to overinvest in monitoring the behavior of their agents. Net social losses are the result.

Part of the problem is that the guidelines focus exclusively on criminal penalties, and thus fail to reflect other important penalties imposed on violators. These include extensive civil penalties as well as market sanctions, such as loss of reputation. Environmental offenses, for example, carry federal civil penalties of $25,000 to $40,000 per day. The market also sanctions violators, and these sanctions can be potent-as evidenced by the stock market reaction to safety oversights by airlines involving no criminal activity. (See "The Flight From Equity," Regulation, 1987 Number 3/4.) While the guidelines recognize that the economically optimal criminal penalties should take into account these other sanctions, they do not incorporate a practical mechanism for doing so. This biases the guidelines toward excessive criminal penalties.

As can be seen in the table, the guidelines substantially increase criminal penalties (although the relatively small sample size makes it difficult to draw firm conclusions). Considering the average case, the typical firm faces a 7- to 9-fold increase in penalties, up from 0.28 times the damages to 2.0 or 2.5 times. Even for firms that voluntarily turn themselves in prior to investigation (thus qualifying for a multiple of 1.0), the average penalty is more than tripled.
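The fold-increases follow directly from the ratios (a quick arithmetic check, using the 0.28 current-practice mean penalty-to-harm ratio quoted above):

```python
current_mean = 0.28  # mean penalty-to-harm ratio under current practice

# Guideline multiples: 2.0 and 2.5 for typical firms, 1.0 for voluntary reporters.
for multiple in (2.0, 2.5, 1.0):
    print(f"multiple {multiple}: {multiple / current_mean:.1f}-fold increase")
# 2.0 gives ~7.1-fold, 2.5 gives ~8.9-fold (the 7- to 9-fold range), and even
# the voluntary-reporting multiple of 1.0 gives ~3.6-fold ("more than tripled")
```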

The concern over excessive penalties extends to the commission's antitrust guidelines, which were published in October 1987. The antitrust penalty for horizontal restraints of trade, for example, was increased from three times the damages up to eight times, and allowable damage estimates were increased to 10 percent of gross sales. The guidelines acknowledge that these changes probably will result in nearly a 10-fold increase in typical penalties for antitrust violations. The guidelines also make imprisonment of corporate officers a virtual certainty, when efficiency would generally argue for the use of fines rather than incarceration.

While these increases do not, in and of themselves, demonstrate optimality or non-optimality, they do raise serious questions about whether the Sentencing Commission has gone too far. In 1985 Congress voted a large increase in criminal penalties that, according to the commission's data, more than tripled the average penalty. Yet the draft guidelines imply an even greater increase over current practice. Some of the policy changes that contribute to larger penalties have a sound basis in theory, including the collection of interest, the assumption of 100 percent restitution, and the elimination of discounts for inability to pay. But the cumulative effect of these and other choices made in the guidelines is an extraordinary change in the stringency of criminal law. A decision to increase penalties by nearly an order of magnitude is a decision to cause a massive increase in the real resources devoted to crime deterrence. Unfortunately the report presents no compelling evidence that such a large increase is warranted.

Another concern is that the guidelines fail to reflect the very different circumstances of detection and conviction for various crimes. With an optimal penalty structure, penalties should vary inversely with the probability of punishment. While there is no good evidence on how these probabilities differ across crimes, say between a violation of an environmental statute and a case of private fraud, certainly these probabilities do differ, probably quite substantially. As indicated in the table, the guidelines propose a narrow band of penalties for the typical firm, ranging from no less than 2.0 to no more than 2.5 times social cost. Uniformity in sentencing, the statutory goal of the Sentencing Commission, can only rationally apply to individuals committing a particular type of crime, not to different crime categories.

Finally, there is concern that the guidelines fail to recognize the inherent limits on the courts' ability to calculate accurately the social costs of crime. The range of multiples that the guidelines would apply to social costs (for any particular crime) appears quite narrow compared to current practice, but that does not mean that uncertainty would be reduced. There remains considerable uncertainty in the calculation of social costs. Nor have the guidelines removed uncertainty about the legal standards. Much like present arrangements, firms will not know precisely what the penalty is for any particular crime or where the boundary lies between legal and illegal behavior.

As demonstrated by Richard Craswell and John E. Calfee in "Deterrence and Uncertain Legal Standards," Journal of Law, Economics, and Organization (Volume 2, Number 2, Fall 1986), uncertainty alone-whether about the legal standard or its enforcement-can cause overdeterrence even when no multiplier at all is used. When firms know that taking extra precautions can reduce the probability of being penalized, they may take those precautions even when the penalty is simply the calculated net social cost. Inflating the penalty by a multiplier then increases the precautions taken and likely dissuades economically efficient behavior that the legal system does not intend to dissuade.
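The Craswell-Calfee mechanism can be sketched with a toy model (the numbers are entirely illustrative, not from their paper): a firm picks a precaution level to minimize precaution cost plus expected penalty. Because extra precaution keeps reducing the chance of being penalized even past the point the law intends, a multiplier pulls the firm's choice toward still more precaution.

```python
# Toy model of deterrence under an uncertain legal standard.
# Precaution levels run 0..10; each unit of precaution costs 1.

def p_penalized(x):
    """Chance of being penalized at precaution level x. It falls smoothly,
    rather than dropping to zero at a known threshold, because the legal
    standard (or its enforcement) is uncertain."""
    return 0.9 * 0.7 ** x

def firm_choice(penalty):
    """Precaution level minimizing precaution cost plus expected penalty."""
    return min(range(11), key=lambda x: x + p_penalized(x) * penalty)

harm = 8.0                    # calculated net social cost of the offense
print(firm_choice(harm))      # 3: precaution when the penalty equals the harm
print(firm_choice(3 * harm))  # 6: a 3x multiplier doubles the precaution taken
```

In this sketch the multiplier induces twice the precaution, cost the firm (and society) bears whether or not the extra care was efficient, which is the overdeterrence worry.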

Apart from the proper structure and level of penalties, the Sentencing Commission is also grappling with the question of probation. The July draft spells out a limited set of cases in which the courts may use probation for monitoring a firm to improve detection of future offenses. Oversight by the courts would generally be limited to ensuring that fines are eventually paid. An alternative proposal, drafted by Professor John C. Coffee Jr., Richard Gruner, and Christopher Stone, and also published in July by the commission, takes quite a different approach: "organizational probation, as a supplementary sentence, makes clear that there is no price that, when paid, entitles the organization to engage in misbehavior." In other words, reduce the level of crime to zero, no matter what the social cost. This is diametrically opposed to the logic of optimal penalties.

The Coffee et al. proposal envisions the court acting to correct "inadequate" internal controls in corporations. For a period of up to five years, the court and its appointed probation officers would oversee the running of the firm and would be able to disallow management decisions that they believe might contribute to future violations. Such an approach threatens virtually unlimited intervention by the courts and probation officers into the managerial practices of firms. Nor is it clear how this approach would help. With an optimal set of penalties, firms should have proper incentives to deter crime. It is difficult to believe that probationary supervision would improve the efficiency with which a firm pursues this or any other self-interested objective.

Those who find unthinkable the endorsement of an "optimal" amount of crime (other than zero) are forced to acknowledge that existing sentencing practices do not result in zero crime. Indeed, it is difficult to imagine any sentencing practice that would eliminate crime altogether. This is true for corporations as well as for individuals. Congress passes many statutes that impose strict (if sometimes vague) duties on corporations, with an announced intent to stamp out the reprehensible acts. In practice finite penalties, tempered by prosecutorial and judicial discretion, avoid the inequities and inefficiencies that would accompany a draconian enforcement policy. The mercy of the court is not a particularly helpful guide for corporate behavior, however. In fact, it was the exercise of "unfettered discretion" that gave birth to the Sentencing Commission. The commission's goal, according to a staff working paper, is to develop a determinate, principled law of sentencing.

It will be a significant accomplishment if the U.S. Sentencing Commission succeeds in adopting the economic approach for corporate sentencing. The greatest obstacle seems to be the perception that this approach is too tolerant of corporate crime. In an attempt to allay this criticism, the draft guidelines appear to have incorporated a set of penalties that are too high. That strategy is dangerous, since excessive penalties would impose substantial costs throughout the economy.

Dying for Drugs

For 50 years it has been illegal to sell drugs that have not been deemed safe by the Food and Drug Administration, and for half that time it has been illegal to sell drugs that have not also been deemed effective. Now the AIDS epidemic is causing thousands of desperately ill patients to ask why. Under increasing pressure to make experimental drugs available to patients more quickly, the FDA has taken a series of steps to accelerate the drug testing and approval process. The FDA's most recent reform, if it is carried out in good faith and is not reversed in court, could knock up to four years off the decade-long process.

Delays in approving promising new drugs have always imposed substantial costs on patients suffering from serious diseases, with little effect on FDA policy. For the first time there is a vocal patient lobby protesting that the drug approval process is producing more consumer injury than protection. Early in August the Presidential Task Force on Regulatory Relief asked the FDA to develop proposals to speed the availability of new drugs for AIDS and other life-threatening conditions that lack adequate alternative therapies. Vice President George Bush asked that the proposals meet three criteria: The FDA should be permitted to consider the risks of the life-threatening conditions that a drug is intended to treat, rather than simply the risks of the drug; American manufacturers should be free to compete with foreign suppliers; and patients and physicians should be afforded broad discretion in choosing among the various risks that confront the patient.

No doubt feeling a heightened sense of urgency when AIDS patients and activists demonstrated outside its offices, on October 21 the FDA published an interim rule announcing a new procedure for clinical testing of drugs intended to treat life-threatening or severely debilitating illnesses. Instead of waiting for the completion of large, costly, and time-consuming "phase III" studies, the FDA said it is now prepared to approve some drugs earlier in the process. Some ambiguity remains to be resolved, however. It is not yet clear whether the agency is willing to base its approval on the data normally available early in testing, or whether it will insist that manufacturers bear the burden of accelerating the collection of data. The FDA's own study of drug testing makes a strong case for earlier approvals, even without any acceleration of data collection.

The FDA study examined the history of 174 drugs that entered human testing between 1976 and 1978. It identified the stages in the clinical testing process that contribute the most toward the likelihood of final approval, thereby providing a crude measure of the value of each stage. Phase III testing, despite its large costs, appears to have the least value.

After typically two years of animal testing, a manufacturer files an Investigational New Drug (IND) application to begin human testing of a chemical entity that shows therapeutic promise. Human testing gives the most accurate information about safety, efficacy, and dosage. It is also slow and costly. Under present procedures, the average drug spends five years in human testing, roughly half of the total drug development and approval time. For reasons that have more to do with the economics of drug development than with regulatory requirements, clinical testing proceeds in three (sometimes overlapping) phases, each involving more subjects and lengthier studies. Only if the initial small-scale results show promise does testing proceed to more elaborate studies.

Phase I studies typically involve 10 to 50 healthy subjects. Designed to assess toxicity and to determine how a drug is metabolized in the human body, these studies generally are completed in a few months. Of the 174 (approved) INDs in FDA's study, 71 percent successfully completed phase I trials.

In phase II clinical studies, a drug is administered for the first time to patients with the condition it is intended to treat. These studies generally are randomized controlled trials involving 50 to 200 subjects and take from several months to two years to complete. These small-scale and relatively inexpensive trials are the real crucible of the drug testing process. In the FDA study the number of drugs that completed phase II was less than half of those that started it, and only 32 percent of those that were approved for human testing. Economic concerns were the single most frequent reason for abandoning research during phase II, although safety and efficacy problems were almost as prominent.

If phase II studies are the unkindest cut, phase III trials are a monument to the pursuit of perfection. These trials constitute the most costly and time-consuming step in the tortuous journey from test tube to market. Phase III trials typically involve 200 to 1,000 patients, although some studies require several thousand. The studies may be completed in as little as one year, but it is not unusual for them to stretch over three or four years. The FDA study found that decisions to abandon research during phase III trials were infrequent, amounting to 14 percent of the products that initiated the trials.

With the completion of phase III trials, New Drug Applications (NDAs) were submitted for 28 percent of the 174 drugs studied; final FDA approval was granted for 20 percent. (In some cases, the study imputed final outcomes for drugs still in the process.)

The significance of phase II studies is striking: more than 90 percent of all decisions to abandon drug development occur before the end of phase II. And the results of phase II studies provide a good indication of the likelihood of final FDA approval. Of the products that complete phase II, 85 percent will also complete phase III, and 63 percent will eventually receive FDA approval. Yet by the time the typical drug completes phase II, it still has not reached the mid-point of the development and approval process. The first two phases raise the probability of success from an initial 20 percent to 63 percent; completion of the remainder of the ordeal raises the probability of eventual FDA approval to only 73 percent.
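The probabilities quoted above hang together as a chain of conditionals, which can be checked directly (a quick consistency sketch using the study's percentages; the variable names are mine):

```python
# Milestone shares from the FDA study of 174 INDs, as quoted in the text.
p_complete_phase2 = 0.32     # share of INDs completing phase II
p_phase3_given_2 = 0.85      # of those, share that also complete phase III
p_approved_given_2 = 0.63    # of those, share eventually receiving approval

# Overall approval rate implied by the phase II conditionals:
p_approved = p_complete_phase2 * p_approved_given_2
print(f"{p_approved:.0%}")   # 20%: matches the overall approval figure

# Approval odds conditional on also completing phase III:
p_approved_given_3 = p_approved_given_2 / p_phase3_given_2
print(f"{p_approved_given_3:.0%}")  # 74%: the quoted 73%, within rounding
```

The small step from 63 percent to roughly 73 percent is the "meager productivity" of phase III discussed below.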

Based on the results of the FDA study, the phase II clinical trials can be used to predict both safety and efficacy with considerable accuracy. Of course, the phase III large-scale clinical trials do add information. They enable slightly more refined estimates of the risk-benefit trade-offs that are inherent in nearly any drug. Larger samples increase the chances of identifying relatively infrequent side effects and provide an opportunity to test alternative dosages. Much of the information generated in phase III trials ultimately forms the basis for recommendations on the drug's use for incorporation in professional labeling. The critical question about these studies is not whether they are worth conducting, but whether they are worth a one to four year delay in drug availability.

Given the meager productivity of phase III, it seems apparent that such elaborate pre-market testing is counterproductive for drugs intended to treat serious and life-threatening conditions for which there are no adequate alternative therapies. Patients with such conditions simply do not have the time to wait for the results of further testing. Even a cautious patient with AIDS, for example, would presumably be delighted to take a drug that had odds of 2 to 1 for ultimate approval. Indeed, many are taking drugs that are far less likely to receive FDA sanction.

It is difficult to say how much the FDA's new procedures will improve the drug approval process. The interim rule states that for drugs intended to treat life-threatening and severely debilitating illnesses, phase III testing will sometimes be waived. A new requirement has been added, however: the sponsor must consult in advance with FDA staff on the design of phase II studies. This suggests that the FDA will be seeking improvements in the phase II studies: larger sample size, multiple doses, multiple independent investigators, and so forth. It will be a classic bureaucratic response if the FDA prescribes regulation of phase II study design as the cure for problems that result from excessive regulation in the first place. The result may simply be to make phase III studies begin earlier. This is problematic. Because 68 percent of the products that enter human testing are abandoned before initiating phase III trials, resources expended on more elaborate trials in earlier phases may largely be wasted. In most cases, such a change would simply increase the cost of finding out what existing trials can already tell us.

Presumably, drug developers already have the proper incentives to balance the benefits of earlier results against the costs of larger studies. Regulatory attempts to encourage higher cost strategies will reduce the incentives to enter human trials and will inevitably reduce the flow of new drugs. The situation may be even worse since the price of earlier approval also seems to include a requirement for additional "phase IV" studies after a drug is marketed.

The first test of the FDA's intentions will be how it handles the 11 AIDS treatments now in phase III trials. If the FDA is willing and able to approve these and other drugs sooner and with less data, the new procedures will pay off-that is, if the courts let them.

Legislation may be needed to address the real problems in the drug approval process: Simply stated, the law requires excessive certainty that a drug will work before it is allowed on the market. Under the 1962 amendments to the Food, Drug, and Cosmetic Act, a manufacturer must demonstrate efficacy under a "substantial evidence" standard. Substantial evidence is not the relatively weak restraint familiar to all practitioners of administrative law. Rather, the statute defines substantial evidence as "adequate and well controlled investigations, including clinical investigations, by experts…, on the basis of which it could fairly and responsibly be concluded by such experts that the drug will have the effect" it claims. This language may leave the FDA little room to approve drugs that are probably effective, and little room to consider the needs of patients who have no adequate alternative drugs. Like many other examples of new product regulation, it ignores the possibility that the uncertain new risk may be far less than the known existing risk. For drugs intended to treat serious and life-threatening diseases without adequate alternative therapies, this makes no sense.

If we want greater certainty that such drugs actually help, the necessary studies can and should be conducted after the drugs are made available to patients-not before. Although the research community will object that such an approach would make it difficult to enlist patients in clinical trials, there is no evidence to support this fear. Indeed, the vast majority of human clinical trials now being conducted involve drugs that are already on the market. These drugs are legally available to anyone who can get a doctor's prescription, yet there is no indication that manufacturers have serious difficulty in enrolling patients in clinical trials.

The drug regulatory apparatus in the United States has developed as a political response to tragedy. In 1937 the drug "Elixir Sulfanilamide" killed more than 100 people in two months; in the aftermath Congress passed the Federal Food, Drug, and Cosmetic Act of 1938, which required manufacturers to prove to the FDA that a drug was safe prior to marketing. The Thalidomide tragedy in the 1950s caused a rash of birth defects in Europe and, although Thalidomide had not been approved for use in the United States, provided the political impetus for the 1962 amendments and the "substantial evidence" standard.

The emergence of AIDS as a major public health problem may catalyze a new, and welcome, transformation of the drug approval process. In the face of a growing number of cases of a disease that apparently is always fatal, regulatory delay to dot every "i" and cross every "t" seems increasingly intolerable. Victims of AIDS are the most visible victims of a process that bends over backwards to avoid mistaken approvals of drugs that might not work. Tragic as it is, the AIDS crisis of the 1980s offers a real opportunity for broad reform of the drug approval process.

The Iceberg's Tip

Budget expenditures are a poor measure of the burden regulatory agencies impose on the economy, but they have the advantage of being easily observed. In a study published by Washington University's Center for the Study of American Business, "1989 Federal Regulatory Budgets and Staffing: The Effects of the Reagan Presidency," Melinda Warren and Kenneth Chilton report that the performance of the Reagan administration in controlling spending by regulatory agencies has been mixed. As shown in the table below, between fiscal years 1980 and 1989 real spending declined by 10 to 20 percent in half of the 10 largest social regulatory agencies. The largest reduction, 90 percent, was achieved by the Economic Regulatory Administration of the Department of Energy. On the other hand, the Environmental Protection Agency, already the biggest spender in 1980, managed a 70 percent increase during the Reagan administration. As a result, total real expenditures increased by 13 percent. EPA now accounts for more than half the total expenditures among these 10 regulatory agencies.

Expenditures of the Regulatory Agencies
(Fiscal Years, Millions of 1982 Dollars)

Top 10 Social Regulatory Agencies
(estimated) 1988
(estimated) 1989
% Change 1980-1989
Environmental Protection Agency
Coast Guard
Nuclear Regulatory Commission
Food Safety Inspection Service
Food and Drug Administration
Federal Aviation Administration
Animal & Plant Health Inspection
Occupational Safety and Health
Office of Surface Mining Reclamation and Enforcement
Economic Regulatory Admin.

Conventional wisdom has it that economic regulation, unlike social regulation, did indeed wither during Reagan's tenure. This is accurate to a degree: the Civil Aeronautics Board disappeared and the Interstate agencies grew by 26 percent-twice the growth rate of the social agencies (although in absolute terms, the economic agencies are much smaller than their social brethren). The Patent and Trademark Office, after growing by 89 percent, is now the largest of the group. The Securities and Exchange Commission and the various banking regulators all grew by about 50 percent, while the antitrust agencies shrank by about a third.

Expenditures of the Regulatory Agencies
(Fiscal Years, Millions of 1982 Dollars)

Top 10 Economic Regulatory Agencies
(estimated) 1988
(estimated) 1989
% Change 1980-1989
Comptroller of the Currency
Federal Deposit Insurance Corp
Patent and Trademark Office
Federal Reserve Banks
Interstate Commerce
Federal Communications
Securities and Exchange
Federal Energy Regulatory
Federal Trade Commission
Department of Justice Antitrust
