Topic: Regulatory Studies

TV Broadcasters Should Have Same Rights As Everyone Else

Remember broadcast television? Amid the avalanche of new streaming services, DVRs, and Rokus, not to mention cable TV, some people may have forgotten—or, if they’re under 25, never known—that there are TV shows in the air that can be captured with an antenna. The Supreme Court certainly hasn’t forgotten, given that it maintains an outdated rule that broadcast TV gets less First Amendment protection than cable, video-on-demand, or almost anything else—a rule dating to the 1969 case of Red Lion Broadcasting Co. v. FCC.

That lower standard of protection comes from the belief that the broadcast-frequency spectrum is scarce, and thus that the Federal Communications Commission is properly charged with licensing the spectrum for the public “interest, convenience, and necessity.” But if newspapers or magazines were similarly licensed, the First Amendment violation would be obvious to all but the most hardened censor.

Hence the case of Minority Television Project v. FCC. Minority Television Project is an independent, noncommercial license-holding TV station in San Francisco. Unlike most noncommercial license holders, Minority TV receives no PBS money. Because it’s an over-the-air broadcaster, however, it must comply with the restrictions placed on the licenses by Congress and the FCC, including prohibitions on paid commercials and political ads. Minority TV challenged these restrictions as violating the First Amendment.

Applying Red Lion’s lower First Amendment standard, the district court, a panel of the U.S. Court of Appeals for the Ninth Circuit, and even the en banc Ninth Circuit (11 judges rather than the usual 3) all ruled against Minority TV. On petition for certiorari to the Supreme Court, Minority TV argues that Red Lion’s rationale for reducing broadcasters’ rights is outdated and should be overruled.

Cato has filed an amicus brief in support of Minority TV, agreeing that it’s time to give broadcast TV full First Amendment protection. Just as we argued in 2011’s FCC v. Fox Television Stations—where the Court chose to evade the question—it’s time to update our law to fit current realities. The way that people consume information and entertainment has changed dramatically since 1969. Rather than three broadcast networks, we have hundreds of channels of various kinds, and increasingly people are forgoing traditional TV altogether. The FCC can still license broadcasters—that system isn’t going away anytime soon regardless of the next mind-boggling innovation—but the conditions it places on those licenses have to satisfy strict First Amendment scrutiny, especially when they pertain to political speech.

The Supreme Court should take this case in order to update its treatment of broadcasters’ speech rights and require the government to offer a truly compelling justification any time it wants to restrict them.

Judge Rebukes Labor Department Over Shoddy Case

It seems every week or two another federal agency gets smacked down in court for trampling the rights of regulated parties in enforcement litigation. This week it’s the Labor Department’s turn:

The U.S. Department of Labor must pay more than $565,000 in attorney fees to an oilfield services company it accused of wage-and-hour violations totaling more than $6 million, a federal judge has ruled….

Officials, who opened their investigation in 2010, alleged the business [Texas-based Gate Guard Services, LLC] improperly classified 400 gate attendants as independent contractors.

The agency would have learned that the guards weren’t employees had it talked to more than just a few of them, [federal judge John] Rainey wrote in a 24-page order. Because the probe was not “substantially justified,” Gate Guard was entitled to recover its attorney fees, he said.

“The DOL failed to act in a reasonable manner both before and during the course of this litigation,” Rainey wrote.

Goaded by labor unions and other interested parties, the Obama Labor Department has made wage-and-hour law a big priority, with the President himself pushing the law into new ways of overriding private contractual choice. As for the overzealous enforcement, it’s coming to look less like inadvertence and more like systematic Administration policy. Last year we noted an Eleventh Circuit decision rebuffing as “absurd” a Labor Department claim of authority regarding the H-2B guest worker program. The pattern extends to agency after agency, from the EPA (ordered to pay a Louisiana plant manager $1.7 million on a malicious-prosecution claim, a theory that hardly ever succeeds for enforcement defendants), to white-collar enforcement, to a series of Justice Department prosecutions under the Foreign Corrupt Practices Act.

Probably the agency to suffer the most humiliating reversals is the Equal Employment Opportunity Commission, nominally independent but in fact reshaped in recent years into a hyperactive version of its already problematic self. You can read here about some of the beatings the EEOC has taken in court in recent years, including a case last summer in which a federal judge dismissed the commission’s lawsuit over a Maryland company’s use of criminal and credit background checks, using words like “laughable,” “unreliable,” and “mind-boggling.” And just last week, as reported in this space, the Sixth Circuit memorably slapped around the commission’s amateurish use of expert testimony in another credit-check case, this time against the Kaplan education firm. As I noted at Overlawyered:

The Sixth Circuit has actually been one of the EEOC’s better circuits in recent years. For example, it reversed a Michigan federal judge who in 2011 had awarded $2.6 million in attorneys’ fees to Cintas, the employee-uniform company, and reinstated the lawsuit. In doing so, the appellate panel nullified what had been the lower court’s findings of “egregious and unreasonable conduct” by the agency, including a “reckless sue first, ask questions later strategy.” The commission hailed the reversal as one of its big legal wins — although when one of your big boasts is getting $2.6 million in sanctions against you thrown out, it might be that you don’t have much to brag about….

If you wonder why the commission persists in its extreme aggressiveness anyway, one answer may be that the strategy works: most defendants settle, and the commission hauled in a record $372 million in settlements last year. 

Perhaps it is time for defendants to start settling less often.

Revisiting Central Clearing for Derivatives

The Dodd-Frank requirement that over-the-counter derivatives be centrally cleared is one of the (slightly) less controversial provisions of the Act, at least in spirit if perhaps not always in substance. But for some time a few observers, myself included, have worried that concentrating derivatives clearing activities in one or two single-purpose entities may increase, rather than reduce, the risk to the broader economy posed by the default of a counterparty.

As it turns out, we skeptics are not alone. In yesterday’s Wall Street Journal, the good folks at BlackRock are cited as having raised concerns, in a recent study, about the lack of clarity over where the risk ultimately falls in the event of a default by a large counterparty. Banks and investors want the clearinghouses themselves to backstop some of this risk. The BlackRock study notes that “post-crisis rules have forced a large swath of risky trades… and this risk needs to be addressed.”

It is perhaps, therefore, a good time to hark back to Craig Pirrong’s Cato Policy Analysis from 2010, released on the day the Act was signed into law. In it, Mr. Pirrong argues that central clearing leads to better and more efficient risk pricing only if the clearinghouse has perfect information. He notes that the risk sharing that occurs through the clearinghouse mechanism encourages excessive risk-taking, creating moral hazard. Pirrong also highlights that “if the clearinghouse has imprecise information, the margin levels it chooses will sometimes overly constrain the trading of its members and sometimes constrain them too little…all of these factors mean that it is costly for the clearinghouse to control moral hazard.” As Pirrong notes, a clearing mandate reduces market efficiency and poses “its own systemic risks in a world where information is costly.”

One of the major criticisms of the previous or “bilateral” approach to derivatives clearing was that banks and investors could not adequately monitor their own risk exposure to counterparties (with some side complaints about banks mispricing risk, etc.). However, as the BlackRock study notes, it is not clear that the central clearing approach addresses this concern, especially since the rules governing outcomes in the event of a major default have yet to be finalized. In particular, if a major counterparty defaults and the clearinghouse is not holding sufficient collateral to cover that counterparty’s trades, who loses out? Is it the members? The Federal Reserve? (Remember, one of the Board’s first actions under Dodd-Frank was to allow clearinghouses to borrow at the discount window in the same way that commercial banks do.) Will the clearinghouse perhaps declare bankruptcy (and, if so, what impact will the failure of a major utility have on operational stability)?

More importantly, just when counterparties have realized these products must be treated with caution, the system is incentivizing the market participants with the best information (the members) to pool their risks and thereby increase the riskiness of their activities. Derivatives are an important economic tool and vital to most companies’ (financial or otherwise) risk management. But we should not assume that the framework created by Dodd-Frank will eliminate risk, real or perceived, in the derivatives trade.

Why Did Western Nations Continue to Prosper in the 20th Century Even Though Fiscal Burdens Increased?

In the pre-World War I era, the fiscal burden of government was very modest in North America and Western Europe. Total government spending consumed only about 10 percent of economic output, most nations were free from the plague of the income tax, and the value-added tax hadn’t even been invented.

Today, by contrast, every major nation has an onerous income tax and the VAT is ubiquitous. Those punitive tax systems exist largely because—on average—the burden of government spending now consumes more than 40 percent of GDP.

[Chart: Historical size of government]

To be blunt, fiscal policy has moved dramatically in the wrong direction over the past 100-plus years. And thanks to demographic change and poorly designed entitlement programs, things are going to get much worse, according to projections from the Bank for International Settlements, the Organization for Economic Cooperation and Development, and the International Monetary Fund.

While those numbers, both past and future, are a bit depressing, they also present a challenge to advocates of small government. If taxes and spending are bad for growth, why did the United States (and other nations in the Western world) enjoy considerable prosperity all through the 20th century? I sometimes get asked that question after speeches or panel discussions on fiscal policy. In some cases, the person making the inquiry is genuinely curious. In other cases, it’s a leftist asking a “gotcha” question.

[Chart: Long-run GDP]

I’ve generally had two responses.

International Regulatory Conflict

My colleague Peter Van Doren posted here yesterday about a new National Highway Traffic Safety Administration (NHTSA) rule which mandates that “all cars and light trucks sold in the United States in 2018 have rearview cameras installed.” I’m going to leave the analysis of the domestic regulatory aspects of this issue to experts like Peter. I just wanted to comment briefly on some of the international aspects.

In particular, what if other governments decide to regulate in this area as well and they all do it differently?  That would mean significant costs for car makers, as they would have to tailor their cars to meet the requirements of different governments. Note that the U.S. regulation doesn’t just say, “cars must have a rear-view camera.”  Rather, it gets very detailed:

The final rule amends a current standard by expanding the area behind a vehicle that must be visible to the driver when the vehicle is shifted into reverse. That field of view must include a 10-foot by 20-foot zone directly behind the vehicle. The system used must meet other requirements as well, including the size of the image displayed for the driver. 

In contrast to a market solution, which provides flexibility as to what will be offered, the regulatory approach has very specific requirements.

As far as I have been able to find out, the United States is the first to regulate here, but others are likely to follow. When the EU or Japan turns to the issue, for example, will it develop regulations that are incompatible with the U.S. approach? Will there be a proliferation of conflicting regulations?

In theory, it’s easy to avoid these problems: smart regulators would recognize that their foreign counterparts’ regulations are equally effective and accept compliance with one as compliance with all. But in other areas of automobile regulation, we haven’t seen enough of this cooperation. The rear-view camera issue provides an opportunity for regulators from different countries to work together to avoid making regulation even more costly than it already is.

NHTSA’s Rearview Camera Mandate

Last week the National Highway Traffic Safety Administration (NHTSA) completed rulemaking that mandated that all cars and light trucks sold in the United States in 2018 have rearview cameras installed.

In 2008 Congress enacted legislation that mandated that NHTSA issue a rule to enhance rearview visibility for drivers by 2011. Normally, such a delay would be held up as an example of bureaucratic ineptitude and waste. But in this case, NHTSA was responding to its own analysis, which determined (p. 143) that driver error is the major determinant of the effectiveness of backup assist technologies, including cameras.

In addition, NHTSA concluded that the cost per life saved from installation of the cameras ranged from about 1.5 times to more than 3 times the $6.1 million value of a statistical life used by the Department of Transportation to evaluate the cost-effectiveness of its regulations. NHTSA waited until the possibility of intervention by the courts forced it to issue the rule. The problem in this case is Congress overreacting to rare events, not the agency.
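For readers who want the arithmetic spelled out, here is a back-of-the-envelope sketch. The 1.5x and 3x multiples and the $6.1 million value of a statistical life come from the figures cited above; everything else is illustrative and not the agency’s own model:

```python
# Back-of-the-envelope check of the cost-effectiveness figures above.
# The 1.5x / 3x multiples and the $6.1 million value of a statistical
# life (VSL) are as cited in the text; the rest is illustrative.
VSL = 6.1e6  # DOT value of a statistical life, in dollars

low_cost = 1.5 * VSL   # lower bound of estimated cost per life saved
high_cost = 3.0 * VSL  # upper bound ("more than 3 times")

print(f"Implied cost per life saved: ${low_cost:,.0f} to over ${high_cost:,.0f}")
```

In other words, by the agency’s own numbers the rule spends roughly $9 million to over $18 million per life saved, well above the $6.1 million benchmark it is measured against.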

For more on auto safety regulation, see Kevin McDonald’s piece in Regulation here.

FSOC’s Failing Grade?

All the recent hype over the legitimacy of high-frequency trading has overshadowed another significant event in financial regulation: in a speech in Washington, D.C., yesterday, Commissioner Luis Aguilar of the Securities and Exchange Commission (SEC) offered some fairly strong criticisms of recent actions by the Financial Stability Oversight Council (FSOC). The speech was significant because it is the first time that a Democratic commissioner has criticized the actions of one of the Dodd-Frank Act’s most controversial creations. (To date, only the Republican commissioners have criticized the FSOC, and we all know that Republicans don’t much like Dodd-Frank.) Indeed, Aguilar’s statements indicate just how fractured and fragmented the post-Dodd-Frank “systemic risk monitoring” system is.

At issue is the FSOC’s recent foray into the regulation of the mutual fund industry. Aguilar described the FSOC’s actions as “undercut(ting)” the SEC’s traditional authority and described a major report on asset management by the FSOC’s research arm, the Office of Financial Research, as “receiv(ing) near universal criticism.”

He went on to note that “the concerns voiced by commenters and lawmakers raise serious questions about whether the OFR’s report provides (an) adequate basis for the FSOC to designate asset managers as systemically important … and whether OFR is up to the tasks called for by its statutory mandate.”

For those of us who have been following this area for a while, the answer to the latter question is a resounding “no.” The FSOC claims legitimacy because the heads of all the major financial regulatory agencies are represented on its board. Yet it has long been clear that the FSOC staff has been mostly off on a frolic of its own.

Aguilar notes that the SEC staff has “no input or influence into” the FSOC or OFR processes and that the FSOC paid scant regard to the expertise or industry knowledge of the traditional regulators. Indeed, the preliminary actions of the FSOC in determining whether to “designate” mutual funds as “systemic” echo the Council’s actions in the lead-up to its designation of several insurance firms as “Systemically Important Financial Institutions” that are subject to special regulation and government protection. It should be remembered that the only member of the FSOC board to vote against the designation of insurance powerhouse Prudential as a “systemic nonbank financial company” was Roy Woodall, who is also the only board member with any insurance industry experience. And in the case of mutual funds and asset managers, the quality of the information informing the FSOC’s decisions—in the form of the widely ridiculed OFR study—is even weaker. The process Aguilar describes, in which regulatory agencies merely rubber-stamp decisions made by the FSOC staff, is untenable (in part because the FSOC staff itself has no depth of experience, financial or otherwise).

Aguilar’s comments could be viewed as the beginning of the regulatory turf war that was an inevitable outcome of Dodd-Frank’s overbroad and contradictory mandates to competing regulators. But the numerous and well-documented problems with the very concept of the FSOC mean that it is time for Congress to pay some attention to Aguilar’s comments and rein in the FSOC’s excessive powers.