Cato Policy Analysis No. 11 June 3, 1982

Policy Analysis

Property Rights In Radio Communication: The Key to the Reform of Telecommunications Regulation

by Milton Mueller

Milton Mueller is an associate policy analyst with the Cato Institute, currently working on a book on telecommunications policy.


Executive Summary

The Communications Act of 1934 subjected the telecommunications industry to a degree of central planning unprecedented in the United States. The recent trend toward deregulation reflects an almost universal belief that this experiment in central planning was a failure. Nevertheless, all attempts at reform, even those promulgated in the name of deregulation, have left the backbone of federal regulation untouched: centralized allocation of the frequency spectrum.

The Communications Act, like the Federal Radio Act that preceded it, claims the "airwaves" as the property of "the public," forbidding private ownership and market exchanges of radio frequencies. This claim of public ownership has given rise to a centralized system of licensing, which provides the legal and technical basis for many of the FCC's other rules and regulations.

The Federal Communications Commission is the successor to the Federal Radio Commission, which was created by the 1927 Radio Act to allocate frequencies after broadcasting technology emerged in the early 1920s. The FCC inherited the FRC's frequency allocation powers in 1934, when the other telecommunications services were added to its domain. Control of the frequency spectrum plays a surprisingly large, and insufficiently appreciated, role in the FCC's regulation of telecommunications in general. The burgeoning cable industry, for example, is highly dependent upon satellite and other relay services that use the spectrum for the distribution of its programming. The FCC's regulatory authority over cable was established because of cable's potential impact upon broadcasting, which the FCC was inclined to protect due to its control of the frequency spectrum. The new competition that has blessed the once-monopolistic field of telephony does not come from companies laying new lines or creating new local exchanges, but from new uses of the frequency spectrum.[1] The different telecommunication techniques -- satellites, coaxial cable, microwave, and traditional broadcasting and telephony -- are all part of an integrated network. Each technique is used in combination with the others. The FCC's absolute control over the allocation of one element of this network, the frequency spectrum, provides the foundation upon which the FCC can base its control of the rest of the industry.

During the debate over deregulation, however, the FCC's licensing power is generally taken for granted. Debate centers on which rules and regulations imposed through the licensing process should be eliminated or eased. But the fundamental issue underlying this debate is whether the frequency spectrum should continue to be treated as "public property" and centrally allocated by the FCC, or whether private, freely transferable rights in radio communication should be created and a full-fledged market system introduced.

This analysis finds that the creation of a market in radio communication through the definition of freely transferable rights is desirable for two reasons. First, it would introduce a price system into the process of frequency allocation. The incentives and signals created by market prices would lead to more efficient rationing and to conservation of this scarce economic good. Efficient rationing will become increasingly important as the new services made possible by the new technology enter the market. Equally important, definition of property rights would make open entry into radio communication services possible, thereby introducing more competition into the field. Central allocation of frequencies makes all entry dependent upon the approval of the FCC. Aside from the inherent delays and costs created by such administrative review of entry, it is clear that established firms often use the FCC's power over frequency allocation to shield themselves from competition. A system of property rights will introduce a fair, orderly, and swift procedure by which new competitors can enter any telecommunications market where new services are needed. This will allow the industry to adapt to changing conditions without the need for government direction. Introducing a price system and defining a flexible entry procedure will do much to bring order to the current regulatory chaos in telecommunications.

The first section of this report analyzes the relationship between property rights and deregulation, noting that deregulation is creating a de facto system of private property, yet one devoid of some of the most important benefits of a property system that permits free exchange.

The second section analyzes the nature of scarcity in radio communication, criticizing some of the common fallacies concerning spectrum scarcity and the electromagnetic spectrum's status as a "natural resource."

The third section is a critique of the present system of frequency allocation. It notes that the absence of a price system has created and will continue to create severe problems in spectrum management. A price system is shown to be possible only by the introduction of freely transferable rights; alternative economic techniques, such as auctions and lotteries, are criticized as inadequate.

The fourth section shows that a feasible system of freely transferable rights can be based on transmitter and receiver inputs. Such a system of property rights is already in use on a limited scale in the 4-6 Ghz band.

The fifth and final section notes that government regulations quickly become obsolete as the technology and economics of communications change. A system of private property or freely transferable rights, in contrast, would establish enduring rules that would protect the public's interest in justice and efficiency while allowing the industry to adapt to changing conditions.

1. Deregulation and Property Rights

The idea that private property rights can be created in radio communication is not new; it was first proposed by Leo Herzel and Ronald Coase of the University of Chicago in the 1950s.[2] Since then, discussion of the issue has been mostly confined to scholarly journals. High-level policy makers, such as the 1968 Presidential Task Force on Communications Policy, considered private property proposals, but the change seemed too radical at the time and never reached the nation's legislative agenda.

Recent trends in regulatory policy, however, make the idea seem not so radical anymore. For the past 10 years the FCC has been deregulating telecommunications at an accelerating pace, and every deregulatory initiative moves us closer to a system of private, freely transferable rights.

In September 1981 the FCC, under Chairman Mark Fowler, submitted a legislative package to the Congress that proposed sweeping changes in the Communications Act, including elimination of those portions of Section 315 that articulate the so-called Equal Time and Fairness doctrines. But the centerpiece of the proposal was an amendment that directed the Commission to rely on "marketplace forces" as the primary element of policymaking. In effect, the Fowler package would make deregulation a permanent part of the Communications Act. Since there is no "market" without the exchange of private property, the introduction of "marketplace forces" into the allocation of radio frequencies -- one of the primary factors of production in telecommunications -- would require the definition of freely transferable rights.

The issue of property rights is also implicit in the controversy over the Equal Time and Fairness doctrines. To broadcasters, Equal Time and Fairness are clear violations of the First Amendment because they give the federal government the power to intervene in their programming. To them, this power is as outrageous as a federal order to include a certain story on the front page of the New York Times in order to assure a disgruntled reader of "fair representation." Those who would deny First Amendment status to the electronic media do so on the grounds that the technology of radio communication justifies a different kind of regulation than that appropriate to the print media. The federal government is allowed to put restraints and conditions upon the licensee, the Supreme Court stated in Red Lion Broadcasting v. FCC, "because of the scarcity of frequencies."[3] The argument from scarcity pervades defenses of Equal Time, as it does all defenses of federal regulation of the frequency spectrum. The argument of Rep. John Dingell (D-Mich.), chairman of the House Committee on Energy and Commerce, is typical: Repeal of Equal Time would grant broadcasters "exclusive and highly profitable use of a scarce and valuable resource in perpetuity, without any accountability."[4] Dingell's argument was seconded by Rep. Timothy Wirth (D-Colo.), chairman of the House Subcommittee on Telecommunications. He believes that "spectrum space is limited and broadcasters are privileged to operate over this most precious resource."[5]

Clearly, the broadcasters' claim of First Amendment rights will not be perceived as legitimate unless they own the channels they use, and many legislators are hesitant to grant them outright ownership of what appears to be a "limited resource." In the print media, what makes the deregulated "marketplace of ideas" legitimate is, quite literally, the market -- a system of private property in which publications can be freely chosen from among a field of competitors. The public interest is protected not by government-enforced "fairness" rules, but by open entry -- anyone can publish and distribute printed matter if he can afford to. The contention is that there is no fixed limit on the number of publications that can be produced but that there is a natural limit on the number of stations that can operate in the frequency spectrum without interfering with each other. (This contention will be examined in detail in section 2.) Frequencies must therefore be rationed by the federal government, and the free market/First Amendment paradigm does not apply.

Equal Time and Fairness are the last bulwarks of the "public ownership" concept of frequency allocation; most of the others have already crumbled, or are in the process of crumbling. The recent extension of broadcasting licenses (television licenses were extended from three to five years, and radio licenses from three to seven years), and the FCC's removal of program content regulations, ascertainment requirements, and other restrictions on the operation of a radio station, make the award of a broadcasting license more and more like the grant of a property right. Indeed, the van Deerlin bill, introduced in a previous Congress, would have made licenses permanent, and some recent bills have retained this feature. One bill now before Congress would not permit competing applications for license renewal.[6] The National Telecommunications and Information Administration (NTIA), which sets telecommunications policy for the executive branch, has argued that the FCC should allow Direct Broadcast Satellite (DBS) licensees to sublease their licenses.[7]

If licenses to use radio frequencies become permanent, if they can be leased or otherwise exchanged, if the users of those frequencies have fewer and fewer restrictions placed upon them, then we are bordering on a system of private, freely transferable rights in radio communication. Nevertheless, "public ownership" and central allocation authority remain in the Communications Act, and no coherent legislative alternative has been formulated. The result is an uneasy combination of two radically different regulatory concepts. If regulatory policy is not to oscillate arbitrarily between these two approaches, Congress will have to make a choice between them.

If Congress fails to make a decision and write it into the Communications Act, we could end up with the worst of both worlds. The FCC's regulation is double-edged; it restricts what licensees can do, as the industry is quick to complain, but it also protects licensees from competition. The FCC's frequency allocation and assignment criteria, as we shall see in greater detail later, limit the number of available channels and do not allow the users of these channels to subdivide and reconstitute them to make more available to new entrants. Within such a politico-economic framework, removing the restrictions on licensees without removing the protection afforded by the FCC's control of frequency allocation continues to give established firms a powerful advantage over new competitors.

An example is provided by the FCC's about-face on its own proposal to reduce the AM broadcasting channel width from 10 khz to 9 khz. The bandwidth of AM radio has been 10 khz since 1928. Radio technology has obviously advanced since then, making it feasible to reduce channel spacing to 9 khz. Many other countries already use 9 khz spacing. By reducing spacing the FCC would create room on the frequency spectrum for hundreds of new stations. Under Carter-appointed chairman Charles Ferris, the FCC advocated just such a change and sold the entire Western region of the International Telecommunication Union on the change as well. But the new administration reversed this position. The primary impetus for keeping 10 khz, not surprisingly, came from the National Association of Broadcasters and the National Radio Broadcasters Association, organizations that represent incumbents in the industry. Had 9 khz passed, their members would have been faced with new competition for scarce advertising revenues, and the technical changes required by the shift would have imposed some costs on them. Thus, while the FCC is rapidly removing many of the restrictions on existing broadcasters, efforts to subject them to new competition are often stalled because of the system of centralized frequency allocation. The primary rationale for deregulation is that the discipline imposed by market forces serves the public interest better than direct oversight. But by failing to define freely transferable rights in radio frequencies, the government is exempting radio communication firms from one of the most important kinds of market discipline: open entry.
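A rough back-of-the-envelope sketch, written here in Python, shows where the room for new stations comes from. The band width of roughly 1,070 khz corresponds to the AM allocation cited later in this report; the channel-counting arithmetic is purely illustrative and is not the FCC's or the ITU's methodology.

    # Illustrative arithmetic only: how many AM channel slots fit in the
    # broadcast band at 10 khz versus 9 khz spacing.
    AM_BAND_WIDTH_KHZ = 1070   # roughly 535 khz to about 1605 khz

    def channel_slots(spacing_khz):
        """Channel center frequencies that fit at a uniform spacing."""
        return AM_BAND_WIDTH_KHZ // spacing_khz

    for spacing in (10, 9):
        print(f"{spacing} khz spacing: about {channel_slots(spacing)} channels")

    # 10 khz spacing: about 107 channels
    # 9 khz spacing:  about 118 channels
    # Each additional channel can be reused in many geographic markets,
    # which is how a dozen extra slots becomes room for hundreds of stations.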

2. Scarcity in Radio Communication

Property rights in radio communication cannot be defined, or even discussed intelligently, unless we start with a proper understanding of what the electromagnetic spectrum is and how it is used. That the spectrum is "scarce" in some sense, and that this scarcity creates the need for some kind of rationing, is obvious to everyone. Nevertheless, while the term "scarcity" is invoked often in discussions of radio communication and telecommunications policy, there is still a great deal of confusion about what the term means when applied to radio communication.[8]

It is common to hear the spectrum referred to as a "natural resource." This is true not only of politicians like Dingell and Wirth, who believe that this "precious resource" ought to be owned by "the public," but also of many economists and engineers who tend to support private ownership and deregulation. The former chief of the FCC's Office of Plans and Policy and the FCC's current chief scientist, testifying before Congress in 1979, described the spectrum as "part of a subset of natural resources, namely those resources that do not conform to legal or geographic boundaries." Their testimony went on to compare the rationing of scarcity in the spectrum to that of other nonconforming resources such as fish, oil, and water.[9]

This characterization of the electromagnetic spectrum is fallacious and misleading. The spectrum is not a "natural resource"; it does not even exist independently of specific transmitters and receivers. The economist's and politician's treatment of the spectrum as a resource is strangely reminiscent of the 19th-century belief in the existence of an "ether" -- an invisible, incorporeal medium through which radio waves pass. But physicists since Steinmetz and Einstein have discarded the notion of an ether; perhaps it is time policy makers caught up with them.

Electromagnetic energy consists of oscillating electric and magnetic fields which traverse space at the speed of light. The term "frequency" refers to the rate of oscillation and is denominated in units of cycles per second or Hertz (abbreviated hz). The frequency spectrum is the scale of frequencies from 0 hz at the bottom to cosmic rays, with a frequency of 10^25 hz, at the top. What is commonly called the radio frequency spectrum is simply our term for the range of frequencies suited for telecommunication, and runs from 10 khz to 300,000 Mhz.

Radio communication takes place when a transmitter and a receiver resonate on the same frequency. The phenomenon of resonance can be observed by setting two tuning forks of the same pitch near each other. Strike one, and the other will begin to vibrate. Radio communication uses this kind of energy transfer to move information from one point in space to another, but with radio the interaction is electromagnetic and does not involve vibrations in the air. Information is encoded by the transmitter as a set of variations on the frequency oscillation pattern. These variations are called the modulation pattern. The frequency oscillation pattern provides a consistent frame of reference (just as a wire connection does) against which a modulation pattern can be interpreted. Thus, a receiver tuned to the same frequency as a transmitter can decode the modulation pattern and reproduce the information.
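For readers who find the tuning-fork analogy too abstract, the short Python sketch below illustrates the same idea numerically: a transmitter imposes a slowly varying modulation pattern on a carrier frequency, and a receiver that knows only the carrier frequency can strip the carrier away and recover the pattern. The carrier frequency, sample rate, and amplitude encoding are arbitrary choices made for this illustration, not a description of any actual broadcast standard.

    import math

    CARRIER_HZ = 1_000.0
    SAMPLE_RATE = 50_000.0
    SAMPLES_PER_SYMBOL = 100          # 2 ms, i.e. exactly two carrier cycles

    def transmit(symbols):
        """Impose a slowly varying modulation pattern on the carrier."""
        out = []
        for k, value in enumerate(symbols):
            for i in range(SAMPLES_PER_SYMBOL):
                t = (k * SAMPLES_PER_SYMBOL + i) / SAMPLE_RATE
                out.append((1.0 + 0.5 * value) * math.cos(2 * math.pi * CARRIER_HZ * t))
        return out

    def receive(samples):
        """Multiply by a local copy of the carrier and average over each
        symbol period; this strips the carrier and leaves the modulation."""
        recovered = []
        for k in range(len(samples) // SAMPLES_PER_SYMBOL):
            total = 0.0
            for i in range(SAMPLES_PER_SYMBOL):
                n = k * SAMPLES_PER_SYMBOL + i
                total += samples[n] * math.cos(2 * math.pi * CARRIER_HZ * n / SAMPLE_RATE)
            avg = total / SAMPLES_PER_SYMBOL      # ~ (1 + 0.5 * value) / 2
            recovered.append(round((2.0 * avg - 1.0) / 0.5, 3))
        return recovered

    message = [0.8, -0.4, 0.0, 1.0]
    print(receive(transmit(message)))   # approximately [0.8, -0.4, 0.0, 1.0]

A receiver multiplying by a sufficiently different carrier frequency would average the product away to essentially nothing, which is the sense in which the shared frequency acts as the frame of reference for the connection.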

There is no "spectrum," then; there are only transmitters and receivers of electromagnetic energy. Electromagnetic energy can be generated by a variety of sources: a radio transmitter, the sun, the galaxies, neon lights, automobile ignition systems. We measure this energy by frequency and arrange the frequencies in consecutive order on a map we call "the electromagnetic spectrum." The resulting classificatory schema makes it easier for us to understand the behavior of electromagnetic transmitters and receivers. But the spectrum, the arrangement, is our own creation. No Platonic entity or "invisible resource"[10] exists independently of a specific transmitter at a specific location. Conversely, no knowledge of a transmission can be gained without setting up a specific receiver in a specific location.

As the frequency spectrum is neither natural nor a resource, it is not surprising that scarcity in radio communication is not caused by physical depletion, as is scarcity in natural resources. A radio station does not consume the frequency on which it broadcasts; a microwave tower does not deplete our stock of spectrum.

Instead of approaching the spectrum as a resource that is somehow "used" by transmitters, it is best to think of scarcity in radio communication in terms of what radio engineers call "electromagnetic compatibility." As the name implies, compatibility means that the operation of one radio transmitter does not interfere with the reception of other transmitters; i.e., their operation is compatible.

It is the phenomenon of interference that gives rise to scarcity in radio communication. This is not, however, as simple as it sounds, for the use of the same frequency does not necessarily result in harmful interference. In order to understand how interference creates scarcity, let us imagine that receivers R1 - Rn are all within a 100-mile radius of transmitter T1 and all are tuned to T1's frequency. As long as they are tuned to the same frequency, the modulation pattern that goes into T1 will be reproduced by the amplifiers of R1 - Rn. But if another transmitter, T2, adds another modulation pattern to the same frequency, it may interfere with the ability of some of the receivers within the set R1 - Rn to reproduce T1's signal. The level of interference is determined by measuring the relative strength of T1 and T2 in a specific location. The signal/interference ratio that results, T1/T2, will obviously be different for every receiver because of the difference in their proximity to the two transmitters (and, possibly, their different technological characteristics).

The closer the signal/interference ratio comes to 1:1 in any receiver, the worse the interference will be. If the ratio is high enough, the receiver can ignore or filter out the weaker signal. But as the ratio approaches 1:1 the receiver's ability to differentiate between the two modulation patterns breaks down, and the signal becomes unintelligible. The point is that signals do not interfere with each other in space, they interfere with each other in receivers. Radio communication is scarce because of the radio receiver's limited capacity to differentiate between modulations that are not separated by frequency, or by a large enough ratio in received strength.
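A small numerical sketch, assuming purely for illustration that received power falls off with the square of distance, makes the point concrete: the same pair of transmitters produces very different signal/interference ratios at different receiver locations.

    # Illustrative only: free-space (inverse-square) propagation is assumed,
    # and the power levels and distances are invented numbers.

    def received_power(tx_power_watts, distance_miles):
        return tx_power_watts / (distance_miles ** 2)

    def signal_to_interference(d1_miles, d2_miles, p1=50_000, p2=50_000):
        """Ratio of T1's signal to T2's signal at one receiver location."""
        return received_power(p1, d1_miles) / received_power(p2, d2_miles)

    # A receiver close to T1 and far from T2 sees a comfortable ratio; one
    # roughly midway between them sees a ratio near 1:1 and can no longer
    # separate the two modulation patterns.
    for d1, d2 in [(10, 90), (40, 60), (50, 50)]:
        ratio = signal_to_interference(d1, d2)
        print(f"{d1} mi from T1, {d2} mi from T2: T1/T2 = {ratio:.1f}")

    # 10 mi / 90 mi -> 81.0
    # 40 mi / 60 mi -> about 2.2
    # 50 mi / 50 mi -> 1.0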

Although discussions of electromagnetic compatibility can easily become intimidatingly technical, the principle behind it is actually quite simple: To make their communications compatible, radio transmitters must be separated in space and frequency[11] by enough of a margin to allow receivers to differentiate between their signals. Thus, the further away T2 is from T1 (all other things remaining equal), the fewer of T1's receivers it will interfere with. The lower the radiated power of T2 in relation to T1, the smaller its geographic range and therefore the fewer of T1's receivers it will interfere with. If T1 and T2 operate on the same frequency -- that is, if they oscillate at the same tempo -- they must be separated in space to avoid interference. If they transmit from the same location, they must be separated by frequency to avoid interference. Because energy transmitted on one frequency will also generate weaker signals on frequencies that are subharmonic (or constant multiples) of that frequency, transmitters that use subharmonic and adjacent frequencies must also be separated in space to some degree to avoid interference. If T1 and T2 use the same frequency but the polarization is different -- that is, if one oscillates vertically and the other horizontally -- then the transmission may be compatible.
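The separation rules just described can be summarized as a simple decision procedure, sketched below in Python. The numeric thresholds are placeholders chosen for illustration; in practice they depend on the service, the bandwidth, the power levels, the terrain, and the quality of the receivers involved.

    # Rough sketch of the compatibility logic described above; all
    # thresholds are illustrative placeholders, not engineering standards.
    CO_CHANNEL_MILES = 150        # same frequency: separate in space
    ADJACENT_CHANNEL_MILES = 60   # adjacent frequency: smaller spatial margin
    CHANNEL_SPACING_KHZ = 10      # same location: separate in frequency

    def compatible(freq_separation_khz, distance_miles, cross_polarized=False):
        """Can two transmitters operate without harmful interference?"""
        if freq_separation_khz >= 2 * CHANNEL_SPACING_KHZ:
            return True                                  # well separated in frequency
        if freq_separation_khz >= CHANNEL_SPACING_KHZ:   # adjacent channels
            return distance_miles >= ADJACENT_CHANNEL_MILES
        # Co-channel case: spatial separation is required, unless opposite
        # polarization supplies an extra margin of isolation.
        required = CO_CHANNEL_MILES / 2 if cross_polarized else CO_CHANNEL_MILES
        return distance_miles >= required

    print(compatible(0, 200))                        # True: co-channel, far apart
    print(compatible(0, 40))                         # False: co-channel, too close
    print(compatible(10, 80))                        # True: adjacent channel
    print(compatible(0, 90, cross_polarized=True))   # True under this sketch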

To summarize, by resonating on the same frequency, electromagnetic transmitters and receivers establish a connection or channel for communication, just as if a wire had been strung from one to the other. From here on I will refer to these connections as channels. Just as the same path in space cannot be occupied by two cables at the same time, so for any given radio receiver no channel can be occupied by more than one transmitter at the same time if coherent communication is to result. Stated differently, a receiver tuned to a specific frequency can decode the modulation pattern imposed on that frequency by only one transmitter at a time. This is why radio communication is scarce. The scarce economic goods allocated by the FCC are not portions of a "natural resource" or even frequencies per se, but are these channels, or the opportunity to make electromagnetic connections among specific transmitters and receivers. Scarcity is allocated by separating transmitters in space and frequency by the degree necessary to achieve electromagnetic compatibility.

Although channels are scarce it is a mistake to imply, as so many court decisions and politicians do, that the number available is rigidly fixed. The number of channels available can expand as the technology of radio communication improves and/or as the economics of telecommunication demands. The primary factor limiting the number of channels available is not technology or the "finite" limits of the spectrum, but the government's system of allocation. The absence of private property rights eliminates many of the incentives and opportunities to expand the number of channels and to make most efficient use of the channels that do exist.

As noted, electromagnetic compatibility is achieved by separating transmitters in space and frequency. As radio technology improves, it becomes possible to progressively reduce these separations without increasing interference, thereby making room for more and more channels within the same frequency and spatial dimensions. This can happen by reducing the bandwidth [12] of transmitters and receivers (9 khz is an example), and by reducing the spatial separations required of transmitters.[13] The number of channels available can also expand by developing transmitters and receivers capable of using frequencies (generally higher) that could not be used before.

The reductions in required separations made possible by advanced technology are quite dramatic. It is technically feasible to reduce the television bandwidth, set at 6 Mhz by the FCC, by a factor of 5, 10, or even 100.[14] Since the technology required to do so is expensive, such a reduction may not be economical. The FCC's Low-Power Television (LPTV) proposal permits hundreds of new television stations due to closer spacing and low power. The most dramatic example of closer spacing, however, is provided by AM radio. In 1926 one engineering study estimated that no more than 331 standard broadcast stations would fit into the AM frequencies without harmful interference.[15,16] In 1939, the FCC declared that the AM band was "saturated" with 764 stations.[17] Today, more than 4,000 stations fit into the AM frequencies. Much of this expansion was made possible by the use of directional antennas, which allow us to control a signal's propagation pattern.

The number of possible channels also expands by developing transmitters and receivers capable of using higher and higher frequencies. The range of technically usable frequencies has risen from a top limit of 1 Mhz in 1912 to over 40,000 Mhz currently.

While the number of channels available has expanded, the scope of expansion has been severely limited by the absence of a price system. If scarcity in radio were rationed by the price system rather than the government, the price of channels would rise as the demand for them increased. In this context, anyone who could find a technological means of creating new channels, or of making more efficient use of existing channels, would profit economically. In the land mobile services, for example, widespread congestion limited the profits of the manufacturers of mobile radio equipment. As long as all the mobile radio channels were full, the market for their product was limited. Thus the manufacturers -- unable to obtain additional allocations from the FCC and hence forced to confront the full burden of spectrum scarcity -- reduced the bandwidth of their equipment from 240 khz in 1940 to 15 khz or less today. In contrast, services such as broadcasting, in which the dimensions of channels are carefully controlled by the FCC, have not experienced any bandwidth reductions.

Rising prices, then, stimulate expansion or more intensive use of a good. In this respect, radio channels are no different from any other scarce good. As the price of land in congested urban areas rises, for example, its use becomes more intensive. Technology is used to create more of the good, as when a skyscraper creates hundreds of office units where before there were only a few.

A proper understanding of scarcity in radio communication also makes it clear that there is no significant difference between radio communication and print in this respect. Clearly, the physical resources that go into the production of the print media -- newsprint, presses, distribution trucks, and so on -- are scarce, and hence their owners charge a price for them. It is often asserted that print differs from the electronic media in that virtually anyone can prepare a written or printed message. While true, this ignores the fact that once a message is printed it must be physically transported from the publisher to the readers to have any effect. Thus, printed communications are as dependent upon channels of transportation as radio communications are dependent upon electromagnetic channels. Both kinds of channels are scarce, and for exactly the same reasons: It is costly to build a road or fly an airplane from point A to point B and costly to send a bundle of newspapers on that road or plane, just as it is costly to establish a radio channel and transmit information electronically. Scarcity as reflected in the price of transportation excludes some people from access to printed communications just as scarcity in radio channels excludes some transmitters in favor of others. Indeed, the rising price of energy and other raw materials makes the distribution of printed matter more expensive and less accessible to the public than telecommunication, the price of which is rapidly falling. Increases in first- and second-class postage rates, for example, have threatened the economic viability of many marginal publications.

Nevertheless, the argument runs, if no commercial alternatives are available for printed matter, an individual can always hand out mimeographed pamphlets on a street corner, whereas there is (allegedly) no room for backyard broadcasters. This argument does not stand up to analysis, and not only for the obvious reason that mimeographed copies must be bought. Radio communication, in the form of broadcasting or satellite, covers a much larger geographic region and reaches a larger audience than an individual handing out pamphlets on a street corner or mailing copies to his friends. To cover the same-sized audience as a broadcaster, our pamphleteer would have to command enough resources to print up millions of copies and pay hundreds of thousands of dollars in postage or other shipping charges. This kind of distribution is not accessible to "anyone" any more than mass broadcasting is. To make an honest comparison of telecommunication and print in this respect, we must base the comparison on each medium's coverage of a geographic region of equal size. Once we limit our comparison to these cases, we find that distribution of a message by electronic means is easier and less expensive than print. A bullhorn or any other P.A. system will cover a street corner as well as the distribution of pamphlets, and an individual can distribute information among his friends by means of CB radio, ham radio, or telephone as easily as he can by sending them a manuscript in the mail.

There is nothing mysterious or exceptional about the nature of scarcity in radio communication. The idea that the scarcity of channels somehow justifies federal intervention in radio, while the scarcity of newsprint and transportation channels does not justify federal regulation of print, is simply an atavism. It may have been understandable at the dawn of radio in the 1920s, when our regulatory system was formed, but there is no excuse for it now -- especially now that newspapers are relying on telecommunication to an increasing degree.[18]

Despite the increasing price of transportation and the falling price of telecommunication, there are thousands of nationally distributed publications reflecting a great range of opinion and subject matter. Information that is telecommunicated cannot approach this diversity and scope. Incredibly, this fact is often cited by critics of deregulation as evidence of the need for continued regulation.[19] Apparently they are unaware of the fact that the print media are not regulated by the FCC but by the price system, that is, by the exchange of private property in the market. The flexibility and diversity in print that has arisen from this arrangement is not an argument for regulating the electronic media differently; it is a powerful demonstration that free market and First Amendment concepts should be extended to the telecommunications industry immediately.

3. Centralized Frequency Allocations: A Critique

The debate over property in the "airwaves" is frequently muddied by dichotomizing the alternatives of "public" and "private" property. Public property is a euphonious term that implies that all of us acquire control over the airwaves, while private property sounds selfish. But public property is a meaningless term. Ownership means that the owner has some control over the good in question. If a resource is scarce it cannot be controlled by everyone equally, no matter what form of regulation we adopt.

If by public ownership we denote something like the national parks, which anyone may enter and enjoy at his own convenience, then the airwaves were owned by "the public" from 1920 - 1926, when anyone who applied to the Department of Commerce could acquire a broadcasting license. This kind of public ownership quickly led to chaos, as more than 700 stations crowded into the two frequencies set aside for that purpose by the Department of Commerce.[20] In 1927 Congress responded to this crisis by passing the Radio Act, and the Federal Radio Commission it created promptly threw about 15% of these stations off the air and ceased to issue licenses for several years. The airwaves have never been "public property" since. The chaos of 1926 demonstrated the need for -- indeed, the inevitability of -- some kind of rationing, some way of excluding some members of the public from the use of radio frequencies.

The choice we face, then, is not between a system of public ownership and a system of private ownership, but a choice between two different kinds of private ownership. In one case, a monopoly over a channel is granted to a private licensee by the federal government, which retains some -- but increasingly less -- power over how the channel is used. The federal government rations the scarce good by defining a property structure through administrative procedures that we will explore below. In a "private" property system, the difference is that license rights could not be withdrawn by the FCC, trading of these rights would be allowed, and hence a price system would replace administrative rationing.

A. The Present System

The FCC controls the use of the frequency spectrum by specifying what one can do with a radio transmitter. The rights granted by the FCC license specify antenna height and location, power level, operating hours, and other technical standards. These specifications, which we will refer to as the inputs of the transmitter throughout this report, determine the range of a radio signal insofar as its range can be socially controlled. The FCC arrives at these input specifications by means of two processes: allocation and assignment.
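Viewed as data, the inputs specified by a license amount to a structured record along the following lines. The sketch is purely illustrative: the field names and the sample values are assumptions made for this example, and an actual license specifies many more parameters.

    from dataclasses import dataclass

    @dataclass
    class TransmitterInputs:
        frequency_khz: float        # assigned carrier frequency
        bandwidth_khz: float        # emission bandwidth
        power_watts: float          # authorized radiated power
        antenna_height_ft: float    # antenna height above average terrain
        latitude: float             # transmitter site
        longitude: float
        operating_hours: str        # e.g. "unlimited" or "daytime only"

    # A hypothetical AM station's input specifications:
    example = TransmitterInputs(
        frequency_khz=1010, bandwidth_khz=10, power_watts=50_000,
        antenna_height_ft=400, latitude=40.7, longitude=-74.0,
        operating_hours="unlimited")
    print(example)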

The allocation [21] process sets aside a certain block of consecutive frequencies for the use of a specific communications service.[22] The block allocated to AM broadcasting, for example, extends from 535 khz to 1604 khz. With the exception of AM broadcasting, which developed before any regulatory apparatus existed, allocation precedes commercial development of a service. Sometimes different services share the same block, but for the most part allocation restricts each service to a separate range of frequencies. An official record of the FCC's division of the spectrum is contained in the "Table of Frequency Allocations," Section 2.106 of the FCC Rules and Regulations.

Once a block of frequencies has been allocated to a particular service, the assignment process determines the bandwidth and geographic range of the particular channels within the block. The FCC sets standards governing the bandwidth of the service and the distance each transmitter must be separated from the other transmitters on the same, adjacent, and harmonic frequencies. The co-channel separation for UHF television, for example, is between 150 - 170 miles. In broadcasting, assignment procedures attempt to define geographic zones, called "signal contours" or "service areas," within which a station is protected from interference.

Assignment procedures vary by service. FM broadcasting and VHF and UHF television are assigned according to a pre-engineered assignment table. These tables prearrange all the input relationships among transmitters, defining a fixed number of channels. These channels are then handed out to private users in the licensing process. The television assignment table was adopted in 1952, the FM radio table in 1963. Over 2,000 channels were made available by the TV assignment tables. However, most of them were not located in markets capable of sustaining a station, while the few that were located in desirable urban markets have long been occupied. Thus, as of 1980 there were just over 1,000 operating UHF and VHF television stations.
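Conceptually, a pre-engineered assignment table is produced by applying the separation criteria to a list of communities in advance, yielding a fixed set of community-channel pairs that are then handed out through licensing. The Python sketch below illustrates the idea; the coordinates, the greedy procedure, and the two-channel example are invented for illustration and do not reproduce the FCC's actual 1952 method. The mileage figure falls within the UHF co-channel separation mentioned above.

    import math

    CO_CHANNEL_MILES = 155   # within the 150 - 170 mile UHF figure cited above

    def miles_between(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def build_table(communities, channels):
        """Give each community every channel it can use without violating
        the co-channel separation against assignments already in the table."""
        table = []                       # list of (community, channel) pairs
        locations = dict(communities)
        for name, site in communities:
            for ch in channels:
                conflict = any(
                    ch == other_ch and
                    miles_between(site, locations[other]) < CO_CHANNEL_MILES
                    for other, other_ch in table)
                if not conflict:
                    table.append((name, ch))
        return table

    cities = [("Alpha", (0, 0)), ("Beta", (100, 0)), ("Gamma", (250, 0))]
    print(build_table(cities, channels=[14, 15]))
    # [('Alpha', 14), ('Alpha', 15), ('Gamma', 14), ('Gamma', 15)]
    # Beta, only 100 miles from Alpha, gets nothing on these two channels:
    # the table fixes in advance how many channels each market can ever have.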

AM radio and the newer microwave-satellite services, in contrast, do not use assignment tables. They are assigned on a more ad hoc basis; new stations are worked in so as to avoid harmful interference with existing stations. Land mobile radio services are allocated channels, and the coordinator of a mobile service is allowed to fit as many individual users into a channel as he can. In effect, there are no assignment procedures for most mobile services.

A distinction is in order here. "Allocation" and "assignment" are the terms the FCC uses to describe what it is doing. Because its regulatory program is founded on the notion that the spectrum is a "resource," allocation and assignment are the terms it uses to describe the process by which portions of that "resource" are created and handed out to private users. It would be simpler, however, and more in accord with the analysis presented in section 2, to say that what the FCC really does is decide what technical standards the manufacturers of radio equipment must use and, once the equipment is manufactured, how far the transmitters must be placed from each other. The FCC's technical standards include, but are not limited to, bandwidth specifications. This distinction is important because it underscores the degree to which the FCC's role in "frequency allocation" gives it a major role in the design and production of radio equipment. The Radio Technical Planning Board established by the FCC in 1943, for example, set the technical standards for television that prevail to this day.

Allocation and assignment are based on engineering and legal criteria. But because they must be used to ration channels, a scarce good, the engineers, lawyers, and politicians who make the decisions must become full-time economists as well. By limiting the number of frequencies available to a service and by setting the bandwidth and geographic separations required of transmitters, the FCC sets an upper limit on the number of competitors who can enter a given radio market. The VHF assignment table, for example, starts from an allocated base of 12 channels. Once the transmitter separations are taken into account, the number of VHF channels in a given geographic region is reduced to a range extending from 7 in a few cases to as few as 3 in others. Likewise, if the FCC allocates 500 Mhz to DBS and limits the number of orbital slots DBS satellites can occupy, it limits the number of competitors who can enter that market.

By segregating services into distinct blocks of frequencies, the FCC puts itself in a position where it must judge the relative economic value to society of different services. If it makes too many frequencies available to land mobile services, there may not be enough for relay purposes or broadcasting. Allocations made to television had to be taken away from FM radio, and the frequency allocation planned for DBS will have to be taken away from established microwave services.

While the decisions made by the FCC are thus economic in nature, no price for channels or frequencies (or for closer geographic spacing) is ever charged. A small license fee is charged to cover the FCC's license processing costs, but this fee in no way purports to represent the actual value of the channel. All allocation and assignment decisions are made by administrative procedures and are likely to involve public hearings in accord with the Communications Act and the Administrative Procedure Act. "Trafficking" or trading of licenses or channels is forbidden. Changes in ownership require the approval of the FCC. The license cannot be subleased or its input specifications altered in any way in exchange for money. Unlike virtually every other commodity in the economy, then, radio channels do not become more expensive as the demand for them increases.

In sum, allocation and assignment define the property structure of radio communication -- they create channels and assign control of them to private users.[23] The FCC's power to define the dimensions of radio channels, their arrangement on the spectrum, and the kind of signal they can carry is equivalent to the power the federal government would have over the rest of society if it defined the size and shape of all land parcels and approved all land transactions. Because the property structure is defined exclusively by government, radio communication provides one of the purest examples of economic planning in the U.S.

Serious questions can be raised concerning whether that much power ought to be centralized in one agency of the federal government. This critique will focus on the narrower question of whether a centralized agency can exercise such power rationally. By abolishing private property and exchange in radio communication, the FCC also abolishes the possibility of attaching prices to channels. Without market prices, there is no way to continuously bring the dimensions and distribution of channels in line with the continuous changes in the economics and technology of telecommunication. Assuming the best intentions on the part of the regulators, the rigidity of central allocation limits the availability of telecommunication services and impedes the industry's adaptation to changing conditions.

B. The Need for Prices

One of the most important discoveries of modern economic science is the role of prices in facilitating rational allocation of scarce goods in a complex economic system. Prices are a medium for the transmission and reception of information no less than a satellite link or a cable network. Instead of electrical on-off patterns, the price system employs numerical variations in a common medium of exchange -- money. The information conveyed by money prices is information about the supply and demand for scarce goods. Prices provide objective information about the supply and demand for a commodity because they represent an exchange ratio, the quantity of money required to induce the owner of a good to give it up under certain conditions. Attaching a numerical value in this way to commodities and services that would otherwise be incomparable allows individual decision makers to directly compare the cost of alternative factors of production. In sum, money prices convey knowledge about the value of goods.

The deregulation of telecommunications began in the late '60s.[24] While technological change helped stimulate this trend, another important factor was a body of literature that has become known as "spectrum economics."[25] This literature applied economic analysis to radio communication for the first time, treating the electromagnetic spectrum as a scarce good. It accumulated an overwhelming body of evidence showing that government decision makers have no way of knowing the value of scarce frequencies, and hence administrative rationing is chaotic and inefficient. Indeed, the most damning evidence of the problems with central allocation comes from within the spectrum management bureaucracy.[26]

While the literature of spectrum economics is recent, it is really only an extension, within the microcosm of the frequency spectrum, of the debate over economic planning that began in the 1920s. In the early decades of the 20th century, about the same time as the commercial use of the frequency spectrum began, economic planning was a new and, some thought, "scientific" idea. The first systematic critique of planning was advanced by Ludwig von Mises, the Austrian economist.[27] Von Mises argued that rational economic calculation is impossible without prices that arise from actual market exchanges. Without some means of comparing physically non-comparable factors of production and of expressing those comparisons in precise units, Mises claimed, a central planner cannot know how to put productive resources to their most efficient use. Far from bringing order to the "chaos" of the free market, he predicted central planning would itself lead to chaos.

One of Mises' followers, Nobel laureate F. A. Hayek, noted in 1945 that the price system acted quite literally like a telecommunications medium, registering information about the relative scarcity of goods and distributing this information throughout the society.[28] Hayek also called attention to the fact that the information needed to plan an industry is never concentrated in a single place, but is widely dispersed as "bits of incomplete and frequently contradictory knowledge which separate individuals possess." Hayek concluded that this "Planner's Dilemma" cannot be solved by "first communicating all this knowledge to a central board which, after integrating all knowledge, issues its order. We must solve it by some form of decentralization."[29]

What is interesting about the history of the idea of planning is that Mises and Hayek in effect made a prediction about the impossibility of planning, and this prediction was tested by subsequent events. Mises' warning was ignored by political decision makers; central planners were given control over the allocation of scarce resources. In the U.S. telecommunications industry in particular, their power to define the dimensions of channels gave the planners near-absolute control over the property structure of radio communication. The frequency spectrum is an ideal test case because it was exempted from market forces so completely.

As predicted, no price system could develop for channels without market exchanges. The absence of a price system, as the spectrum economics literature attests, did indeed make economic calculation impossible. The ultimate result was chaos in frequency allocation. The federal government intervened in 1927 to bring order to the "chaos of the airwaves" that had developed in the absence of property rights. It succeeded in keeping chaos off the airwaves by establishing technical standards and by rigidly limiting the number of transmitters on the air. But the locus of the chaos merely shifted, into the corridors of the FCC itself.

The FCC has been unable to keep up with the pace of change in radio communication since the end of World War II. It took the FCC nearly 10 years to finalize allocation and assignment criteria for television. For four of those years, it had to impose a "freeze" on the licensing of stations. It was almost 30 years before the FCC was able to change those specifications with the LPTV proposal. It took the FCC three years to settle a dispute between FM radio and VHF television over the same frequencies, and it took 10 years to reallocate some frequencies from UHF television to mobile radio. Access to channels is thus constricted by a bureaucracy which frequently needs 10 years to make a major decision, and the result is a backlog of applicants that can only be described as chaotic. Eight to 10 applicants frequently apply for desirable television channels in urban markets, and the astronomical trading price of these stations attests to the value of the channel. Over 400 applications for the Multi-Point Distribution Service (MDS) have been received since 1978, over 100 of them mutually exclusive.

The degree to which the market is stifled by the FCC is indicated by what happens when it lifts some of its regulations and opens the door to newcomers, as it did with the LPTV proposal. The FCC was flooded with 6,000 applications. True to form, the FCC reacted by slapping a freeze on further applications because it is unable to process them all. FCC research suggests that if the freeze is lifted, another 8,000 applications might be received. Government and industry prognosticators estimate that the licensing logjam will not be broken up until mid-1983.[30] In all of these cases, the FCC must make what government spokesmen admit are mostly arbitrary selections among the competing applicants,[31] yet resolution of the competing claims can take several years and cost hundreds of thousands of dollars.[32] In some cases, the cost of obtaining an assignment exceeds the value of the license.

There is also a tremendous maldistribution of channels arising from central allocation. For the most part, allocation and assignment criteria are uniform throughout the country when it is obvious that the services which can be sustained differ radically from region to region. Frequencies set aside for forestry use, for example, were only recently made available for taxicab services in New York City. The possibilities for networking a signal created by microwave relays, satellites, cable, and the phone system mean that any broadcast station in any part of the country can aspire to a national audience. Conversely, any audience in any region, no matter how remote, could receive a range of channel choices as wide as the nationwide market could sustain. Until recently, the FCC's policy of localism, enforced through its allocation and assignment criteria, has confined broadcasting to a small area around the community where the transmitter is located.[33] Thus, instead of 50-100 national radio and television channels, the public has been limited to three or four local television stations, 10-15 AM and FM stations, and only three national networks.[34] It is worth noting in this respect that the most promising proposals for low-power television are precisely those that rely on satellite and translator networks to distribute a signal produced in a larger market throughout the country. Without these programming networks, few of the proposed local stations will be viable.[35]

The role of prices in facilitating value comparisons would suggest that without them, the FCC would have serious problems making an allocation when different radio communication services desire the same frequencies. If there is congestion within an allocation, the FCC can adopt stricter technical standards to squeeze in more channels. But when alternative, mutually exclusive uses for frequencies are proposed, we would expect the FCC to be totally at sea. This is exactly what we do find. Conflicts over service allocations have led to extreme confusion and delay.

To date, the FCC has resolved only two major conflicts between commercial services. The first involved FM radio and television. FM radio was developed prior to World War II as an improvement over AM broadcasting. In 1940 it was granted the VHF frequencies 40 - 52 Mhz. By 1943 there were more applications than available channels in New York and New England; by 1945, 55 stations and an estimated 400,000 receivers existed.[36] At the end of the war, both FM and the new television industry were ready for more frequencies, and the VHF range (roughly, 20 - 120 Mhz) was believed best suited to both. A long political battle ensued. The crossfire of testimony, technical reports, congressional hearings, and court challenges lasted three years, from 1944 - 1947. In the end, the FCC moved FM from 40 - 52 Mhz to its present slot, 88 - 108 Mhz, and awarded TV the frequencies 44 - 88 Mhz (Channels 1 - 6) and 174 - 216 Mhz (Channels 7 - 13). The FM industry was wiped out by this decision and did not recover until the late '60s. Consumer investment in receivers was rendered obsolete, and established stations were forced out of business.

The only other time the FCC was faced with a conflict between two services, its performance was even worse. In 1956 the FCC initiated allocation proceedings; one proceeding, Docket 11997, documented the need for more frequencies for land-mobile radio (LMR). Land-mobile advocates requested unused channels allocated to UHF television, but the docket was terminated without making a reallocation. A land-mobile advisory committee was created, though. Years later, after mounting pressure, the FCC authorized the use of UHF channels 70 - 83 in the 50 largest markets, and 115 Mhz was given to LMR from the government and broadcast studio-transmitter links. Reallocation of 200 Mhz thus took about 10 years. Some of the frequencies reallocated are still not being used,[37] and LMR services are still highly congested.[38]

The LMR-UHF reallocation graphically demonstrates how difficult it is to get a centralized, administered system to make the kinds of adjustments that a price system would induce automatically. Although the FCC has no idea how to cope with this problem, it is faced with the need for another reallocation at the moment,[39] and reallocation promises to be one of its most prevalent concerns in the future. According to an FCC report:

Proceedings involving reallocation of the spectrum can be expected to occur much more frequently in the future. This is particularly true in the most heavily populated portion of the spectrum between roughly 100 Mhz and 10 Ghz, which is the optimum range for most existing applications of radio. Bandwidth requirements preclude the use of lower frequencies for many of these applications, while the propagation characteristics of electromagnetic energy impose economic constraints at frequencies above this range.[40]

Reallocation forces the FCC to make a comparison between the economic value of the different communications services. Prices, as we have seen, provide an objective standard with which to make these comparisons. But prices can arise only if channels can be traded among private owners.

The need for precise comparisons of the economic value of alternative uses of the frequency spectrum is heightened by recent technological developments. Since the invention of the transistor in 1947 and the integrated circuit in the late 1950s, electronic communications and computer technology have become increasingly integrated. The refinement of electronic communications has made the various technologies -- broadcasting, cable, fiber optics, and satellites -- close technical substitutes for each other. Where the allocation plan of the 1940s and '50s assumed that over-the-air broadcasts were the only way to get a television signal to a home receiver, for example, the new technology provides a broad range of options. In addition to coaxial cable, there are MDS, DBS, and Subscription TV, none of which use the VHF and UHF frequencies allocated to TV by the FCC. And, of course, telephone companies are fully capable of providing cable service as well. Telephone service itself does not have to be confined to the traditional wire connections of the local operating company; it can also be provided by satellite networks, by land-based microwave networks, or by cable "television" franchises.

As a result, every communications service that uses radio frequencies is competing with every other for "room" on the spectrum. And every service that uses radio technology may use cable or fiber as a technical substitute. In short, every telecommunications service is competing with every other telecommunications service -- or rather, they could be, if our regulatory system would allow it. For at the heart of the present system of frequency allocation is the assumption that the different services and technologies of telecommunication are not integrated and do not compete with each other. Indeed, a central feature of any system of administrative planning is that the planners must be able to classify and categorize the subject of their decision making.

This prerogative of planning forces the FCC to draw a number of intricate and increasingly tangled lines through telecommunications and computer technology. Private mobile radio services, to use only one of many examples, may not be connected to the wire line telephone system. This restriction, in the words of one authority, "may not serve a purpose, other than to keep the two services legally distinguishable."[41]

It is becoming increasingly obvious that classification of telecommunications technologies in an era of microelectronic integration is pointless. "With the emergence of new multifunctional technologies such as microwave, fiber optics, and co-axial cable, classifying a service has become a difficult task," claimed the NTIA in a recent report to the FCC.[42] "The results are not always predictable, and...courts have not hesitated to overturn the Commission's service classifications." In some cases, the same service is declared by the courts to be two different things. In 1978 a court ruled that MDS services are "broadcasting" and overruled the Commission's ruling that MDS is a "common carrier."[43] A year later, another ruling held that MDS was not broadcasting and based its decision upon the technology used.[44] The FCC originally classified MDS as a common carrier in 1972, but that was before the FCC itself added enough frequencies to the bandwidth of an MDS channel to make it viable as a carrier of television signals. Since then, MDS has simply joined broadcasting, satellites, and cable as another kind of technology used to distribute a television signal. It is not significantly different from Subscription TV.

Any information -- text, TV signals, facsimile, data, radio, or voice -- can be carried by cable, satellite, fiber, radio, or by any combination of these elements. Why, then, does the FCC expend time, money, and effort attempting to fit integrated technologies into arbitrary pigeonholes? Because its control of the frequency spectrum, and its general determination to regulate communications, forces it to. There is no way a central planning authority can decide which channels to give to which applicants unless they are broken down by service. These arbitrary service classifications impose a pattern of technological and economic segregation upon an integrated industry.

Although segregation by service makes price comparisons difficult and in some cases impossible, the integration of telecommunications technology means that economic considerations should be paramount in the choice of a communications system. If the different technologies can all do much the same thing, the most important question is which can do it at the least cost to consumers.

The relationship between technical and economic considerations in the choice of a communications system is clarified by the graph in Figure 1, taken from the Mathtech study.[45] The graph shows a curve PQR representing all the technically efficient combinations of radio frequencies and non-radio technical inputs needed to produce a given level of radio service. Technical efficiency refers to producing the maximum output per unit of input. A television transmitter and receiver with a 1.5 MHz band, for example, may be more technically efficient than ones with a 6 MHz band, in that transmission and reception of the same signal require a smaller bandwidth. But technical efficiency must be differentiated from economic efficiency, which means that the desired output is produced at the lowest cost in terms of all the inputs used. The smaller bandwidth may require technical inputs that make the system too expensive and hence economically irrational. Every point on the curve PQR is technically efficient, in that the same level of radio service could not be reproduced with less of one input without using more of the other, that is, without moving northwest or southeast along the curve. The point is that technology provides us with a range of options, but does not tell us which one is socially efficient.

Fig. 1: Technically Possible Combinations of Spectrum and Other Inputs to Produce a Given Level of Radio Service. (Vertical axis: quantity of other inputs used, with a minimum level of use of other inputs marked; horizontal axis: quantity of spectrum used.) [Chart Omitted]

The vertical axis of the chart represents all those technical inputs that do not use radio channels, such as telephone wire, cable, or special attachments designed to reduce bandwidth. The horizontal axis represents the "quantity of spectrum"[46] used. The Mathtech authors point out that under the present system, all the technical inputs on the vertical axis command a price, but the occupation of channels does not.[47] As long as there is no market price for channels, they assert, the designer or user of a communications system has an incentive to locate his system nearer to point Q on the curve -- that is, to use radio channels as a cheaper substitute for those telecommunication techniques that are priced. This occupation of scarce channels does, however, displace other users. Those who are displaced are forced to use more expensive inputs or to forego their communications system altogether.
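The incentive described here can be made concrete with a small numerical sketch. The trade-off curve, the dollar figures, and the hypothetical price per kilohertz below are all invented for illustration and are not taken from the Mathtech study; the point is simply that an unpriced input is always the "cheapest" one to the system designer, while a positive price can make conservation of spectrum worthwhile.

```python
# Hypothetical trade-off curve: each point is a technically efficient way to
# deliver the same radio service, pairing a bandwidth (kHz) with the dollar
# cost of the non-spectrum inputs required at that bandwidth.  All figures
# are invented for illustration.
curve = [
    (20, 9_000_000),   # narrow channel, heavy spending on other inputs
    (40, 6_000_000),
    (60, 4_000_000),
    (80, 3_000_000),
    (100, 2_500_000),  # wide channel, minimal other inputs (near point Q)
]

def cheapest_design(spectrum_price_per_khz):
    """Return the point on the curve that minimizes the designer's total outlay."""
    return min(curve, key=lambda point: point[1] + spectrum_price_per_khz * point[0])

# With unpriced spectrum, the designer gravitates to the spectrum-intensive end.
print(cheapest_design(0))        # -> (100, 2500000)
# With a positive price per kHz, a narrower channel minimizes total cost.
print(cheapest_design(60_000))   # -> (60, 4000000)
```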

This chart is valuable for the way it demonstrates that radio communication exists on an economic and technological continuum with other telecommunication techniques. The horizontal axis represents the range of channel dimensions. Under the present system, the FCC's allocation policy determines where on the spectrum a channel will be located and how many there will be, while assignment criteria define the dimensions of each channel. These decisions are of necessity, in the absence of prices, rather arbitrary. In some cases the difficulty of obtaining an assignment drives users to cable or fiber alternatives, while in other cases outmoded allocation and assignment criteria give certain users free access to scarce and valuable channels. Rather than suggesting that the FCC create incentives to use more or less of the spectrum than an efficient market system would, we can conclude with William Meckling that "[FCC] spectrum management is so bizarre that none of us can even imagine what efficient utilization of the frequency spectrum would look like."[48]

A price system would make each user bid against all the alternative uses for a channel. He could not displace other users without paying the price necessary to do so. If his occupation of a channel of specific dimensions is more valuable to him than to the other potential users, then and only then will he retain it. If he can fulfill his communications needs by using alternatives to radio, then the direct price comparison between radio and non-radio alternatives will make it clear when it makes sense to do so. Because he must pay a price to displace other users, a price that increases as the demand for radio channels increases, he is automatically given an incentive to find that point on the curve PQR that minimizes the opportunity cost of using the frequencies. Thus, a price system ensures that an optimal mix of inputs will be selected, a mix that will accommodate as many users as possible at the lowest cost. It follows logically that the most efficient use of the spectrum can be attained only if every communications service and technology is bidding against every other for access. A user in one service not only displaces potential users in the same service, but also limits the number of frequencies available to other services. In other words, the present practice of allocating discrete blocks of frequencies to distinct services is inherently inefficient.[49] Allocation seals off each service into a distinct economic fiefdom, preventing any direct comparison of economic value. This is especially harmful now that services in different allocated bands, such as DBS and VHF and UHF television, as well as services that use radio channels for relay purposes, such as cable, are competing against each other.
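A second hypothetical sketch illustrates why sealing services into separate blocks prevents this comparison of value across services. The services, the channel counts, and the willingness-to-pay figures are all invented; the point is only that a fixed allocation table can leave a channel in the hands of a low-value use while a higher-value use in another service goes unserved.

```python
# Hypothetical willingness to pay (in dollars) for identical channels, grouped
# by the service classification each applicant happens to fall under.
applicants = {
    "broadcast": [900_000, 700_000, 100_000],
    "mobile":    [800_000, 600_000, 500_000],
}

def block_allocation(channels_per_service):
    """Give each service its own fixed block, as the present allocation table does."""
    winners = []
    for service, bids in applicants.items():
        winners += sorted(bids, reverse=True)[:channels_per_service[service]]
    return winners

def open_allocation(total_channels):
    """Let every applicant in every service bid for the same pool of channels."""
    all_bids = [bid for bids in applicants.values() for bid in bids]
    return sorted(all_bids, reverse=True)[:total_channels]

split = {"broadcast": 3, "mobile": 1}   # an allocation table fixed years ago
print(sum(block_allocation(split)))     # -> 2500000: a $100,000 use keeps a channel
print(sum(open_allocation(4)))          # -> 3000000: the $500,000 mobile use gets it instead
```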

C. Controlled Markets Are Not the Answer

The case for introducing a price system into frequency allocation is so overwhelming that one is led to wonder why it hasn't happened. The answer is that prices can only emerge from actual market exchanges, and therefore the introduction of a price system requires the definition and free exchange of property rights. Definition of such rights is resisted by Congress and the spectrum-management establishment for a number of reasons.

One of them is simply the government's traditional reluctance to relinquish control of things. But inertia is no justification for the present system. Another reason is the tradition of "public" ownership as it is written into the Communications Act. But "public" ownership, as we have seen, has always been something of a legal fiction, and the present deregulatory trends make it even more of one. A market property structure would allow open competition and create incentives to use radio channels more intensively. This would lead to a greater variety of telecommunication services at a lower price. By removing restrictions from broadcasters and other services without creating such a property structure, Congress is giving us every kind of deregulation except the kind that would benefit the public most.

By far the most important obstacle, however, is the lingering belief that we can have our cake and eat it, too. The spectrum-management establishment is more favorably disposed toward administered auctions, lotteries, user fees, and other "controlled market" techniques than freely transferable rights, for the obvious reason that these milder measures would preserve their control of the spectrum while making the existing system more rational. Most proposals for the reform of frequency allocation are confined to these halfway measures.

Once again, the modern controversy closely follows the debate over economic planning held decades ago. After von Mises had called attention to the impossibility of economic calculation under central planning, socialist intellectuals attempted to counter his argument by showing how administrative price-setting and trial and error techniques could simulate market prices. Contemporary spectrum managers and economists, however, show no evidence that they are aware of this debate, one of the most important in the history of economic thought.[50]

Interpretations of the debate vary, but to this writer the exchange clearly established that meaningful prices can only emerge from actual exchanges of property in the market. A central planner's disposal of a resource will cause him no direct gain or loss, nor does he directly use the resource or the money exchanged for it. Thus, a shadow price invented by a central planner tells us nothing about the value of the resource in alternative uses. As Hayek has stressed, market competition is a "discovery procedure."[51] Alternative uses and combinations of goods are tested by rivalrous entrepreneurs. These alternatives are "tested" in the strict, empirical sense in which scientific hypotheses are tested in experiments. Just as we cannot say that we know the outcome of an experiment until it has been actually conducted, so we cannot know what is the most efficient arrangement of resources unless private owners are free to exchange them in whatever way they see fit in an effort to find out what works best. Prices are the exchange-ratios that emerge from this discovery process. Unless a scarce good is controlled by owners who are free to buy, sell, subdivide, or reconstitute portions of it, its value in alternative uses can never be expressed by prices.

The difference between true prices and administratively set prices can be clarified by analysis of Figure 2. Figure 2 symbolizes a market exchange between two owners of radio communication systems.[52] The chart shows two hypothetical cost-spectrum trade-off curves for communication systems A and B. System A operates with 100 kHz bandwidth at a cost of $7 million, while System B operates with 60 kHz bandwidth at a cost of $6 million. Assuming that neither system's capacity nor quality would be jeopardized, Figure 2 reveals that an exchange of frequencies would be in the interest of both system owners. The owner of System B could pay the owner of System A $2 million for the right to expand his bandwidth by 20 kHz. This exchange would move B's system northwest along the curve, reducing the cost of his system by $3 million. After the trade, each system realizes savings of $1 million, while the total number of frequencies occupied by both systems together does not change. As Ewing points out, similar examples could be constructed to show how market exchanges could create greater communications capacity or use less bandwidth while remaining at a fixed total cost.

Fig. 2: Hypothetical Cost-Spectrum Trade-off Curves. [Chart Omitted]

This chart demonstrates several important features of prices that emerge from actual market exchanges. To begin with, the transaction establishes a precise correspondence between a specific act of technical adjustment in the two systems and a specific quantity of money. Economic and engineering considerations are automatically integrated; indeed, there is no way to separate the two, for the price of the extra 20 kHz actually determines the bandwidth of the two channels. An administratively determined price, in contrast, is established independently of any specific communications system and independently of any exchange of frequencies. Setting a price and adjusting the engineering specifications (e.g., channel or allocation size) are inherently separate and distinct acts. The administrator hopes that the price he sets will result in the kind of efficient redistribution of frequencies symbolized by Figure 2, but he cannot guarantee that it will, and in many cases it is likely that the market will be steered in a direction entirely different from the one he intended. Had an administrator decreed that 20 kHz was "worth" $4 million, for example, the exchange between A and B never could have taken place.
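The arithmetic behind this point can be laid out in a few lines. The $3 million figure is taken from the Figure 2 discussion; the $1 million increase in System A's cost is implied by the statement that each owner nets $1 million from a $2 million payment. A voluntary exchange occurs only at a price both owners accept, which is exactly what an administered price of $4 million rules out.

```python
# Figures from the Figure 2 discussion: transferring 20 kHz from System A to
# System B lowers B's cost by $3 million and, by implication, raises A's cost
# by $1 million.
B_COST_REDUCTION = 3_000_000   # what the extra 20 kHz is worth to B's owner
A_COST_INCREASE  = 1_000_000   # what giving up 20 kHz costs A's owner

def trade_occurs(price):
    """B pays `price` for the 20 kHz; the trade happens only if both owners come out ahead."""
    buyer_gain  = B_COST_REDUCTION - price   # B's net saving
    seller_gain = price - A_COST_INCREASE    # A's net saving
    return buyer_gain > 0 and seller_gain > 0

print(trade_occurs(2_000_000))  # True:  the market price in the text; each owner nets $1 million
print(trade_occurs(4_000_000))  # False: the hypothetical administered price blocks the exchange
```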

The exchange between A and B was predicated on a purely subjective factor: the judgment that no degradation of service quality would result. But frequencies cannot be plugged into or wrenched out of a communications system without affecting its performance under some conditions. Clearly, only the users of the systems can decide whether its integrity would remain intact after the exchange, and the judgment they make would depend entirely on their unique desires and purposes. A mobile radio service for delivery trucks needn't be as worried about signal quality as a high-definition television broadcaster. Also, in a free market the owners of both systems may be confronted with other opportunities for exchange that would make the symbolized exchange, though attractive in the abstract, undesirable in fact. A, for example, may be capable of splitting his channel into two separate systems of 50 khz each; selling one of them might bring in more revenue than B's offer.

In conclusion, 20 kHz is worth $2 million only given the specific cost-spectrum trade-off curves of Systems A and B, only after a subjective determination that the integrity of the two systems would not be harmed by the exchange, and only assuming there are no better alternatives facing the two owners. This underscores the sense in which true prices can emerge only from the economic choices of individual owners. Prices that emerge from actual exchanges are sensitive to unique conditions and spur systemic or general adaptation to them. Administrative prices do not emerge from concrete conditions; they are abstractions imposed from above. Thus, they are inherently incapable of duplicating the adaptive function of true prices. In effect, they are simply the same old guessing game in disguise.

Like administered prices, government-administered auctions and lotteries are often put forth as techniques that will introduce "economic" factors into spectrum management short of a true, private property-based market system. Channels with multiple applicants could be auctioned off to the highest bidder and the auction "price" used to adjust frequency allocations. Or the selection could be made in a frankly arbitrary manner, by lottery. Undoubtedly, both techniques would be less costly and time-consuming than comparative hearings. But while auctions and lotteries may make the distribution of channels more efficient from the perspective of the government, they do little to make frequency allocation more adaptable to the demands of the public.

By holding auctions for channels, the government is not introducing "economic techniques" into spectrum management, it is simply selling its monopoly privilege to the highest bidder. Auctions do not automatically adjust the dimensions of channels or the size of allocations to supply and demand as a true price system would. They merely give the administrator more information with which to change his allocation and assignment criteria in the traditional way. Auctions can take place only within a service allocation already designated by the FCC, and the bidding must be confined to channels whose dimensions are already defined by the FCC. A high auction price in one allocation may indicate the need for more frequencies in that band -- but it does not tell the FCC where to get more frequencies, nor does it provide any information about how many new channels should be created. These adjustments must rely on the guesses of administrators just as the present system does. Moreover, once the adjustments are made, the administrator doesn't know the value of scarce frequencies under this new arrangement until and unless he holds another auction. In a true market price system, in contrast, exchanges -- and hence adjustments -- can be made at any time.

By limiting the scope of market forces to a narrow band of frequencies and a fixed point in time, the FCC would favor large commercial bidders. As one authority noted, "competitive bidding for scarce spectrum under rigid constraints on short-run supply cannot but generate very high market clearing prices."[53] For all their problems, auctions are not that much more flexible than the present hearings system. They still require time-consuming bureaucratic procedures to decide when and under what conditions to hold them, and they can take as long as six months to administer.

Lotteries are not an "economic technique" but merely an administrative expedient. They, too, fail to introduce price signals and incentives into the actual process of frequency allocation. Even the advocates of lotteries admit that unless the lucky person to win one is free to exchange his license with users who may value it more than he does, lotteries fail to guarantee optimal spectrum use.[54] If the value of lotteries is predicated on the possibility of the ensuing market exchanges, it is hard to see why we should bother with them at all. Why not just allow free transferability to begin with?

It is predictable that the spectrum-management establishment would be predisposed to find a middle ground between a pure market and the present system. But there is no middle ground. Either scarcity in radio communication is rationed by a price system -- that is, by individual owners of radio transmitters and receivers making exchanges -- or it is rationed by the government. The government may streamline its planning process in an effort to make its allocation more responsive to supply and demand, but these efforts should not be misrepresented as the introduction of prices and markets. There is no market in radio communication unless there are freely transferable rights.

4. A System of Freely Transferable Rights

With the desirability of private property rights in radio firmly established, the question that remains is how such rights can be defined and put into practical use. The problem is not so much the definition of rights -- the present system already does that. The problem is to come up with a definition that will hold up throughout the process of market exchange, rights that do not rely upon the existence of a central authority for their distribution.

In June of 1969 a team of economists, engineers, and attorneys published a detailed description of an alternative property system in the Stanford Law Review.[55] Most discussions and criticisms of freely transferable rights in radio use this study (hereafter referred to as the "DeVany system") as a benchmark. In the DeVany system, rights consist of a geographic area outside which the field strength of a radio signal cannot exceed a specified limit and within which no other transmitter's emissions can exceed the same limit. Rights would also include a specified frequency band. Outside that band the right-holder could not exceed a certain field strength (expressed in volts/meter/Hz). Within that band, no other transmitter could exceed the same limit. The DeVany system is the same in all essential respects as the property proposal of Jora Minasian.[56] Instead of controlling the dimensions of the property right by centrally specifying the inputs of the transmitter, as the present system does, the DeVany/Minasian proposals set limits on out-of-band and out-of-area emissions.[57]
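The structure of such an output-based right can be suggested by a minimal sketch. The field names, units, and sample values below are this writer's illustration of the general idea, not the specification published in the Stanford Law Review.

```python
from dataclasses import dataclass

@dataclass
class OutputBasedRight:
    """Illustrative record of a DeVany/Minasian-style right: the holder may not
    exceed the stated field strength outside his area or band, and no other
    transmitter may exceed it inside them.  Fields and units are assumptions."""
    area_boundary: list           # polygon of (latitude, longitude) vertices
    band_khz: tuple               # (lower edge, upper edge) of the frequency band
    field_strength_limit: float   # ceiling on out-of-band/out-of-area emissions (V/m/Hz)

# A hypothetical right: a 100 kHz band over a small rectangular area.
right = OutputBasedRight(
    area_boundary=[(41.8, -87.7), (41.8, -87.5), (41.9, -87.5), (41.9, -87.7)],
    band_khz=(5_950, 6_050),
    field_strength_limit=1e-6,
)
print(right.band_khz[1] - right.band_khz[0], "kHz band;", len(right.area_boundary), "boundary points")
```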

In section 2, scarcity in radio was analyzed as a product of the technology of radio communication. No ethereal natural resource is needed to account for it. The significance of what may appear to be a purely semantic problem will now become evident. The assumption that the spectrum is a "thing" or resource independent of the transmitter and receiver naturally leads to a search for ways to divide that "thing" into parcels that can be bought and sold by private owners. If the transmitter and receiver hardware and inputs themselves are not the "property" that is to be bought and sold, then all we are left with is the propagation pattern of a radio signal. Thus, the attempt to define freely transferable rights becomes a problem of defining and enforcing boundaries between these propagation patterns. In theory, the DeVany/Minasian proposals solve this problem by specifying an absolute limit on field strength in the frequency and geographic dimensions of the propagation pattern. Critics of the DeVany system, however, have raised some serious questions about the practicality of enforcing and exchanging rights without knowledge of the inputs used by each transmitter.[58] It is often difficult to monitor the actual output pattern of a transmitter without knowing the antenna heights and location, power input, and transmission method.

The main problem with the DeVany/Minasian proposals, however, is that their attempt to base rights on propagation output patterns, while theoretically workable, is unnecessarily complicated. A simpler way to approach the problem is to identify the transmitter and receiver hardware and inputs as the "property" that is owned and traded, while treating interference as a negative externality that arises from the use of that property.

"Externalities" is a familiar economic concept. They arise when the consequences of using property in a certain way are not fully visited upon the property owner. It may be that the benefits caused by an owner's use are spread to non-owners (positive externalities) or that the harm caused by the owner's action falls outside the sphere of his legally defined liability (negative externalities). A common example of a negative externality is air pollution. A factory pours soot or chemicals into the atmosphere and everyone in the area -- not just the factory owner -- must breathe the polluted air. (This example makes it clear that the term "negative externality" is often just an antiseptic synonym for invasion or aggression.) If the factory is taxed for its pollution or held liable for the damage it causes, then the externalities are internalized to some degree. A positive externality occurs, for example, when someone lavishly landscapes his property. All of his neighbors benefit from the improved view, and their property values may even rise, but the owner reaps no economic benefit from them

A statement in Ronald Coase's seminal article "The Federal Communications Commission" strongly suggests the connection between the economic concept of externalities and the definition of property rights in radio. "Every regular wave motion," Coase noted, "may be described as a frequency. The various musical notes correspond to frequencies in sound waves. The various colors correspond to frequencies in light waves. But it has not been thought necessary to allocate to different persons or to create property rights in the notes of the musical scale or the colors of the rainbow. To handle the problem arising because one person's use of a sound or light wave may have effects upon others, we establish the right which people have to make sounds which others may hear or to do things which others may see." Coase concluded that "what is being allocated by the FCC or, if there were a market, what would be sold, is the right to use a piece of equipment to transmit signals in a certain way."[59]

In effect, Coase was suggesting that interference be considered a negative externality. The definition of property rights in "the spectrum" per se struck him as being as outlandish as creating rights in the colors of the rainbow. The property -- the object that is owned -- is the transmitter itself, while the owner's property rights are limited in accordance with the effects its use will have upon other property owners. Those who followed up on Coase's pioneering call for the definition of property rights in radio, however, abandoned this view. The detailed expositions that followed all attempted to base rights on the signal's propagation pattern. The problems inherent in monitoring something as variable as a propagation pattern have in turn led to a widespread belief that definition of private, freely transferable rights is either impossible or so complicated and expensive to enforce as to be impractical.

The problem is not that property rights cannot be defined, but that we are looking for them in the wrong place. Coase was on the right track to begin with. Properly understood, property in radio is not much different from property in any other material good. Rights in radio, as Coase suggested, consist of "the right to use a piece of equipment to transmit signals in a certain way." More specifically, it would involve the right to place a radio transmitter in a certain location and use a specified bandwidth, power, antenna height, and transmission method, just as the FCC license stipulates now. Under a system of freely transferable rights, however, transmitter (and receiver) owners would be able to subdivide, alter, or reconstitute these rights in any way that did not interfere with the established receivers of those not party to the exchange. New entrants could select and use any location and inputs that did not cause harmful interference to established receivers. Once a transmitter's inputs were registered with a central clearinghouse, the channel to specific receivers established by those inputs would be protected from interference by law.
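For contrast with the output-based record above, the input-based right sketched in this paragraph might be recorded as follows. The field names, the clearinghouse interface, and the sample entry are invented for illustration; they are meant only to show that such a right is a list of concrete, observable operating parameters plus a registration date.

```python
from dataclasses import dataclass

@dataclass
class InputBasedRight:
    """Illustrative record of an input-based right: the right to operate a
    particular piece of equipment in a particular way.  All fields are
    assumptions for this sketch, not a proposed legal standard."""
    owner: str
    location: tuple            # (latitude, longitude) of the transmitter
    band_khz: tuple            # (lower edge, upper edge) of occupied bandwidth
    power_watts: float
    antenna_height_m: float
    transmission_method: str   # e.g. "FM", "digital microwave"
    registered_on: str         # registration date, the basis for temporal priority

registry = []                  # stands in for the central clearinghouse

def register(right):
    """Record the inputs, much as a county courthouse records a deed."""
    registry.append(right)
    return len(registry)       # a registration number

claim = InputBasedRight("Acme Microwave Co.", (41.88, -87.63), (6_000_000, 6_020_000),
                        5.0, 45.0, "digital microwave", registered_on="1982-06-03")
print("registered as entry", register(claim))
```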

It is a common assumption that the definition of rights in radio is a much more complicated affair than the definition of rights in land or other material goods. But once the externalities of these different kinds of property are taken into account, rights in radio seem much simpler. Property in land is plagued by a host of complex externalities, from noise pollution to public health problems. In contrast, the mathematical nature of electromagnetic wave propagation gives us the ability to calculate and predict the externalities caused by the use of radio transmitters. Our knowledge of radio propagation is far from perfect, but it certainly exceeds our knowledge of other kinds of externalities. Who is liable when a noisy bar disturbs the apartment dwellers across the street; when the personal hygiene habits of one household lead to disease or discomfort in another; when a chemical company stores its waste on property that is later sold, and after 10 years the chemicals begin to enter the water table? Where does one set of property rights begin and the other end? In an attempt to resolve such questions, Western societies have evolved complicated institutional arrangements for making judgments about permissible use of property. This machinery encompasses federal, state, and local court systems and regulatory bureaucracies, and other forms of government.

Externalities in radio communication are absurdly simple by comparison. Given the inputs of a radio transmitter, its propagation pattern can be modeled using topographical data and propagation theory, and the model tested by measurements in the field. Variability due to environmental changes can be accounted for by using estimates of probability: for example, at location L the field strength of transmitter T will exceed x microvolts/meter 50% of the time. Given this information, a radio transmitter owner knows more about how to avoid negative externalities to other transmitters' receivers than city planners, apartment building owners, developers, and tenants know about how to avoid negative externalities among themselves. The problem of defining property boundaries, then, becomes a problem of coordination or of avoiding harmful externalities to the receivers of other transmitters.
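A minimal sketch of such a prediction follows. It uses the standard free-space path-loss formula, ignores terrain and the statistical variability just mentioned, and states the result as received power rather than field strength; the power levels, distance, and protection threshold are invented. The point is only that, given the inputs, the externality can be estimated in advance by calculation.

```python
import math

def free_space_path_loss_db(distance_km, freq_mhz):
    """Standard free-space path-loss formula (no terrain or obstructions)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def received_dbm(tx_power_dbm, tx_gain_dbi, rx_gain_dbi, distance_km, freq_mhz):
    """Crude estimate of the unwanted power a distant receiver would see."""
    return tx_power_dbm + tx_gain_dbi + rx_gain_dbi - free_space_path_loss_db(distance_km, freq_mhz)

# Hypothetical check: would a proposed 6 GHz relay put more than -110 dBm of
# unwanted signal into an existing earth station 40 km away?
THRESHOLD_DBM = -110.0                       # assumed protection level, for illustration only
unwanted = received_dbm(tx_power_dbm=37.0,   # 5 watts
                        tx_gain_dbi=0.0,     # assume the earth station sits off the main beam
                        rx_gain_dbi=0.0,
                        distance_km=40.0,
                        freq_mhz=6000.0)
print(round(unwanted, 1), "dBm; coordination needed:", unwanted > THRESHOLD_DBM)
```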

The best thing that can be said for a system of property rights based on this principle is that it is not a purely theoretical construct. A modest version of it, called frequency coordination, is already in use in the 4-6 GHz band. There is also a historical precedent. In October of 1926 the Tribune Company, publisher of the Chicago Tribune and owner of Station WGN, brought a complaint against the Oak Leaves Broadcasting Company in the circuit court of Cook County, Illinois.[60] WGN had been broadcasting at 990 kHz since 1923. In September of 1926 the Oak Leaves Company, which had previously been broadcasting at 1200 kHz, shifted its frequency to somewhere between 990 and 950 kHz, interfering with reception of WGN. As both stations were licensed by the Department of Commerce, as required by the Radio Act of 1912, the conflict had to be settled by the state court. In effect, the Tribune Company argued that it had a property right to its channel, while the defendants insisted that a wave length could not be made the subject of private control. The judge resolved the case in favor of the Tribune Company. One of the major issues was how far the two transmitters should be separated by frequency. The judge upheld the Tribune Company's contention that separation by 40 kHz or less would cause harmful interference to WGN: "The court feels that a distance removed 50 kilocycles from the wave length of the complainant would be a safe distance...." He also enjoined the Oak Leaves Company from causing any material interference to receivers within a 100-mile radius of the Tribune station.

The Oak Leaves case provides us with one of the earliest examples of the establishment of property rights through frequency coordination. The judge's establishment of the proper separation among transmitters in space and frequency, although accomplished without the aid of the sophisticated monitoring devices available today, was similar to the process of frequency coordination as it is practiced in the microwave band today.

The Mathtech study provides a detailed explanation of how frequency coordination works in the 4-6 GHz band, and how its application can be extended to other services.[61] The following is only a summary of this work.

Whenever a new communication system is contemplated, say a satellite uplink, worst-case technical calculations are made to locate the "coordination area," i.e., the area in which the potential for harmful interference exists. These worst-case calculations rely on standards set by the International Radio Regulations. The proponent of a new station must communicate its technical details to all existing users within the coordination area; determine whether interference will be caused; and obtain the agreement of the other stations.

If interference is estimated to be a problem, bargaining over possible adjustments to the inputs of the existing and proposed stations ensues. The proposed station can abandon its proposed site and move to another; restrict the direction in which its earth station points; construct physical barriers to interference such as pits, embankments, and metallic shielding; or use electronic interference cancellers. Alternatively, the station can pay existing users to install a new, more heavily shielded antenna, or a more directional antenna, or pay for the purchase, installation, and maintenance of an interference canceller at an existing station. It can also pay for a change in their frequency.
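The notify-and-bargain sequence just described can be summarized in a short procedural sketch. The station records, the screening radius, and the remedy costs are invented; an actual coordination exercise would use the worst-case methods of the International Radio Regulations rather than a simple distance test.

```python
# Hypothetical records of existing users near a proposed station.
existing_stations = [
    {"name": "Relay-1", "distance_km": 12.0},
    {"name": "Earth-A", "distance_km": 55.0},
]

def inside_coordination_area(station, radius_km=40.0):
    """Worst-case screen: anyone inside the coordination area must be notified."""
    return station["distance_km"] <= radius_km

def cheapest_remedy():
    """Pick the least costly adjustment from the options listed in the text.
    The dollar figures are placeholders."""
    remedies = {
        "re-aim the proposed antenna": 20_000,
        "build a shielding embankment": 60_000,
        "pay the existing user to install an interference canceller": 45_000,
        "pay the existing user to change frequency": 90_000,
    }
    return min(remedies.items(), key=lambda item: item[1])

for station in existing_stations:
    if inside_coordination_area(station):
        remedy, cost = cheapest_remedy()
        print(f"notify {station['name']}; offer: {remedy} (${cost:,})")
    else:
        print(f"{station['name']} lies outside the coordination area")
```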

As the Mathtech authors make clear,[62] frequency coordination is used to define and transfer property rights. The FCC does not establish a rigid assignment table, but allows the users themselves to define the allowable limits of interference and the adjustments that must be made by newcomers. Moreover, the system works. Interference is virtually nonexistent -- if anything, the system is too conservative -- and the process costs the taxpayers nothing. Private frequency coordination firms such as Compucon and Spectrum Planning, Inc. make money by doing the coordination for smaller firms; larger corporations like AT&T, Comsat, and Western Union do their own frequency coordination. One of the common speculative criticisms of a system of freely transferable rights in radio is that transaction costs, the costs involved in negotiating exchanges of property rights, would be too high.[63] But our actual experience with free transferability refutes this contention. The fact is that businesses engaged in radio communication have a strong incentive to find ways to minimize transaction costs.

The small band of frequencies in which frequency coordination prevails is subject to rapid growth. In 1979 there were more than 16,000 radio relay stations and more than 1,800 earth stations using the 4-6 GHz band. Thousands of new receive-only earth stations have been installed recently to serve cable television systems. As an entry procedure, frequency coordination exhibits the flexibility that is so sorely needed in this era of rapid growth in telecommunications.

It is clear that transmitter inputs provide a firm basis for market exchanges. While the area "covered" by a radio signal varies, inputs remain constant. They are also fungible; that is, they can be divided into homogeneous units suitable for exchange. In return for compensation, a transmitter owner can agree to raise or lower his power level, to increase or decrease his antenna height, to change antenna location or polarization, and to reduce or enlarge bandwidth by a specified number of units. Such a system of freely transferable rights need not engage in a futile search for boundaries in the atmosphere; it need only ask whether a given set of inputs establishes a channel between the transmitter and the desired receiver(s) of sufficient quality for the purpose at hand. With respect to other transmitters, we need only ask whether the inputs selected interfere with their established connections or not. If they do, the inputs of the interfering party must be altered until the interference is eliminated, or the interfering party must offer the other party enough compensation to make the interference acceptable.

The problem of extensive and unpredictable patterns of interference caused by natural phenomena beyond the user's control is inevitably raised as an argument against a system of freely transferable rights. But if rights are based on inputs rather than propagation patterns, natural phenomena pose no problem. If T1's duly owned inputs suddenly create interference with T2's channel due to some electromagnetic fluke, then the market, knowing that T1's owner has established a prior right to those inputs regardless of the unpredicted natural phenomena, can adjust in any of the following ways:

(1) T2's owner can bargain with T1's owner to induce him to adjust his inputs;

(2) Transmitters can purchase insurance against such events, either in the form of damage claims or in the form of emergency back-up channels;

(3) The channel interfered with, if the problem becomes recurrent, will become devalued just as land subject to flooding becomes devalued, and its price will fall. This creates an incentive to either find a technological solution to the problem or, if none can be found, to transfer ownership of T2's channel to a use that would not be significantly affected by occasional interference. In either case the market would reapportion the use of frequencies in a rational way.

It must also be stressed that no system of property, public or private, can protect people from interference that is truly unpredictable in nature. The proper standard of judgment here is not which system can avoid all unpredicted problems, but which can best adjust to them after they happen. A private property system would allow the individuals directly affected by the problem to work out a solution. The introduction of a price system would yield knowledge of the efficiency of various solutions to the problem. The knowledge generated by market transactions would, over time, lead to adjustments in radio usage that would minimize the risk of interference problems. A centralized system lacks this flexibility. When problems arise, the ownership pattern cannot change readily. The FCC can order very conservative input specifications in an attempt to minimize the risk in advance, but this conservatism may needlessly deny hundreds of users the chance to engage in radio communication.

Under a system of frequency coordination, the geographic extension of the property right is determined by the receiver. That is, the property right only protects the channels established to specific receivers. It does not give the transmitter a right to exclude other transmitters from the entire geographic area in which his signal exceeds a certain field strength. This is an important characteristic of the system and is explored in more detail in the appendix. Involving the receiver in this way, however, would work only for "point-to-point" radio communication services. The distinguishing feature of the point-to-point services is that every transmission is intended for a specific receiver or group of receivers. There is little difference between the people who transmit and those who receive in these services. Both have a commercial stake in the process. Many times the equipment used is both a transmitter and a receiver, as in microwave relays, satellites, and the mobile radio "community repeater," the antenna that relays the signal from one mobile unit to another.

This is a far cry from broadcasting. The broadcaster is not interested in any specific receiver; he merely wants to cover a geographic region containing a large population with a signal strong enough to allow that population to tune into his programming at will. Likewise, the broadcast receiver is not interested in any specific transmitter per se; he merely wants a range of program choices that suits his fancy. Thus, in addition to input specifications, property rights in broadcasting must specify the service area of the station; that is, the geographic region within which the transmitter is protected from interference. The DeVany/Minasian proposals could serve as a model here. Once input specifications are included in the definition of rights, all of the objections to the feasibility of their property system become invalid. In addition, the Mathtech study provided a detailed explanation of how to apply frequency coordination to FM radio broadcasting. Mathtech would define the service area of the FM station according to the FCC's "50/50 rule." This is simply the area in which a field strength of 1 microvolt/meter is exceeded 50% of the time at 50% of the locations measured. With its service area so defined, a proposed FM station would have to determine whether it overlapped the service area of any other station. If so, it would have to obtain (or bargain for) the other's consent before it could begin transmission. If not, it would be free to apply for an FCC license. This is how the Mathtech authors proposed to solve the problem; the important thing is not this particular rule but that some agreed-upon rule be established. As long as both input and geographical rights are freely transferable, the system would be able to adapt and the rule itself could be improved over time.
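The overlap test at the heart of the Mathtech proposal can be sketched in a few lines. Real 50/50 contours are irregular and derived from the FCC's propagation curves; the circles, coordinates, and radii used here are invented simplifications meant only to show the structure of the rule.

```python
import math

def contours_overlap(site_a, radius_a_km, site_b, radius_b_km):
    """Two circular service areas overlap if their centers are closer than the
    sum of their contour radii.  Circles stand in for real 50/50 contours."""
    (lat1, lon1), (lat2, lon2) = site_a, site_b
    # rough flat-earth distance in kilometers, adequate for an illustration
    dx = (lon2 - lon1) * 111.0 * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * 111.0
    return math.hypot(dx, dy) < (radius_a_km + radius_b_km)

incumbent = ((41.88, -87.63), 60.0)   # hypothetical existing FM station and contour radius
newcomer  = ((42.30, -87.90), 45.0)   # hypothetical proposed station

if contours_overlap(*incumbent, *newcomer):
    print("service areas overlap: obtain or bargain for the incumbent's consent")
else:
    print("no overlap: the newcomer may proceed")
```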

In sum, frequency coordination is simply a method or procedure for selecting transmitter inputs that do not interfere with established channels. Such a procedure is all that is needed to establish a free market in radio communication. However, the system as it is defined by the Mathtech study is much too conservative. The following modifications are recommended:

(1) Eliminate all service allocations. The Mathtech recommendations still confine the use of frequency coordination to services already designated by the FCC. As we have seen, however, one of the biggest problems now facing the FCC is the necessity for reallocation. Indeed, the most important function of a freely transferable rights system is its ability to allow new entrants into services where demand is rising, and to allow channel users in services where demand is decreasing to shift their channels to other uses. As long as service allocations are made by a central planning authority, the most beneficial effect of market forces will never be realized.

(2) Allow open entry. There is no reason to give the potential competitors of a new service -- that is, existing transmitters -- the unilateral power to determine whether a new entrant is acceptable or not. Thus, the present practice of frequency coordination, in which new entrants must obtain the prior agreement of all existing users in an area, should be modified. New stations should be allowed to enter at will, the only requirement being that they register their inputs beforehand. Any interference they caused would become actionable only after some demonstrable problem occurred. Existing transmitters should be able to obtain an injunction against a new entrant only in extreme cases; i.e., when there is strong evidence that the new entrant would seriously disrupt their service. Interference problems that emerged after a period of time could be settled on the basis of temporal priority; if transmitter B starts regularly interfering with transmitter A's receivers after a year, B's owner would be liable for the interference if his inputs were registered after A's. The same principle would hold in disputes over intermodulation interference.[64] If A and B together cause interference with C's receiver, and the rights of the owners of both A and C are prior to B's, then B's owner would be liable for the intermodulation problem. (A brief sketch of this priority rule follows the list of recommendations below.)

(3) Register inputs. All radio transmitters in the U.S. would record their inputs in a central registry, much as county courthouses serve as registries for property deeds. This registry could sustain itself by charging enough of a fee for its registration service to cover its expenses and enforcement costs. The task of monitoring interference and of identifying its source would become the responsibility of the rights-holder. Transmitter inputs not registered would not be protected as property rights and would be revocable if they began to interfere with registered rights.

(4) Vest rights in existing users. One advantage of a system of freely transferable rights based on inputs is that there would be few transition problems. We need only vest in all existing licensees the right to use the inputs granted to them by the current FCC license. Since they already possess these rights, no defender of the present system can complain that vesting them would be unfair or chaotic. However, some exceptions to vestment may be desirable. The U.S. military, for example, has been granted a huge portion of the spectrum. It is impossible to determine whether it actually needs all of its channels because the cloak of "national security" conceals its requirements from detailed scrutiny. The military must purchase the arms and equipment it needs and must pay salaries to its officers and enlisted personnel. With practically every other scarce good, the military must justify its needs to the Congress. Radio communication rights, in contrast, are granted free. This practice invites waste and prevents the civilian authorities from understanding the actual value of the portion of the spectrum controlled by the military. For this reason it is advisable to divest the military of its frequencies and force it to repurchase its channels on the market.

Another exception to vestment might be areas, such as television broadcasting, in which the FCC has fostered monopoly or concentration in the past. Thus, while the new low-power television stations may create interference in the outer margins of the signal contour of established stations, the advantages of creating many new channels far outweigh any claim the established broadcasters might have. As long as rights are freely transferable, those LPTV stations that prove not to be viable can transfer their rights to other uses.

(5) Define a homesteading principle. Legislation defining an orderly process by which individuals could acquire rights to unused portions of the frequency spectrum (e.g., channels above 40 GHz or channels released by the military) would have to be written.[65]
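The temporal-priority rule proposed in recommendation (2) can be reduced to a few lines of logic. The registration dates and station labels are invented; the sketch simply applies the rule that, among the transmitters contributing to an interference problem, liability falls on those whose inputs were registered after the injured party's.

```python
# Year in which each transmitter's inputs were registered (invented figures).
registrations = {"A": 1979, "B": 1982, "C": 1980}

def liable_parties(injured, contributors):
    """Contributors registered after the injured party bear the liability."""
    cutoff = registrations[injured]
    return [name for name in contributors if registrations[name] > cutoff]

# B alone begins interfering with A's receivers a year after registering.
print(liable_parties("A", ["B"]))        # -> ['B']
# A and B together cause an intermodulation product at C's receiver;
# only B's registration is later than C's, so only B is liable.
print(liable_parties("C", ["A", "B"]))   # -> ['B']
```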

The creation of property rights in channels naturally implies that there would be no restrictions on the kind of information carried by a channel. This, and the absence of rigid service allocations, would promote competition. If the pay television business carried by most MDS channels slacked off in the future because of DBS or cable competition, the owner could shift his channel to data transmission services or any of an infinite number of other uses. Competition would flourish if channel owners could enter any telecommunications market for which there was adequate demand without restriction or delay. At present, for example, banks that need to exchange data with their branches and with other banks have essentially two alternatives: hook up to the phone lines (and pay enormous monthly bills to AT&T) or build their own microwave relay system. Under a system of freely transferable rights, it may prove more economical to purchase or lease existing channels from other services.

5. The Problem of Regulatory Obsolescence

Telecommunications law is in a state of turmoil. A legal and regulatory system more than 50 years old has had to contend with an explosion of new technologies and new services. The obsolescence of the present regulatory framework is obvious to everyone. But there is no consensus on how to change it.

Attempts to rewrite the Communications Act have not fared much better than the original act itself. The most ambitious of these attempts was initiated in 1977 by former Rep. Lionel van Deerlin. Van Deerlin's systematic overhaul of the act, in its various incarnations as H.R. 13015 and H.R. 3333, was a modest move in the direction of deregulation. For the most part, however, it stayed squarely within the framework established by the original act, and it increased government involvement in some areas. Despite the modesty of its reforms, the bill never made it out of the committees. Telecommunications proved to be too complicated and the process of reform beset by too many conflicting special interests. In other words, even though everyone agrees that the Communications Act is in drastic need of systematic revision, it simply cannot be done.

In the 97th Congress, efforts to reform the law were broken down into smaller, more manageable bills. More than seven bills were submitted to the subcommittees covering radio broadcasting, television broadcasting, telecommunications, international telecommunications, and other areas. But a piecemeal approach has its pitfalls, as well. Congress can never be sure just how the pieces fit together. The recent AT&T antitrust settlement, which rendered the meticulously prepared new telecommunications law (S. 898) obsolete virtually overnight, makes it clear that the same fate that befell the original Communications Act and the van Deerlin alternative may yet be in store for these attempts at piecemeal revision.

The spectre of obsolescence has dogged telecommunications legislation from the beginning. The Radio Act of 1912, written with ship-shore communications in mind, worked fine as long as the use of telecommunication technology was confined to those purposes. But as soon as a new technology, broadcasting, developed in the early 1920s, the established regulatory system fell apart. With the Radio Act of 1927 and the Communications Act of 1934, the regulators caught up with the industry once again. But the regulatory framework established inevitably reflected the state of technology at that time. The telecommunications industry was segregated by technology and service. Local broadcasters were considered to be something quite different from telephone companies; telephone companies were considered to be something separate and distinct from national broadcasting networks and mobile radio services. Computer companies were, until recently, not directly related to telecommunications companies, and vice-versa. What we have been witnessing in the last 20 years is a breakdown in this regulatory scheme caused by microelectronic integration and the addition of satellites to the field of relay services. Of course, technology continues to evolve, and there is little doubt that a regulatory scheme based on today's conditions will become obsolete in a decade or two -- maybe less.

The assumption underlying the original Communications Act, and most contemporary efforts to reform it, is that government should actively shape the development of telecommunications through positive intervention. This assumption is clearly the source of the recurring problem of regulatory obsolescence. Laws are supposed to establish the general rules governing human conduct.[66] The process of legislation is slow and inevitably involves a consensus among groups with varied and often conflicting interests. Consequently, legislation cannot provide an adequate basis for detailed control of economic and technological affairs, particularly in an industry as volatile as telecommunications. The proper function of law is to define general and enduring rules of just human interaction, rules that will hold fast through the maelstrom of technological and economic change.

There are two ways, then, in which the revision of the Communications Act can be approached. First, the Congress can stay within the framework of public control and central planning established by the original act. It can modify and meliorate that control, even shed much of it, but as long as that framework remains, Congress will be committed to constant revision and reform as conditions change. It will therefore be tugged back and forth constantly by special interests seeking the most favorable terms of revision. The alternative path is to withdraw altogether from the business of shaping the telecommunications industry -- just as Western governments in the last century withdrew their control over religious beliefs. Instead, Congress should apply First Amendment and free-market principles.

The system of freely transferable rights sketched above is an attempt to discover and apply such principles to radio communication. A system of private property rights, in conjunction with the First Amendment, provides a coherent approach to telecommunications reform. A system of property rights or frequency coordination would make the allocation of radio frequencies responsive to changes in supply and demand; the government would not need to intervene in response to changing conditions. It would also allow the owners of radio channels to devote them to whatever information services proved most attractive; there would be no need for the arduous and arbitrary classifications that currently hinder the industry. Owners would be able to exchange, subdivide, or reconstitute channels to adapt to changing technology, again without need of legislative change or regulatory oversight. Because it does not lead to any particular results but lets market forces determine the future of the industry, a system of property rights eliminates one of the major obstacles to communications law reform: the attempt of myriad special interests to get the government to rig the game in their favor. In sum, by defining a fair and orderly procedure for trading and protecting rights in radio communication, Congress can protect the public's interest in justice and efficiency without attempting to exert detailed control over this dynamically changing field.

APPENDIX: THE ROLE OF RECEIVERS

As noted in the text, implicit in the process of frequency coordination is the fact that the receivers of a transmission determine the geographic extension of the property rights. This is a sharp departure from most approaches to the problem of defining rights in radio, which attempt to base rights on the propagation of the signal in space. To Minasian, DeVany, and the FCC as well, rights in radio are "rights of radiation" -- i.e., the right to "cover" an "area" with a radio signal. In the system developed here, it is the ability to make connections with receivers that forms the basis of the right.

To illustrate what this means, let us postulate that T1 and R1 are two links in a microwave relay 20 miles apart. For the sake of argument, let us also assume that T1 and R1 are owned by different firms. By its reception of T1's transmission, R1 establishes a channel of a specified bandwidth and geographic length. Together, the owners of T1 and R1 have a right to prevent any other transmitter from using a location and/or inputs that will interfere with that channel. But they do not have the right to protect the "area covered" by T1's signal; indeed, this "right" would be quite meaningless, as we shall see later. If, as is usually the case, the inputs used by T1 go significantly beyond those needed merely to establish a channel to R1, the additional inputs are an externality. New entrants can make room for their transmitters by bidding away these externalities, e.g., by offering the owner of T1 payments to focus his beam, use a more directional antenna, reduce channel size, or ultimately, to connect by wire. In effect, T1's owner has "homesteaded" the original propagation pattern, in the sense that he has the right to the inputs originally used to establish a channel to R1. But he does not have the right to prevent the operation of new transmitters that would not affect his channel to R1.

Furthermore, as long as T1 and R1 are owned by different firms, neither of them owns the channel itself. The owners of T1 and R1 only own their transmitter hardware and inputs. The channel is an implicit contract or agreement between them, and either of them can revoke it at will. If T1's owner chooses to cease transmission, then R1's owner has no right to force him to continue. While this may seem unexceptionable, reversing the relation is just as logical: a transmitter owner does not have the right to command a receiver to accept his transmissions. While equally obvious, this conception of the role of the receiver has radical consequences. It means that if R1 no longer wishes to receive T1, then T1 -- while retaining his right to operate in the same way, using the same inputs -- loses any right to protect his channel to R1 from interference. The ownership rights of the transmitter do not extend to the receiving equipment unless, of course, the same person owns both the transmitter and the receiver. Thus the desirability or undesirability of interference at any given receiver location can be determined only by the receiver. If another transmitter, T2, comes along and thoroughly drowns out T1 at R1, the owner of R1 has the right to initiate measures to stop this. But he will do so only if he prefers T1's transmissions to T2's. If he doesn't object to T2's interference, then there is nothing illegal about it. More precisely, if the receiver doesn't object to interference, then it isn't interference.

"Interference" is often bandied about as if it were a technical term of great precision. It isn't. Interference is nothing more than a ratio between desired and undesired signals in a specific receiver. There is no more objective definition possible. Scientific measurement can determine that signal A is of field strength a volts/meter and signal B is of field strength b volts/meter at the point of reception at a given time. But science does not and cannot tell us whether the resulting ratio, a:b, impairs reception to an unacceptable degree, nor can it tell us which signal is desired and which is undesired. That is inherently a subjective judgment. If the ratio between a and b is 10:1, receivers in some radio services may be able to function perfectly well. Other services would require something on the order of 100:1 or 1000:1. Moreover, some receivers may object to a slight addition of noise that would pass by others unnoticed.

From the beginning, property theorists in radio have attempted to base property rights on the "area covered" by a radio signal. But this approach stems from a misapprehension of the problem. There is no spatial dimension to electromagnetic emissions. Like visible light, which is merely a small section of the spectrum around 10^14 Hz, radio signals never reach some point and stop; they merely attenuate until they become undetectable. Radio astronomers receive and interpret electromagnetic emissions from galaxies light-years away. Theoretically, observers in those galaxies could receive and decode all the radio transmissions from this planet. Thus, the "area covered" by a radio signal is, over time, the full volume of space. And any transmitter on earth could be received simultaneously anywhere in the hemisphere in which it is located, provided that the receivers were sensitive enough, their antennas high enough, and no other transmitters overpowered the signal.

While we cannot speak of the "area covered" by a radio signal, we can speak of the area in which it can be received. And the geographic boundaries of reception are completely dependent upon the unique position and characteristics of the receiver and transmitter in question and the proximity of other transmitters. Once the problem is understood in this way, the crucial role of the receiver in determining the geographic extension of the property right becomes clear.
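A rough sketch of this dependence follows. The inverse-distance decay of field strength is a standard free-space approximation, and the sensitivity and rival-signal figures are invented for illustration; the point is that "receivability" is a joint property of the transmitter's inputs, the receiver's equipment and standards, and the presence of other transmitters.

    # Free-space approximation: field strength falls off in proportion to 1/distance.
    def field_strength(e_at_1_km, distance_km):
        return e_at_1_km / distance_km

    def receivable(e_at_1_km, distance_km, sensitivity, competing_e=0.0, required_ratio=10):
        e = field_strength(e_at_1_km, distance_km)
        if e < sensitivity:                     # too weak for this particular receiver
            return False
        if competing_e and e / competing_e < required_ratio:
            return False                        # drowned out at this location
        return True

    # A sensitive receiver 200 km away can pick up the signal...
    print(receivable(e_at_1_km=1.0, distance_km=200, sensitivity=0.001))   # True
    # ...while a receiver only 50 km away, next to a rival transmitter, cannot.
    print(receivable(e_at_1_km=1.0, distance_km=50, sensitivity=0.001,
                     competing_e=0.01))                                    # False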

The question that remains is whether the choices of receiver owners will directly affect the property structure of radio, or whether the transmitter owners or a central authority will make these choices for them. Ultimately, this is a normative question, not a positive one. By definition, judgments about the acceptability of interference are subjective. The regulation of radio interference, then, boils down to a matter of whose subjective preferences will prevail. The standards of science or technical and economic efficiency cannot provide us with an answer to this question. We can answer it only by discussing whose preference ought to prevail.

FOOTNOTES

[1] MCI and Sprint use microwave relays and satellites to relay a long-distance call from one local exchange to another. Satellite Business Systems (SBS), a joint project of IBM, Aetna Life and Casualty, and COMSAT, uses digital satellite transmission to connect business offices around the country, in effect creating an alternative to the phone companies' local exchange. SBS offers its subscribers complete telephone, television, electronic mail, and computer hookups. Another radio alternative to the local distribution networks of the telephone companies is being contemplated by Xerox, whose XTEN (Xerox Telecommunications Network) would use the 10 Ghz region for an Electronic Message Service.

[2] Leo Herzel, "Public Interest and the Market in Color Television Regulation," University of Chicago Law Review 18 (1951): 802.

Ronald H. Coase, "The Federal Communications Commission," Journal of Law and Economics 2 (October 1959).

[3] 395 U.S. 390 (1969).

[4] Broadcasting, 14 December 1981, p. 27.

[5] Ibid.

[6] S. 1629.

[7] "NTIA believes that allowing licensees to sublease will immediately restructure private sector incentives to use spectrum more efficiently. Economic pressures will result in:

1. emergence of narrower channels, use of amplitude compandored single sideband (ACSB)-type channel reduction methods, and other innovative ways of increasing frequency efficiency;

2. implementation of effective shields, more precise thrust stabilizers, and other equipment intended to increase the number of usable orbital positions;

3. initiation of R&D into space-frequency sharing options, e.g. hybrid satellites and multiple ownership space platforms, leading to greater efficiency in orbit-spectrum utilization."

"Inquiry into the Development of Regulatory Policy in Regard to Direct Broadcast Satellite for the Period Following the 1983 Regional Administrative Radio Conference," Comments of the National Telecommunications and Information Administration Gen. Docket #80-603.

[8] Les Brown in Channels magazine believes the airwaves should remain public property because broadcasters use "the air that is essential to life, the air we breathe." Of course, radio communication has nothing to do with air -- it can and often does take place in outer space, where there is no atmosphere. The fact that this comment came from one of this country's most prominent writers on electronic communications underscores the need for some clarification of the economic characteristics of radio communication. "Fear of Fowler," Channels, January/February 1982, p. 36.

[9] Testimony of Nina W. Cornell, chief of the Office of Plans and Policy, and Stephen J. Lukasik, chief scientist, FCC, before the Senate Subcommittee on Communications on S. 611 and S. 622, to amend the Communications Act of 1934, 18 June 1979.

[10] The reference is to Harvey Levin's book, The Invisible Resource: Use and Regulation of the Radio Spectrum (Baltimore: The Johns Hopkins Press, 1971), which, despite its unfortunate title, is the most thorough, indispensable study of the subject in existence.

[11] Frequency refers to the rate or speed of oscillation. Thus, to be separated by frequency is to be dissynchronous or separated in time.

[12] Although we speak of "the" frequency used by a transmitter and receiver, no equipment uses a single hertz. All electromagnetic emissions cover a range of frequencies, and this range is called the bandwidth. The bandwidth of AM radio is 10 khz; that of television is 6 Mhz, or 600 times the bandwidth of AM radio.

[13] Electromagnetic emissions have a finite geographic range. But this range is not finite because radio signals reach some point and stop. On the contrary, once transmitted a radio signal continues on indefinitely, its strength diminishing as its distance from the transmitter increases. The spatial boundaries are established when a) it becomes so weak that no receiver can detect it; b) it is weaker than the signals of other transmitters on the same frequency by enough of a ratio at the point of reception to be undetectable as long as the other transmitter is in operation; or c) it cannot reach a receiver because of the curvature of the earth or some other physical obstruction.

[14] Levin, p. 216.

[15] Radio Broadcast, vol. IX (1926), p. 475; cited in J. P. Taugher, "The Law of Radio Communication With Particular Reference to a Property Right in a Radio Wave Length," Marquette Law Review 12 (April 1928): 181.

[16] The AM allocation has been slightly expanded since 1926.

[17] Levin, p. 220.

[18] The Washington Post uses the 12.2 to 12.7 Ghz band to transmit its information in digitized form to a remote printer. The Wall Street Journal uses telecommunication to distribute its nationwide editions to local printers. And of course, the wire services have always relied on telecommunication.

[19] Rep. Dingell: "Even if optimistic projections for the growth of cable, MDS, STV and DBS are accurate, we will continue to operate in a climate of scarcity for some time. Hence, we will need the protection afforded by the equal time and fairness provisions against abuse of that scarcity." Channels, December/January 1981/1982, p. 7.

[20] For a detailed account of the chaos of the airwaves, see Jora R. Minasian, "The Political Economy of Broadcasting in the 1920s," Journal of Law and Economics 12 (October 1969).

[21] Allocation in this specific sense should not be confused with the generic meaning of the term, which applies to any method of rationing a scarce good.

[22] The FCC's allocations are made within constraints set at the World Administrative Radio Conferences of the International Telecommunications Union (ITU). Thus, allocation is coordinated at the international level as well as the national level.

[23] See Cornell and Lukasik, p. 15: "Legally, [FCC licenses] are not recognized as property rights, that is, as rights that could be bought and sold, but in reality they could and sometimes do serve the same purpose. Essentially, the Commission conveys a form of property right when it issues a radio license in the broadcast, common carrier and some private services, but one specified in terms of input and not output."

[24] The best review of the deregulation of communications is provided by Douglas W. Webbink, "The Recent Deregulatory Movement at the FCC," in Telecommunications in the U.S.: Trends and Policies, Leonard Lewin, ed. (Dedham, Mass.: Artech House, 1981).

[25] The "spectrum economics" literature includes: Herzel, p. 802; Coase; and Harvey Levin, "Federal Control of Entry in the Broadcast Industry," Journal of Law and Economics 5 (1962): 9-67. Herzel, Coase, and Levin are all academic economists associated with the "Chicago school."

By 1968 engineers and spectrum managers, faced with growing demand for radio communication services, found the paradigm of spectrum economics useful. This is reflected in the literature of the time:

Joint Technical Advisory Committee, Spectrum Engineering: The Key to Progress (New York: Institute of Electrical and Electronics Engineers, 1968);

President's Task Force on Communications Policy, Final Report (Washington, D.C.: U.S. Government Printing Office, 1968);

DeVany, et al., "A Property System Approach to the Electromagnetic Spectrum," Stanford Law Review 21 (1969): 1499-1561;

Levin, The Invisible Resource.

By the late 1970s the perspective of spectrum economics had permeated the spectrum management bureaucracies of the FCC and the Office of Telecommunications Policy (later to become the NTIA) in the executive branch. Webbink, in "The Recent Deregulatory Movement at the FCC," p. 3, notes "the number of experienced economists that have been hired throughout the Commission" and "the increasing emphasis on economic analysis in Commission proposals and decisions."

Studies and internal reports reflecting the economic perspective proliferated:

John O. Robinson, "An Investigation of Economic Factors in FCC Spectrum Management," FCC Spectrum Allocations Staff, Office of the Chief Engineer, Report #SAS 76-01, August 1976;

Webbink, "The Value of the Frequency Spectrum Allocated to Specific Uses," IEEE Transactions on Electromagnetic Compatibility, vol. EMC-19 (August 1977);

Donald R. Ewing, "Controlled Markets for Spectrum Management," NTIA, 1979;

Dunn Agnew and Gould Stibolt, "Economic Techniques for Spectrum Management: Final Report," Mathtech, Inc., 20 December 1979;

Webbink, "Frequency Spectrum Deregulation Alternatives," FCC Office of Plans and Policy, October 1980.

[26] In addition to the sources cited above, there are many unpublished papers, texts of speeches, and manuscripts circulated within the FCC and NTIA. See, for example, "Remarks by Dale N. Hatfield, Associate Administrator of the NTIA before the SIRSA 1980 Annual Membership Meeting, Panel on Policy Developments in Mobile Radio," 10 October 1980 (copy obtained from NTIA).

[27] Ludwig von Mises, The Theory of Money and Credit (1922; reprint ed., Indianapolis: Liberty Classics, 1981). See also von Mises, Socialism (reprint ed., Indianapolis: Liberty Classics, 1981).

[28] "We must look at the price system as...a mechanism for communicating information if we want to understand its real function....The most significant fact about this system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to take the right action. In abbreviated form, by a kind of symbol, only the most essential information is passed on, and this is passed on only to those concerned. It is more than a metaphor to describe the price system as a kind of machinery for registering change, or a system of telecommunications which enables individual producers to watch merely the movement of a few pointers, as an engineer might watch the hands of a few dials, in order to adjust their activities to change of which they may never know more than their reflection in the price movement." F. A. Hayek, "The Use of Knowledge in Society," American Economic Review 35 (September 1945).

[29] Ibid.

[30] Broadcasting, 16 November 1981, p. 64.

[31] "The Commission too often is faced with selecting from among equally well-qualified applicants. In effect, they must distinguish the indistinguishable and decide the undecidable. The results are incredible delays and excessive costs that serve mostly to postpone or deny service to the public, raise prices to users, consume FCC and court resources, preclude small firms from going into business, and simply enrich a legion of communications attorneys." Dale Hatfield, Associate Administrator of the NTIA, in a speech before the Associated Public Safety Communications Officers 48th Annual National Conference, 18 August 1980.

[32] The cost of a hearing for a contested 10-mile microwave link was estimated by the Mathtech study to be $342,307 when the applicant loses the dispute and $214,907 when the applicant wins. Estimates assume that the dispute lasts one year; many last longer than a year. See Table 10, "Cost of MX Hearings Delay for a New 10-Mile FS Link," in "FCC Inquiry into the Development of Regulatory Policy in Regard to Direct Broadcast Satellite for the Period Following the 1983 Regional Administrative Radio Conference," Comments of the National Telecommunications and Information Administration, Gen. Docket No. 80-603, p. 110.

[33] See Ida Walters, "Freedom for Communications," in Instead of Regulation, Robert Poole, ed. (Lexington, Mass.: D.C. Heath and Co., 1982), pp. 96-106.

[34] The FCC's Network Inquiry demonstrated how the rigidity of allocation and assignment criteria limited competition to three major networks. A strong national network would require access to the top 50 markets, where most of the viewers, and therefore advertising revenues, were located. Under the FCC's 1952 TV allocation and assignment scheme, only seven of the top 50 markets received four or more VHF assignments; 20 received three, 16 received two, and two received only one. "As a consequence of this scheme, one network could reach 45 of the top fifty markets with VHF stations and the second could reach 43, while a third network could reach 27 and a fourth would have access to VHF stations in only 7 of the top fifty markets." The same FCC report documented how the DuMont network crumbled in the '50s as a consequence. FCC Network Inquiry Special Staff, "The Historical Evolution of the Commercial Network Broadcast System," October 1979, pp. 77-79.

[35] Broadcasting, 23 February 1981, pp. 58-62, carries a list of the proposed LPTV networks.

[36] Christopher Sterling and John Kittross, Stay Tuned: A Concise History of American Broadcasting (Wadsworth Publishing Co., 1977).

[37] See Hatfield, "Remarks Before the SIRSA Annual Membership Meeting," 10 October 1980.

[38] "The demand for channels [in the land mobile services] has been so great that the channels have become overloaded almost as fast as the Commission could make them available." Cornell and Lukasik, p. 21.

[39] Existing microwave services in the 12.2-12.7 Ghz band would be displaced if the FCC authorizes the use of that band by Direct Broadcast Satellites.

[40] Robinson, "An Investigation of Economic Factors in FCC Spectrum Management," p. 8.

[41] Webbink, "Frequency Spectrum Deregulation Alternatives," p. 27.

[42] Comments of the NTIA, "Inquiry into the Development of Regulatory Policy," p. 77.

[43] Ortho-Vision, Inc. v. Home Box Office, Inc., 474 F. Supp. 672 (S.D.N.Y., 1978).

[44] Home Box Office, Inc. v. Pay TV of Greater New York Inc., 467 F. Supp. 525 (S.D.N.Y., 1979).

[45] Economic Techniques for Spectrum Management: Final Report, p. II-10. Hereafter this report will be referred to as "Mathtech."

[46] Given the concept of the frequency spectrum outlined in section 2, the "quantity" of spectrum is rather a meaningless concept, and the Mathtech study was forced to admit as much. However, if the horizontal axis is reinterpreted as the extent to which one transmitter is incompatible with another, the meaning of the diagram is unchanged.

[47] See pp. 7-9.

[48] William Meckling, Foreword to A Property System Approach to the Electromagnetic Spectrum (Washington, D.C.: Cato Institute, 1980), p. xii.

[49] Block allocation was also criticized as technically inefficient by the Joint Technical Advisory Committee report, Spectrum Engineering: The Key to Progress. (See pp. 76-77.)

[50] An account of this debate, and criticism of the inaccuracy of most standard accounts, is provided by Don Lavoie, "A Critique of the Standard Account of the Socialist Calculation Debate," Ph.D. dissertation, New York University. See also Trygve G. B. Hoff, Economic Calculation in the Socialist Society (London: Wm. Hodge & Co., 1949; reissued in 1981 by Liberty Press, Indianapolis).

[51] "Wherever the use of competition can be rationally justified, it is because we do not know in advance the facts that determine the actions of the competitors. In sports or in examinations, no less than in the awards of government contracts or of prizes for poetry, it would clearly be pointless to arrange for competition if we were certain beforehand who would do best." "Competition as a Discovery Procedure," from New Studies in Politics, Economics and the History of Ideas (Chicago: University of Chicago Press, 1978), p. 179.

[52] From Ewing, "Controlled Markets for Spectrum Management," p. 14.

[53] Levin, The Invisible Resource, p. 152.

[54] Webbink, "Frequency Spectrum Deregulation Alternatives," p. 33.

[55] Arthur S. DeVany, Ross D. Eckert, Charles J. Meyers, Donald J. O'Hara, and Richard C. Scott, "A Property System Approach to the Electromagnetic Spectrum," Stanford Law Review, June 1969, pp. 1499-1561. Reissued in 1980 by the Cato Institute as Cato Paper No. 10.

[56] Minasian, "Property Rights in Radiation: An Alternative Approach to Radio Frequency Allocation," Journal of Law and Economics 18 (April 1975): 221-272. Note that Minasian considers the rights involved to be rights of "radiation" rather than rights to communication.

[57] Because the present system bases rights on input specifications and the DeVany system uses output specifications, the debate that followed centered on the relative merits of "inputs" vs. "outputs" as a means of specifying rights. It is generally assumed that there is a significant difference between the two methods. But there may be less difference than we think.

Describing the definition of rights under the present system, Cornell and Lukasik note that "rights that are defined at this time are input rights granted as privileges by the radio license." But they go on to add that "the feature of a license that is valued...is the quality of the output at the receiver location." In fact, input rights make little sense without reference to the signal receiver. Input specifications must be based on interference standards: "The Commission has, in fact, recognized that the ultimate test of a communication system's performance is the quality of the output. In the broadcast and common carrier services, the Commission has established standards for spectrum output in terms of the signal to interference ratio, or noise, to be expected at any receiving location." (p. 13)

If input specifications rely on assumptions about output, the reverse is also true: Any attempt to define rights in terms of output quickly devolves into a set of assumptions concerning input. Under the DeVany and Minasian systems, output limits expressed in volts/meter determine the geographic area owned by a transmitter. Minasian describes this as a set of symmetrical "emission" and "admission" rights. (p. 232) But when the actual workings of this system are explored, the need for knowledge of the transmitter's actual inputs becomes evident. In particular, the process of exchanging and enforcing rights would necessarily involve input specifications as a point of reference. Minasian notes that in purchasing all or part of the rights of a neighboring transmitter, the right-holder "must, to consummate the sale, negotiate with other right-holders to obtain their permission for his increased radiation or nullify the effect by reducing his radiation into the other areas." (p. 238) In practical terms, this means that the right-holder would go to his neighbors with a specific input adjustment to propose to them: I will reduce my power by this much, reduce antenna height by that much, and so on. Transactions, it seems clear, would have to be defined in terms of input adjustments so that the parties to the transaction and their neighbors could calculate the propagation pattern and field strengths resulting from the exchange. While these exchanges would be constrained by the output limits, the actual point of negotiation would have to be inputs. Output limits cannot be exchanged.

The DeVany/Minasian system's reliance on input specifications emerges even more clearly when enforcement of rights is considered. Clearly, it would be too expensive to continuously monitor the entire output pattern of every radio transmitter. Thus, Minasian says that an output-based system would have to rely on occasional monitoring. If measurements led to suspicion that a transmitter was exceeding its boundaries, the inputs of the transmitter could be checked to confirm whether a violation was taking place. (p. 255) Because of the variability of signals, a measurement of excessive field strength is not proof of wrongdoing; likewise, a given set of inputs is not necessarily a rights violation unless it can be shown that the inputs selected will result in an excessive field strength outside the transmitter's area right in a significant number of cases.

In conclusion, input specifications have little meaning without reference to a desired signal/interference ratio in a specific receiver, and output specifications have little meaning without reference to the inputs of the transmitter. It does seem clear, however, that knowledge of the actual inputs used by a transmitter is an indispensable part of defining, exchanging, and enforcing rights.
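The interplay can be put in the form of a small sketch. The propagation formula (field strength proportional to the square root of power divided by distance) and every figure in it are illustrative assumptions, not the DeVany or Minasian rules; the point is only that the parties bargain over an input adjustment and then verify the result against the output constraint.

    import math

    # Crude propagation estimate used only to evaluate a proposed trade.
    def boundary_field_strength(power_watts, distance_km):
        return math.sqrt(power_watts) / distance_km

    def trade_is_acceptable(proposed_power, neighbor_boundary_km, neighbor_output_limit):
        """A proposed input adjustment passes if the predicted field strength at
        the neighbor's boundary stays within that neighbor's output limit."""
        predicted = boundary_field_strength(proposed_power, neighbor_boundary_km)
        return predicted <= neighbor_output_limit

    # The negotiation is over the input (transmitter power); the output limit
    # merely constrains which input adjustments the neighbors will accept.
    print(trade_is_acceptable(proposed_power=100, neighbor_boundary_km=40,
                              neighbor_output_limit=0.3))   # True
    print(trade_is_acceptable(proposed_power=400, neighbor_boundary_km=40,
                              neighbor_output_limit=0.3))   # False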

[58] Levin, The Invisible Resource, pp. 94-95.

[59] Coase, "The Federal Communications Commission," pp. 32-33.

[60] Tribune Company v. Oak Leaves Broadcasting Station, Circuit Court of Cook County, Ill. Reprinted in Congressional Record, Senate, December 1926, pp. 216-219.

[61] Mathtech, chaps. V and VI, pp. I-6 through I-9.

[62] Mathtech, p. V-21.

[63] "No means have been determined by which to define property rights in the spectrum that are consistent with free market operation yet enforceable at reasonable cost." Robinson is referring to property rights based on output, however. If property based on inputs, i.e., frequency coordination, is considered, his statement is demonstrably wrong. Robinson, "An Investigation of Economic Factors," p. 14.

[64] Intermodulation is a specific kind of interference in which the transmissions of two transmitters in close proximity combine to interfere with the reception of a third transmitter's signal. Thus, if A is transmitting at 50 Mhz and nearby B is transmitting at 100 Mhz, interference to C, who transmits at 150 Mhz -- the sum of A's and B's frequencies -- may result. For some reason, intermodulation is frequently cited as an immense, well-nigh insoluble problem for a system of private property, because neither A nor B alone is responsible for the interference. However, the problem can be handled by a simple rule of priority. If, for example, A and C are transmitting without interference, and it is the addition of B's transmissions which causes the problem, then B is liable. If both A and B register their inputs at the same time, then both could be enjoined from transmitting until they worked out some kind of coordination between themselves that did not interfere with C.
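The priority rule might be sketched as follows; the products checked (the sum and difference frequencies) and the use of registration order to identify the later entrant are illustrative assumptions only:

    # Identify the simplest second-order intermodulation products of two
    # transmitters and assign liability to the later registrant.
    def intermod_products(f1_mhz, f2_mhz):
        return {f1_mhz + f2_mhz, abs(f1_mhz - f2_mhz)}

    def liable_party(a, b, victim_freq_mhz, registration_order):
        """a, b: (name, frequency in Mhz) of the two combining transmitters;
        registration_order: names in the order their inputs were registered."""
        if victim_freq_mhz not in intermod_products(a[1], b[1]):
            return None                              # no intermodulation problem
        return max(a[0], b[0], key=registration_order.index)

    # B registered after A, and it is B's addition that creates the 150 Mhz product.
    print(liable_party(("A", 50), ("B", 100), 150, ["A", "C", "B"]))   # prints B

Simultaneous registration, as suggested above, would simply leave both parties enjoined until they coordinated.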

[65] Homesteading or initial rights acquisition raises complex issues of justice as well as economic efficiency. For that reason this report does not explore the issue or make any proposals. The approach to initial rights definition taken by Coase and Demsetz, however, is clearly inadequate. They contend that the initial distribution or definition of rights is irrelevant; as long as free exchange is possible, rights will move to their optimal arrangement. If a railroad dumps soot onto the crops of a farmer, for example, the Chicago school holds that it makes no difference whether the farmer has to compensate the railroad to stop the pollution, or whether the railroad is taxed or sued to stop the pollution. It certainly makes a difference to the farmer and the railroad.

[66] Hayek, Law, Legislation and Liberty, vol. 1, Rules and Order (Chicago: The University of Chicago Press, 1973).

© 1982 The Cato Institute