Commentary

Considering the Internet’s Future

By Solveig Bernstein
January 10, 1997

Next spring the Supreme Court will hear ACLU v. Reno, the first challenge to the Communications Decency Act. The lower court’s ruling in the case declared that the CDA, which regulates a broad category of “offensive” content on the Internet, egregiously violates rights of free speech. Some lawmakers are already proposing to replace the CDA with narrower laws. But the proposals suffer from some of the same problems that plagued the CDA. It is time to question whether government should play any role at all in regulating indecency on the Internet.

Proponents of regulation argue that constitutional problems with the CDA could be fixed by amending the law to cover only material that is “harmful to minors.” The harmful-to-minors standard is the one that requires stores to cover up displays of girlie mags and refuse to sell them to minors. Is it suitable for the Internet?

The harmful-to-minors standard would cover less material than does the indecency standard, which was developed for radio and television. Traditionally, indecency can include common cuss words or images that are not at all erotic. But harmful-to-minors is still broad. Even a work with artistic or scientific merit for adults could be considered harmful to minors.

Furthermore, harmful-to-minors is defined according to community standards. That concept cannot fairly be applied to the Internet. If “community” were defined as the national community of Internet users, no jury could predict the average tastes of that diverse group. And the Constitution is supposed to protect minority viewpoints, not just the mainstream. As with the print media, “community” could be defined more narrowly (say, as the town within which a document is downloaded). But it’s hard to see why the tastes of any given town should be elevated to constitutional significance on the Net, where geography doesn’t matter to the content provider.

Because harmful-to-minors is broad and unclear, the amended law would require anyone who posted material about sex to keep children out. Commercial pornographers restrict access by requiring a credit card number or a PIN. But that costs money and slows down Net surfers. Clearly, such access controls can’t be used by nonprofit organizations or amateurs posting content for free. Adult verification services are unpopular with users, because they charge a fee and require advance registration. No one will bother to register to view a few amateur poems or photos.

The Internet enables amateur and nonprofit speech to reach a much wider audience than does any other medium. Speech on the Internet can be spontaneous and informal, like a chat around a backyard barbecue. Requiring adults to prove that they are adults before entering this forum will drive away much of the audience and many speakers.

The justification for using harmful-to-minors for the Internet is that that is the way naughty magazines are regulated in the print media. But in the print media world, the harmful-to-minors laws mostly affect commercial sales of material openly displayed in public places. The standard is inappropriate for the casual, amateur, spontaneous and informal speech posted on the Internet. Unlike the public streets, the networks are private, not public; one needs a subscription to get online.

And any Internet user who wants to limit his contacts to those that would get a “G” rating can do so. Internet users can control what their children view online by using software filters like Net Nanny or online service providers who allow access only to child-safe material. The filters allow parents to screen content posted in Asia or Holland, unlike a federal statute, which cannot touch material posted in foreign jurisdictions. And the filters let parents screen out violence and hate speech, too. Given those options, the federal government has no compelling interest in regulating broad categories of sexually oriented material on the Internet.

The burden imposed by the harmful-to-minors standard could be lightened by allowing sites to escape liability by labeling themselves unsuitable for children. But forced labeling is inappropriate for e-mail, individual newsgroup postings or any other spontaneous speech; it will not work for large, diverse libraries of works, either. Internet speakers would justifiably be angry about being forced to stigmatize their speech, and they are unlikely to choose labels carefully. Voluntary labeling and filtering systems would generally be more reliable. It’s best to leave the matter to the private sector.

Federal lawmakers have serious issues to consider, such as Social Security and Medicare and problems with the welfare system. We can only hope that lawmakers will drop the political football that protecting children from the Internet has become and concentrate their attention where it is urgently needed.

Solveig Bernstein is assistant director of telecommunications and technology studies at the Cato Institute.