Testimony

The Importance of Data Access for Science and Governance

By Michael Gough
July 15, 1999
Subcommittee on Government Management, Information, and Technology
Committee on Science
United States House of Representatives

Mr. Horn and Members of the Committee, thank you for this opportunity to address you. I am here as a scientist and a citizen to testify that regulations and taxes that are promulgated as being based on science should not be shrouded in mystery because the underlying data are not available to the regulated and the taxed.

Karl Popper, an Austrian-born British philosopher, inquired as deeply as anyone into the questions of what science is and how science works. He concluded that the scientific process, for all its accouterments of mathematics, instrumentation, and specialized knowledge, can be divided into two parts. The first part is the formulation of an idea or a hypothesis or theory (the words are used somewhat interchangeably) about how some part of the physical universe works. The second part is the design and execution of an experiment or a test to examine whether or not the idea or hypothesis or theory is correct. And, of course, if it survives the test, the idea or hypothesis becomes incorporated into scientists’ knowledge of the universe, and it can be used in the construction of other ideas and hypotheses.

Ideas, hypotheses, and theories are the stuff of all human inquiry, but the requirement of devising a test for an idea or hypothesis and demonstrating that the idea or hypothesis survived the test is the hallmark of science. An essential part of the testing process is review of ideas and hypotheses, tests, experiments, and studies by other scientists. Review is necessary because all people can make mistakes, and scientists who investigate the unknown work in areas without guideposts or mile markers. There’s nothing shameful about a mistake, but it’s inefficient and costly when mistakes are incorporated into accepted science. Additional ideas and hypotheses that are based on the mistake are almost certain to be wrong, and the time and effort expended on developing and testing them is lost. Far better to review, analyze, and attempt to replicate a new finding before accepting it.

Scientists have developed myriad methods for review. Scientists are expected to present talks to their peers in seminars and meetings of all kinds. Most scientists welcome the opportunity to talk about their results and insights; after all, scientists who don’t talk can pass into obscurity, and their work can go unnoticed. Scientists tend to be pretty good listeners. They like to learn about what’s new even if it sometimes includes protracted periods of boredom. It’s not all sunny and serene, however. I think every practicing scientist can recall when a question from the audience opened a huge hole in the speaker’s logic or experimentation.

Beyond oral presentations, scientists, to obtain attention for their results and to be successful, have to publish their findings. Scientific journals have varying standards for review of papers submitted for publication, and scientists know that the journals with the most rigorous review are also the most prestigious.

One of the problems faced by scientists and journals is that the data that go into describing an experiment or a study can be such a bulky package that it won’t fit into a paper of any reasonable length. Some journals in economics and political science have responded to that problem by requiring that authors inform the readers about where the complete set of data is available and how to obtain it.

More informally, scientists make personal contact by phone or email to obtain additional data, or they visit each other’s laboratories. There are no rules for such requests or visits, but it’s generally understood that it’s okay to ask for data that are necessary for complete understanding of a published paper and not okay to ask for data that are still being examined before publication.

Good science requires that observations and analyses be repeatable and repeated. Given information about technique and procedure by the scientist who made the observation or analysis, other competent scientists should be able to replicate the observation or analysis. Reproducibility distinguishes science from another human activity called magic. For centuries, magicians claimed “special powers” that couldn’t be taught to others who lacked the power. Now, we know that magic is tricks, and that the tricks are necessarily kept secret so that non-magicians can’t learn them. Science, on the contrary, works best when it’s open to skepticism, review, and attempts at replication.

I am going to focus on scientific data that are used for the development of laws, rules, and regulations, risk assessments, and other government guidance documents, and I am going to divide those data into two types. The first type comes from laboratory experiments, and replication of laboratory data can be attempted in other laboratories. Most everyone can remember when, about a decade ago, cold fusion burst into the news. The hypotheses underlying cold fusion, and the explanations for how it could produce wondrous worlds of energy in an open glass beaker on a laboratory workbench at room temperature, were contradicted by much of physical theory, but cold fusion didn’t fade away because of theory. It faded away because other scientists tried and failed, and failed repeatedly, to replicate the results.

There is a similar story of laboratory mistake (or worse) that has contributed to what are likely to be billions of dollars spent on largely or completely wasted toxicity tests. In 1996, scientists from Tulane University published a paper in Science magazine, one of the most respected scientific journals in the world, with a reputation for rigorous review of papers before publication. The Tulane scientists reported that tiny amounts of pesticides, present at concentrations that are now permitted under stringent Environmental Protection Agency regulations, could interact and unleash a plethora of adverse biological events. Their report, which was leaked to EPA before it was published in Science, was instrumental in the passage of the Food Quality Protection Act of 1996 and especially important in Congress’s directing EPA to require new tests of commercial chemicals. The Tulane results attracted major press, TV, and political attention; they have had lasting impact; and they are wrong.

Competent scientists in laboratories in universities, the federal government, and industry tried and failed to replicate the Tulane results. Initially, the Tulane scientists stuck to their guns and suggested that special conditions in their laboratory that weren’t exactly replicated in the other laboratories explained the discrepancy. These “special conditions” sound a lot like the “special powers” involved in magic that I mentioned earlier, and few scientists accepted them as the explanation. About a year after the publication of their results, the Tulane scientists threw in the towel, and published a letter in Science that acknowledged that no one, not even they, had been able to replicate their original findings.

Science worked. Even though the faulty (or fraudulent) science was not caught by the reviewers for Science, the requirement that scientists describe their experiments in enough detail so that others can try to replicate them led to the debunking of the mistake. Even so, American industry remains burdened with expensive and unnecessary testing requirements that will drive up consumer costs and almost certainly reduce consumer choice.

That ends what I have to say about data from laboratories that other scientists can attempt to replicate. I am now going to turn to epidemiologic studies that examine the health of populations of people with particular exposure histories or the histories of people with specific diseases. Such studies cannot be replicated. The data are collected on a unique set of people under unique conditions over a unique time period.

In large part, we are here today because of such a study. A study done by C.A. Pope and others1 is a primary basis for EPA’s stringent air pollution regulations announced in November 1996. At the heart of the Pope study is information about a million volunteers who participated in an American Cancer Society study and supplied information about their habits, workplace and environmental exposures, and health. That data set is unique, and it cannot be replicated.

EPA’s air pollution regulations are very expensive - tens of billions of dollars a year - and some scientists question whether they will produce the health benefits claimed by EPA. Congress requested that the health data from the Pope study be made available to independent scientists, including industry scientists, for review and analysis. The scientists involved in the Pope study refused to release the data, and initially EPA backed them up. When EPA changed its mind and said the data should be made available for review, it was announced that the data really belonged to the American Cancer Society and that EPA couldn’t release them. Pope and his colleagues eventually agreed to release all their data to a committee of the Health Effects Institute, which is funded jointly by industry and EPA and which is supposed to report its analysis of the data in 2000, years after the air regulations went into effect.

The Shelby Amendment, which directed the Office of Management and Budget to establish procedures for access to federally generated data, was one upshot of the attempt to get those data. In February, OMB published a proposal for the implementation of that amendment. In May, Steve Milloy and I wrote to EPA and requested the data that went into the Pope study because the same study is the basis for the calculation of most of the benefits EPA expects from its proposed Tier 2/Gasoline Sulfur regulation. EPA replied in a letter and supplied us with data about air pollution, but stated, “We are not providing the health survey data you seek, because these data are not in the Agency’s possession…. Since the records were not produced under an EPA award, the Public Law cited as authority for your request is also not applicable.”2

As a citizen, I am very disturbed by other information in the EPA letter. “The health study data you seek are contained in a data base that is proprietary with the American Cancer Society (ACS). The EPA has never had access to this database….”3 Evidently, it’s not only critics of EPA’s regulations who have not seen the data. Not even EPA has seen them. I question whether billions of dollars in regulatory costs should be heaped on American industry, cities, and consumers on the basis of data that have not been examined by the regulatory agency.

Pope and his colleagues objected to releasing the health data because they said it would compromise the privacy of individuals in the study and make it impossible for Pope and his colleagues to do additional epidemiologic studies. That is an overblown concern.

For five years, I chaired the Department of Health and Human Services committee that advised the United States Air Force’s study of the health of the 1200 Air Force personnel who sprayed 90 percent of the Agent Orange used in Vietnam. There are few more newsworthy or politically sensitive epidemiologic studies.

It’s an immense study, involving extensive physical and psychological examinations of the 1200 men who sprayed Agent Orange and a comparison group of 1200 men who flew and serviced similar airplanes during the Vietnam War but who did not spray Agent Orange. The study began in 1982 and will end with the final examinations in 2002. The Air Force has contracted with famous and competent medical institutions such as the Lovelace Clinic in New Mexico and the Scripps Clinic in California for the conduct of the examinations, and the examination records and statistical analyses fill many data tapes and books.

In 1990 or 1991, the Air Force scientists told the advisory committee that they had received some requests for data. I remember a few minutes’ conversation about whether access to the data should be restricted in any way, but it gave way to agreement that the data should be made available to anyone who requested them. I also recall comments that taxpayers had paid for the data and were entitled to them, and that independent analyses of the data would either strengthen the conclusions that the Air Force had drawn and the committee accepted or show where mistakes had been made.

The Air Force and the advisory committee were very concerned to protect the privacy of the study participants. An office at the National Center for Health Statistics is skilled in “scrubbing” data so that personal identifiers are removed, and such identifiers were removed. Releasing data was not and is not a trivial affair, but I think that the Air Force experience demonstrates that confidentiality can be preserved.
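
What “scrubbing” involves can be illustrated in rough outline. The following is a minimal sketch in Python, assuming a simple tabular data set; the column names (participant_id, zip_code, and so on) are hypothetical, invented for illustration, and the actual procedures used by the National Center for Health Statistics are far more thorough than this.

    # A minimal sketch of de-identification ("scrubbing") for a tabular
    # health data set. Column names are hypothetical; a real disclosure
    # review involves much more than this.
    import hashlib

    import pandas as pd

    # Direct identifiers are deleted outright.
    DIRECT_IDENTIFIERS = ["name", "street_address", "phone_number"]

    def scrub(records: pd.DataFrame, salt: str) -> pd.DataFrame:
        """Remove direct identifiers and coarsen quasi-identifiers."""
        out = records.drop(columns=DIRECT_IDENTIFIERS, errors="ignore")

        # Replace each participant ID with a salted one-way hash, so a
        # participant's records can still be linked across files without
        # exposing the original identifier.
        out["participant_id"] = out["participant_id"].astype(str).map(
            lambda pid: hashlib.sha256((salt + pid).encode()).hexdigest()[:12]
        )

        # Quasi-identifiers are coarsened rather than deleted: keep only
        # the first three digits of the ZIP code, and group exact ages
        # into five-year bands.
        out["zip3"] = out.pop("zip_code").astype(str).str[:3]
        out["age_band"] = (out.pop("age") // 5) * 5

        return out

The point of the split is that direct identifiers are removed entirely, while quasi-identifiers that might be combined to re-identify someone, such as ZIP codes and ages, are coarsened so that the scrubbed file remains useful for analysis.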

My final example of the importance of access to data concerns the herbicide 2,4-D (2,4-dichlorophenoxyacetic acid), the most widely used herbicide in the country. It has been thoroughly tested for toxicity, and EPA has declared that there is no evidence to support even the possibility that it causes cancer.

But 2,4-D has been the target of epidemiologic investigations by the National Cancer Institute (NCI), and those investigations have been marred by mistakes that would never have come to light without persistent requests for data collected by NCI. In 1986, NCI published a study of Kansas farm workers that included a table that indicated that exposure to 2,4-D increased the risk for cancer, and NCI scientists concluded that 2,4-D was a likely cause of cancer. This widely reported conclusion frightened farmers and other users of 2,4-D and raised concerns among consumers who worried about eating food that was contaminated with the herbicide.

Manufacturers of 2,4-D were finally able to obtain a copy of the questionnaire used by NCI in its study. The NCI scientists had never asked a question about 2,4-D use; instead, they’d asked questions about uses of all herbicides. The origin of the mistake that transformed “herbicides” into “2,4-D” is not known, but NCI published a correction. In a subsequent study of farm workers in Iowa and Minnesota, NCI completed its study without asking about 2,4-D use. Then it went back and resurveyed study participants and their relatives about 2,4-D use. The resurvey delayed the publication of the study by two years, and when the study appeared, there was no mention of 2,4-D.

Again, industry officials requested and obtained information from NCI, and the resurvey data showed no association between 2,4-D use and increased cancer risk. NCI scientists never published those data on their own. Those data, of course, undermined the connection between 2,4-D and cancer that the NCI scientists persisted in suggesting.

Each of the NCI studies was released with great fanfare that produced a lot of press coverage about the risks from 2,4-D. The corrections that showed no evidence of risk attracted far less attention.

In 1991, NCI published a study that showed an association between cancer in dogs and the dog owners’ use of 2,4-D.4 Like the NCI studies of farmers, the dog study attracted a lot of attention, and editorials drew attention to the similarities of the cancers reported in the farmers and in the dogs.

Industry officials had some doubts about the methods of analysis used by the authors of the dog study, and they requested the underlying data from NCI. NCI stonewalled release of the data for more than 18 months. Although the dog owners’ names had already been removed from the data, NCI said that it was concerned that “industry” would use information about the dogs’ breeds and ZIP codes to track down and harass the dog owners.

Eventually, NCI released the data, and scientists at Michigan State University reanalyzed the data. Their reanalysis revealed several flaws in the NCI dog study, and when those flaws were corrected, the association between 2,4-D and cancer in dogs disappeared.5 The 2,4-D saga shows the importance of citizens having access to data to check on the work of government scientists.

Science depends on skepticism, review, criticism, and replication. Good science and good scientists thrive under those conditions.

The science used to support regulations and taxes must rest on data that are publicly available for review and analysis. Otherwise, government, simply by calling any collection of data, conclusions, and conjectures “science” and refusing to let others see the data, has a free hand to impose taxes and regulations.




NOTES:

1 Pope, C.A., M.J. Thun, M.M. Namboodiri, D.W. Dockery, J.S. Evans, F.E. Speizer, and C.W. Heath. 1995. Particulate air pollution as a predictor of mortality in a prospective study of U.S. adults. American Journal of Respiratory and Critical Care Medicine 151: 669-674.

2 Wegman, L.N., Director, Air Quality Strategies and Standards Division, U.S. Environmental Protection Agency. Letter to Steven J. Milloy, June 9, 1999.

3 Ibid.

4 Hayes, H.M., R.E. Tarone, K.P. Cantor, et al. 1991. Case-control study of canine malignant lymphoma: Positive association with dog owner’s use of 2,4-dichlorophenoxyacetic acid herbicides. Journal of the National Cancer Institute 83: 1226-1231.

5 Kaneene, J.B., and R.A. Miller. 1999. Re-analysis of 2,4-D use and the occurrence of canine malignant lymphoma. Veterinary and Human Toxicology 41: 164-170.