Risk assessments begin with the reasonable assumption that at some dose under some circumstances, a chemical will cause an adverse biological effect of one kind or another. We can call that assumption a hypothesis, and we can test it in animals. The result of the test is an observation that we can use to formulate a hypothesis about the likely effect of that chemical on humans.
So we have a risk assessment, which is a hypothesis, but we have no way to test it. As a policy decision, EPA is interested in cancer risks that are thousands of times below the level of detection by epidemiology. For non-cancer effects, EPA is interested in risks at doses 100 or 1,000 times below the lowest dose with an adverse effect in animals. Those risks can't be directly tested either.
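The claim that such risks sit below the level of detection can be checked with back-of-the-envelope arithmetic. The sketch below uses a standard two-proportion sample-size approximation; the background cancer rate and the one-in-a-million excess risk are illustrative assumptions, not figures from any particular EPA assessment.

```python
from math import sqrt, ceil

# Illustrative numbers only: an assumed background lifetime cancer
# probability and a hypothetical regulated excess risk of 1 in a million.
p0 = 0.40
excess = 1e-6
p1 = p0 + excess

# Approximate per-group sample size for a two-proportion comparison
# (alpha = 0.05 two-sided, power = 0.80); 1.96 and 0.84 are the usual
# standard-normal quantiles.
z_alpha, z_beta = 1.96, 0.84
pbar = (p0 + p1) / 2
n = ((z_alpha * sqrt(2 * pbar * (1 - pbar))
      + z_beta * sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2) / (p1 - p0) ** 2

print(f"Subjects needed per group: {ceil(n):,}")
```

Under these assumptions the required cohort runs into the trillions of subjects per group, which is the sense in which a one-in-a-million excess risk lies far below anything epidemiology can detect.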
Whatever they are called, policy choices or science policy choices or assumptions, they are necessary for risk assessment, which, to use Alvin Weinberg’s term, is a “trans‐science” activity. Trans‐science refers to questions that can be asked scientifically, but that cannot be answered by science. For instance, the question, “How many people die from environmental exposures to chloroform?” is a question about causation, but there is no way to measure the number of people who die from environmental exposures to chloroform each year, if there are any.
The impossibility of directly testing the risk does not mean that the hypotheses are useless. People interested in using risk assessments for public policy could devise tests and experiments to determine mechanisms of action, understand the biochemistry of those reactions, and factor them into the risk assessment process. Keeping in mind the importance of the null hypothesis in science, they could evaluate scientific information and apply it to the assessment of human risks.
Well, that’s fine, but what do we do until we have detailed information about biochemistry, molecular biology, and toxicology? When faced with that problem nearly 30 years ago, the fledgling EPA developed a series of assumptions to guide its risk assessments. That was understandable at the time because we didn’t know very much then. It is not understandable now.
So far as I know, the United States is unique among all the countries of the world in assuming that the risks from essentially all carcinogens can be modeled with a linear no-threshold model. As a result, EPA worries about exposures to dioxin that are 160 to 1,600 times lower than the threshold for worry in other countries. Arguably, we're safer. I haven't heard, however, of Canadians pouring across the border seeking refuge from government-permitted higher exposures.
Looking at its predictions makes clear that the linear no-threshold model produces estimates that can never be falsified. In its regulations of carcinogens, EPA focuses on an upper confidence limit on risk rather than the best estimate. If it were possible to measure the predicted risk, and if the risk were found to be smaller than the upper confidence limit, the measurement would not falsify the estimate. After all, the risk could be much smaller and still fall within the confidence limits. In fact, it could be zero and fall within the confidence limits. Is that the best that can be done? After decades of risk assessment research?
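The unfalsifiability argument can be made concrete with a few lines of arithmetic. In the sketch below, the slope factor and dose are hypothetical numbers chosen only to illustrate the logic: once regulation rests on an upper bound, every measured risk from zero up to that bound is "consistent" with the estimate.

```python
# Minimal sketch of linear no-threshold practice: risk is taken as an
# upper-bound slope factor times dose. Both numbers here are hypothetical.
q1_star = 1.5e-1   # assumed upper-bound slope factor, (mg/kg-day)^-1
dose = 1e-5        # assumed lifetime average daily dose, mg/kg-day

upper_bound_risk = q1_star * dose   # the number that drives regulation

# Any true risk between zero and the bound is consistent with the claim,
# so no measurement below the bound can falsify it.
for true_risk in (0.0, upper_bound_risk / 1000, upper_bound_risk / 2):
    consistent = 0.0 <= true_risk <= upper_bound_risk
    print(f"observed risk {true_risk:.2e}: consistent -> {consistent}")
```

The point is not the particular numbers but the asymmetry: a measurement above the bound could refute it, yet the bound is set precisely so that no plausible measurement lands there.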
For years, EPA clung to the assertion that the linear, no‐threshold model applied to all mutagenic carcinogens and that many chemicals, even if they were not positive in a standard mutagenicity test, might be mutagenic under some unspecified conditions. The weight of scientific knowledge about mutagenic activities has forced EPA to concede that some carcinogens are not mutagenic.
Dioxin is one such chemical. That information has not forced EPA to abandon its linear, no‐threshold model. On the contrary, in its 1994 “Dioxin Reassessment,” EPA made reference to the idea that every organism is exposed to a plethora of carcinogens and that any addition to that exposure will add to risk. Based, in part, on that idea, EPA fell back on its linear, no‐threshold default model to estimate the cancer risk from dioxin. The EPA’s Science Advisory Board roundly criticized that estimate, referring to the great amount of information about the biochemical effects of dioxin that EPA ignored, and rejected EPA’s cancer risk estimate.
Soon after that rejection, scientists who supported EPA's risk characterization had a letter published in Science that said the SAB had been generally supportive of EPA's efforts and that the reassessment required only a little more "ripening." Well, the SAB meeting about dioxin was three years ago today, and no re-draft of the Dioxin Reassessment has appeared. After three years, most things are described in terms of rot, not ripeness.
EPA proudly announced that the 1994 Dioxin Reassessment was a move away from myopic concentration on the carcinogenicity of dioxin to a broader consideration of the other toxicities associated with the chemical. EPA described a number of experiments that investigated dioxin’s immunotoxicity, and it focused on a study that reported immunotoxicity in juvenile marmosets that were exposed to dioxin levels not unlike the levels that a few humans had experienced in workplace accidents. The scientist who did the marmoset study was so surprised (and, I expect, worried) by the results that he repeated the measurements in men who had been highly exposed. There was no effect in men.
Food and Drug Administration scientists who reviewed the Dioxin Reassessment asked EPA scientists why the positive marmoset results were featured in it and the negative human results ignored. The answer was, reportedly, slow in coming, but EPA’s reason was that dioxin might be more immunotoxic in juveniles, and the marmoset data might be more important for risk assessments. Some people would agree with that logic; some wouldn’t. Fair enough, but both experiments should have been discussed, and EPA should have presented its reasons for its focus on the marmoset data. Data selection is not good science.
Summing up its analysis of the non‐cancer toxicities, EPA said,