This week, President Trump is likely getting an earful in Paris over extricating the U.S. from the Paris climate agreement earlier this year. But our withdrawal will be meaningless unless he follows up with two important actions before he leaves office.

First, the administration must vacate the Environmental Protection Agency’s 2009 “Endangerment Finding” for carbon dioxide. Under the 2007 Supreme Court decision in Massachusetts v. EPA, that finding is what allows the agency to regulate carbon dioxide emissions under the Clean Air Act. No finding, no policy.

Second, the U.S. must pull out of the 1992 United Nations Framework Convention on Climate Change. This treaty, which was ratified by the Senate, is the document that enables subsequent emissions agreements, such as the Kyoto Protocol (not ratified) and the Paris agreement (an executive agreement).

As long as we are a party to the Framework Convention, a new president with different views on climate policy could simply sign us right back into the Paris agreement.

From the outside, and certainly from reading the headlines, undoing these two elements of climate policy might seem a tall order. After all, the climate science used to justify the EPA’s Endangerment Finding and U.S. entry into the U.N. framework is widely seen as beyond reproach.

One of the foundational documents for the Endangerment Finding is the 2009 “National Assessment” of climate change. Its next iteration, in 2014, billed itself as “the most comprehensive and authoritative report ever generated about climate change” and as “a key deliverable of President Obama’s Climate Action Plan.”

The problem is that these “assessments” rely solely upon computer climate models for their future scenarios of gloom and doom. As it turns out, climate modeling (or forecasting) isn’t necessarily climate science, because the modeler gets to choose a preferred answer and then tune the model’s internal equations to get there.
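To see what that means in practice, consider a deliberately crude sketch, written here in Python with invented numbers. It is not any real GCM (real models run to millions of lines of code, not one equation), but it illustrates the logic of tuning: the warming figure is chosen first, and a free “cloud” parameter is solved for afterward.

```python
# Toy sketch only: a one-equation "climate model" with a single free cloud
# parameter, tuned until it reproduces a warming figure chosen in advance.
# Every number here is an invented illustration, not a measured value, and
# no real GCM reduces to a formula this simple.

F_2X = 3.7           # assumed forcing from doubled CO2, W/m^2
BASE_FEEDBACK = 2.0  # assumed non-cloud feedback strength, W/m^2 per K

def warming_for_doubled_co2(cloud_param: float) -> float:
    """Equilibrium warming: dT = F_2X / (BASE_FEEDBACK - cloud_param)."""
    return F_2X / (BASE_FEEDBACK - cloud_param)

def tune_to_target(target_k: float, lo: float = 0.0, hi: float = 1.9,
                   tol: float = 1e-6) -> float:
    """Bisect on the free parameter until the model 'predicts' the target."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if warming_for_doubled_co2(mid) < target_k:
            lo = mid   # too cool: crank the cloud parameter up
        else:
            hi = mid
    return (lo + hi) / 2.0

# The answer is chosen first; the parameter is solved for afterward.
for target in (2.0, 3.0, 4.5):
    p = tune_to_target(target)
    print(f"target {target} K -> cloud parameter {p:.3f} "
          f"-> model 'predicts' {warming_for_doubled_co2(p):.2f} K")
```

Run it, and each target comes back “predicted” to within rounding. That is the point of the exercise: once a free parameter exists, any answer inside its range can be dialed in on demand.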

The forecast models are known as “general circulation models,” or GCMs, and are produced by government research groups around the world. Every six years, the U.S. Department of Energy supervises a “model intercomparison” project. For the most recent one, in 2013, 34 modeling teams each submitted a “frozen code” version of their model to be compared against the predictions of the other groups. Each frozen model is the version its team considers its best, and the code cannot be changed until the intercomparison is complete.

According to an October 2016 news story in Science magazine, the modeling team at Germany’s Max Planck Institute was finalizing its intercomparison version when the team leader, Erich Roeckner, became temporarily unavailable. As the team tested the model before submitting it, they found it now predicted twice as much warming for doubled carbon dioxide (7 degrees Celsius) as its previous iteration had. Science reported that Roeckner had a unique ability to tune the model’s cloud-formation algorithm, and in his absence the model produced warming far outside the norm. Roeckner’s team eventually got the warming down to a level within the range of the other models.

Enter Frederic Hourdin, who headed up the French modeling effort. He rounded up modelers from 13 other groups and recently published “The Art and Science of Climate Model Tuning” in the Bulletin of the American Meteorological Society. All of the climate models the world uses to create and justify things like the U.N. Framework Convention, the EPA Endangerment Finding, and the Paris agreement are “tuned” so that their forecasts fall within an “anticipated acceptable range,” to quote Hourdin. But the big question is: acceptable to whom? One of Roeckner’s senior scientists, Thorsten Mauritsen, told Science, “The model we produced with 7 degrees [Celsius] was a damn good model.” But in his opinion 7 degrees was too hot, so the model had to be tuned.

The EPA’s determination that carbon dioxide needs to be more strictly regulated is based entirely on the GCMs’ projections of future climate, in which the subjective modeler, not the objective model, determines what is “acceptable.” That’s not science; it’s an educated guess. It is akin to the “herding” phenomenon seen among election pollsters, who adjust unexpected (but possibly correct) results to look more plausible in light of others’ results and expectations.
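The pollster analogy can be made concrete with a few lines of arithmetic. In this equally toy Python sketch (all numbers invented for illustration), an outlying result of 7 degrees is blended toward a consensus near 3 degrees before it is released, which is what “herding” amounts to.

```python
# Toy sketch of "herding": an outlying result is blended toward the
# consensus before release. All numbers are invented for illustration.

peer_results = [2.9, 3.1, 3.0, 3.2, 2.8]  # hypothetical peer estimates, K
raw_result = 7.0                          # the outlier, K

def herd(raw: float, peers: list[float], weight: float = 0.85) -> float:
    """Blend a raw result toward the peer mean; weight=0 reports it as-is."""
    peer_mean = sum(peers) / len(peers)
    return (1.0 - weight) * raw + weight * peer_mean

print(f"raw result: {raw_result:.1f} K")
print(f"after herding: {herd(raw_result, peer_results):.1f} K")  # 3.6 K
```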

Documenting the tuning problem will be a considerable task. But if the Trump administration does it, it will have justification enough to vacate the Endangerment Finding, which in turn will justify getting the U.S. out of the U.N. Framework Convention on Climate Change, and out of Paris for good.