Rumors are swirling that mortgage interest deduction reform is “on the table” for the White House and House Republicans. Is that a good thing or a bad thing? In order to answer that question it is first necessary to determine whether the deduction works as intended.
In our forthcoming chapter on international homeownership in The Routledge Handbook of Housing Policy and Planning, Mark Calabria and I discuss the deduction’s impact on homeownership rates. For years, the mortgage interest deduction has been justified under the pretense that it supports homeownership.
Spoiler alert: empirical evidence suggests it does not. Most studies find the mortgage interest deduction has no impact on homeownership rates, and at least one study finds that eliminating the deduction would boost homeownership.
People generally find this counterintuitive. Why wouldn’t a subsidy for homeowners increase the amount of homes owned? A simple way of thinking about it is that some people are like Person A: they are both financially qualified and want to buy a home. People like Person A will buy a home with or without the subsidy.
On the other hand, some people are like Person B: they are not financially qualified to buy a home or do not desire homeownership. The mortgage interest deduction does not address the reasons Person B is financially unqualified (income instability, weak credit, or poor savings) or the reasons that Person B doesn’t desire homeownership (maintenance, debt, reduced geographic mobility).
A more economically sophisticated way of understanding the mortgage interest deduction is that it reduces the cost of housing for existing homeowners, but increases the price of homeownership for non‐homeowners. In other words, any positive impacts on homeownership rates are canceled out by the negative impacts.
So what does the deduction do, if it doesn’t boost homeownership? Research indicates the deduction encourages people to buy larger or more expensive homes. In other words, it motivates homeowners to over‐consume housing. Studies indicate that mortgage subsidies distort the allocation of saving and investment across OECD countries and redistribute income “from new entrants in the housing market to insiders.”
What does the path forward look like? House Republicans have a proposal for reform; Paul Ryan’s A Better Way plan suggests doubling the standard deduction and reducing use of the mortgage interest deduction, though not eliminating it. Theoretically, under his plan taxpayers would have more money in their pockets, which they could use to pay mortgage interest or anything else.
There are also other possibilities for reform, such as capping the benefit, limiting it, or phasing it out. Great Britain phased its deduction out over a twenty‐year period (1974–1994), and far from producing a housing market catastrophe, Great Britain’s homeownership rates rose during this period. Although the rise in homeownership was partly attributable to the privatization of public housing stock in the 1980s, it’s still notable that market turmoil was avoided.
Whatever shape reform ultimately takes, Republicans are smart to flag the opportunity. Since the mortgage interest deduction disproportionately benefits high‐income blue states while also redistributing to the wealthy, the politics of reform could unite some unlikely rivals. That might be just what tax reform needs.
I’ve begun working on a new book on the gold standard. In the first chapter I plan to discuss the origin of money, as a preliminary to discussing how silver and gold became the world’s dominant commodity monies.
The topic of the origin of money has become controversial in recent years. The dominant view among economists (for good reason), suggested by Adam Smith in the eighteenth century and fleshed out by Carl Menger in the nineteenth, is that money is a market-born institution. Convergence on one or two commodities as the common media of payment emerged from the actions of barterers seeking more effective trading strategies, without anyone aiming at the final result. But this view has lately been challenged by a resurgence of the “state theory of money,” also known as Cartalism, which argues that governments played an essential role in the establishment of money.
In 1979, Leslie Gelb and Richard Betts released a book on U.S. involvement in Vietnam, entitled “The Irony of Vietnam: The System Worked.” Unlike most previous treatments of the conflict, Gelb and Betts didn’t argue that the U.S. failure in Vietnam was the result of a poor foreign policymaking process. Nor did they argue that policymakers had been misinformed or misled about the conflict. They didn’t even argue that policymakers were under any illusions about how unlikely success in Vietnam was.
Instead, Gelb and Betts argued that – while the war in Vietnam itself was an abject failure for American foreign policy – the U.S. decision‐making system actually functioned as it was meant to throughout the period of increasing U.S. involvement in the war. As they describe:
“With hindsight, it seems evident that the costs of the strategy of preventing defeat were incalculable. But at the time of the crucial decisions, the costs of accepting defeat appeared to be incalculable. The system in this case coped as democracies usually do: by compromising between extreme choices, satisfying the partisans of neither extreme of opinion within the government but preventing the total alienation of either.”
As the authors show, the central question about American involvement in Vietnam wasn’t why U.S. involvement in Vietnam happened, or why policymakers chose to deepen it over time, but rather why U.S. policymakers considered it vital that Vietnam not be lost to communism in the first place.
At each key decision point, policymakers chose to do the minimum necessary to avoid a communist victory. Gelb and Betts’ basic argument is simple: the U.S. decision‐making system was uniquely suited to fiddling with tactical choices while enabling policymakers to avoid hard strategic questions.
Stop me if this sounds familiar.
Donald Trump’s speech on Monday night carried strong echoes of Gelb and Betts’ work, as he recommitted the United States to an open‐ended, ill‐defined military mission in Afghanistan.
Much has been made of Trump’s flip‐flop on Afghanistan policy, shifting from his campaign rhetoric – which promised that the United States would be getting out of the nation‐building business – to a policy scarcely different from that pursued by his two predecessors. But it makes sense in the context of Gelb and Betts’ Vietnam argument.
Trump’s advisors – the ‘generals’ he is so proud of – were able to convince him that all the other Afghanistan options were worse. Even though a continued U.S. commitment in Afghanistan is unlikely to produce success, he agreed to an approach that largely aims to prevent losses.
Just as policymakers did in Vietnam, Trump is fiddling with tactics without asking the broader strategic questions. To be precise: Is it actually a key U.S. interest today to stabilize Afghanistan and prevent further Taliban gains?
Certainly, it would be better for everyone if Afghanistan were stable, prosperous and democratic. But it is substantially harder to argue that it is a core U.S. interest. The key arguments in support of this proposition – laid out once again in Monday night’s speech – are questionable.
Indeed, the idea that an Afghanistan without U.S. military presence will result in future terror attacks is so misleading that scholars have described it as the ‘safe haven myth.’ Terror groups operating out of ‘safe havens’ have been responsible for only 1% of the terrorist attacks on the United States; 9/11 is an extreme outlier.
Others focus on past U.S. commitments, arguing that we’ve spent too many lives and too much effort to withdraw now. Yet as behavioral economists would note, this is a sunk‐cost fallacy, biasing policymakers to continue existing commitments lest the previous efforts be ‘lost.’
Rather than question how to avoid losing in Afghanistan, policymakers should compare the potential costs of losing to the costs of continuing our commitment.
Unfortunately, while Gelb and Betts’ arguments can help us understand how even good policymaking institutions can result in poor foreign policy outcomes, they offer no real solution for how to short‐circuit this process.
In the case of Vietnam, popular discontent with the war ultimately made it so costly for policymakers that they were forced to reconsider their options. In the case of Afghanistan, where an all‐volunteer force has replaced a popular draft, the 16‐year war is largely invisible to public opinion.
As a result, America’s forever war looks set to continue for a long time.
Ripping into President Trump on “Morning Joe” last week, historian Jon Meacham invoked FDR: “the presidency is preeminently a place of moral leadership.” When I heard that, I thought, man: if that’s right, then we are well and truly…er, doomed. But is it right?
A lot of people seem to think so, including some conservatives. In his Washington Post column this week, Michael Gerson laments “the sad effects of President Trump’s renunciation of moral leadership on American politics and culture.” And in a piece titled “Conservatives Need to Remember, Presidents Affect Culture,” National Review’s David French writes that “In 1998, Bill Clinton damaged the culture for the sake of preserving his political hide”; now, “a GOP president is inflicting even deeper wounds.” Instead of minimizing President Trump’s morally blinkered response to the murder at the neo‐Nazi rally in Charlottesville, French argues, “it’s time for conservatives to remember the cultural power of the presidency” and recapture the moral clarity they had in the Clinton years.
Back then, French argues, conservatives were right to insist that “adultery and perjury matter.” Fair enough: the latter may even have been reasonable grounds for impeachment, though there were better ones, as the great Nat Hentoff noted at the time. Still, where’s the evidence that Clinton’s transgressions “damaged the culture”?
Remember Bill Bennett’s Index of Leading Cultural Indicators? First published in 1994, it armed conservative Cassandras with charts showing the world going to hell on multiple metrics: marriage, sexual license, abortion, drug use, crime, etc. Then a funny thing happened on the way to the Oughties: in the era of Monica Lewinsky and Gennifer Flowers, most of those trends started heading in the right direction.
Toward the end of the ‘90s, when the second edition of the Index was being readied, declinism had become a much tougher sell, leaving Bennett harrumphing that “conservatives are going to have to face the fact that there is some good news on the landscape.”
Declining divorce rates, lower birth rates for unmarried women, and decreased tolerance for adultery were just a few of the trends complicating the narrative of Clinton‐induced cultural rot. Oddly enough, Bill’s extracurricular hijinks didn’t lead Americans to take their marriages less seriously. The number of Americans who believe that “marital infidelity is always wrong” actually went up.
The taboo against adultery, as measured by the answer to that question, began weakening again around 2008. Whatever drove that trend, it couldn’t have been the examples set by George W. Bush and Barack Obama. They were lousy presidents, but worthy enough role models as far as marriage goes. Maybe the “cultural power of the presidency”—for good and for ill—isn’t as vast as French imagines.
Last night President Trump informed the nation that he is escalating America’s war in Afghanistan. That means that our longest war will continue for at least four more years, and likely longer. It also means that more Americans will be sent across the globe to fight – and die – in the pursuit of unclear objectives, and in a conflict that is not vital to U.S. national security.
But Trump assured Americans that he had the strategy for “winning.” While specifics about the new strategy are sketchy, it seems to be more of the same, and more of the same will not improve reality in Afghanistan; it may, in fact, make things worse. At this point, one could be forgiven for seeing America’s efforts in Afghanistan as a sign of insanity: doing the same thing while expecting different results.
Cato’s Trevor Thrall and Erik Goepner note that Trump’s strategy is “only a slightly more muscular version of the policy he inherited from Obama. …[but] remains a much less forceful version of Obama’s surge.”
They also point out that Trump’s rhetoric, “like that of previous administrations, makes it sound as though this is America’s war to win or lose. …However, the U.S. has very little control over how the Afghan government will govern or how the Afghan security forces will fight. America, therefore, has little power to affect the outcome of Afghanistan’s civil war.”
In fairness, it’s difficult to envision a strategy that would. But that is an argument for ending America’s involvement in Afghanistan’s civil war, not for more of the same. Trump chose the latter, in part, because it is a politically easier decision than withdrawal.
Christopher Preble notes that:
Few presidents are criticized for using military force. More often, they are hit for not intervening often enough. Or trying hard enough. Or long enough. Withdrawal without victory is a particularly odious sin.
Therefore, when Donald Trump was presented with an opportunity to redirect U.S. attention and resources, he ignored both the reasonable and well‐considered suggestions to withdraw and the foolish and quixotic proposals. Instead, he chose to kick the can down the road.
Cato scholars had much more to say following the president’s primetime address to the nation, including several articles detailing the many false assumptions that undergird Trump’s rationale for escalating the war. They also addressed the false promises the president made to the nation.
You can read the articles in full by clicking on the links below:
Trump Goes from Afghanistan War Skeptic to True Believer by Christopher Preble
The Slim Chances That President Trump’s Afghanistan Policy Will Succeed: Let’s Look Honestly at Recent History by Trevor Thrall and Erik Goepner
‘New Strategy,’ Same Results by Trevor Thrall and Erik Goepner
Afghanistan Is President Donald Trump’s War Now: Fighting Without Purpose Or End by Doug Bandow
What links the solar eclipse, Brexit, female labor market decisions and the border adjustment tax?
Discussion of all these issues seems to confuse or conflate measured economic activity (GDP) with general economic welfare.
Take the eclipse first. On 18th August, NBC News ran a story headlined “Solar Eclipse Will Cost America Almost $700 Million in Lost Productivity.” For starters, this is a bizarre headline. Productivity is usually measured as output per worker‐hour, so it is unclear why taking a break to watch the sun disappear would affect the amount you produce in the hours you do work.
Most likely the authors really meant total output, or GDP. Even then it is not clear that this need be true. Workers may have become more productive before or afterwards to compensate for their “time off.” They may also have worked longer in other periods to compensate for the break.
But let’s suppose workers all really did take the time off without any compensating effort or hours elsewhere, and as a result total output fell across the US. Would this matter and would it be “bad for the economy”?
The point of free markets is that they allow us as individuals and groups to make decisions to fulfill our own objectives. Clearly, many of us make decisions not to maximize our lifetime productivity or measured output, and thus do not maximize GDP. Think of the lawyer who decides to change career and become a teacher, or those with marketable skills who decide to use their time for voluntary work or family life.
If in the process of making free decisions we decide to dedicate more time to activities that reflect our own preferences, then we are maximizing our own welfare. Though the measured economy may be marginally smaller as a result of the eclipse, the fact that many individuals decided it was worth their while to take time out to view it suggests their lives were enriched by the experience.
The nature of free choice also gets downplayed in the debate about female labor force participation. Often we hear how much of an economic boost there would be if more women worked, or female productivity improved, and certainly this might boost measured GDP and even improve the public finances. But if decisions to spend a period out of the labor force, or to pursue an occupation that leaves more time for family‐related activities, reflect personal preferences (for either sex), then allowing these free decisions raises economic welfare. It doesn’t “hurt the economy.”
Another example of this arose in the debate about the proposed border‐adjustment tax. A new paper by Seth Benzell, Laurence Kotlikoff and Guillermo LaGarda modeled the effects of the Republican “Better Way” plan on taxation, and found a significant overall boost to the US capital stock, wages and GDP. Yet by 2100 they found that GDP would actually be lower. Bad news for the US economy, right? No, because (as the authors argue) this would be a reflection of higher wages leading to more people deciding to spend time on leisure. In economic welfare terms, they would still be better off.
Similar reasoning applies in other areas. Several UK economists criticized a report by the Economists for Free Trade group last week for adding the savings from not having to make net contributions to the EU budget to other growth benefits to obtain a total GDP boost. This critique was right as a matter of arithmetic – after all, a £ saving to taxpayers is not the same as the overall effect of that saving on the economy. But undoubtedly a reduction in the burden on UK taxpayers would increase their economic welfare, because they would be free to make more individual decisions about how to spend their money.
All this is not to say that GDP is not, in many cases, a good overall proxy for welfare. Bad policies can reduce both GDP and overall economic welfare. Of course, GDP has other well‐known limitations too: it includes the production and consumption of goods with negative externalities, leaves out illegal or domestic activity, and often cannot keep up with improvements in the quality of goods.
What the above examples show, however, is that we should be particularly wary when people start talking about how people’s free decisions “hurt the economy.” We value leisure and the ability to fulfill our own desires too.
The past couple years have been rough for free traders in the United States – not only did an avowed protectionist win the White House for the first time in decades, but he did so while claiming that protectionism has been an effective policy throughout the nation’s history. Even worse, President Trump and his advisers were far from alone in perpetuating and repeating the view that past restrictions on foreign competition were unequivocally successful in achieving their stated policy objectives: decreased imports, increased jobs, industrial revival, opened foreign markets, and American economic prosperity more broadly.
For us trade policy wonks, Trump’s revisionist history – and the lack of a vocal response to it from his political opponents, pundits, and journalists – was both surprising and depressing because it seemingly ignored a vast repository of academic analyses of and contemporaneous reporting on the periods and policies in question. Then again, one could hardly blame most laypeople for not digging into the academic work, because at the time there really wasn’t an easily searchable survey of the many failures of American trade protectionism. Early this year, I set out to change that, and the result is a new paper for Cato out today. In it, I show that the actual scholarship demonstrates how American protectionism—even in the periods most often cited as “successes”—not only imposed immense economic costs on American consumers and the broader economy but also failed to achieve its primary policy aims and fostered political dysfunction along the way.
The paper surveys academic literature from three periods of American history, demarcated by milestones in the evolution of the U.S. and multilateral trading system: (1) from the founding to the U.S. entry into the General Agreement on Tariffs and Trade (GATT) in 1947; (2) from the GATT’s early years to the creation of its successor, the World Trade Organization (WTO), in 1995; and (3) the current WTO era. These surveys show that, contrary to the fashionable rhetoric, American protectionism has repeatedly failed as an economic strategy:
- Pre‐GATT. U.S. history from the Founding through the early 20th century shows protectionism decreasing in effectiveness and increasing in costs for consumers and the economy more broadly. Multiple academic studies of the period between the Civil War and the Great Depression—often argued to be a golden era of American tariffs and industrial prosperity—show protectionism to have inhibited, rather than encouraged, industrial and broader economic growth. Instead, other economic factors—particularly rapid population expansion—drove American growth during this era. The protectionism of this era is also shown to have fostered modern American lobbying and rent seeking and, as a result, to have been closely associated with political corruption. Overall, however, pre–20th century U.S. trade policy provides few real economic lessons for modern policymakers because of the stark social, legal, and economic differences between that period and today.
- From GATT to WTO. The findings from numerous studies of protectionist measures during the GATT period of general trade liberalization are unequivocal: U.S. protectionism not only produced far higher total economic costs than benefits, but also more often than not failed even to achieve its intended objective, whether that be the rejuvenation of an ailing American industry and its workforce or the opening of new U.S. export markets. In particular, these studies show the high economic costs of U.S. protectionism. For example, studies of specific U.S. import restrictions between 1950 and 1990 found that the measures cost U.S. consumers an average of $620,000 in current dollars per job supposedly saved in the protected industry at issue. By contrast, at the current hourly U.S. manufacturing wage of $20.69, a typical factory worker makes a little over $41,000 per year.
Studies also found that protectionist measures failed in most cases to prevent further increases in imports or declines in U.S. jobs, finding only one instance—the bicycle industry—in which protectionist measures apparently resuscitated the industry in question. One analysis found that threats of retaliation through section 301 of U.S. trade law, again in the news last week, failed to achieve even partial success more than half the time, with actual retaliation working less than 20 percent of the time. Even the most heralded examples of American protectionist successes during this era—motorcycle safeguards that supposedly saved Harley‐Davidson and the U.S.-Japan Semiconductor Trade Agreement—have been revealed to have imposed immense costs on U.S. consumers and companies for very little, if any, actual gains.
These outcomes would likely be worse if similar policies were implemented today, owing to increased American integration into the global economy, the proliferation of global supply chains, the rise of other economic powers, and the creation of the WTO. Thus, protectionism today would yield even more pain for even less gain.
- WTO. Following the advent of the World Trade Organization in 1995, American unilateral protectionism retreated—relegated to relatively few trade barriers on politically sensitive goods and services dating back decades and to narrow administrative actions under U.S. “trade remedy” laws, which govern antidumping, countervailing duties, and safeguards. The results of this protectionism, however, were no better than in the previous eras and arguably worse, given U.S. participation in the WTO and further integration of the U.S. and global economies. Both created tangible ramifications (i.e., new prospects of retaliation and greater harms to import‐dependent U.S. companies) that did not previously exist.
Macroeconomic studies continue to show that U.S. protectionism imposes significant harms on American consumers and the broader economy. Examinations of trade remedies in specific sectors—steel, high‐tech goods, lumber, paper, and tires—show massive consumer costs and the failure to revive the companies seeking protection. U.S. antidumping law has repeatedly been found not only to hurt U.S. consumers and many large American exporters, but also to only rarely improve the state of the protected industry. Instead, what often lies in the wake of protection is bankruptcy for the very firms that lobbied for protection. Other nontariff barriers, such as those on meat labeling, sugar, and maritime shipping, have proven no better and, in many cases, have led to foreign retaliation or the threat thereof.
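The cost-per-job arithmetic cited in the GATT-era studies above can be sketched in a few lines. This is a back-of-the-envelope check, and the 2,000-hour work year (40 hours × 50 weeks) is an assumed convention, not a figure from the studies themselves:

```python
# Rough comparison of the consumer cost per job "saved" by protection
# versus what the protected job actually pays.
HOURLY_WAGE = 20.69            # current U.S. manufacturing wage, $/hour (from the text)
HOURS_PER_YEAR = 40 * 50       # assumed 2,000-hour work year
COST_PER_JOB_SAVED = 620_000   # avg. consumer cost per job saved, current $ (from the text)

annual_wage = HOURLY_WAGE * HOURS_PER_YEAR
ratio = COST_PER_JOB_SAVED / annual_wage

print(f"Annual factory wage: ${annual_wage:,.0f}")        # ~$41,380
print(f"Consumers paid roughly {ratio:.0f}x that wage per job saved")
```

Under these assumptions, consumers paid on the order of fifteen times a protected worker's annual wage for each job the restrictions supposedly preserved, which is the gap the studies emphasize.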
In recent years, academic work and political commentary have focused on whether the “free trade consensus” view in America may have underestimated the disruptions to the U.S. economy caused by heightened import competition. This discussion, while worthwhile, has spawned troubling suggestions from scholars, pundits, and politicians that the U.S. government should be more willing to experiment again with protectionism to help American workers and the economy, particularly the manufacturing sector. My paper should help disabuse them of such ideas.
With little doubt, the United States has struggled in recent years to adapt to significant economic disruptions, whether due to trade, automation, innovation, or changing consumer tastes. How we should respond to these challenges warrants discussion and consideration of various policy ideas. What should not be up for debate, however, is whether protectionism would help to solve the country’s current problems. History is replete with examples of the failure of American protectionism; unless our policymakers quickly relearn this history, we may be doomed to repeat it.
The paper — “Doomed to Repeat It: The Long History of America’s Protectionist Failures” — is available for download here.
The views expressed herein are those of Scott Lincicome alone and do not necessarily reflect the views of his employers.