
A Wall Street Journal Column Understates the Size of U.S. Manufacturing

The Wall Street Journal’s December 1 “Ahead of the Tape” column, by Kelly Evans, says “manufacturing is a relatively small part of the economy; it employs about 9% of the work force and accounts for about the same percentage of GDP.” Actually, manufacturing accounts for about 12 percent of nominal GDP. But that, too, is misleading.

Chicago Fed economist William Strauss explains why neither U.S. manufacturing’s share of employment nor its share of GDP captures the actual strength of manufacturing:

Between 1950 and 2007 (prior to the severe recession), manufacturing output was just over 600% higher while over the same period growth in real GDP of the U.S. was only a slightly lesser 560%. Yet, the manufacturing share of GDP declined markedly over this period as measured in current dollar value of output. In 1950, the manufacturing share of the U.S. economy amounted to 27% of nominal GDP, but by 2007 it had fallen to 12.1%. How did a sector that experienced growth at a faster pace than the overall economy become a smaller part of the overall economy? The answer again is productivity growth. The greater efficiency of the manufacturing sector afforded either a slower price increase or an outright decline in the prices of this sector’s goods. As one example, inflation (as measured by the Consumer Price Index) averaged 3.7% between 1980 and 2009, while at the same time the rise in prices for new vehicles averaged 1.7%. So while the number (and quality) of manufactured goods had been rising over time, their relative value compared with the output of other sectors did not keep pace. This allowed manufactured goods to be less costly to consumers and led to the manufacturing sector’s declining share of GDP.
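Strauss’s point is ultimately arithmetic: a sector whose real output grows faster than the economy’s can still shrink as a share of nominal GDP if its relative prices fall fast enough. The sketch below illustrates the mechanism with stylized growth and inflation rates loosely echoing the figures quoted above; it is an illustration of the mechanism, not a reconstruction of the actual data.

```python
# Stylized illustration of Strauss's arithmetic: faster real growth plus
# slower price growth can still mean a falling share of *nominal* GDP.
# All rates below are illustrative, chosen to echo the quoted figures.

years = 57  # 1950-2007

sector_real_growth = 0.035   # manufacturing real output, ~600% cumulative
gdp_real_growth = 0.034      # real GDP, ~560% cumulative
sector_inflation = 0.017     # cf. new-vehicle prices averaging 1.7%/yr
overall_inflation = 0.037    # cf. CPI averaging 3.7%/yr

share_1950 = 0.27  # manufacturing share of nominal GDP in 1950

# Nominal growth compounds real growth and price growth; the share
# evolves as the ratio of the two nominal growth factors over the period.
sector_factor = (1 + sector_real_growth) * (1 + sector_inflation)
gdp_factor = (1 + gdp_real_growth) * (1 + overall_inflation)
share_2007 = share_1950 * (sector_factor / gdp_factor) ** years

print(f"stylized 2007 share of nominal GDP: {share_2007:.1%}")
# Despite faster real growth, the nominal share falls to roughly a third
# of its 1950 level, the same direction as the actual 27%-to-12.1% decline.
```

The point generalizes: any sector combining above-average productivity growth with below-average price growth will display exactly this pattern of a shrinking nominal share.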

Those who imagine “we don’t make anything anymore,” as Donald Trump claims, don’t grasp the magnitude of America’s industrial productivity gains.

In reality, the U.S. is by far the world’s largest manufacturer, with China trailing by 22 percent according to U.N. data for 2008, and arguably by much more when the U.S. economy is not in recession.

The GM ‘Turnaround’ in Bastiat’s View

GM’s long-rumored initial public stock offering will take place Thursday, and the self-anointed savior of the U.S. auto industry, Steven Rattner, is pretty bullish about the prospect of investors turning out in droves.

I’ve been saying for a while that I thought the government’s exposure [euphemism for taxpayer losses] in the auto bailout was in the $10-billion to $20-billion range.

But since investor interest has pushed the initial price up from the $26-to-$29 per share range to the $32-$33 range, Rattner now believes:

[T]his exposure is in the single-digit billion range, and arguably potentially better.

I won’t argue with Rattner’s numbers.  After all, they affirm one of my many criticisms of the bailout: that taxpayers would never recoup the value of their “investment.”  My bigger problem is with Rattner’s cavalier disregard for the other enduring—and arguably more significant—costs of the auto bailouts.

Rattner is like the foil in Frédéric Bastiat’s excellent, but not-famous-enough, 1850 parable, That Which Is Seen, and That Which Is Not Seen. Rattner touts what is seen, namely that GM and Chrysler still exist. And they exist because of his and his colleagues’ commitment to a plan to ensure their survival, along with the hundreds of thousands (if not millions, as some “estimates” had it) of jobs that would have been lost had those companies vanished. (For starters, I very much question even what is seen here. I am skeptical of the counterfactual that GM and Chrysler would have disappeared and that there would have been significantly more job loss in the industry than there actually was during the recession and restructuring. But I’ll grant his view of what is seen because, frankly, the specifics are irrelevant in the final analysis.)

For what is seen, Rattner admirably admits of a cost.  And that cost is not insignificant.  It is anywhere from $65 billion to $82 billion (the range of the cost of the bailout) minus what is being paid back and what investors are willing to pay for GM shares—in the “single-digit billion range,” as Rattner says.  But Rattner is willing to stand by that trade-off, claiming his efforts and the billions in “government exposure” were a small price to pay for saving the U.S. auto industry, as it were.  It’s merely a difference in philosophy or compassion that animates bailout critics, according to this position.

No.  Not so fast.  All along (quite contemptuously in this op-ed, which I criticized here) Rattner has been unwilling to acknowledge the costs that are unseen.  Those unseen costs include:

  • the added uncertainty that pervades the private sector and assigns higher risks and thus higher costs to investing and hiring (whom might government favor or punish next?);
  • the diversion of resources from productive to political purposes in the business community (instead of buying that machinery to churn out better or more lawn mower engines, better to hire lobbyists to keep Washington apprised of how important we are or how this or that policy might be beneficial to the national employment picture!);
  • excessive risk-taking and other uneconomic behavior that falls under the rubric of moral hazard from entities that might consider themselves too-big-to-fail (perhaps, even, the New GM!);
  • growing aversion to—and rising cost of—corporate debt (don’t forget what happened to Chrysler’s “preferred” bondholders in the bankruptcy process!);
  • the sales and market share that should have gone to Ford or Honda or VW as part of the evolutionary market process;
  • the fruitful R&D expenditures of those more disciplined companies;
  • the expansion of job opportunities at those companies and their suppliers;
  • productivity gains passed on to workers in the form of higher wages or to consumers as lower prices;
  • the diminution of the credibility needed to discourage foreign governments from meddling in markets, often to the detriment of U.S. enterprises.

The list goes on.

Yet Rattner, seemingly oblivious to the fact that the economy remains stuck in the mire, speaks triumphantly of the successful auto bailout. But nobody ever doubted that taxpayer resources in the hands of policymakers willing to push the bounds of legality could “rescue” GM from a fate it deserved. The concern was that policymakers would do just that, leaving behind wreckage to our institutions that was not immediately discernible. But anemic economic activity, 9.6 percent unemployment, and a private sector unwilling to invest are pretty darn discernible at this point.

Rattner should take off the tails, put down the champagne flute, and acknowledge what was originally unseen.

The USPS’s ‘Automation Refugees’

Jim O’Brien, a vice president at Time Inc. and chairman of the Mailers Council, recently guest-blogged on the U.S. Postal Service inspector general’s website on the subject of “automation refugees.”

O’Brien explains the origin of the term:

Back in 1990, Halstein Stralberg coined the term “automation refugees” to describe Postal Service mail processing employees who were assigned to manual operations when automation eliminated the work they had been doing. Since the Postal Service couldn’t lay off these employees, they had to be given something to do, and manual processing seemed to have an inexhaustible capacity to absorb employees by the simple expedient of reducing its productivity. The result was a sharp decline in mail processing productivity and a sharp increase in mail processing costs for Periodicals class. Periodicals class cost coverage has declined steadily since that time.

O’Brien then tells of visiting seventeen mail processing facilities as part of a Joint Mail Processing Task Force in 1998. During those visits he noted that the periodical sorting machines always happened to be down even though the machines were supposed to be operating seventeen hours a day. Although the machines weren’t working, manual operations were always up and running.

A decade later, O’Brien points out that the situation apparently hasn’t changed:

More Periodicals mail is manually processed than ever, and manual productivity continues to decline. Periodicals Class now only covers 75% of its costs. How can this dismal pattern of declining productivity and rising costs continue more than two decades after it was first identified, especially when the Postal Service has invested millions of dollars in flats automation equipment?

O’Brien probably answered this question when he noted that the USPS couldn’t lay off these automation refugees back in 1990.

As I’ve discussed before, the USPS has a major union problem. A new Government Accountability Office report cites as a problem the fact that most postal employees are protected by “no-layoff” provisions. The USPS must also let go lower-cost part-time and temporary employees before it can lay off a full-time worker not covered by a no-layoff provision.

Unfortunately, recent comments from members of the House Oversight and Government Reform Committee showed an unjustified concern for how potential reforms would affect postal employees. Labor isn’t the only problem facing the USPS, but Congress needs to understand that the postal service’s expensive unionized workforce is a crippling burden.

Six Reasons to Downsize the Federal Government

1. Additional federal spending transfers resources from the more productive private sector to the less productive public sector of the economy. The bulk of federal spending goes toward subsidies and benefit payments, which generally do not enhance economic productivity. With lower productivity, average American incomes will fall.

2. As federal spending rises, it creates pressure to raise taxes now and in the future. Higher taxes reduce incentives for productive activities such as working, saving, investing, and starting businesses. Higher taxes also increase incentives to engage in unproductive activities such as tax avoidance.

3. Much federal spending is wasteful and many federal programs are mismanaged. Cost overruns, fraud and abuse, and other bureaucratic failures are endemic in many agencies. It’s true that failures also occur in the private sector, but they are weeded out by competition, bankruptcy, and other market forces. We need to similarly weed out government failures.

4. Federal programs often benefit special interest groups while harming the broader interests of the general public. How is that possible in a democracy? The answer is that logrolling or horse-trading in Congress allows programs to be enacted even though they are only favored by minorities of legislators and voters. One solution is to impose a legal or constitutional cap on the overall federal budget to force politicians to make spending trade-offs.

5. Many federal programs cause active damage to society, in addition to the damage caused by the higher taxes needed to fund them. Programs usually distort markets and they sometimes cause social and environmental damage. Some examples are housing subsidies that helped to cause the financial crisis, welfare programs that have created dependency, and farm subsidies that have harmed the environment.

6. The expansion of the federal government in recent decades runs counter to the American tradition of federalism. Federal functions should be “few and defined” in James Madison’s words, with most government activities left to the states. The explosion in federal aid to the states since the 1960s has strangled diversity and innovation in state governments because aid has been accompanied by a mass of one-size-fits-all regulations.

For more, see DownsizingGovernment.org.

Unions, Productivity, and the 2010 Economic Report of the President

I’ve become a fan over the years of the annual Economic Report of the President, released around this time each year by the Council of Economic Advisers. The more than 100 tables in the back of the book provide an invaluable picture of the economy over many decades, covering all the major indicators from output and employment to interest rates and trade. Each report also contains chapters explaining the economic thinking behind administration policies.

Chapter 10 of the latest report focuses on “Fostering Productivity Growth through Innovation and Trade.” For critics of trade, it offers sound economic reasons why trade raises U.S. productivity and, thus, over the long run, U.S. living standards.

One of the ways trade promotes growth is through “Firm Productivity.” Economists have come to appreciate that firms within an industry will differ in their productivity. Those that are more productive will tend to grow and prosper in larger and more competitive global markets. As a result,

when a country opens to trade, more productive firms grow relative to less productive firms, thus shifting labor and other resources to the better organized firms and increasing overall productivity. Even if workers do not switch industries, they move from firms that are either poorly managed or that use less advanced technology and production processes toward the more productive firms.

The report doesn’t mention this, but one reason why firms differ in their productivity is unionization. As I spell out in an “Economic Watch” column in today’s Washington Times, and explore in more detail in the latest Cato Journal, unionized firms tend to lose market share to non-unionized firms:

The weight of evidence indicates that, for most firms in most sectors, unionization leaves companies less able to compete successfully. The core problem is that unions cause compensation to rise faster than productivity, eroding profits while at the same time reducing the ability of firms to remain price-competitive. The result over time is that unionized firms have tended to lose market share to non-unionized firms, in domestic as well as international markets.

Compared to equivalent non-unionized competitors, unionized firms are associated with lower profits, less investment in physical capital, and less spending on research and development. By exposing an industry (say, automobiles) to more vigorous international competition, trade accelerates the shift from less competitive unionized firms to more competitive non-unionized firms.

Economists serving a Democratic administration would be understandably reluctant to say such a thing explicitly, but it is certainly there between the lines in Chapter 10 of the new Economic Report of the President.

A Few Notes on Climate Change

As the Copenhagen Climate Conference is taking place, it is appropriate to clarify once again what is more or less accurately known about the climate of our planet and about climate change.

Obviously, a brief post cannot substitute for detailed studies by professionals in a variety of scientific disciplines – climatology, atmospheric physics, chemistry, geology, astronomy, and economics. However, a short post can summarize the basic theses on the main trends in climate evolution, on its forecasts, and on its actual and projected effects.

1. The Earth’s climate is constantly changing. The climate was changing in the past, is changing now and, obviously, will be changing in the future – as long as our planet exists.

2. Climatic changes are largely cyclical in nature. Climatic cycles span various time horizons – from the annual cycle known to everyone to cycles of 65-70 years, of 1,300 years, or of 100,000 years (the so-called Milankovitch cycles).

3. There is no fundamental disagreement among scientists, public figures and governments about the fact that the climate is changing. There is a broad consensus that climate changes occur constantly. The myth, created by climate alarmists, that their opponents deny climate change is sheer propaganda.

4. Current debate among climatologists, economists and public figures is not about the fact of climate change, but about other issues. In particular, disagreements exist on:
- Comparative levels of modern-day temperatures (relative to those historically observed),
- The direction of climate change depending on the length of record,
- The extent of climate change,
- The rate of climate change,
- Causes of climate change,
- Forecasts of climate change,
- Consequences of climate change,
- The optimal strategy for human beings to respond to climate change.

5. Unbiased answers to many of these issues are critically dependent on a chosen time horizon – whether it is 10 years, or 30 years, or 70 years, or 1000 years, or 10,000 years, or hundreds of thousands or millions of years. Depending on the time horizon, the answers to many of these questions may be different, even opposite.

6. The current level of global temperature is not unique in historical perspective. The average temperature of the Earth is now estimated at about 14.5 degrees Celsius. In our planet’s history there have been only a few periods when the Earth’s temperature was lower than it is now – in the early Permian period, in the Oligocene, and during periodic glaciations in the Pleistocene. For most of the last half billion years, the air temperature at the Earth’s surface greatly exceeded the current one, and for about half of this period it was approximately 25°C, or 10°C higher than the current temperature. The recurring glaciations of cold periods during the Pleistocene lasted approximately 90,000 years each, with low temperatures approximately 5°C below the present level, alternating with warm interglacial periods (of 4,000-6,000 years) with temperatures 1-3°C higher than at present. Approximately 11,000 years ago the last significant increase in temperature began (of approximately 5°C), during which a huge glacier that covered a considerable part of Eurasia and America melted. Climate warming played a key role in humanity’s acquisition of the secrets of agriculture and in its transition to civilization. Over the past 11,000 years there have been at least five distinct warm periods, the so-called “climatic optima,” when the temperature of the planet was 1-3°C higher than at present.

7. The direction of climate change depends critically on the choice of time horizon. In the past 11 years (1998-2009) global temperature was flat. Before that, in the preceding 20 years (1979-1998), it increased by about 0.3°C. Before that, during the preceding 36 years (1940-1976), the temperature fell by about 0.1°C. Before that, for the preceding two centuries (1740-1940), the overall trend in global temperature was mainly neutral – periodic warming, followed by cooling, and then warming again. Over the past three centuries (from the turn of the 18th century), the temperature in the northern hemisphere has increased by approximately 1.3°C, from the trough of the so-called “Little Ice Age” (LIA) of 1500-1740 to the contemporary climatic optimum (CCO), which started around 1980. During the three centuries preceding the LIA, the temperature in the northern hemisphere was falling from the level of the medieval climatic optimum (MCO) of the 8th-13th centuries. Depending on the chosen time frame, the long-term temperature trend has a different trajectory. For the last 2,000 years, the last 4,000 years, and the last 8,000 years, the trend was negative. For the past 1,300 years, the last 5,000 years, and the last 9,000 years, it was positive.

8. The rate of contemporary climate change is much more modest than the rates of climatic change observed earlier in the history of the planet. The Intergovernmental Panel on Climate Change (IPCC) describes the increase in the global temperature of 0.76°C over the last century (1906-2005) as extraordinary. There is reason to suspect this value is somewhat overstated. But the main point is that previous rises in temperature were greater than those in the modern era. Comparable data demonstrate that the increase in temperature in Central England in the 18th century (0.97°C), for example, was more significant than in the 20th (0.90°C). The climatic record of Central Greenland over the past 50,000 years shows at least a dozen periods during which the regional temperature increased by 10-13°C. Given the correlation between changes in temperature at high latitudes and globally, those shifts in Greenland’s temperature regime implied rises in global temperature of 4-6°C. Such a rate is approximately 5-7 times faster than the actual (and perhaps slightly exaggerated) temperature increase in the 20th century.

9. The rate of current climate change (the speed of modern warming) by historical standards is not unique. According to IPCC data, the rate of temperature increase over the past 50 years was 0.13°C per decade. According to comparable data, obtained through instrumental measurements, a higher rate of temperature increase was observed at least three times: in the late 17th century – early 18th century; in the second half of the 18th century; and in the late 19th century – early 20th century. The centennial rate of warming in the 20th century is slower than the warming in the 18th century that was instrumentally recorded and slower than the warming in at least 13 cases over the past 50,000 years that were measured by palaeoclimatic methods.

10. In the pre-industrial era, anthropogenic factors played hardly any role in climate change, owing to the modest size of the population and mankind’s limited economic activities. Yet the range of climatic fluctuations, as well as their rates and peak values, in the pre-industrial era exceeded the parameters of climate change recorded in the industrial period.

11. During the industrial age (since the beginning of the 19th century), climate change is believed to be under the impact of both groups of factors – natural and anthropogenic. Since the rate of climate change in the industrial age has so far been noticeably smaller than at times in the pre-industrial age, there is no basis for the assertion that anthropogenic factors have already become as significant as natural factors, much less that they overwhelm them.

12. The anthropogenic factors in climate change are rather diverse and cannot be confined to carbon dioxide alone. Mankind affects local, regional and global climate by constructing buildings and structures; heating houses and industrial and public premises; logging and planting forests; plowing arable land; damming rivers; draining and irrigating lands; leveling and paving territories; operating industrial facilities; emitting aerosols; and so on.

13. There is no consensus in the scientific community on the role of carbon dioxide in climate change. Some scientists believe that it is crucial, others believe that it is secondary to other factors. There are also serious disagreements on the nature and direction of possible causality between concentration of carbon dioxide in the atmosphere and temperature: some researchers believe the former causes temperature to rise, others argue the opposite – that fluctuations in temperature cause changes in carbon dioxide concentration.

14. Unlike carbon monoxide (CO), carbon dioxide (CO2) is harmless to humans; in contrast to aerosols, which are harmful and dangerous substances, carbon dioxide does not pollute the environment. It has no color, taste, or smell. Therefore, the photos and videos popularly used to illustrate carbon dioxide – factory chimney stacks emitting smoke and cars emitting exhaust – are simply misleading: CO2 is invisible, and what is visible in those images are pollutants. It should also be noted that an increased concentration of carbon dioxide in the air has a positive impact on the productivity of plants, including agricultural crops.

15. The relationship of the concentration of carbon dioxide to climate change remains a subject of intense scientific debate. True, the concentration of carbon dioxide in the atmosphere has increased over the past two centuries, from 280 parts per million (ppm) in the early 19th century to 388 ppm in 2009. It is also true that the global temperature rose by about 0.8°C over that period. But whether these two facts are connected is unclear. The dynamics of CO2 concentration have not correlated well with the expected changes in temperature. The significant and rapid increases in global temperature during the interglacial periods of the Pleistocene, during the Medieval Climatic Optimum, and in the 18th century were not preceded by increases in carbon dioxide concentration. In the industrial age, rising carbon dioxide concentration in the atmosphere has not always been accompanied by rising global temperature. In 1944-1976, CO2 concentration increased by 24 ppm – from 308 to 332 ppm – but the global temperature fell by 0.1°C. In 1998-2009, CO2 concentration increased by 21 ppm – from 367 to 388 ppm – but the global temperature trend remained flat. In the first half of the 1940s, a decline in carbon dioxide concentration of 3 ppm (a result of the massive destruction caused by World War II) did not prevent the global temperature from rising by 0.1°C.
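The sub-period figures cited in this point can be tabulated in a few lines to show why the sign of the relationship is contested. The numbers below are the ones stated above, not independently sourced, and three episodes are of course far too few to settle anything:

```python
# Sub-period CO2 and temperature changes as quoted in the text above.
# Each entry: (change in CO2 concentration, ppm; change in global temp, °C)
episodes = {
    "1944-1976": (332 - 308, -0.1),   # CO2 up 24 ppm, temperature down
    "1998-2009": (388 - 367, 0.0),    # CO2 up 21 ppm, temperature flat
    "early 1940s": (-3, 0.1),         # CO2 down ~3 ppm, temperature up
}

# In none of these quoted episodes do the two series move in the
# same direction.
for period, (d_co2, d_temp) in episodes.items():
    same_sign = (d_co2 > 0 and d_temp > 0) or (d_co2 < 0 and d_temp < 0)
    print(f"{period}: dCO2 = {d_co2:+d} ppm, dT = {d_temp:+.1f} °C, "
          f"co-move: {same_sign}")
```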

16. So far, global climate models have demonstrated limited effectiveness. The complex nature of the climate system is not reflected adequately in the global climate models whose use has recently spread around the world. The projections developed on their basis in the late 1990s through the early 2000s predicted that the global temperature would rise by 1.4-5.8°C by the end of the 21st century, with a 0.2-0.4°C increase already in the first decade. In reality, during 1998-2009 the temperature was flat at best.

17. Forecasts of global climate change made at the beginning of this decade by Russian scientists (from the Arctic and Antarctic Research Institute, the Voejkov Main Geophysical Observatory) predicted a fall in the global temperature by 0.6°C by 2025-2030 in comparison with a temperature peak reached at the end of the 20th century. So far the actual temperature for the last decade has not risen.

18. The implications of climate change for human beings differ greatly depending on its direction, size and rate. An increase in temperature leads as a rule to a softer and moister climate, while a decline in temperature leads to a harsher and drier one. It was a climatic optimum in the Holocene, with temperatures 1-3°C higher than today, that greatly contributed to the birth of civilization. Conditions for people’s lives and economic activities are usually more favorable in warmer climates than in colder ones. Warmer climates usually bring more precipitation; the cost of heating and the volume of food required to sustain human life are lower; vegetation and navigation periods are longer; and crop yields are higher.

19. The methods of “combating global warming” by reducing carbon dioxide emissions suggested by climate alarmists are scientifically unfounded in the absence of extraordinary or unusual changes in climate during the modern era. Such measures, if adopted, would be especially dangerous for middle- and lower-income countries. They would effectively cut those countries off from the path to prosperity and hinder their ability to close the gap with more developed nations.

20. The impact of all anthropogenic factors (not only CO2) on climate is unclear when compared with the factors of nature. Therefore, the most effective strategy for humanity in responding to climate change of any type is adaptation. That is exactly how humans reacted to the larger-scale climatic changes of the past, even though they were less prepared for such changes then. Mankind now has greater resources to adapt to lesser climate fluctuations and is better equipped for them scientifically, technically and psychologically. The adaptation of humanity to climate change is incomparably less costly than the other options being proposed and imposed by climate alarmists. Human society has already adapted to climate change and will continue to do so as long as the economy and society remain vibrant and free.

The Land Is There, the Cubans Are There, but the Incentives Are Not

The Washington Post has an interesting story today on the Cuban government’s program to transfer idle state-owned land to private farmers so they can resurrect the dilapidated agricultural sector on the communist island. As Ian Vásquez and I wrote in the chapter on U.S. policy toward Cuba in the Cato Handbook for Policymakers, even before this reform the agricultural productivity of Cuba’s tiny non-state sector (comprising cooperatives and small private farmers) was already 25 percent higher than that of the state sector.

At stake is an issue of incentives. Collective land gives farmers no incentive to work hard and be productive, since the benefits of their labor go to the government, which distributes them (in theory) evenly among everyone, regardless of who worked hard. With private property, by contrast, “the harder you work, the better you do,” as a Cuban farmer said in the Post story.

The country’s ruler, Raúl Castro, recently declared that “The land is there, and here are the Cubans! Let’s see if we can get to work or not, if we produce or not… The land is there waiting for our sweat.” However, it’s not just a matter of having land and lots of people. It’s also a matter of incentives to produce. Ignoring this, as Cuba’s communist model does, is a recipe for failure.