I’d like, however, to press a more subtle, though potentially more damaging, concern. Lower population growth rates have the potential to undermine the virtuous cycle of risk-taking and innovation. Without policy changes, economies will find themselves trapped in rapid boom-and-bust cycles that net out to pathetically slow growth rates even in per capita terms.
Though the economic growth dynamics are complex, the underlying intuition is straightforward. A growing population provides a buffer for risky capital investments. A growing population, and hence potential workforce, demands more capital each year and, crucially, demands a larger total pool or stock of capital. The economy cannot get by simply by replacing worn-out machines and refurbishing dilapidated buildings. The total stock of machinery and the total number of buildings need to grow. If they do not, capital will become scarce relative to labor, wages will fall, and the profits from new capital investments will soar. These soaring profits will provide an irresistible lure for new investment.
The potential for soaring profits also provides a cushion for innovation. Innovators may guess wrong and invest in capital that turns out to be less, rather than more, efficient or effective than current modes of production. If the population size is stagnant, society responds by shunning the innovation and simply continuing to replace existing machines and buildings using the existing technology.
A growing population can’t get by that way. It needs new capital and will settle for the less effective and efficient innovation if nothing else is available. In economic terms, a growing population provides price support for even unsuccessful innovations and thus limits the downside risk for innovators.
This insight into the innovative process can help us see the Industrial Revolution and subsequent process of global industrialization as part of a virtuous cycle in which innovation led to rising populations and rising populations reduced the downside risk of further innovation.
Now we face an end to this virtuous cycle, not because innovation itself is petering out, but because population growth is. Moreover, because innovations are mutually supporting (advanced hardware, for example, increases demand for advanced software, and vice versa), the net slowdown can be dramatic. Today, for example, the growing global middle class provides rapacious demand for smart devices. This demand means enormous profits for industry leaders such as Apple and Samsung. Yet that same rapacious demand attracts new (if unsuccessful) entrants like Facebook and sustains old quasi-successful ones like HTC. It also keeps the leaders ever focused on further innovation.
This need not be the case. It is not merely the advance of the underlying technological know-how that creates this innovative race. For decades the PC industry was a wellspring of innovation. Yet that innovation has cooled to a snail’s pace as the demand for PCs was sated. The underlying technology is the same as that of smart devices. It wasn’t lack of know-how that killed PC innovation. It was lack of sustained demand growth.
What Can Be Done?
For a problem this interwoven, a successful policy intervention needs to attack from the demand and supply sides simultaneously. I’ll offer what I think of as a prototypical set of policies. The pair is meant to be illustrative rather than prescriptive. Other policy sets could work as well, so long as they combine the key elements outlined here.
On the demand side, we can make up for slower population growth with less restrictive monetary policy. This is a tradeoff that has held throughout history. Periods of rapid population growth—in the US, often driven by strong immigration—are times when monetary policy can be tight with few economic dislocations. Periods of declining or even negative population growth—think of early 21st century Japan—are times when seemingly modest monetary policy rules can be disastrously tight.
If we think about how loose monetary policy works, it will become clear why this is the case. Fans of tight money will argue that loose money robs a nation’s citizens of their purchasing power. Each year the money that they have saved buys less than the year before. Where, however, does this purchasing power go? In a modern economy it is almost always transferred to the younger generation.
Middle-aged citizens are typically savers, but young citizens are typically borrowers. Loose money means that after accounting for inflation, the real interest rate paid by borrowers is very low. Indeed, if inflation is high enough and interest rates low enough, the real interest rate paid by borrowers can be zero or even negative. This means that younger borrowers benefit at the expense of older savers. In and of itself this is a delicate balance to strike. The good monetary policymaker wants to ensure that neither group is unduly harmed.
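The arithmetic here is the standard Fisher relation between nominal rates and inflation. A minimal sketch, with invented illustrative figures (a 3 percent loan during 4 percent inflation):

```python
def real_rate(nominal, inflation):
    """Exact Fisher relation: real = (1 + nominal) / (1 + inflation) - 1."""
    return (1 + nominal) / (1 + inflation) - 1

# A borrower paying 3% nominal interest while prices rise 4% faces a
# negative real rate: repayment costs less, in purchasing power, than
# the amount borrowed.
print(round(real_rate(0.03, 0.04), 4))  # -> -0.0096

# When nominal rates merely keep pace with inflation, the real rate is zero.
print(real_rate(0.05, 0.05))  # -> 0.0
```

A negative result is exactly the transfer described above: purchasing power moves from older savers to younger borrowers.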
Slowing population growth, however, changes the calculus. Continued innovation depends on an expanding capital stock. This implies that the younger generation must have the means to engage in even greater investment than their predecessors. If they cannot do this by the sheer weight of numbers, then society can benefit by providing them with easier repayment terms. The consequences of failing to do this are severe. Stringent repayment terms will mean that failed innovation leads to a wave of bankruptcies. Those bankruptcies in turn threaten the entire savings and investment community and drag down the value of investments even for the older generation. This is largely the story of the late 20th/early 21st century bubbles, from Japan in the 1980s to Korea in the 1990s to the United States in the 2000s. In each case investment got ahead of itself—as it has done many times in the past—but low population growth combined with difficult repayment terms turned what would have been a market correction into a full-scale financial collapse, followed by stagnant growth.
It’s no accident that this phenomenon appeared in Japan first. As its population began to stagnate well before the rest of the industrialized world, investors found themselves with loads of capital, a dearth of workers, and repayment terms they could not meet.
The result was not pretty. Even a strong recession failed to clear away the wreckage of failed businesses. The economy stagnated for over a decade. Stock prices peaked in 1989, never to return to their highs. Decades’ worth of savings were wiped out.
A New Kind of Labor Force
While Japan’s demand-side response was tepid and tragic, it fared better on the supply side. Government and industry in Japan invested heavily in robotics, dominating the field through the end of the 20th century and into the 21st. While the rest of the world has started to catch up, as late as 2005 roughly 40 percent of all robots were located in Japan.
What distinguishes robots from smart devices, or simply technology in general, is that they are designed to replace the role of a human operator. A non-robotic machine or device is designed with the intention that it will always depend on human supervision. For growth policy this implies that smart devices and machines, like traditional capital, are inherently complementary to labor. Their profit and risk profiles will match those of standard capital, and they suffer the same exposure to population dynamics.
Robots, by contrast, have the capacity to operate without supervision, meaning that tasks can be accomplished labor-free. This helps insulate robotic investment from labor force dynamics. There’s no worry that universities will turn out too few employees with the requisite “robot skills” or that operators will be lured away to elderly home care or some other field with soaring demand and hence soaring wages.
This independence, however, exposes the robotic enterprise to a separate sort of risk. Without a human operator, there is no well-defined distinction between malfunction and operator error. Yet the same types of problems that lead to operator error can still occur. The control system for a robot cannot be pre-programmed for every possible situation, because it is impossible for its programmers—who are only human—to imagine every possible situation. The control system must take in data and make judgment calls about the best likely response to a situation. Autonomous vehicles are an obvious example.
Autonomous vehicles, otherwise known as driverless cars, can function independently of a human operator. They can negotiate traffic and respond in real time to the seemingly infinite threats and obstacles that appear on the road. Naturally there is no way to pre-program the vehicle for every possible encounter. The vehicle must have objectives, such as “do not strike a child running into the road,” that conflict with other objectives, such as “do not swerve into the opposite lane,” which in turn conflict with still others, such as “do not slam the brakes too hard when closely followed by another car.”
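One common way to reconcile conflicting objectives like these is to assign each violation a penalty and pick the maneuver with the lowest total cost. A minimal sketch; the weights, maneuver names, and violation lists below are invented for illustration, not drawn from any real control system:

```python
# Hypothetical penalty weights: a higher weight marks an outcome
# the designers consider worse to allow.
WEIGHTS = {"strike_pedestrian": 1000, "cross_center_line": 50, "hard_brake": 10}

# Each candidate maneuver lists the objectives it would violate.
MANEUVERS = {
    "continue": ["strike_pedestrian"],
    "swerve_left": ["cross_center_line"],
    "emergency_stop": ["hard_brake"],
}

def choose_maneuver(maneuvers=MANEUVERS, weights=WEIGHTS):
    """Return the maneuver whose violations carry the lowest total penalty."""
    def cost(name):
        return sum(weights[v] for v in maneuvers[name])
    return min(maneuvers, key=cost)

print(choose_maneuver())  # -> emergency_stop
```

The point of the sketch is that the “judgment call” is baked into the weights: a different weighting would yield a different maneuver, and no weighting can anticipate every situation the sensors will present.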
Inevitably, some car somewhere will eventually make what most observers regard as a poor choice. Human drivers make mistakes because they are tired, distracted, or confused. These same phenomena affect robots as well. When their CPUs have been working for a long time in hot and humid conditions, their cores begin to overheat and an overheating core operates slower and less efficiently. When faced with an unusual set of sensor data, the robot must divert more of its processing power to analyze the nature and potential threat of the situation than when in more standard conditions. When the robot is presented with highly complex, interwoven problems with constantly changing inputs, portions of its processing can become caught in infinite loops, which require the process to be aborted and attempted again.
When a human operator harms someone as a result of tiredness, distraction, or confusion, her culpability is judged by her peers, who may acquit her or hold her liable. A similar process examines the executives and owners of a company whose device malfunctions.
The robot, however, does not fit neatly into this paradigm. It cannot be judged as a human, but at the same time the executives who manufactured the device the robot controlled could not—by the very definition of robot—have foreseen what the robot would do in every applicable situation.
Yet, absent any other mechanism for restitution, we should expect victims to seek redress from the manufacturer. Manufacturers therefore are and will continue to be reluctant to take the “robotic leap.” That is, they are willing to employ ever more complex smart devices, so long as the ultimate judgment rests in the hands of a human being. This keeps them chained to the effects of population and labor force growth. For many devices—including the automobile—the robotic leap is not far from a technological standpoint. Liability concerns are the primary hurdle.
These concerns are only magnified when we move to food service, home care, housekeeping, child care, and a host of other functions where labor shortages could be relieved by robotic devices. We have the tools. We have the technology. We lack the legal structure.
Creating the proper legal structure will require two steps. The first is legally cleaving the robotic operator from the device. For example, one would not own a Toyota Camry with a self-driving feature designed by Google. Instead, one would own a Toyota Camry and have that car driven by the Google autonomous vehicle robot. Toyota is responsible only for making a Camry that works. The choices the AV robot makes have nothing to do with Toyota.
Second, Google, or whatever company it spins off for the purpose of creating robots, would be required to join one or more indemnity funds. These funds would be run by and for the manufacturers of robots and would determine whether or not potential robots showed sufficient judgment to be covered. This determination process need not expose the robot’s source code. We imagine the robot could be put through simulations or external tests, much as a human seeking insurance coverage or professional licensing is examined.
If robots are found legally culpable for a poor decision, the indemnity funds compensate the victims and, if needed, pay fines to the state. It is then up to the fund to decide whether a specific robotic model should be expelled from the fund.
The opinions expressed here are solely those of the author and do not necessarily reflect the views of the Cato Institute. This essay was prepared as part of a special Cato online forum on reviving economic growth.