The Pattern of Growth in the Later 1990s
Compare our use of information technology today with our predecessors' use of
information technology half a century ago. The decade of the 1950s saw electronic
computers largely replace mechanical and electromechanical calculators and sorters as
the world's automated calculating devices. By the end of the 1950s there were roughly
2000 installed computers in the world: machines like Remington Rand UNIVACs, IBM
702s, or DEC PDP-1s. The processing power of these machines averaged perhaps 10,000
machine instructions per second.
Today, talking rough orders of magnitude only, there are perhaps 300 million active
computers in the world with processing power averaging several hundred million
instructions per second. Two thousand computers times ten thousand instructions per
second is twenty million. Three hundred million computers times, say, three hundred
million instructions per second is ninety quadrillion--a roughly four-and-a-half-billion-fold
increase in the world's raw automated computational power in fifty years, an average
growth rate of 56 percent per year.
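As a quick check on this arithmetic, a minimal sketch in Python, using only the rough totals quoted above:

```python
# Back-of-the-envelope check of the computing-power arithmetic above.
old_total = 2_000 * 10_000    # ~1950s: 2,000 machines x 10,000 instructions/sec
new_total = 300e6 * 300e6     # ~2000s: 300 million machines x 300 million instructions/sec

fold_increase = new_total / old_total            # ~4.5 billion
annual_growth = fold_increase ** (1 / 50) - 1    # compounded over fifty years

print(f"fold increase: {fold_increase:.1e}")          # 4.5e+09
print(f"implied annual growth: {annual_growth:.0%}")  # 56%
```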
Such a sustained pace of productivity improvement is unprecedented in our
history. Moreover, there is every reason to believe that this pace of productivity growth in
the leading sectors will continue for decades. More than a generation ago Intel
Corporation co-founder Gordon Moore noticed what has become Moore's Law--that
improvements in semiconductor fabrication allow manufacturers to double the density of
transistors on a chip every eighteen months. The scale of investment needed to make
Moore's Law hold has grown exponentially along with the density of transistors and
circuits, but Moore's Law has continued to hold, and engineers see no immediate barriers
that will bring the process of improvement to a halt anytime soon.
11 See Crafts (2002).
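A doubling every eighteen months compounds to roughly this same rate; a one-line check, assuming nothing beyond the doubling period quoted above:

```python
# Annual growth implied by "transistor density doubles every 18 months".
annual_growth = 2 ** (1 / 1.5) - 1
print(f"{annual_growth:.0%}")  # ~59% per year, in line with the 56% figure above
```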
Investment Spending
As the computer revolution proceeded, nominal spending on information technology
capital rose from about one percent of GDP in 1960 to about two percent of GDP by
1980 to about three percent of GDP by 1990 to between five and six percent of GDP by
2000. All throughout this time, Moore's Law meant that the real price of information
technology capital was falling as well: while the nominal share of GDP spent on
information technology capital grew at a rate of 5 percent per year, the price of data
processing--and in recent decades data communications--equipment fell at a rate of
between 10 and 15 percent per year.
At chain-weighted real values constructed using 1996 as a base year, real investment in
information technology equipment and software was an amount equal to 1.7 percent of
real GDP in 1987. By 2000 it was an amount equal to 6.8 percent of real GDP.
The steep rise in real investment in information processing equipment (and software)
drove a steep rise in total real investment in equipment: by and large, the boom in real
investment in information processing equipment driven by rapid technological progress
and the associated price declines was an addition to, not a shift in the composition of,
overall real equipment investment.
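A rough implication of the share figures above, assuming only the 1.7 percent (1987) and 6.8 percent (2000) numbers and compounding over the thirteen intervening years:

```python
# How much faster than real GDP must real IT investment have grown
# for its share to rise from 1.7% in 1987 to 6.8% in 2000?
excess_growth = (0.068 / 0.017) ** (1 / 13) - 1
print(f"{excess_growth:.0%} per year faster than real GDP")  # ~11%
```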
Macro Consequences
A naïve back-of-the-envelope calculation would suggest that this sharp rise in equipment
investment was of sufficient magnitude to drive substantial productivity acceleration: at a
total social rate of return to investment of 15 percent per year, a 6 percentage-point rise in
the investment share would be predicted to boost the rate of growth of real gross product,
at least, by about 1 percentage point per year. And that is the same order of magnitude as
the acceleration of economic growth seen in the second half of the 1990s.
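The arithmetic behind that back-of-the-envelope claim, assuming only the 15 percent social return and the 6-point rise quoted above:

```python
# Growth contribution = social rate of return x rise in the investment share.
social_return = 0.15  # per year, as assumed in the text
share_rise = 0.06     # 6 percentage points of GDP
print(f"{social_return * share_rise:.1%} per year")  # 0.9%, about 1 point
```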
The acceleration in the growth rate of labor productivity and of real GDP in the second
half of the 1990s effectively wiped out all the effects of the post-1973 productivity
slowdown. The U.S. economy in the second half of the 1990s was, according to official
statistics and measurements, performing as well in terms of economic growth as it had
routinely performed in the first post-World War II generation. It is a marker of how much
expectations had been changed by the 1973 to 1995 period of slow growth that 1995-
2001 growth was viewed as extraordinary and remarkable.
Nevertheless, the acceleration of growth in the second half of the 1990s was large enough
to leave a large mark on the economy even in the relatively short time it has been in
effect. Real output per person-hour worked in the nonfarm business sector today is ten
percent higher than one would have predicted back in 1995 by extrapolating the 1973 to
1995 trend. That such a large increase in the average level of productivity can be
accumulated over a mere seven years just by getting back to what seemed “normal”
before 1973 is an index of the size and importance of the 1973 to 1995 productivity
slowdown.
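A consistency check on that ten percent figure, assuming the seven-year window stated above:

```python
# Average annual growth acceleration implied by a productivity level
# 10% above the pre-1995 trend after seven years.
import math
print(f"{math.log(1.10) / 7:.1%} per year")  # ~1.4 percentage points
```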
Cyclical Factors
Alongside the burst of growth in output per person-hour worked came significantly better
labor market performance. The unemployment rate consistent with stable inflation, which
had been somewhere between 6 and 7 percent of the labor force from the early 1980s into
the early 1990s, suddenly fell to 5 percent or even lower in the late 1990s. All estimates
of non-accelerating-inflation rates of unemployment are hazardous and uncertain,13 but
long before 2001 the chance that the inflation-unemployment process was a series of
random draws from the same urn after 1995 as before was negligible.
13 See Staiger, Stock, and Watson (1997).
This large downward shift in the NAIRU posed significant problems for anyone wishing
to estimate the growth of the economy’s productive potential over the 1990s. Was this
fall in the NAIRU a permanent shift that raised the economy’s level of potential output?
Was it a transitory result of good news on the supply-shock front—falling rates of
increase in medical costs, falling oil prices, falling other import prices, and so forth—that
would soon be reversed? If the fall in the NAIRU was permanent, then presumably it
produced a once-and-for-all jump in the level of potential output, not an acceleration of
the growth rate of potential output. But how large a once-and-for-all jump? Okun's Law
would suggest that a two percentage-point decline in the unemployment rate would be
associated with a 5 percent increase in output. Production functions would suggest that a
two percentage-point decline in the unemployment rate would—after taking account of
the effect of falling unemployment on the labor force and the differential impact of the
change in unemployment on the skilled and the educated—be associated with a roughly
1.5 percent increase in output.
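A sketch of the two calculations, assuming an Okun coefficient of about 2.5 and a labor share of about 0.7 (standard textbook values, not figures given in the text):

```python
# Two estimates of the output gain from a 2-point fall in unemployment.
du = 2.0  # percentage-point decline in the unemployment rate

# Okun's Law: each point of unemployment is worth ~2.5% of output.
print(f"Okun's Law: ~{2.5 * du:.0f}% more output")           # ~5%

# Production-function route: labor input rises roughly point-for-point
# with the fall in unemployment, scaled by labor's share; the text's
# labor-force and skill adjustments pull the figure down further.
print(f"Production function: ~{0.7 * du:.1f}% more output")  # ~1.4%
```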
However, none of the other cyclical indicators suggested that the late-1990s economy
was an unusually high-pressure economy. The average workweek was no higher in 2000
when the unemployment rate approached 4 percent than it had been in 1993 when the
unemployment rate fluctuated between 6 and 7 percent.
Capacity utilization was lower during the late 1990s than it had been during the late
1980s, when unemployment had been 1.5 percentage points higher. 14 Low and not rising
inflation, a relatively short workweek, and relatively low capacity utilization—these all
suggested that the fall in the unemployment rate in the late 1990s was not associated with
the kind of high-pressure economy assumed by Okun’s Law.
How Useful Will Computers Be?
What factors determine what the ultimate impact of these technologies will be? What is
there that could interrupt a relatively bright forecast for productivity growth over the next
decade? There are three possibilities: The first is the end of the era of technological
revolution—the end of the era of declining prices of information technology capital. The
second is a steep fall in the share of total nominal expenditure devoted to information
technology capital. And the third is a steep fall in the social marginal product of
investment in information technology—or, rather, a fall in the product of the social return
on investment and the capital-output ratio. The important thing to focus on in forecasting
the future is that none of these has happened: in 1991-1995 semiconductor production
was half a percent of nonfarm business output, while in 1996-2000 it averaged 0.9
percent. Nominal spending on information technology capital rose, as noted above, from
about one percent of GDP in 1960 to between five and six percent of GDP by 2000. And
computer and semiconductor prices declined at 15-20 percent per year over 1991-1995
and at 25-35 percent per year over 1996-2000.
However, whether nominal expenditure shares will continue to rise in the end hinges on
how useful data processing and data communications products turn out to be. What will
be the elasticity of demand for high-technology goods as their prices continue to drop?
The greater is the number of different uses found for high-tech products as their prices
decline, the larger will be the income and price elasticities of demand--and thus the
stronger will be the forces pushing the expenditure share up, not down, as technological
advance continues. All of the history of the electronics sector suggests that these
elasticities are high, not low. Each successive generation of falling prices appears to
produce new uses for computers and communications equipment at an astonishing rate.
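The logic can be made precise with a textbook constant-elasticity sketch (an illustration, not a formulation from the text): if demand is q = A * p**(-eps), then expenditure p * q = A * p**(1 - eps) rises as the price falls exactly when the elasticity eps exceeds one.

```python
# Constant-elasticity demand: q = A * p**(-eps); expenditure = p * q.
# Expenditure rises as price falls iff eps > 1 (price-elastic demand).
A, eps = 1.0, 1.5           # hypothetical scale and elasticity
for p in (1.0, 0.5, 0.25):  # falling prices
    print(f"p={p:.2f}: expenditure={A * p ** (1 - eps):.2f}")
# Expenditure climbs as p falls because eps > 1; with eps < 1 it would shrink.
```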
The first, very expensive, computers were seen as good at performing complicated and
lengthy sets of arithmetic operations. The first leading-edge applications of large-scale
electronic computing power were military: the burst of innovation during World War II
that produced the first one-of-a-kind hand-tooled electronic computers was totally funded
by the war effort. The coming of the Korean War won IBM its first contract to actually
deliver a computer: the million-dollar Defense Calculator. Military demand in the
1950s and 1960s from projects such as Whirlwind and SAGE [Semi-Automatic Ground
Environment]--a strategic air defense system--both filled the assembly lines of computer
manufacturers and trained the generation of engineers who went on to design and build
the commercial machines that followed.
The first leading-edge civilian economic applications of large--for the time, the 1950s--
amounts of computer power came from government agencies like the Census and from
industries like insurance and finance, which performed lengthy sets of calculations as they
processed large amounts of paper. The first UNIVAC computer was bought by the
Census Bureau. The second and third orders came from A.C. Nielsen Market Research
and the Prudential Insurance Company. This second, slightly cheaper generation of
computers was used not to make sophisticated calculations, but to make the extremely
simple calculations needed by the Census, and by the human resource departments of
large corporations. The Census Bureau used computers to replace its electromechanical
tabulating machines. Businesses used computers to do the payroll, report-generating,
and record-analyzing tasks that their own electromechanical calculators had
previously performed.
The next generation of computers--exemplified by the IBM 360 series--was used to
stuff data into and pull data out of databases in real time--airline reservations processing
systems, insurance systems, inventory control. It became clear that the computer was
good for much more than performing repetitive calculations at high speed. The computer
was much more than a calculator, however large and however fast. It was also an
organizer. American Airlines used computers to create its SABRE automated
reservations system, which cost as much as a dozen airplanes. The insurance industry
automated its back office sorting and classifying.
Subsequent uses have included computer-aided product design, applied to everything
from airplanes designed without wind-tunnels to pharmaceuticals designed at the
molecular level for particular applications. In this area and in other applications, the
major function of the computer is not as a calculator, a tabulator, or a database manager,
but instead as a what-if machine. The computer creates models of what would
happen if the airplane, the molecule, the business, or the document were to be built up in
a particular way. It thus enables an amount and a degree of experimentation in the virtual
world that would be prohibitively expensive in resources and time in the real world.
The value of this use as a what-if machine took most computer scientists and computer
manufacturers by surprise. None of the engineers designing software for the IBM 360
series, none of the parents of Berkeley UNIX--nobody, before Dan Bricklin programmed
VisiCalc--had any idea of the utility of a spreadsheet program. Yet the invention of the
spreadsheet marked the spread of computers into the office as a what-if machine. Indeed,
the computerization of America's white-collar offices in the 1980s was largely driven by
the spreadsheet program's utility--first VisiCalc, then Lotus 1-2-3, and finally Microsoft
Excel.
For one example of the importance of a computer as a what-if machine, consider that
today's complex designs for new semiconductors would be simply impossible without
automated design tools. The process has come full circle: progress in computing depends
upon Moore's Law, and the progress in semiconductors that makes possible the continued
march of Moore's Law depends upon progress in computers and software.
As increasing computer power has enabled the use of computers in real-time control, the domain has
expanded further as lead users have figured out new applications. Production and
distribution processes have been and are being transformed. Moreover, it is not just
robotic auto painting or assembly that have become possible, but scanner-based retail
quick-turn supply chains and robot-guided hip surgery as well.
In the most recent years the evolution of the computer and its uses has continued. It has
branched along two quite different paths. First, computers have burrowed inside
conventional products as they have become embedded systems. Second, computers have
connected outward to create what we call the World Wide Web: a distributed global
database of information all accessible through the single global network. Paralleling the
revolution in data processing capacity has been a similar revolution in data
communications capacity. There is no sign that the domain of potential uses has been
exhausted.
One would have to be pessimistic indeed to forecast that all these trends are about to
come to an end. One way to put it is that modern semiconductor-based electronics
technologies fit Bresnahan and Trajtenberg's (1995) definition of a "general purpose
technology"--one useful not just for one narrow class but for an extremely wide variety of
production processes, one for which each decline in price appears to bring forth new uses,
one that can spark off a long-lasting major economic transformation. There is room for
computerization to grow on the intensive margin, as computer use saturates potential
markets like office work and email. But there is also room to grow on the extensive
margin, as microprocessors are used for tasks like controlling hotel room doors or
changing the burn mix of a household furnace that few, two decades ago, would have
thought of.
Previous Industrial Revolutions
One caveat is that previous industrial revolutions driven by general purpose
technologies have seen an initial wave of adoption followed by rapid total factor
productivity growth in industries that use these new technologies as businesses and
workers learn by using. So far this has not been true of our current wave of growth. As
Robert Gordon (2002) has pointed out at every opportunity, there has been little if any
acceleration of total factor productivity growth outside of the making of high-tech
equipment itself: the boosts to labor productivity look very much like what one would
expect from capital deepening alone, not what one would expect from the fact that the
new forms of capital allow more efficient organizations.
Paul David (1991), at least, has argued that a very large chunk of the long-run impact of
technological revolutions does emerge only when people have a chance to thoroughly
learn the characteristics of the new technology and to reconfigure economic activity to
take advantage of it. In David’s view, it took nearly half a century before the American
economy had acquired enough experience with electric motors to begin to use them to
their full potential. By his reckoning, we today are only halfway through the process of
economic learning needed for us to even begin to envision what computers will be truly
useful for.
Moreover, as Crafts (2000) argues, the striking thing is not that there was a “Solow
paradox” of slow productivity growth associated with computerization, but that people
did not expect the economic impact to start slowly and gather force over time. As he writes,
“in the early phases of general purpose technologies their impact on growth is modest.” It
has to be modest: “the new varieties of capital have only a small weight relative to the
economy as a whole.” But if they are truly general-purpose technologies, their weight
will grow. |
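Crafts's point is just growth accounting: a new sector's contribution to aggregate growth is its output weight times its own growth rate. A hypothetical illustration (the weights and growth rate below are invented for the example, not taken from the text):

```python
# Contribution of a fast-growing new sector = its weight x its growth rate.
sector_growth = 0.30               # hypothetical 30% annual growth
for weight in (0.01, 0.05, 0.10):  # hypothetical shares of the economy
    print(f"weight {weight:.0%} -> +{weight * sector_growth:.1%} aggregate growth")
# At a 1% weight the contribution is a barely visible 0.3 points per year;
# as the weight grows, so does the impact: Crafts's point exactly.
```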