Historical Statistics of the United States Millennial Edition Online
Essay
Part C - Economic Structure and Performance > Chapter Cg - Science, Technology, and Productivity
doi:10.1017/ISBN-9780511132971.Cg.ESS.01

 
Contributors:

Stanley L. Engerman and Gavin Wright

 





The term “science” refers to a systematic body of knowledge about the physical world and its phenomena. In the modern world, scientific knowledge is pursued and codified through structured institutions such as universities, laboratories, and professional societies. By these criteria, the beginnings of American science might be traced to the founding of the American Philosophical Society by Benjamin Franklin in 1743. But it was not until well into the nineteenth century that American science began to assume anything resembling its modern shape, with the establishment of programs in mineralogy, chemistry, and geology at several universities (see Bruce 1987). “Technology,” on the other hand, refers much more broadly to practical knowledge about techniques, methods, and procedures in productive activity, even when these practices are not well understood in a scientific sense. Thus, the history of American technology goes back to the earliest human settlements on these continents.
Nathan Rosenberg has written:

One of the more misleading consequences of thinking about technology as the mere application of prior scientific knowledge is that this perspective obscures a very elemental point: Technology is itself a body of knowledge about certain classes of events and activities. It is not merely the application of knowledge brought from another sphere. It is a knowledge of techniques, methods and designs that work, and that work in certain ways and with certain consequences, even when we cannot explain exactly why…. Indeed, if the human race had been confined to technologies that were understood in a scientific sense, it would have passed from the scene long ago. (Rosenberg 1982, p. 143)

The relationship at any time between science and technology can be complex. New technologies have sometimes resulted from prior changes in science, but as Rosenberg points out, technological practice has historically preceded scientific understanding. Technologies became increasingly science-based in the twentieth century, so much so that the two modes of advance have often been mutually reinforcing. Even with a scientific base, it is important conceptually to distinguish between the introduction of new methods that increase society's technical potential and the benefits that flow from the diffusion, expanded use, and ongoing improvement in already existing technologies. Because of the complexity of these interactions, the capacity of statistical evidence to track historical changes in science and technology is inherently imperfect.




Long before the United States became an important player in world science, the country's mechanics and inventors began to develop new and distinctive technologies in such fields as firearms, steamboats, farm machinery, sewing machines, and machine tools.1 The longest-running quantitative measure of this technological activity is the series on patents issued for inventions under the Patent Law of 1790, authorized under Article I, Section 8, of the Constitution of 1789 (series Cg30). The American patent system was far more accessible and inexpensive than its British counterpart, and perhaps as a result, the United States surpassed Great Britain in patents per capita as early as 1810 (Khan and Sokoloff 2001, p. 239).
In 1836 the Patent Office introduced the examination system, whereby applicants were required to demonstrate novelty and compliance with requirements of the statute. Hence, as of this date the data differentiate between patent applications (series Cg27) and patents issued (series Cg30). The initial effect of this tightening of standards was a sharp drop in patent applications. But the reform significantly increased potential returns to patenting, and patents per capita surged between the 1850s and the 1870s, reaching a peak shortly after the turn of the twentieth century, sometimes known as the “golden age of the individual inventor.” The increase in numbers and potential value of patents led, in turn, to the rise of an elaborate set of institutions concerned with trade and litigation in patent rights (Lamoreaux and Sokoloff 1999).
Scholars such as Schmookler and Griliches have analyzed patent data as an indicator of the pace of innovative activity in the economy (Schmookler 1966; Griliches 1998). Such inferences must be drawn with caution, however, because patents constitute only one particular form of protection for technology, not equally appropriate for all sectors of the economy (Levin, Klevorick, et al. 1987). The decline in patents per capita in the twentieth century, for example, probably reflects the rise of corporate research labs and other patent substitutes rather than a decline in the pace of innovative activity (see Figure Cg-A). Another observable trend in the late twentieth century is the rise of patents issued to residents of foreign countries, reflecting the globalization of technology and markets for technology (series Cg37).
Copyrights are another form of intellectual property. The first U.S. copyright law also dates from 1790 – at which time it applied only to maps, charts, and books – but the statistical compilations begin only in 1870 (series Cg1). The definition of copyrighted material expanded over time to include prints (1802), musical compositions (1831), dramatic compositions (1856), photographs (1865), paintings and other works of fine art (1870), and motion pictures (1912). Meanwhile, the period of copyright protection was extended from 14 to 28 years and, in 1909, became renewable for another 28 years. Trademark protection was introduced in 1870 and, after several changes in the law, was provided under the Act of 1946 for terms of 20 years, renewable for successive 20-year terms.
Despite the ostensibly parallel histories of patents and copyrights, the issues involved and the trajectories of change have been very different for the two institutions. Although the United States was a center of innovation in technology, the country was a net importer of literary and artistic works throughout the nineteenth century. American copyright laws did not provide protection for foreign works, and, as a result, domestic producers freely pirated materials from overseas. Only in 1891 was copyright protection extended to citizens of countries with which the United States had reciprocal agreements. Copyright protection is significantly weaker than patent protection because of legal interpretations such as the right of “fair use” under specified circumstances. Attempts to deploy copyright law as an auxiliary form of protection for new technologies have largely been turned back by American courts (Khan and Sokoloff 2001, p. 238).




With the advent of industrial research laboratories around 1900, American technological innovation increasingly originated in structured institutional settings, a trend that dominated most of the twentieth century. The trend may be observed in the increased number of patents awarded to corporations, which surpassed the number awarded to individuals by the 1930s (series Cg32). See Figure Cg-B. The expansion of research laboratories was closely related to the increased employment of scientists and engineers by industrial firms, underscoring the synergies between organized innovation and the rise of the American research university.2
Although the trend toward organized and science-based research dates from the early twentieth century, the major stimulus for federal research support came with World War II and subsequently with the defense priorities of the Cold War. The dual justifications for federal research subsidies – to strengthen national defense but also to build the foundations for new technologies and economic growth – are popularly associated with the publication in 1945 of Science, The Endless Frontier by Vannevar Bush (1945), who as Director of the Office of Scientific Research and Development was responsible for 6,000 scientists involved in the war effort. The postwar structure of federal support, however, differed considerably from Bush's proposal for a single civilian agency overseeing all federal science policy and funding. By the 1950s, more than 85 percent of federal research and development (R&D) spending went to the Defense Department and the Atomic Energy Commission (Table Cg182–202).
Although federal support for R&D originated before World War II (the main prewar contributions being in agriculture and in aeronautics), national estimates by agency and by performing sector were compiled by the National Science Foundation only beginning in 1953 (Table Cg110–181, Table Cg182–202, Table Cg203–211). It seems safe to say that the figures represent a major increase in the level of national resources devoted to research, even though the concept of "R&D” as a distinct category of activity did not really exist before that time. The postwar volume of R&D activity was large not only relative to the nation's previous history but also relative to advances by other nations of the world. By the 1970s, however, the leading foreign industrial countries (West Germany, France, the United Kingdom, and Japan) had closed or greatly narrowed this gap in R&D spending as a share of GDP (Mowery and Rosenberg 1998, p. 30).
A number of broad trends may be seen in these tables. The growth of federal research funding reached a plateau in the 1990s with the end of the Cold War. Total national R&D expenditures continued to rise, however, as federal support was largely replaced by private expenditures, encouraged by the favorable tax treatment in expensing of such expenditures (series Cg110). The vast majority of expenditures by industry, however, have been for “applied” rather than “basic” research, much of it devoted to product development (Table Cg203–211).
Within federal R&D spending, the overwhelming dominance of the defense budget has receded, though it remains the single largest category. Since the 1980s, the most rapidly growing area of research has been health (series Cg193 and series Cg195). Large infusions of federal funding have facilitated major advances in pharmaceuticals, medical devices, and biomedical science. Federal funds have been complementary to an even larger increase in research expenditures by private pharmaceutical firms.
An instructive if by no means comprehensive indicator of the rise of American science to world prominence is the international pattern of Nobel Prize awards (Table Cg212–235). Based upon a bequest of Alfred Nobel (1833–1896), the inventor of dynamite, the Nobel Prizes have been awarded since 1901 in several fields – physics, chemistry, physiology or medicine, literature, and peace – with economic science added in 1969. Seen as the outcome of worldwide competition, the scientific awards are indicative of the caliber of individual research in different countries.
One clear pattern is the sharp increase in the proportion of scientific awards going to citizens of the United States after 1940, compared to the number in the first half-century of competition. The relative paucity of scientific awards before World War II, at a time when the United States had higher productivity in the overall economy than did those countries whose scientists won a Nobel Prize, suggests that national scientific standing was not a necessary precondition for U.S. economic growth. The initial jump was associated with the out-migration of scientists from Europe that began in the 1930s. From 1943 to 1967, eleven of the twenty-two American winners of the Nobel Prize in Physics had been born in Europe (Table Ad938–949). Although many in this cohort were refugees, the educational and research opportunities provided by the expansion of American research universities after World War II have continued to attract foreigners, who are well represented among American Nobel Prize winners. The dominance of the United States in the scientific awards is thus as much the result of economic growth as the cause.




Table Cg241–250, Table Cg251–257, Table Cg258–264 deal with growth in the output of computers and their changing prices. They are meant to highlight the dramatic effects upon business and consumers of this most conspicuous and important technological change of recent years. The diffusion and improvement of computer technologies has been nothing short of spectacular, but the magnitude of the revolution was by no means obvious from the beginning. The first fully electronic digital computer is generally considered to be the ENIAC, developed at the University of Pennsylvania during World War II. The abstract analysis of John von Neumann formed the conceptual basis for the next machine (the EDVAC, the first stored-program computer) and for virtually all subsequent computers. These first-generation computers were gigantic, slow, and prone to breakdown, giving rise to the view before 1950 that total world demand could be satisfied by not much more than a handful of computers. Although computers quickly gained in speed and reliability, throughout the 1950s and 1960s the market was indeed dominated by large mainframe computers, of which the primary producer was IBM (Table Cg241–250). That company did not foresee the rapid spread of smaller and more portable machines, made possible by the commercialization of the integrated circuit microprocessor by the Intel Corporation in 1971. By 1995 the industry was selling more than two million personal computers per year, and the sale of mainframes had dwindled to a trickle (series Cg243).3
The rapid progress of the underlying technologies may be measured by any number of price indexes and performance indicators (Table Cg251–257, Table Cg258–264). The most famous of these is Moore's Law, which originated in a 1965 observation by Gordon E. Moore (later a cofounder of Intel) that the number of transistors per integrated circuit had doubled roughly every eighteen months. Although Moore predicted that the trend would continue through 1975, the pace showed no signs of slackening through the end of the century (series Cg263). Many other measures of computer memory, speed, and capability have shown similarly persistent dynamic tendencies.
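The compounding implied by Moore's Law can be sketched with a few lines of arithmetic. The function below is illustrative only, assuming the steady eighteen-month doubling described above; the starting count of 1 is a normalization, not a historical figure.

```python
def transistors(start_count, years, doubling_years=1.5):
    """Project a transistor count forward assuming steady doubling
    every `doubling_years` years (18 months in Moore's formulation)."""
    return start_count * 2 ** (years / doubling_years)

# Thirty years of 18-month doubling implies a 2**20 (roughly
# million-fold) increase in transistor density:
growth = transistors(1, 30) / transistors(1, 0)
print(f"30 years of 18-month doubling: {growth:,.0f}x")  # 1,048,576x
```

Such exponential compounding is why, as the text notes, price indexes that fail to adjust for quality change drastically understate the technology's progress.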
As costs have fallen and performance has improved, a major question has arisen regarding appropriate measures of the value of computer production and price deflators to use when measuring real output. The standard measure used to evaluate a machine is the cost of producing that machine, but where the machine has a specific use and the cost of its use falls sharply, economists often argue that a better measure of the value of the computer would be the cost per calculation or per task performed, the so-called hedonic measure of computer value. Such adjustments have a major impact on measured rates of productivity growth, as well as on comparisons of productivity growth between industries. As a result, these relatively technical issues have been central to the debate over the “productivity paradox,” which refers to the observation that rapid technological progress and diffusion of computer technologies coincided with an apparent slowing of the rate of productivity growth in the U.S. economy between 1970 and 1995.4
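The gap between the two valuation approaches described above can be made concrete with a minimal sketch. The prices and performance figures below are hypothetical, chosen only to show how a machine-price deflator and a performance-adjusted (cost-per-calculation) deflator diverge when capability improves faster than prices fall.

```python
# Hypothetical machines: (year, price in dollars,
# performance in millions of calculations per second)
machines = [
    (1990, 5000.0, 10.0),
    (1995, 2500.0, 100.0),
]
(y0, p0, perf0), (y1, p1, perf1) = machines

# Naive deflator: the machine's sticker price simply halved.
machine_price_ratio = p1 / p0                        # 0.5

# Performance-adjusted deflator: cost per calculation fell 20x,
# because performance rose tenfold while the price halved.
cost_per_calc_ratio = (p1 / perf1) / (p0 / perf0)    # 0.05

print(f"Machine price: {machine_price_ratio:.0%} of its {y0} level")
print(f"Cost per calculation: {cost_per_calc_ratio:.0%} of its {y0} level")
```

With the performance-adjusted deflator, measured real computer output (and hence productivity) grows far faster than the unadjusted machine count or sticker-price series would suggest, which is why the choice of deflator matters so much for the productivity-paradox debate.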




Figure Cg-A. Patents issued for inventions – per capita: 1790–1997





Figure Cg-B. Patents issued for inventions: 1901–1998





Bruce, Robert V. 1987. The Launching of Modern American Science, 1846–1876. Knopf.
Brynjolfsson, Erik, and Lorin M. Hitt. 2000. “Beyond Computation: Information Technology, Organizational Transformation and Business Performance.” Journal of Economic Perspectives 14 (Fall): 23–48.
Bush, Vannevar. 1945. Science, The Endless Frontier. U.S. Government Printing Office.
David, Paul A. 1990. "The Computer and the Dynamo: A Historical Perspective on the Modern Productivity Paradox.”  American Economic Review 80: 355–61.
Geiger, Roger L. 1986. To Advance Knowledge: The Growth of American Research Universities, 1900–1940. Oxford University Press.
Gordon, Robert J. 1990. The Measurement of Durable Goods Prices. University of Chicago Press.
Gordon, Robert J. 2000. “Does the ‘New Economy’ Measure up to the Great Inventions of the Past?” Journal of Economic Perspectives 14 (Fall): 49–74.
Griliches, Zvi. 1998. R&D and Productivity: The Econometric Evidence. University of Chicago Press.
Habakkuk, H. J. 1962. American and British Technology in the Nineteenth Century. Cambridge University Press.
Hounshell, David A. 1984. From the American System to Mass Production, 1800–1932. Johns Hopkins University Press.
Hounshell, David, and John Kenly Smith Jr. 1988. Science and Corporate Strategy: DuPont R&D, 1902–1980. Cambridge University Press.
Khan, Zorina B., and Kenneth L. Sokoloff. 2001. “History Lessons: The Early Development of Intellectual Property Institutions in the United States.” Journal of Economic Perspectives 15 (Summer): 233–46.
Lamoreaux, Naomi R., and Kenneth L. Sokoloff. 1999. “Investors, Firms, and the Market for Technology: United States Manufacturing in the Late Nineteenth and Early Twentieth Centuries.” In Naomi Lamoreaux, Daniel M. G. Raff, and Peter Temin, editors. Learning by Doing in Markets, Firms, and Countries. University of Chicago Press.
Levin, Richard C., Alvin K. Klevorick, et al. 1987. “Appropriating the Returns from Industrial Research and Development.” Brookings Papers on Economic Activity 3: 783–820.
Mowery, David, and Nathan Rosenberg. 1989. Technology and the Pursuit of Economic Growth. Cambridge University Press.
Mowery, David, and Nathan Rosenberg. 1998. Paths of Innovation: Technological Change in Twentieth Century America. Cambridge University Press. A version also appears in Stanley L. Engerman and Robert E. Gallman, editors. The Cambridge Economic History of the United States, Volume 3, The Twentieth Century. Cambridge University Press.
Oliner, Stephen D., and Daniel E. Sichel. 2000. “The Resurgence of Growth in the Late 1990s: Is Information Technology the Story?” Journal of Economic Perspectives 14 (Fall): 3–22.
Rosenberg, Nathan. 1976. Perspectives on Technology. Cambridge University Press.
Rosenberg, Nathan. 1982. Inside the Black Box: Technology and Economics. Cambridge University Press.
Rosenberg, Nathan. 1994. Exploring the Black Box. Cambridge University Press.
Schmookler, Jacob. 1966. Invention and Economic Growth. Harvard University Press.




Notes

1. For accounts of these developments, see Habakkuk (1962), Rosenberg (1976, 1994), and Hounshell (1984).
2. For historical discussions, see Geiger (1986), Hounshell and Smith (1988), and Mowery and Rosenberg (1989, 1998).
3. These developments are described in Gordon (1990) and Mowery and Rosenberg (1998, Chapter 6).
4. For alternative perspectives on this relationship, see David (1990), Griliches (1998), Brynjolfsson and Hitt (2000), Gordon (2000), and Oliner and Sichel (2000).

 
 
 
 
Cambridge University Press