Are We “Eating the Family Cow”?

January 25, 2019


Peter Temin, “Finance in Economic Growth: Eating the Family Cow.” Institute for New Economic Thinking, Working Paper No. 86, December 17, 2018.

If you rely on the family cow for its milk (or income from selling the milk), it’s best not to eat the cow. That’s a commonsense basis for the economic idea that future income depends on capital assets. Taking it a step further, long-term economic growth depends on expanding capital assets.

In a manufacturing economy, seeing that expansion is fairly easy. We can see automobile manufacturers building more assembly lines and making more cars. The transition to a service economy has clouded our vision somewhat. Temin describes the new reality: “Agriculture, mining, construction and manufacturing occupied only one-quarter of the labor force in 1990 and fell below one-seventh in 2016. The rest of the labor force is working in services.”

In the service economy, expanding capital assets involves more than buying buildings or equipment. It involves such intangibles as financial assets, knowledge and intellectual property. But our national accounting system does not yet reflect that. The Bureau of Economic Analysis, which produces the National Income and Product Accounts (NIPA), acknowledges the problem: “While all countries account for investment in tangible assets in their gross domestic product (GDP) statistics, no country currently includes a comprehensive estimate of business investment in intangible assets in their official accounts.”

Temin says that this puts economists in the position of tackling 21st-century issues with data designed for the 20th century. “Students are told that the economy has de-industrialized, but they have not been made aware how the growing share of services makes the measurement of macroeconomic variables increasingly difficult.”

This might not be so much of a problem if intangible assets were expanding nicely right along with tangible assets. But Temin doesn’t think they are. His concern is that we may actually be consuming rather than expanding our most important intangible assets, putting our future economic growth at risk, but our national accounting system is not equipped to detect the problem:

Existing NIPA data fail to describe the future path of growth in our new economy because they lack output data on financial, human and social capital investments. They fail to show that the United States is consuming its capital stock now and will suffer later, rather like killing the family cow to have a steak dinner.

Investment in standard accounting

In standard national accounting, Investment (I) is one of the components of GDP, along with Consumption (C), Government spending (G) and Net Exports (NX). It is defined narrowly, however, as spending on plants, equipment and new inventory by firms, and real estate investments by households. It only includes real assets, not financial assets or human capital.
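The expenditure identity just described can be sketched in a few lines. The dollar figures below are illustrative, not actual BEA data.

```python
def gdp(consumption, investment, government, net_exports):
    """GDP by the expenditure approach: Y = C + I + G + NX."""
    return consumption + investment + government + net_exports

# A stylized economy (figures in billions, chosen for illustration only):
print(gdp(consumption=14_000, investment=3_500,
          government=3_800, net_exports=-600))  # 20700
```

Note that Investment (I) here is the narrow NIPA concept: plant, equipment, inventory and residential real estate, not financial or human capital.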

But aren’t you making an investment when you use surplus income to buy assets like stocks and bonds in a retirement account? We certainly do call it investing because the general idea is the same. Like a piece of machinery, a financial asset is a form of wealth that can produce future income. But unlike the machine, your retirement account is not necessarily making anything to add to the Gross Domestic Product. Your financial acquisitions are accounted for on the income side of the economy, as a form of Saving (S). Of course, if the company that issues stocks or bonds uses the money it receives to build a factory, then that does become an investment for purposes of the national accounts. That distinction makes sense, at least within the framework of a manufacturing economy.

Another exclusion from investment is education, although we may think of it as an investment in human capital. In standard accounting, an educational expense is part of Consumption (C) if made by a household, and part of Government spending (G) if made by a government.

The challenge of accounting for intangible investments

While many economists find it reasonable to talk about intangible investments, incorporating them into the national accounting is not easy. Not only are they inherently hard to measure, but sometimes they may seem downright illusory. We can walk into a factory and see what it produces, but how do we distinguish a service from a disservice, or a productive financial asset from a “toxic asset”? This section will focus on financial assets, but similar valuation problems emerge with other intangible assets as well.

If you are just saving for retirement, that “investment” may be reasonably distinguished from those that contribute to future production. On the other hand, that distinction may be harder to justify for a financial firm that acquires financial assets in the normal course of business.

Consider a bank that makes a profit by accepting deposits at low interest rates and making loans at higher rates. It is providing financial services such as facilitating home buying. If it uses some of its profits to expand its operations, surely it is expanding its income-producing assets and thus investing. In standard accounting, only the purchase of tangible assets such as new branch offices would count as investing, but shouldn’t the strengthening of its capital position also count? Doesn’t that also enable it to provide more services and generate more income?

Some economists have found it useful to expand the concepts of capital and investment to include all forms of wealth. That’s what Thomas Piketty was doing when he studied how the rich get richer. He concluded that a higher rate of return on capital relative to the rate of economic growth is associated with greater inequality. But so broad a definition of capital doesn’t work for all purposes. Some forms of wealth are clearly not capital in the sense of a “factor of production.” Collectibles like gold coins or art works don’t produce anything, and they may not even appreciate. Some financial assets fluctuate wildly in market value, so their potential to generate income is hard to evaluate. Productive financial assets are easier to talk about than to measure.

Perhaps the biggest problem involves assets so overvalued as to be dangerous. That was a huge problem leading up to the 2008 financial crisis. Lenders were making too many risky mortgage loans and selling them off to other financial institutions, which then packaged them as overrated investments to be bought by unsuspecting customers. Building more car factories obviously produces more goods for car buyers, but financial acquisitions don’t always provide more services for financial clients or income for owners. Temin quotes Simon Kuznets, who remarked in 1937, “It would be of great value to have national income estimates that would remove from the total the elements which. . .represent dis-service rather than service. Such estimates would subtract from the present national income. . .a great many of the expenses involved in financial and speculative activities.” Financial wheeling and dealing in the mortgage industry did a disservice to all kinds of people, from the borrowers who were encouraged to buy homes they couldn’t afford and then got foreclosed on, to the insurers who insured the overvalued bundles of mortgages and the investors who bought them. When the future income from the overvalued assets turned out to be illusory, the asset values collapsed and fortunes were lost.

The country has experienced a dramatic expansion of financial services, but we have no straightforward way of measuring the output of services, the increase in productive capital, or the contribution to long-term economic growth. Temin is among the skeptics who doubt that the accumulation of financial capital is doing as much for us as we think.

Investment in America: Tangible capital assets

We turn now to the question of how well the United States is investing in its economic future. We’ll start with investment in tangible capital assets as revealed by standard accounting, and then turn to the more challenging question of intangibles.

Temin refers to private fixed investments as “Keynesian” because of the roots of that concept in Keynesian macroeconomics. He says that “no one disputes the low level of Keynesian investment in recent years.” Here I’m also going to quote myself, since I commented on this investment situation in my recent post on “Forty Years of Reaganomics”:

Another goal of Reaganomics was to increase saving and private sector investment. Tax cuts would give people more money to save as well as consume, and strong consumer demand would encourage the investment of those savings in business expansion. Economic growth should remain strong, since the rising investment component of GDP would offset the falling government component.
. . . .
I do not see in the macroeconomic indicators a surge of saving or investment since 1980. Before then, saving was running at about 19-22% of national income, while investment was in the range of 16-18% of GDP. Reaganomics got off to an auspicious start, with saving up to almost 23% and investment up to 20% by the end of Reagan’s first term. But since then, saving and investment have generally been no higher than they were before. Saving is now at 19% of GNI, and investment is at 17% [of GDP].

The results are even more disappointing if one considers the tangible investments that are made by government and accounted for within government spending. The U.S. continues to neglect its aging infrastructure, allowing its roads, bridges and transit systems to deteriorate. Failure to mitigate the effects of climate change will damage our infrastructure as well, creating a further drag on economic growth.

Investment in America: Financial capital assets

If we could consider intangible assets along with other forms of productive capital, would that brighten or darken our picture of investment and future growth? Temin believes that we are reducing rather than increasing our stock of productive intangible assets. Here we are not so much investing as disinvesting.

In public finance, tax cuts have damaged the federal government’s financial position, creating a larger liability in the form of the national debt. That will probably inhibit the government’s future ability to contribute to GDP with spending on public goods and services.

In private finance, expansion of the financial services industry appears to be producing diminishing returns. Temin cites cross-national research showing that a growing financial sector contributes to economic growth only up to a point, but tends to reduce growth once it becomes too large. This implies that too little of that sector’s capital is being put to productive use.

Temin’s prime example of the wasteful use of capital is private equity firms:

Private equity firms have grown in recent decades to raise capital from wealthy individuals and institutions to make risky investments that promise high returns. Private equity firms buy companies and use high leverage to make these high returns. The debts, which have fixed interest payments, provide high returns on the capital invested by rich investors since all the profits go to them. And if the company fails, the debts default and investors walk away without loss. Society picks up the tab.

Brian Alexander’s Glass House told the story of the destructive effects of private equity firms on the economy of Lancaster, Ohio. I concluded my review of the book by linking the wasteful use of capital with economic inequality:

The wealthy minority have a lot of capital available to invest. But very weak income growth for the majority limits their ability to spend on new products. Under those conditions, it is not surprising that a lot of capital would go to buy existing enterprises rather than create new ones; nor is it surprising that cost-cutting rather than expansion of production would be a favored route to profit. If this strategy works to make the 1% richer despite hollowing out the middle class, that only reinforces the inequality and sluggish growth, creating a vicious cycle.

Temin’s conclusion is similar: “Recent research finds that finance has grown to the point where it no longer continues to benefit the entire economy, but it instead increases the incomes of the richest Americans at the expense of everyone else.”

Investment in America: Human and social capital

Research on human capital has focused primarily on education.

Macroeconomic thinkers say that education is the key to national success in a world where competitors like China and Japan challenge the United States’ economic leadership. Letting our human capital decay may be the most important problem we leave to future generations.

Here the picture is also discouraging: States have been cutting support for public universities; teacher pay has been declining relative to other jobs; and “the first education budget of the new administration in 2017. . .cut over ten billion dollars from federal education initiatives. . . .”

With regard to social capital, Temin thinks of it as “a new name for the old idea of community. . . .” Like other intangible assets, “no way has been found to include it in GDP.” But strong communities support economic productivity in many subtle ways. People with strong support groups make better, more productive workers.

An example of how social policy can strengthen or weaken community is the rate of incarceration. Locking criminals up protects the community, but locking too many people up for doing too little undermines community. “Young, poor and dominantly minority men and (to a lesser extent) women cycle through jails, prisons and then back into the community. They disrupt families, weaken social networks and other forms of social support, putting children at risk and promoting delinquency.”

Temin also cites Pearlstein and Wu’s critique of the brand of capitalism promoted by business schools and the business community in recent years. The pursuit of economic efficiency to the exclusion of other goals can also weaken community “by undermining trust and discouraging socially cooperative behavior.”

Temin states his general conclusion this way:

The evidence shown here reveals that we are investing less than before in Keynesian investment of private fixed assets and dis-investing other forms of capital. Financial investment is negative due to rapidly rising government debt and private financial investments that redistribute income toward the top end of the income distribution. Investments in human and social capital–both outside the BEA’s methodology–clearly are negative.

This is a bold thesis. However, the very fact that current accounting practices do not allow us to measure intangible investment with any precision limits our ability to test it. Temin relies on selected studies using unconventional data and impressionistic evidence to make his case, and I think his argument has merit. But economists have their work cut out for them if they are to achieve a comprehensive and realistic assessment of the nation’s investments in its economic future.

Taxing the Rich

January 16, 2019


Paul Krugman, “The Economics of Soaking the Rich,” The New York Times, 01/05/2019.

Peter Diamond and Emmanuel Saez, “The Case for a Progressive Tax: From Basic Research to Policy Recommendations.” Journal of Economic Perspectives, Vol. 25, No. 4 (Fall 2011): 165-190.

The resurgence of the Democratic Party in the 2018 elections is encouraging more discussion of some progressive policy ideas. One of those is a more progressive income tax, as some Democrats advocate higher taxes on the wealthy in order to fund liberal programs like early childhood education and assistance for college tuition.

Paul Krugman’s column calls attention to both the political debate over progressive taxation and the relevant economic research, in particular the work by Peter Diamond and Emmanuel Saez. They make a strong case that a higher top tax rate is in the public interest.

The problem

The top federal bracket is now 37%, which applies to income over $510,300 for individuals and $612,350 for married couples filing joint returns. Is 37% too low, too high, or about right?

Advocates for a higher rate argue that once people reach a certain level of wealth, no one derives much benefit from their additional consumption. Society would benefit from some redistribution of wealth from those who have enough to those who have too little, either through direct spending on the needy or general spending on public goods like good roads or good schools. On the other hand, advocates for a low rate argue that high taxes reduce people’s incentive to earn income by making useful contributions to the economy. When taxes get too high, public revenue can actually drop because the tax base shrinks.

The optimal tax theory presented by Diamond and Saez acknowledges the effects of taxes on both social welfare and personal incentive. Optimal taxation requires some trade-off between the two.

Social welfare is larger when resources are more equally distributed, but redistributive taxes and transfers can negatively affect incentives to work, save, and earn income in the first place. This creates the classical trade-off between equity and efficiency which is at the core of the optimal income tax problem.

The social welfare effect

The idea that money received by very high earners could be put to better use through taxation and public spending is based on the “marginal utility of consumption.” That is the idea that how much you value an additional dollar you get to spend depends on how many dollars you already have. Each increment of income matters more to a poor person than to a rich person, so the marginal utility of consumption declines dramatically as income increases. Taxing the highest incomes in order to spend for the benefit of the less affluent majority should therefore contribute positively to the general good.

The potential positive effect of taxing the rich can be easily calculated by first computing the tax revenue obtained from the highest bracket. For example, if the highest bracket begins at $500,000, and the average taxpayer in that bracket makes $1.5 million, then $1 million is taxable at the highest rate. A 37% rate raises an average of $370,000 from each top-bracket taxpayer, in addition to whatever is collected from the income falling into lower brackets.
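That calculation is easy to sketch as a short function. The $500,000 threshold, 37% rate and $1.5 million income are the illustrative figures from the example above, not actual tax-schedule data.

```python
def top_bracket_revenue(income, threshold=500_000, top_rate=0.37):
    """Revenue collected at the top rate on income above the threshold.

    Income below the threshold is taxed at lower rates, which are
    ignored here. All figures are the illustrative numbers from the
    example, not actual tax-schedule data.
    """
    taxable_at_top = max(0, income - threshold)
    return top_rate * taxable_at_top

# The average top-bracket taxpayer in the example earns $1.5 million,
# so $1 million is taxed at the top rate, yielding about $370,000.
print(top_bracket_revenue(1_500_000))
```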

The potential effect on the public good is offset by the loss to rich people of their benefit from private consumption. But since the marginal utility of consumption declines so dramatically as income increases, this offset is very small. Diamond and Saez ignore it in order to simplify their mathematical presentation, but make an adjustment for it before they are finished. One estimate is that marginal utility for people in the top 1% is only 3.9% of the marginal utility of the median-income family.
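The steepness of that decline can be illustrated with a standard constant-relative-risk-aversion (CRRA) utility function. The functional form and the income figures here are illustrative assumptions, not values from Diamond and Saez, though with log utility the ratio lands close to the 3.9% estimate they cite.

```python
def marginal_utility(consumption, gamma=1.0):
    """Marginal utility under CRRA preferences: u'(c) = c**(-gamma).

    gamma = 1 corresponds to log utility. The functional form and the
    income figures below are illustrative assumptions, not numbers
    from the Diamond-Saez paper.
    """
    return consumption ** (-gamma)

median_income = 60_000      # illustrative median-family income
top_income = 1_500_000      # illustrative top-bracket income

ratio = marginal_utility(top_income) / marginal_utility(median_income)
print(f"{ratio:.1%}")  # 4.0%
```

With log utility the ratio is simply the inverse ratio of incomes, which is why an extra dollar means so little at the top.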

The potential social welfare effect of taxing the rich is only achieved if the revenue is spent usefully, as opposed to foolishly or wastefully. But that is more of a political problem than a problem of economic theory.

The behavioral effect

How much do high taxes reduce incentives to earn high incomes? That depends on what the authors call “behavioral elasticity,” which they define as the “percent increase in average reported income…when the net-of-tax rate increases by 1 percent.” If the tax rate is r, then the net-of-tax rate is 1-r, the portion of top-bracket income you get to keep. The more you get to keep, the greater incentive you should have to earn.

The paper’s formulas and charts are confusing for us non-economists, so here’s a concrete example with simple numbers. Let’s start with the average wealthy taxpayer described above, earning $1.5 million and paying top-bracket rates on $1 million (the portion falling over the $500,000 threshold). Say that a future Congress is considering raising the top bracket rate from 40% to 45%. If income remained the same, that would generate additional revenue of $50,000 (5% of $1 million) per wealthy taxpayer. But now income goes down some because of reduced incentive, in response to the change in the net-of-tax rate. Reducing it from 60% to 55% is a percentage decline of 5/60 or 8.3%. Research on behavioral elasticity has found that the effect on income is about one-quarter of the percentage change in the net-of-tax rate, in this case about 2.1%. So the estimated decline in income is 2.1% of $1.5 million, or $31,500, leaving $1,468,500 to be taxed, $968,500 of it at the top rate. Top-bracket tax revenue from this taxpayer still goes up, but only from $400,000 (40% of $1 million) to $435,825 (45% of $968,500). The behavioral effect is significant, but it is not enough to offset all the additional revenue from the tax hike.

Suppose, however, that we start from a much higher initial rate, and consider increasing the top rate from 75% to 80%. Starting from the same taxpayer earning $1.5 million, the potential increase in revenue is the same, $50,000, but the effect on the incentive to earn is in theory much larger. With the net-of-tax rate for top-bracket income only 25% to begin with, taxing away another 5% has a greater proportional impact. The reduction from 25% to 20% is a percentage decline of 5/25 or 20%. The estimated effect on income is again one-quarter of that or 5%. The estimated decline in income is $75,000, leaving $1,425,000 to be taxed, $925,000 at the top rate. Top-bracket tax revenue from this taxpayer now goes down, from $750,000 (75% of $1 million) to $740,000 (80% of $925,000). Raising the top bracket further when it is already so high is counter-productive.
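Both worked examples follow the same recipe, which can be sketched as one function. The elasticity value and dollar figures are the ones used in the examples above; the first result differs slightly from the text's $435,825 because the text rounds the income decline to 2.1%.

```python
def revenue_after_hike(income, threshold, old_rate, new_rate,
                       elasticity=0.25):
    """Estimate top-bracket revenue after a rate hike, letting reported
    income shrink in response to the lower net-of-tax rate.

    As in the worked examples, the percentage decline in income is
    `elasticity` times the percentage decline in the net-of-tax
    rate (1 - rate).
    """
    old_net = 1 - old_rate
    new_net = 1 - new_rate
    pct_decline_net = (old_net - new_net) / old_net     # e.g. 5/60
    pct_decline_income = elasticity * pct_decline_net   # e.g. ~2.1%
    new_income = income * (1 - pct_decline_income)
    taxable_at_top = new_income - threshold
    return new_rate * taxable_at_top

# 40% -> 45%: revenue per taxpayer still rises (to about $435,900)
print(revenue_after_hike(1_500_000, 500_000, 0.40, 0.45))

# 75% -> 80%: revenue falls to about $740,000, below the
# original $750,000 collected at 75%
print(revenue_after_hike(1_500_000, 500_000, 0.75, 0.80))
```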

The optimal top tax rate

These examples demonstrate the general principle that when the top rate is relatively low, raising it can generate additional revenue to be used to enhance general social welfare. But when the top rate is already relatively high, trying to raise it further can be counter-productive.

Diamond and Saez use the same mathematical relationships to calculate the sweet spot where revenue is maximized. The optimal top rate comes out to 73%. Rates below that sacrifice revenue and forgo an opportunity to increase the general welfare. Rates above it reduce revenue by weakening the incentive to earn high incomes.
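The sweet spot follows from a compact formula in the Diamond-Saez paper: the revenue-maximizing top rate is 1/(1 + a·e), where a is the Pareto parameter describing how thin the top of the income distribution is (roughly 1.5 for the U.S.) and e is the behavioral elasticity (about 0.25). A quick check:

```python
def optimal_top_rate(pareto_a=1.5, elasticity=0.25):
    """Revenue-maximizing top tax rate: tau* = 1 / (1 + a * e).

    Defaults are the parameter values reported by Diamond and Saez:
    a ~ 1.5 for the U.S. top income distribution, e ~ 0.25.
    """
    return 1 / (1 + pareto_a * elasticity)

print(round(optimal_top_rate(), 2))  # 0.73
```

A higher elasticity (more responsive earners) or a thicker top tail would push the optimal rate down, which is why the estimate of e matters so much.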

Three additional points are worth noting:

  1. The analysis to this point does not take into account the loss of consumption utility for the wealthy taxpayer. However, the marginal utility of consumption for the rich is so low that incorporating it into the analysis only lowers the optimal tax rate by about one percentage point.
  2. The optimal rate should take into account all taxes based on income, not just federal income taxes. At the time the authors were writing, other taxes added about 7.5% to the total tax burden, after federal deductions for state taxes were considered. So a total top rate of 73% implies a federal top rate of no more than 65.5%.
  3. The case for raising middle-class taxes would be much weaker, because of their much higher marginal utility of consumption. The more that people are already spending their income on essentials, the less government can enhance the common good through taxing and spending.

The politics of taxation

The United States has been in a relatively low-tax era since the “Reagan Revolution” of the 1980s. From the 1950s to 1970s, the top federal rate was in the 70-90% range. Since the 1980s, it has dropped from 50% to today’s 37%. According to the Diamond-Saez analysis, 90% is too high, but 37% is far too low.

During this era, achieving lower taxes, especially for the wealthy, has been the highest domestic priority for Republicans. They have pursued this goal almost without regard for the fiscal consequences, advocating lower taxes whether they can be matched by spending cuts or just result in larger deficits. Their rhetorical arguments have gone far beyond what economists know, belittling the benefits of public spending and exaggerating the deleterious effects of high taxes on earning incentives and economic growth. They have become the party of hostility to government and affection for those who feather their own nests.

The public benefits of high taxes on the rich may–like climate change–be one of those inconvenient truths that too many of our leaders choose to ignore. Rather than dismissing advocates of higher taxes as lunatics, economist Paul Krugman would like us to base tax policy more on facts and reasoned analysis. For example, he points out that the economy grew at a somewhat faster rate during the postwar era of high tax rates than it has grown lately.

Republicans almost universally advocate low taxes on the wealthy, based on the claim that tax cuts at the top will have huge beneficial effects on the economy. This claim rests on research by. . .well, nobody. There isn’t any body of serious work supporting G.O.P. tax ideas, because the evidence is overwhelmingly against those ideas.

Effects of New Technologies on Labor

January 4, 2019


David Autor and Anna Salomons, “Is automation labor share-displacing? Productivity growth, employment, and the labor share.” Brookings Papers on Economic Activity, Spring 2018.

Daron Acemoglu and Pascual Restrepo, “Robots and jobs: Evidence from US Labor Markets.” National Bureau of Economic Research, March 2017.

I have been interested in automation’s effects on the labor force for a long time, especially since reading Martin Ford’s Rise of the Robots. Ford raises the specter of a “jobless future” and a massive welfare system to support the unemployed.

Here I discuss two papers representing some of the most serious economic research on this topic.

The questions

To what extent do new technologies really displace human labor and reduce employment? The potential for them to do so is obvious. The mechanization of farming dramatically reduced the number of farm workers. But we can generalize only with caution. In theory, a particular innovation could either produce the same amount with less labor (as when the demand for a product is inelastic, often the case for agricultural products), or produce a larger amount with the same labor (when demand expands along with lower cost, as with many manufactured goods). An innovation can also save labor on one task, but reallocate that labor to a different task in the same industry.

Even if technological advances reduce the labor needed in one industry, that labor can flow into other industries. Economists have suggested several reasons that could happen. One involves the linkages between industries, as one industry’s productivity affects the economic activity of its suppliers and customers. If the computer industry is turning out millions of low-cost computers, that can create jobs in industries that use computers or supply parts for them. Another reason is that a productive industry affects national output, income and aggregate demand. The wealth created in one industry translates into spending on all sorts of goods and services that require human labor.

The point is that technological innovations have both direct effects on local or industry-specific employment, and also indirect effects on aggregate employment in the economy as a whole. The direct effects are more obvious, which may explain why the general public is more aware of job losses than job gains.

A related question is the effect of technology on wages, and therefore on labor’s share of the economic value added by technological change. Do employers reap most of the benefits of innovation, or are workers able to maintain their share of the rewards as productivity rises? Here too, aggregate results could differ from results in the particular industries or localities experiencing the most innovation.

The historical experience

American history tells a story of painful labor displacement in certain times, places and industries; but also a story of new job creation and widely shared benefits of rising productivity. Looking back on a century of technological change from the vantage point of the mid-20th century, economists did not find negative aggregate effects of technology on employment or on labor’s share of the national income. According to Autor and Salomons:

A long-standing body of literature, starting with research by William Baumol (1967), has considered reallocation mechanisms for employment, showing that labor moves from technologically advancing to technologically lagging sectors if the outputs of these sectors are not close substitutes. Further,…such unbalanced productivity growth across sectors can nevertheless yield a balanced growth path for labor and capital shares. Indeed, one of the central stylized facts of modern macroeconomics, immortalized by Nicholas Kaldor (1961), is that during a century of unprecedented technological advancement in transportation, production, and communication, labor’s share of national income remained roughly constant.

Such findings need to be continually replicated, since they might hold only for an economy in a particular place or time. In the 20th century, the success of labor unions in bargaining for higher wages and shorter work weeks was one thing that protected workers from the possible ill effects of labor-saving technologies.

Recent effects of technological change

Autor and Salomons analyze data for OECD countries for the period 1970-2007. As a measure of technological progress, they use the growth in total factor productivity (TFP) over that period.

They find a direct negative impact of productivity growth on employment within the most affected industries. However, they find two main indirect effects that offset the negative impact for the economy as a whole:

First, rising TFP within supplier industries catalyzes strong, offsetting employment gains among their downstream customer industries; and second, TFP growth in each sector contributes to aggregate growth in real value added and hence rising final demand, which in turn spurs further employment growth across all sectors.

To put it most simply, one industry’s productivity may limit its own demand for labor, but its contribution to the national output and income creates employment opportunities elsewhere.

With regard to labor’s share of the economic benefits, the findings are a little different. Here again, the researchers find a direct negative effect within the industries most affected by technological innovation. But in this case, that effect is not offset, for the most part, by more widespread positive effects.

The association between technological change and labor’s declining share varied by decade. Labor’s share actually rose during the 1970s, declined in the 1980s and 90s, and then fell more sharply in the 2000s. The authors mention the possibility that the newest technologies are especially labor-displacing, but reach no definite conclusion. Another possibility is that non-technological factors such as the political weakness of organized labor are more to blame.

The impact of robotics

Autor and Salomons acknowledge that because they used such a general measure of technological change, they couldn’t assess the impact of robotics specifically. They do cite work by Georg Graetz and Guy Michaels that did not find general negative effects of robots on employment or labor share in countries of the European Union. That’s important, since many European countries have gone farther than we have in adopting robots.

The paper by Acemoglu and Restrepo focuses on the United States for the period 1990-2007. (They deliberately ended in 2007 so that the impact of the Great Recession wouldn’t muddy the waters.)

The authors used the definition of robot from the International Federation of Robotics, “an automatically controlled, reprogrammable, and multipurpose [machine].” Over the period in question, robot usage increased from 0.4 to 1.4 per thousand workers. “The automotive industry employs 38 percent of existing industrial robots, followed by the electronics industry (15 percent), plastic and chemicals (10 percent), and metal products (7 percent).”

Adoption of industrial robots has been especially common in Kentucky, Louisiana, Missouri, Tennessee, Texas, Virginia and West Virginia. As Thomas B. Edsall titled his recent New York Times column, “The Robots Have Descended on Trump Country.”

Acemoglu and Restrepo classified localities–technically “commuting zones”–according to their “exposure” to robotics, based on their levels of employment in the types of jobs most conducive to robotization.

Their first main finding was a direct negative effect of robotics on employment and wages within commuting zones:

Our estimates imply that between 1990 and 2007 the increase in the stock of robots…reduced the employment to population ratio in a commuting zone with the average US change in robots by 0.38 percentage points, and average wages by 0.71 percent (relative to a commuting zone with no exposure to robots). These numbers…imply that one more robot in a commuting zone reduces employment by about 6 workers.

The workers most likely to be affected are male workers in routine manual occupations, with wages in the lower-to-middle range of the wage distribution.

In the aggregate, these local effects are partly offset by “positive spillovers across commuting zones”–positive effects on employment and wages throughout the economy. With these spillovers taken into account, the estimated effects of robotics on employment and on wages are cut almost in half, dropping to 0.20 percentage points and 0.37 percent, respectively.
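The arithmetic behind “cut almost in half” can be checked directly from the figures quoted above. This is just an illustrative back-of-the-envelope calculation on the paper’s reported estimates, not a reconstruction of the authors’ methodology:

```python
# Local (within-commuting-zone) effects quoted from Acemoglu and Restrepo:
local_employment_effect = 0.38   # percentage-point drop in employment/population
local_wage_effect = 0.71         # percent drop in average wages

# Aggregate effects after positive spillovers across commuting zones:
aggregate_employment_effect = 0.20
aggregate_wage_effect = 0.37

# Fraction of the local effect that survives in the aggregate:
emp_ratio = aggregate_employment_effect / local_employment_effect
wage_ratio = aggregate_wage_effect / local_wage_effect

print(f"Employment effect remaining: {emp_ratio:.0%}")
print(f"Wage effect remaining: {wage_ratio:.0%}")
```

Both ratios come out near 53 percent, consistent with the paper’s description of spillovers cutting the local effects almost in half.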

The authors state their conclusion cautiously, as “the possibility that industrial robots might have a very different impact on labor demand than other (non-automation) technologies.”


While there is little doubt that new technologies often displace labor in particular industries and localities, the aggregate effects on employment and wages are less consistent. Historically (late 19th and early 20th centuries), employment and labor share of income held up very well. For developed countries in the period 1970-2007, Autor and Salomons found a mixed picture, with robust employment but declining labor share after 1980. With respect to robotics specifically, Graetz and Michaels did not find declines in employment or labor share in the European Union, but Acemoglu and Restrepo found some decline in both employment and wages in the U.S.

It seems fair to say that the jury is still out on the effects of automation on the labor force. It may be that automation has no inevitable effect, but that it depends on how we as a society choose to deal with it. We shouldn’t assume a world of mass unemployment and widespread government dependency on the basis of recent, preliminary results from one country. Authors such as Thomas Friedman, who are more optimistic than Martin Ford about the long-run effects of new technologies, have yet to be proved wrong.

National Climate Assessment

December 3, 2018

Previous | Next

U.S. Global Change Research Program. Impacts, Risks, and Adaptation in the United States: Fourth National Climate Assessment, Volume II [Reidmiller, D.R., C.W. Avery, D.R. Easterling, K.E. Kunkel, K.L.M. Lewis, T.K. Maycock, and B.C. Stewart (eds.)]. Washington, DC, 2018.

Federal law requires the government to issue a new climate assessment every four years. Thirteen government agencies participated in the current assessment, including the Departments of State, Commerce, Interior, Transportation, Health and Human Services, Defense, Agriculture, and Energy; along with the National Science Foundation, Smithsonian Institution, U.S. Agency for International Development, National Aeronautics and Space Administration, and Environmental Protection Agency.

This year’s report is Volume II of the fourth National Climate Assessment. Volume I, issued last year, provided a comprehensive summary of climate changes themselves, based on the best science available. Findings from Volume I are incorporated into the impact assessment in Volume II.

Full disclosure: I haven’t read all 1,656 pages of this report. What I am summarizing is the 40-page overview in Chapter 1, supplemented by a brief look at selected other chapters.

Here is a statement of the report’s general conclusions:

This report draws a direct connection between the warming atmosphere and the resulting changes that affect Americans’ lives, communities, and livelihoods, now and in the future….It concludes that the evidence of human-caused climate change is overwhelming and continues to strengthen, that the impacts of climate change are intensifying across the country, and that climate-related threats to Americans’ physical, social, and economic well-being are rising.

The climate changes

The most obvious change is that the country is getting warmer along with the rest of the planet. On average, temperatures have increased by 1.8 degrees Fahrenheit across the contiguous United States since around 1900. The warming in Alaska has been even greater. Since the 1980s, Arctic sea ice has been decreasing by 11-16% per decade. (Here in the Southeast, the rise in temperature is expected to be smaller than in most of the northern United States, and the main effect will be warmer nights rather than hotter days.)
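A per-decade decline compounds, so even the lower end of that 11-16% range adds up quickly. Here is a rough sketch of how much sea ice would remain after several decades at those rates, treating the decline as a simple geometric process (an illustration of the arithmetic, not the report’s actual projection):

```python
# Compound a per-decade rate of decline over a number of decades.
def remaining_fraction(rate_per_decade, decades):
    """Fraction remaining after geometric decline at rate_per_decade."""
    return (1 - rate_per_decade) ** decades

# Roughly the span from the 1980s to the 2020s:
for rate in (0.11, 0.16):
    frac = remaining_fraction(rate, 4)
    print(f"At {rate:.0%} per decade, about {frac:.0%} remains after 4 decades")
```

At 11% per decade about 63% remains after four decades; at 16%, only about half.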

The median sea level along our coasts has risen about 9 inches since the early 20th century, due to ocean warming and melting land ice. The oceans have also become more acidic as they have absorbed more carbon dioxide.

Although these changes may sound small, they are enough to produce large changes in the weather. Extreme climate-related weather events are becoming more common, and when they occur they often last longer and cause more damage. We are experiencing more frequent and longer heat waves, especially in urban areas. More of our annual rainfall is coming in the form of intense, one-day rainfalls that cause more flooding, especially in coastal areas affected by higher sea levels and tides. Recent hurricanes have been especially severe. Hot, dry conditions have contributed to unusually large wildfires in the western states.

Scientists have known for a long time that naturally occurring greenhouse gases in the atmosphere trap some of the heat radiating from the Earth’s surface. That keeps the planet surface warm enough to be habitable for living things. But in the industrial era, human activity has added dramatically to these gases, especially through emissions of carbon dioxide. Atmospheric carbon dioxide has increased by about 40% since preindustrial times. Burning fossil fuels accounts for about 85% of all greenhouse gas emissions by humans.

Scientists have concluded that non-human factors alone cannot account for climate change. In fact, “Without human activities, the influence of natural factors alone would actually have had a slight cooling effect on global climate over the last 50 years.” Scientists have found no credible alternative to the consensus view that human activity, especially the burning of fossil fuels, is responsible for global warming.

Scenarios for future change

Science is often in the position of being unable to predict the future with certainty, but being able to project possible futures based on reasonable assumptions. For example, financial planners cannot predict a couple’s future retirement income without making assumptions about how much they will save, how they will invest it, and how closely market performance will conform to historical averages. So they often construct multiple scenarios based on a reasonable range of assumptions.

Climate scientists do not know how much global temperatures will increase or how much sea levels will rise, mainly because they don’t know how much humans will curb their emissions of greenhouse gases. So the National Climate Assessment relies on a “suite of possible scenarios” based on “Representative Concentration Pathways”–that is, different concentrations of carbon dioxide in the atmosphere later this century.

Some additional rise in temperature is inevitable over the next few decades, even if we start reducing carbon emissions now. That’s because the climate system responds slowly to changes in greenhouse gas levels. Even in the lowest scenario considered, “additional increases in temperatures across the contiguous United States of at least 2.3°F (1.3°C) relative to 1986-2015 are expected by the middle of this century.” Around mid-century, the various scenarios diverge dramatically. In a “very low” scenario, which assumes that emissions are peaking now, temperatures level off after 2050. In a “low” scenario, which assumes that emissions peak around 2050, temperatures rise another 2.3-6.7°F (1.3-3.7°C) by the end of the century. In a “higher” scenario, which assumes that emissions keep rising, temperatures rise another 5.4-11.0°F (3.0-6.1°C) by the end of the century, resulting in the most catastrophic effects. “Coastal flood heights that today cause major damages to infrastructure would become common during high tides nationwide.”
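The report pairs each Fahrenheit figure with its Celsius equivalent. For temperature *changes* (as opposed to absolute temperatures), the conversion is simply a factor of 9/5, with no offset. A quick sketch checking the scenario ranges quoted above:

```python
# Convert a temperature *change* from Celsius to Fahrenheit.
# Unlike absolute temperatures, no +32 offset applies to intervals.
def delta_c_to_f(delta_c):
    return delta_c * 9 / 5

# (Celsius change, Fahrenheit change) pairs as quoted in the report:
pairs = [(1.3, 2.3), (3.7, 6.7), (3.0, 5.4), (6.1, 11.0)]
for dc, df in pairs:
    print(f"{dc}°C -> {delta_c_to_f(dc):.1f}°F (report: {df}°F)")
```

Each converted value rounds to the report’s Fahrenheit figure, so the paired numbers are internally consistent.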

Perhaps the most sobering finding is that current global trends in annual greenhouse gas emissions are most consistent with the highest scenario considered. The evidence so far does not inspire confidence that global emissions are peaking now or will do so in the near future. But business as usual is not a viable option, and serious emission reductions are required soon to avoid catastrophic outcomes before the end of the century.

This is probably what the White House was referring to when it dismissed the report as being “largely based on the most extreme scenario,” as if the researchers deliberately chose the most pessimistic assessment in order to upset people. What they actually did was project a reasonable range of scenarios, but then come to the scientific conclusion that allowing carbon emissions to rise would result in the least desirable outcome. What the Trump administration is doing is rejecting the report as too dire, and then insisting on continuing the policies that are most likely to produce the most dire results!

The risks

The report discusses three kinds of risks arising from climate change: risks to economy and infrastructure, risks to the natural environment and “ecosystem services,” and risks to human health and well-being.

Climate change could cause substantial damage to infrastructure and private property. Regional economies and industries that depend on favorable climate conditions, such as agriculture, tourism and fisheries, will be especially vulnerable. The researchers tried to quantify many of these effects throughout the report, but the overview included only a few general references to dollar amounts: “The potential for losses in some sectors could reach hundreds of billions of dollars per year by the end of this century.”

Climate change produces economic losses in many different ways. Extreme heat waves and more powerful storms put stress on energy systems and create widespread power outages. Much of the infrastructure for producing energy is located along ocean and gulf coasts that are vulnerable to strong hurricanes and increased flooding. Although the warming climate lengthens the growing season, yields for many major crops are expected to decline because of excessive heat and greater pest activity. Severe weather around the world also affects the U.S. economy by disrupting international supply chains, as happened with hard-drive imports from Thailand in 2011.

An example of what the report calls “ecosystem services” is the availability of water or snow for drinking and recreation. In the Southwest, “intensifying droughts, heavier downpours, and reduced snowpack are combining with other stressors such as groundwater depletion to reduce the future reliability of water supplies in the region.”

Heat waves are associated with higher death rates, especially for older adults, pregnant women, and children. Climate change will also increase exposure to pollen allergens associated with allergic illnesses such as asthma and hay fever. And as North America warms, disease-carrying insects from southern climates are migrating northward.

Since low-income communities are especially vulnerable to many of these risks, “climate change threatens to exacerbate existing social and economic inequalities….”

Reducing the risks

Many states, cities and businesses have taken some measures to reduce greenhouse gas emissions or to adapt to climate changes. However, the report states that “these efforts do not yet approach the scale needed to avoid substantial damages to the economy, environment, and human health expected over the coming decades.” Time is of the essence, since some effects of climate change will be difficult or impossible to reverse if they are allowed to occur. Once ice sheets melt enough to make sea levels higher and coastal cities uninhabitable, those conditions may persist for thousands of years.

The report makes a distinction between mitigation, which reduces risks by limiting further climate change, and adaptation, which reduces risks by softening the impact of the climate changes that do occur.

With regard to mitigation, one area where some progress has been made is the power sector of the economy. Greenhouse gas emissions from power generation dropped 25% from 2005 to 2016, the greatest decline for any sector. As a result, the sector with the largest emissions is now transportation.

Last week’s announcement of automobile plant closings by GM is very much related to our emissions problems. By increasing domestic gasoline production, we have lowered the price of gas, but that encourages consumers to buy larger, less fuel-efficient vehicles. The plants to be closed are producing smaller, more fuel-efficient cars for which there is currently less demand. Gas prices are artificially low, since they don’t include the ultimate cost to society of environmental damage. A carbon tax would be a rational way of discouraging our short-sighted reliance on fossil fuels, but that requires courageous leadership and a better-informed public.

With regard to adaptation, some communities have taken measures like strengthening buildings to withstand extreme weather events. They tend, however, to prepare only for the range of events that are familiar to them, rather than the even wider range that scientists anticipate. They are more likely to raise construction standards than to block construction altogether in high-risk locations.

In many areas, the costs that can be avoided through advance planning and investment are substantial. For example, “More than half of damages to coastal property are estimated to be avoidable through adaptation measures such as shoreline protection and beach replenishment.” Nevertheless, the researchers believe that moving populations away from the most threatened coastlines will become unavoidable.

Crisis, what crisis?

With the issuance of this report, the American people have just experienced a remarkable spectacle. On the one hand, thirteen federal agencies have collaborated to produce the most scientific report they can assemble on climate change, as mandated by federal law. On the other hand, the Trump administration has tried to bury it by releasing it on Black Friday and encouraging people not to believe it. “Whatever happened to Global Warming?” the president tweeted during a recent cold snap.

The President of the United States has considerable power both to define issues and to address them. This president is doing what he can to ridicule the whole idea of controlling emissions and undermine existing efforts to do so. He has blocked the previous administration’s clean energy initiative, lowered standards for vehicle fuel efficiency, taken the U.S. out of the Paris Agreement, and encouraged more fossil fuel production. I am tempted to use a term like “criminal negligence” to refer to his leadership in this area, except that ignorance is not a crime.

President Trump’s idea of a real crisis is illegal immigration. But according to The Economist, total apprehensions at the border were already way down before he even took office. “Not only have the migration numbers tumbled and the share of Mexicans among them dwindled. More Mexicans are now returning to Mexico than are coming to the United States illegally.” Central American migrants have increased in the past decade, but their numbers are still too small to change the overall story. By trying to close the border to asylum seekers, Trump provokes border confrontations that seem to confirm his largely phony crisis.

Given Donald Trump’s hyper-nationalist worldview and distorted priorities, it’s no wonder that immigration far surpasses climate change as a hot political topic and campaign issue. Too many Americans seem willing to follow this president wherever he leads, even if it’s over a climate-change cliff. Isn’t it time for the country to come to its senses and demand realistic, fact-based leadership?

Thank You for Being Late (part 3)

November 26, 2018

Previous | Next

A “moral fork in the road”

Another thing I liked about Friedman’s book was his willingness to talk about morality. Many liberals seem to shy away from the subject, perhaps because they don’t want to be confused with moral traditionalists. Personally, I think that liberals are in a good position to seize the moral high ground on many issues. After all, they are committed to such worthy values as social justice, equal opportunity, freedom of expression, scientific inquiry, nonviolent conflict resolution, and oh yes, democracy.

Friedman says that in the age of accelerations he describes, “leadership is going to require the ability to come to grips with values and ethics.” In particular, new technologies have empowered people to contribute more than ever to the solution of social problems, but also to disrupt social order on a more destructive scale. “As a species, we have never stood at this moral fork in the road–where one of us could kill all of us and all of us could fix everything if we really decided to do so.”

Friedman describes cyberspace as a lawless, amoral sphere badly in need of normative guidance. We are turning over too many decisions to impersonal algorithms that lack any moral compass. We thought that we could just connect everybody through social media and good things would happen. Now we realize that communities of users need some way of establishing moral norms, so that technically-empowered bad actors don’t destroy cyberspace for everyone.

Character and community

Friedman agrees with David Brooks that “most of the time character is not an individual accomplishment. It emerges through joined hearts and souls, and in groups.”

But what groups? The federal government is too big, too bureaucratic, and too slow to respond to emerging needs. On the other hand, families are too small and fragile to provide their members with the necessary supports. (He doesn’t elaborate on the second point, which would get into the debate over whether it “takes a village” or just a family to develop a good citizen.) Friedman looks to the local level beyond the family, asserting that “the healthy city, town or community is going to be the most important governing building block in the twenty-first century.” He quotes Gidi Grinstein as saying that such a model community would be “focused on supporting the employability, productivity, inclusion, and quality of life of its members.”

Here Friedman draws on his own experience of St. Louis Park, the suburb of Minneapolis where he grew up. There in the 1950s, American Jews escaped their urban ghetto and coexisted primarily with “a bunch of progressive Scandinavians.” Social harmony was helped along by what the White House Council of Economic Advisers called the “Age of Shared Growth” (1948-1973), characterized by high productivity, economic expansion and broad distribution of the benefits. The result was an “imbedded habit of inclusion,” a tradition carried on in efforts to assimilate today’s minorities. Friedman says that the jury is still out on whether the community can truly overcome today’s social divisions, but at least it is trying.

I think his point is well taken that pluralism and inclusion are keys to moral as well as economic progress.

How optimistic?

The subtitle of the book is “An Optimist’s Guide to Thriving in the Age of Accelerations.” To a degree, I share the author’s optimism. New technologies eliminate some jobs, but they can also increase productivity, lower costs for consumers, and create new jobs in expanding industries for those who acquire the appropriate skills. Globalization can expand the pool of human knowledge and generate more solutions to human problems. Even climate change has an upside, in that it should lead us to develop clean energy industries on a grand scale.

At times, however, I thought that Friedman underestimated the obstacles to be overcome, especially because of today’s economic inequality. I agree that technological change and rising productivity could be helpful in raising wages, as they were in the “Age of Shared Growth.” But even then, workers didn’t get their share of the benefits without a power struggle between business and labor, one in which government also weighed in. I agree that globalization expands the pool of human knowledge, but it can also result in a race to the bottom, where countries compete by cutting wages and skimping on environmental protection.

Friedman’s book helps convince me that massive investments in human capital are needed to qualify people for the jobs and lifestyles of the future. But his eighteen specific recommendations seem to fall a bit short, especially in the area of education. He wants to make postsecondary education tax deductible, but the people who find college least affordable are in such low tax brackets that the deduction has little value for them. He wants to eliminate corporate taxes, which would produce a windfall primarily for the richest tenth of the population who own over 80% of the stock.
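The point about deductions is worth making concrete: a deduction reduces *taxable income*, so its dollar value equals the deduction times the filer’s marginal tax rate. The bracket rates below are illustrative round numbers, not a claim about any particular tax year:

```python
# Tax saved by a deduction depends on the filer's marginal rate.
def deduction_value(deduction, marginal_rate):
    """Dollar value of a deduction: deduction times marginal tax rate."""
    return deduction * marginal_rate

tuition = 10_000  # hypothetical deductible tuition amount
for rate in (0.10, 0.22, 0.37):
    saved = deduction_value(tuition, rate)
    print(f"At a {rate:.0%} marginal rate, a ${tuition:,} deduction saves ${saved:,.0f}")
```

The same $10,000 deduction is worth $1,000 to a family in a 10% bracket but $3,700 to one in a 37% bracket, which is why the benefit flows mostly to those who need it least.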

The general premise of the book remains sound, however: we shouldn’t panic or withdraw from the emerging world, but should embrace the future with serious reflection and a positive attitude.