Taxing the Rich

January 16, 2019


Paul Krugman, “The Economics of Soaking the Rich,” The New York Times, 01/05/2019.

Peter Diamond and Emmanuel Saez, “The Case for a Progressive Tax: From Basic Research to Policy Recommendations.” Journal of Economic Perspectives, Vol. 25, No. 4 (Fall 2011): 165-190.

The resurgence of the Democratic Party in the 2018 elections is encouraging more discussion of some progressive policy ideas. One of those is a more progressive income tax, as some Democrats advocate higher taxes on the wealthy in order to fund liberal programs like early childhood education and assistance for college tuition.

Paul Krugman’s column calls attention to both the political debate over progressive taxation and the relevant economic research, in particular the work by Peter Diamond and Emmanuel Saez. They make a strong case that a higher top tax rate is in the public interest.

The problem

The top federal bracket is now 37%, which applies to income over $510,300 for individuals and $612,350 for married couples filing joint returns. Is 37% too low, too high, or about right?

Advocates for a higher rate argue that once people reach a certain level of wealth, no one derives much benefit from their additional consumption. Society would benefit from some redistribution of wealth from those who have enough to those who have too little, either through direct spending on the needy or general spending on public goods like good roads or good schools. On the other hand, advocates for a low rate argue that high taxes reduce people’s incentive to earn income by making useful contributions to the economy. When taxes get too high, public revenue can actually drop because the tax base shrinks.

The optimal tax theory presented by Diamond and Saez acknowledges the effects of taxes on both social welfare and personal incentive. Optimal taxation requires some trade-off between the two.

Social welfare is larger when resources are more equally distributed, but redistributive taxes and transfers can negatively affect incentives to work, save, and earn income in the first place. This creates the classical trade-off between equity and efficiency which is at the core of the optimal income tax problem.

The social welfare effect

The idea that money received by very high earners could be put to better use through taxation and public spending is based on the “marginal utility of consumption.” That is the idea that how much you value an additional dollar you get to spend depends on how many dollars you already have. Each increment of income matters more to a poor person than a rich person, and so the marginal utility of consumption declines dramatically as income increases. Taxing the highest incomes in order to spend for the benefit of the less affluent majority should contribute positively to the general good.

The potential positive effect of taxing the rich can be easily calculated by first computing the tax revenue obtained from the highest bracket. For example, if the highest bracket begins at $500,000, and the average taxpayer in that bracket makes $1.5 million, then $1 million is taxable at the highest rate. A 37% rate raises an average of $370,000 from each top-bracket taxpayer, in addition to whatever is collected from the income falling into lower brackets.
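For readers who want to see the bookkeeping, here is a minimal sketch of that calculation in Python; the income, threshold, and rate are simply the illustrative numbers from the paragraph above, not figures from the paper.

```python
def top_bracket_revenue(income, threshold=500_000, top_rate=0.37):
    """Revenue collected at the top rate on income above the bracket threshold."""
    taxable_at_top = max(income - threshold, 0)
    return top_rate * taxable_at_top

# The example's average top-bracket taxpayer, earning $1.5 million
print(round(top_bracket_revenue(1_500_000)))  # 370000
```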

The potential effect on the public good is offset by the loss to rich people of their benefit from private consumption. But since the marginal utility of consumption declines so dramatically as income increases, this offset is very small. Diamond and Saez ignore it in order to simplify their mathematical presentation, but make an adjustment for it before they are finished. One estimate is that marginal utility for people in the top 1% is only 3.9% of the marginal utility of the median-income family.

The potential social welfare effect of taxing the rich is only achieved if the revenue is spent usefully, as opposed to foolishly or wastefully. But that is more of a political problem than a problem of economic theory.

The behavioral effect

How much do high taxes reduce incentives to earn high incomes? That depends on what the authors call “behavioral elasticity,” which they define as the “percent increase in average reported income…when the net-of-tax rate increases by 1 percent.” If the tax rate is r, then the net-of-tax rate is 1-r, the portion of top-bracket income you get to keep. The more you get to keep, the greater incentive you should have to earn.

The paper’s formulas and charts are confusing for us non-economists, so here’s a concrete example with simple numbers. Let’s start with the average wealthy taxpayer described above, earning $1.5 million and paying top-bracket rates on $1 million (the portion falling over the $500,000 threshold). Say that a future Congress is considering raising the top bracket rate from 40% to 45%. If income remained the same, that would generate additional revenue of $50,000 (5% of $1 million) per wealthy taxpayer. But now income goes down some because of reduced incentive, in response to the change in the net-of-tax rate. Reducing it from 60% to 55% is a percentage decline of 5/60 or 8.3%. Research on behavioral elasticity has found that the effect on income is about one-quarter of the percentage change in the net-of-tax rate, in this case about 2.1%. So the estimated decline in income is 2.1% of $1.5 million, or $31,500, leaving $1,468,500 to be taxed, $968,500 of it at the top rate. Top-bracket tax revenue from this taxpayer still goes up, but only from $400,000 (40% of $1 million) to $435,825 (45% of $968,500). The behavioral effect is significant, but it is not enough to offset all the additional revenue from the tax hike.

Suppose, however, that we start from a much higher initial rate, and consider increasing the top rate from 75% to 80%. Starting from the same taxpayer earning $1.5 million, the potential increase in revenue is the same, $50,000, but the effect on the incentive to earn is in theory much larger. With the net-of-tax rate for top-bracket income only 25% to begin with, taxing away another 5% has a greater proportional impact. The reduction from 25% to 20% is a percentage decline of 5/25 or 20%. The estimated effect on income is again one-quarter of that or 5%. The estimated decline in income is $75,000, leaving $1,425,000 to be taxed, $925,000 at the top rate. Top-bracket tax revenue from this taxpayer now goes down, from $750,000 (75% of $1 million) to $740,000 (80% of $925,000). Raising the top bracket further when it is already so high is counter-productive.
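Both scenarios can be reproduced with a few lines of code. This is a sketch of the simplified worked examples above, with the rough 0.25 elasticity applied to total reported income as in my arithmetic; it is not the estimation from the Diamond-Saez paper itself.

```python
def top_revenue_after_hike(income, threshold, old_rate, new_rate, elasticity=0.25):
    """Top-bracket revenue before and after a rate hike, letting reported income
    shrink in proportion to the percentage drop in the net-of-tax rate."""
    net_of_tax_drop = (new_rate - old_rate) / (1 - old_rate)   # e.g. 5/60 = 8.3%
    new_income = income * (1 - elasticity * net_of_tax_drop)   # behavioral response
    old_revenue = old_rate * (income - threshold)
    new_revenue = new_rate * (new_income - threshold)
    return old_revenue, new_revenue

# Raising the top rate from 40% to 45%: revenue still rises
old, new = top_revenue_after_hike(1_500_000, 500_000, 0.40, 0.45)
print(round(old), round(new))  # 400000 435937 (the text rounds the income decline to 2.1%, giving $435,825)

# Raising it from 75% to 80%: revenue falls
old, new = top_revenue_after_hike(1_500_000, 500_000, 0.75, 0.80)
print(round(old), round(new))  # 750000 740000
```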

The optimal top tax rate

These examples demonstrate the general principle that when the top rate is relatively low, raising it can generate additional revenue to be used to enhance general social welfare. But when the top rate is already relatively high, trying to raise it further can be counter-productive.

Diamond and Saez use the same mathematical relationships to calculate the sweet spot where revenue is maximized. The optimal top rate comes out to 73%. Rates below that sacrifice revenue and overlook an opportunity to increase the general welfare. Rates above that reduce revenue by reducing the incentive to earn high incomes.
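Their revenue-maximizing formula is compact enough to check by hand. As I read the paper, the optimal top rate is 1/(1 + a·e), where a is the Pareto parameter describing how thin the top of the income distribution is (they use roughly 1.5 for the U.S.) and e is the behavioral elasticity (their preferred estimate is 0.25). A quick sketch:

```python
def optimal_top_rate(pareto_a=1.5, elasticity=0.25):
    """Revenue-maximizing top rate: tau* = 1 / (1 + a * e)."""
    return 1 / (1 + pareto_a * elasticity)

print(round(optimal_top_rate(), 2))  # 0.73
```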

Three additional points are worth noting:

  1. The analysis to this point does not take into account the loss of consumption utility for the wealthy taxpayer. However, the marginal utility of consumption for the rich is so low that incorporating it into the analysis only lowers the optimal tax rate by about one percentage point.
  2. The optimal rate should take into account all taxes based on income, not just federal income taxes. At the time the authors were writing, other taxes added about 7.5% to the total tax burden, after federal deductions for state taxes were considered. So a total top rate of 73% implies a federal top rate of no more than 65.5%.
  3. The case for raising middle-class taxes would be much weaker, because of their much higher marginal utility of consumption. The more that people are already spending their income on essentials, the less government can enhance the common good through taxing and spending.

The politics of taxation

The United States has been in a relatively low-tax era since the “Reagan Revolution” of the 1980s. From the 1950s to 1970s, the top federal rate was in the 70-90% range. Since the 1980s, it has dropped from 50% to today’s 37%. According to the Diamond-Saez analysis, 90% is too high, but 37% is far too low.

During this era, achieving lower taxes, especially for the wealthy, has been the highest domestic priority for Republicans. They have pursued this goal almost without regard for the fiscal consequences, advocating lower taxes whether they can be matched by spending cuts or just result in larger deficits. Their rhetorical arguments have gone far beyond what economists know, belittling the benefits of public spending and exaggerating the deleterious effects of high taxes on earning incentives and economic growth. They have become the party of hostility to government and affection for those who feather their own nests.

The public benefits of high taxes on the rich may–like climate change–be one of those inconvenient truths that too many of our leaders choose to ignore. Rather than dismissing advocates of higher taxes as lunatics, economist Paul Krugman would like us to base tax policy more on facts and reasoned analysis. For example, he points out that the economy grew at a somewhat faster rate during the postwar era of high tax rates than it has grown lately.

Republicans almost universally advocate low taxes on the wealthy, based on the claim that tax cuts at the top will have huge beneficial effects on the economy. This claim rests on research by … well, nobody. There isn’t any body of serious work supporting G.O.P. tax ideas, because the evidence is overwhelmingly against those ideas.


Effects of New Technologies on Labor

January 4, 2019


David Autor and Anna Salomons, “Is automation labor share-displacing? Productivity growth, employment, and the labor share.” Brookings Papers on Economic Activity, Spring 2018.

Daron Acemoglu and Pascual Restrepo, “Robots and jobs: Evidence from US Labor Markets.” National Bureau of Economic Research, March 2017.

I have been interested in automation’s effects on the labor force for a long time, especially since reading Martin Ford’s Rise of the Robots. Ford raises the specter of a “jobless future” and a massive welfare system to support the unemployed.

Here I discuss two papers representing some of the most serious economic research on this topic.

The questions

To what extent do new technologies really displace human labor and reduce employment? The potential for them to do so is obvious. The mechanization of farming dramatically reduced the number of farm workers. But we can generalize only with caution. In theory, a particular innovation could either produce the same amount with less labor (as when the demand for a product is inelastic, often the case for agricultural products), or produce a larger amount with the same labor (when demand expands along with lower cost, as with many manufactured goods). An innovation can also save labor on one task, but reallocate that labor to a different task in the same industry.

Even if technological advances reduce the labor needed in one industry, that labor can flow into other industries. Economists have suggested several reasons that could happen. One involves the linkages between industries, as one industry’s productivity affects the economic activity of its suppliers and customers. If the computer industry is turning out millions of low-cost computers, that can create jobs in industries that use computers or supply parts for them. Another reason is that a productive industry affects national output, income and aggregate demand. The wealth created in one industry translates into spending on all sorts of goods and services that require human labor.

The point is that technological innovations have both direct effects on local or industry-specific employment, and also indirect effects on aggregate employment in the economy as a whole. The direct effects are more obvious, which may explain why the general public is more aware of job losses than job gains.

A related question is the effect of technology on wages, and therefore on labor’s share of the economic value added by technological change. Do employers reap most of the benefits of innovation, or are workers able to maintain their share of the rewards as productivity rises? Here too, aggregate results could differ from results in the particular industries or localities experiencing the most innovation.

The historical experience

American history tells a story of painful labor displacement in certain times, places and industries; but also a story of new job creation and widely shared benefits of rising productivity. Looking back on a century of technological change from the vantage point of the mid-20th century, economists did not find negative aggregate effects of technology on employment or on labor’s share of the national income. According to Autor and Salomons:

A long-standing body of literature, starting with research by William Baumol (1967), has considered reallocation mechanisms for employment, showing that labor moves from technologically advancing to technologically lagging sectors if the outputs of these sectors are not close substitutes. Further,…such unbalanced productivity growth across sectors can nevertheless yield a balanced growth path for labor and capital shares. Indeed, one of the central stylized facts of modern macroeconomics, immortalized by Nicholas Kaldor (1961), is that during a century of unprecedented technological advancement in transportation, production, and communication, labor’s share of national income remained roughly constant.

Such findings need to be continually replicated, since they might hold only for an economy in a particular place or time. In the 20th century, the success of labor unions in bargaining for higher wages and shorter work weeks was one thing that protected workers from the possible ill effects of labor-saving technologies.

Recent effects of technological change

Autor and Salomons analyze data for OECD countries for the period 1970-2007. As a measure of technological progress, they use the growth in total factor productivity (TFP) over that period.

They find a direct negative impact of productivity growth on employment within the most affected industries. However, they find two main indirect effects that offset the negative impact for the economy as a whole:

First, rising TFP within supplier industries catalyzes strong, offsetting employment gains among their downstream customer industries; and second, TFP growth in each sector contributes to aggregate growth in real value added and hence rising final demand, which in turn spurs further employment growth across all sectors.

To put it most simply, one industry’s productivity may limit its own demand for labor, but its contribution to the national output and income creates employment opportunities elsewhere.

With regard to labor’s share of the economic benefits, the findings are a little different. Here again, the researchers find a direct negative effect within the industries most affected by technological innovation. But in this case, that effect is not offset, for the most part, by more widespread positive effects.

The association between technological change and labor’s declining share varied by decade. Labor’s share actually rose during the 1970s, declined in the 1980s and 90s, and then fell more sharply in the 2000s. The authors mention the possibility that the newest technologies are especially labor-displacing, but reach no definite conclusion. Another possibility is that non-technological factors such as the political weakness of organized labor are more to blame.

The impact of robotics

Autor and Salomons acknowledge that because they used such a general measure of technological change, they couldn’t assess the impact of robotics specifically. They do cite work by Georg Graetz and Guy Michaels that did not find general negative effects of robots on employment or labor share in countries of the European Union. That’s important, since many European countries have gone farther than we have in adopting robots.

The paper by Acemoglu and Restrepo focuses on the United States for the period 1990-2007. (They deliberately ended in 2007 so that the impact of the Great Recession wouldn’t muddy the waters.)

The authors used the definition of robot from the International Federation of Robotics, “an automatically controlled, reprogrammable, and multipurpose [machine].” Over the period in question, robot usage increased from 0.4 to 1.4 per thousand workers. “The automotive industry employs 38 percent of existing industrial robots, followed by the electronics industry (15 percent), plastic and chemicals (10 percent), and metal products (7 percent).”

Adoption of industrial robots has been especially common in Kentucky, Louisiana, Missouri, Tennessee, Texas, Virginia and West Virginia. As Thomas B. Edsall titled his recent New York Times column, “The Robots Have Descended on Trump Country.”

Acemoglu and Restrepo classified localities–technically “commuting zones”–according to their “exposure” to robotics, based on their levels of employment in types of jobs most conducive to robotization.

Their first main finding was a direct negative effect of robotics on employment and wages within commuting zones:

Our estimates imply that between 1990 and 2007 the increase in the stock of robots…reduced the employment to population ratio in a commuting zone with the average US change in robots by 0.38 percentage points, and average wages by 0.71 percent (relative to a commuting zone with no exposure to robots). These numbers…imply that one more robot in a commuting zone reduces employment by about 6 workers.

The workers most likely to be affected are male workers in routine manual occupations, with wages in the lower-to-middle range of the wage distribution.

In the aggregate, these local effects are partly offset by “positive spillovers across commuting zones”–positive effects on employment and wages throughout the economy. With these spillovers taken into account, the estimated effects of robotics on employment and on wages are cut almost in half, dropping to 0.20 percentage points and 0.37 percent, respectively.

The authors state their conclusion cautiously, as “the possibility that industrial robots might have a very different impact on labor demand than other (non-automation) technologies.”

Summary

While there is little doubt that new technologies often displace labor in particular industries and localities, the aggregate effects on employment and wages are less consistent.  Historically (late 19th and early 20th centuries), employment and labor share of income held up very well. For developed countries in the period 1970-2007, Autor and Salomons found a mixed picture, with robust employment but declining labor share after 1980. With respect to robotics specifically, Graetz and Michaels did not find declines in employment or labor share in the European Union, but Acemoglu and Restrepo found some decline in both employment and wages in the U.S.

It seems fair to say that the jury is still out on the effects of automation on the labor force. It may be that automation has no inevitable effect, but that it depends on how we as a society choose to deal with it. We shouldn’t assume a world of mass unemployment and widespread government dependency on the basis of recent, preliminary results from one country. Authors such as Thomas Friedman, who are more optimistic than Martin Ford about the long-run effects of new technologies, have yet to be proved wrong.


National Climate Assessment

December 3, 2018


U.S. Global Change Research Program. Impacts, Risks, and Adaptation in the United States: Fourth National Climate Assessment, Volume II [Reidmiller, D.R., C.W. Avery, D.R. Easterling, K.E. Kunkel, K.L.M. Lewis, T.K. Maycock, and B.C. Stewart (eds.)]. Washington, DC, 2018.

Federal law requires the government to issue a new climate assessment every four years. Thirteen government agencies participated in the current assessment, including the Departments of State, Commerce, Interior, Transportation, Health and Human Services, Defense, Agriculture, and Energy; along with the National Science Foundation, Smithsonian Institution, U.S. Agency for International Development, National Aeronautics and Space Administration, and Environmental Protection Agency.

This year’s report is Volume II of the fourth National Climate Assessment. Volume I, issued last year, provided a comprehensive summary of climate changes themselves, based on the best science available. Findings from Volume I are incorporated into the impact assessment in Volume II.

Full disclosure: I haven’t read all 1,656 pages of this report. What I am summarizing is the 40-page overview in Chapter 1, supplemented by a brief look at selected other chapters.

Here is a statement of the report’s general conclusions:

This report draws a direct connection between the warming atmosphere and the resulting changes that affect Americans’ lives, communities, and livelihoods, now and in the future….It concludes that the evidence of human-caused climate change is overwhelming and continues to strengthen, that the impacts of climate change are intensifying across the country, and that climate-related threats to Americans’ physical, social, and economic well-being are rising.

The climate changes

The most obvious change is that the country is getting warmer along with the rest of the planet. On average, temperatures have increased by 1.8 degrees Fahrenheit across the contiguous United States since around 1900. The warming in Alaska has been even greater. Since the 1980s, Arctic sea ice has been decreasing by 11-16% per decade. (Here in the Southeast, the rise in temperature is expected to be smaller than in most of the northern United States, and the main effect will be warmer nights rather than hotter days.)

The median sea level along our coasts has risen about 9 inches since the early 20th century, due to ocean warming and melting land ice. The oceans have also become more acidic as they have absorbed more carbon dioxide.

Although these changes may sound small, they are enough to produce large changes in the weather. Extreme climate-related weather events are becoming more common, and when they occur they often last longer and cause more damage. We are experiencing more frequent and longer heat waves, especially in urban areas. More of our annual rainfall is coming in the form of intense, one-day rainfalls that cause more flooding, especially in coastal areas affected by higher sea levels and tides. Recent hurricanes have been especially severe. Hot, dry conditions have contributed to unusually large wildfires in the western states.

Scientists have known for a long time that naturally occurring greenhouse gases in the atmosphere trap some of the heat radiating from the Earth’s surface. That keeps the planet’s surface warm enough to be habitable for living things. But in the industrial era, human activity has added dramatically to these gases, especially through emissions of carbon dioxide. Carbon dioxide in the atmosphere has increased by about 40%. Burning fossil fuels accounts for about 85% of all greenhouse gas emissions by humans.

Scientists have concluded that non-human factors alone cannot account for climate change. In fact, “Without human activities, the influence of natural factors alone would actually have had a slight cooling effect on global climate over the last 50 years.” Scientists have found no credible alternative to the consensus view that human activity, especially the burning of fossil fuels, is responsible for global warming.

Scenarios for future change

Science is often in the position of being unable to predict the future with certainty, but being able to project possible futures based on reasonable assumptions. For example, financial planners cannot predict a couple’s future retirement income without making assumptions about how much they will save, how they will invest it, and how closely market performance will conform to historical averages. So they often construct multiple scenarios based on a reasonable range of assumptions.

Climate scientists do not know how much global temperatures will increase or how much sea levels will rise, mainly because they don’t know how much humans will curb their emissions of greenhouse gases. So the National Climate Assessment relies on a “suite of possible scenarios” based on “Representative Concentration Pathways”–that is, different concentrations of carbon dioxide in the atmosphere later this century.

Some additional rise in temperature is inevitable over the next few decades, even if we start reducing carbon emissions now. That’s because the climate system responds slowly to changes in greenhouse gas levels. Even in the lowest scenario considered, “additional increases in temperatures across the contiguous United States of at least 2.3ºF (1.3ºC) relative to 1986-2015 are expected by the middle of this century.” Around mid-century, the various scenarios diverge dramatically. In a “very low” scenario, which assumes that emissions are peaking now, temperatures level off after 2050. In a “low” scenario, which assumes that emissions peak around 2050, temperatures rise another 2.3-6.7ºF (1.3-3.7ºC) by the end of the century. In a “higher” scenario, which assumes that emissions keep rising, temperatures rise another 5.4-11.0ºF (3.0-6.1ºC) by the end of the century, resulting in the most catastrophic effects. “Coastal flood heights that today cause major damages to infrastructure would become common during high tides nationwide.”

Perhaps the most sobering finding is that current global trends in annual greenhouse gas emissions are most consistent with the highest scenario considered. The evidence so far does not inspire confidence that global emissions are peaking now or will do so in the near future. But business as usual is not a viable option, and serious emission reductions are required soon to avoid catastrophic outcomes before the end of the century.

This is probably what the White House was referring to when it dismissed the report as being “largely based on the most extreme scenario,” as if the researchers deliberately chose the most pessimistic assessment in order to upset people. What they actually did was project a reasonable range of scenarios, but then come to the scientific conclusion that allowing carbon emissions to rise would result in the least desirable outcome. What the Trump administration is doing is rejecting the report as too dire, and then insisting on continuing the policies that are most likely to produce the most dire results!

The risks

The report discusses three kinds of risks arising from climate change: risks to economy and infrastructure, risks to the natural environment and “ecosystem services,” and risks to human health and well-being.

Climate change could cause substantial damage to infrastructure and private property. Regional economies and industries that depend on favorable climate conditions, such as agriculture, tourism and fisheries, will be especially vulnerable. The researchers tried to quantify many of these effects throughout the report, but the overview included only a few general references to dollar amounts: “The potential for losses in some sectors could reach hundreds of billions of dollars per year by the end of this century.”

Climate change produces economic losses in many different ways. Extreme heat waves and more powerful storms put stress on energy systems and create widespread power outages. Much of the infrastructure for producing energy is located along ocean and gulf coasts that are vulnerable to strong hurricanes and increased flooding. Although the warming climate lengthens the growing season, yields for many major crops are expected to decline because of excessive heat and greater pest activity. Bad weather around the world also affects the U.S. economy by disrupting international supply chains, as happened with hard-drive imports from Thailand in 2011.

An example of what the report calls “ecosystem services” is the availability of water or snow for drinking and recreation. In the Southwest, “intensifying droughts, heavier downpours, and reduced snowpack are combining with other stressors such as groundwater depletion to reduce the future reliability of water supplies in the region.”

Heat waves are associated with higher death rates, especially for older adults, pregnant women, and children. Climate change will also increase exposure to pollen allergens associated with allergic illnesses such as asthma and hay fever. And as North America warms, disease-carrying insects from Southern climates are migrating northward.

Since low-income communities are especially vulnerable to many of these risks, “climate change threatens to exacerbate existing social and economic inequalities….”

Reducing the risks

Many states, cities and businesses have taken some measures to reduce greenhouse gas emissions or to adapt to climate changes. However, the report states that “these efforts do not yet approach the scale needed to avoid substantial damages to the economy, environment, and human health expected over the coming decades.” Time is of the essence, since some effects of climate change will be difficult or impossible to reverse if they are allowed to occur. Once ice sheets melt enough to make sea levels higher and coastal cities uninhabitable, those conditions may persist for thousands of years.

The report makes a distinction between mitigation, which reduces risks by limiting further climate change, and adaptation, which reduces risks by softening the impact of the climate changes that do occur.

With regard to mitigation, one area where some progress has been made is the power sector of the economy. Greenhouse gas emissions from power generation dropped 25% from 2005 to 2016, the greatest decline for any sector. As a result, the sector with the largest emissions is now transportation.

Last week’s announcement of automobile plant closings by GM is very much related to our emissions problems. By increasing domestic gasoline production, we have lowered the price of gas, but that encourages consumers to buy larger, less fuel-efficient vehicles. The plants to be closed are producing smaller, more fuel-efficient cars for which there is currently less demand. Gas prices are artificially low, since they don’t include the ultimate cost to society of environmental damage. A carbon tax would be a rational way of discouraging our short-sighted reliance on fossil fuels, but that requires courageous leadership and a better-informed public.

With regard to adaptation, some communities have taken measures like strengthening buildings to withstand extreme weather events. They tend, however, to prepare only for the range of events that are familiar to them, rather than the even wider range that scientists anticipate. They are more likely to raise construction standards than to block construction altogether in high-risk locations.

In many areas, the costs that can be avoided through advance planning and investment are substantial. For example, “More than half of damages to coastal property are estimated to be avoidable through adaptation measures such as shoreline protection and beach replenishment.” Nevertheless, the researchers believe that moving populations away from the most threatened coastlines will become unavoidable.

Crisis, what crisis?

With the issuance of this report, the American people have just experienced a remarkable spectacle. On the one hand, thirteen federal agencies have collaborated to produce the most scientific report they can assemble on climate change, as mandated by federal law. On the other hand, the Trump administration has tried to bury it by releasing it on Black Friday and encouraging people not to believe it. “Whatever happened to Global Warming?” the president tweeted during a recent cold snap.

The President of the United States has considerable power both to define issues and to address them. This president is doing what he can to ridicule the whole idea of controlling emissions and undermine existing efforts to do so. He has blocked the previous administration’s clean energy initiative, lowered standards for vehicle fuel efficiency, taken the U.S. out of the Paris Agreement, and encouraged more fossil fuel production. I am tempted to use a term like “criminal negligence” to refer to his leadership in this area, except that ignorance is not a crime.

President Trump’s idea of a real crisis is illegal immigration. But according to The Economist, total apprehensions at the border were already way down before he even took office. “Not only have the migration numbers tumbled and the share of Mexicans among them dwindled. More Mexicans are now returning to Mexico than are coming to the United States illegally.” Central American migrants have increased in the past decade, but their numbers are still too small to change the overall story. By trying to close the border to asylum seekers, Trump provokes border confrontations that seem to confirm his largely phony crisis.

Given Donald Trump’s hyper-nationalist worldview and distorted priorities, it’s no wonder that immigration far surpasses climate change as a hot political topic and campaign issue. Too many Americans seem willing to follow this president wherever he leads, even if it’s over a climate-change cliff. Isn’t it time for the country to come to its senses and demand realistic, fact-based leadership?


Thank You for Being Late (part 3)

November 26, 2018


A “moral fork in the road”

Another thing I liked about Friedman’s book was his willingness to talk about morality. Many liberals seem to shy away from the subject, perhaps because they don’t want to be confused with moral traditionalists. Personally, I think that liberals are in a good position to seize the moral high ground on many issues. After all, they are committed to such worthy values as social justice, equal opportunity, freedom of expression, scientific inquiry, nonviolent conflict resolution, and oh yes, democracy.

Friedman says that in the age of accelerations he describes, “leadership is going to require the ability to come to grips with values and ethics.” In particular, new technologies have empowered people to contribute more than ever to the solution of social problems, but also to disrupt social order on a more destructive scale. “As a species, we have never stood at this moral fork in the road–where one of us could kill all of us and all of us could fix everything if we really decided to do so.”

Friedman describes cyberspace as a lawless, amoral sphere badly in need of normative guidance. We are turning over too many decisions to impersonal algorithms that lack any moral compass. We thought that we could just connect everybody through social media and good things would happen. Now we realize that communities of users need some way of establishing moral norms, so that technically-empowered bad actors don’t destroy cyberspace for everyone.

Character and community

Friedman agrees with David Brooks that “most of the time character is not an individual accomplishment. It emerges through joined hearts and souls, and in groups.”

But what groups? The federal government is too big, too bureaucratic, and too slow to respond to emerging needs. On the other hand, families are too small and fragile to provide their members with the necessary supports. (He doesn’t elaborate on the second point, which would get into the debate over whether it “takes a village” or just a family to develop a good citizen.) Friedman looks to the local level beyond the family, asserting that “the healthy city, town or community is going to be the most important governing building block in the twenty-first century.” He quotes Gidi Grinstein as saying that such a model community would be “focused on supporting the employability, productivity, inclusion, and quality of life of its members.”

Here Friedman draws on his own experience of St. Louis Park, the suburb of Minneapolis where he grew up. There in the 1950s, American Jews escaped their urban ghetto and coexisted primarily with “a bunch of progressive Scandinavians.” Social harmony was helped along by what the White House Council of Economic Advisers called the “Age of Shared Growth” (1948-1973), characterized by high productivity, economic expansion and broad distribution of the benefits. The result was an “imbedded habit of inclusion,” a tradition carried on in today’s efforts to assimilate new minorities. Friedman says that the jury is still out on whether the community can truly overcome today’s social divisions, but at least it is trying.

I think his point is well taken that pluralism and inclusion are keys to moral as well as economic progress.

How optimistic?

The subtitle of the book is “An Optimist’s Guide to Thriving in the Age of Accelerations.” To a degree, I share the author’s optimism. New technologies eliminate some jobs, but they can also increase productivity, lower costs for consumers, and create new jobs in expanding industries for those who acquire the appropriate skills. Globalization can expand the pool of human knowledge and generate more solutions to human problems. Even climate change has an upside, in that it should lead us to develop clean energy industries on a grand scale.

At times, however, I thought that Friedman underestimated the obstacles to be overcome, especially because of today’s economic inequality. I agree that technological change and rising productivity could be helpful in raising wages, as they were in the “Age of Shared Growth.” But even then, workers didn’t get their share of the benefits without a power struggle between business and labor, one in which government also weighed in. I agree that globalization expands the pool of human knowledge, but it can also result in a race to the bottom, where countries compete by cutting wages and skimping on environmental protection.

Friedman’s book helps convince me that massive investments in human capital are needed to qualify people for the jobs and lifestyles of the future. But his eighteen specific recommendations seem to fall a bit short, especially in the area of education. He wants to make postsecondary education tax deductible, but the people who find college least affordable are in such low tax brackets that the deduction has little value for them. He wants to eliminate corporate taxes, which would produce a windfall primarily for the richest tenth of the population who own over 80% of the stock.

The general premise of the book remains sound, however. It is that we shouldn’t panic or withdraw from the emerging world, but embrace the future with serious reflection and a positive attitude.


Thank You for Being Late (part 2)

November 15, 2018


Today I’ll discuss two chapters of Thomas Friedman’s Thank You for Being Late that I found especially insightful: Ch. 8 on the implications of new technologies for employment, and Ch. 9 on the problem of global order.

The future of work

Friedman begins his discussion of work with a bold pronouncement: “Let’s get one thing straight: The robots are not destined to take all the jobs. That happens only if we let them–if we don’t accelerate innovation in the labor/education/start-up realms, if we don’t reimagine the whole conveyer belt from primary education to work and lifelong learning.”

I was pleased to find that Friedman’s position is similar to the one I laid out in my critique of Martin Ford’s Rise of the Robots. Ford predicted a future of massive unemployment, with millions of displaced workers relying on government for a minimal income. We could get a taste of that during a transitional period, but I don’t think that’s a very good description of where we are ultimately headed.

Friedman doesn’t deny that smart machines can now perform many tasks currently or formerly performed by humans. But he makes a sharp distinction between automating tasks and automating whole jobs so as to eliminate the human contribution altogether. The upside of automation is increased productivity. Workers aided by new technologies can produce more per hour, reducing the unit cost of what they produce. That can create a larger market for the product, increasing the demand for labor in a given occupation. A car was an expensive luxury item before the assembly line cut costs to create a mass market and a booming industry. Friedman reports that “employment grows significantly faster in occupations that use computers more,” as in banking and paralegal work.

To give an example from my own experience, financial planning software has automated many of the most tedious tasks involved in preparing a retirement plan, such as mathematically projecting future income from savings rates and asset allocation choices. But that hasn’t resulted in a reduced need for financial planners. On the contrary, it has made the services of a planner affordable for more people. Planners can spend less time doing calculations but more time relating to their clients.
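To make that concrete, here is a toy version of the kind of projection such software automates; the savings amount, time horizon, and return are hypothetical numbers chosen only for illustration.

```python
def projected_savings(annual_contribution, years, assumed_return):
    """Future value of constant end-of-year contributions at an assumed return."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + assumed_return) + annual_contribution
    return balance

# Hypothetical inputs: $10,000 saved per year for 30 years at a 6% assumed return
print(round(projected_savings(10_000, 30, 0.06)))  # roughly 790,000
```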

Friedman says, “Jobs are not going away, but the needed skills for good jobs are going up.” What are disappearing are well-paid jobs with only modest skill requirements, like twentieth-century manufacturing jobs.

Retooling education

In general, today’s good jobs require more education; yet it does not follow that a college education necessarily qualifies a person for a good job. That’s not because a liberal education is a waste of time, but because it is only a foundation that must be built upon with lifelong job-relevant learning.

Friedman quotes MIT economist David Autor, who stresses the need for more than one kind of learning: “If it’s just technical skill, there’s a reasonable chance it can be automated, and if it’s just being empathetic or flexible, there’s an infinite supply of people, so a job won’t be well paid. It’s the interaction of both that is virtuous.”

Friedman is a strong believer in a broad, basic education that includes “strong fundamentals in writing, reading, coding, and math; creativity, critical thinking, communication, and collaboration; grit, self-motivation, and lifelong learning habits; and entrepreneurship and improvisation….” Even a robotics enthusiast like Martin Ford acknowledges that humans surpass robots in general intelligence, as opposed to specialized task capabilities.

However, recipients of this basic education will also have to cope with rapidly changing workplace requirements. Technology will play a central role here, both in creating the automated systems with which workers interact, and in enhancing learning processes themselves. Friedman wants to “turn AI into IA,” by which he means turning artificial intelligence into intelligent assistance to support lifelong learning. “Intelligent assistance involves leveraging artificial intelligence to enable the government, individual companies, and the nonprofit social sector to develop more sophisticated online and mobile platforms that can empower every worker to engage in lifelong learning on their own time, and to have their learning recognized and rewarded with advancement.” When the time comes to pick up a new skill, you can probably find an app to help you learn it.

Friedman describes AT&T as one company that is demanding more lifelong learning of its employees, but supporting it with measures like tuition reimbursements, online courses developed in collaboration with online providers, and promotions for those who acquire new skills. This represents a new social contract between employer and employee–“You can be a lifelong employee if you are ready to be a lifelong learner.”

Every major economic shift has involved the rise of a new asset class, such as land in the agrarian economy and physical capital in the industrial economy. The rising asset class today is human capital, and that is where society’s investments must be increasingly concentrated.

The threat of global disorder

In the immediate aftermath of the Cold War, after the collapse of the Soviet Union, the U.S. remained the only superpower and the most obvious model for other countries to emulate. Many thinkers expressed the hope that the world could move faster in the direction of American-style democracy and capitalism. But then the interventions in such places as Iraq and Afghanistan failed to produce stable democracies, the Great Recession called into question capitalist progress, and Americans lowered their expectations for world leadership.

What Friedman calls the post-post-Cold War world is characterized by shrinking American power, especially in the Middle East, and new challenges arising from the accelerations in technological change, globalization and environmental degradation. In large areas of the less developed world, the danger is that states will fail and societies will sink into disorder, dragging the global political order and economy down. Environmental disasters like deforestation in Central America or drought and desertification in sub-Saharan Africa are uprooting people from their traditional relationship to the land. And while some poorer countries are advancing by providing cheap labor to the global economy, the future may belong to those who can provide smarter labor, and that requires greater investments in human capital.

Friedman says that during the Cold War, superpower competition gave America a reason to assist developing countries, in order to keep them in our camp. The mid-twentieth century economic boom also gave us the means to do so. While many Americans are now inclined to turn their back on the rest of the world, Friedman makes a case for renewed global involvement: “While we cannot repair the wide World of Disorder on our own, we also cannot just ignore it. It metastasizes in an interdependent world. If we don’t visit the World of Disorder in the age of accelerations, it will visit us.” The dislocated people in failed states can become refugees or terrorists. The same technologies that can empower people to learn and produce more can empower them to build improvised explosive devices triggered by cell phones, or perhaps a weapon of mass destruction.

In Friedman’s view, the best thing the U.S. could do to “help stabilize the World of Disorder and widen the islands of decency” would be to help fund schools and universities. He would also like to see us help the poorest people make a living in their own villages by assisting them with their environmental problems. He points out that it costs only 100-300 dollars to restore a hectare of degraded land.

In a world of enhanced interdependence, the haves would do well to invest in the development of the have-nots, domestically and globally. If we do not rise together, we will very likely fall together.

Continued