Ages of American Capitalism (part 2)

June 29, 2021


Jonathan Levy divides American economic history into four ages:

  1. The Age of Commerce (1660-1860)
  2. The Age of Capital (1860-1932)
  3. The Age of Control (1932-1980)
  4. The Age of Chaos (1980- )

Here I discuss the Age of Control, which is the era of the New Deal, World War II, and the postwar prosperity.

The Great Depression

“Rarely if ever before had an industrial economy been so poised on the brink of a great leap forward in wealth-generating enterprise. But it had stalled in mid-leap.” The run-up to the economic crash of 1929 is a prime example of one of Levy’s general observations, that major investment booms involve both long-term fixed investment that drives real economic growth and speculative bubbles that end badly. In this case, much of the new investment was in “Fordist” mass production of consumer goods like cars and home appliances. The electric assembly line represented the “largest surge in labor productivity ever recorded.” Unfortunately, high productivity does not translate directly into sustained economic expansion and lasting prosperity. The story of economic history must include the cyclical fluctuations in confidence and credit as well as the linear trend in technological innovation and rising productivity.

At the beginning of the 1920s, investor confidence was high. But one reason for the high confidence also contained the potential for a boom-and-bust cycle. During World War I, European governments had gone off the gold standard and expanded the money supply in order to finance the war effort. Now they returned to the gold standard and restricted the money supply to fight inflation. The U.S. Federal Reserve went along by raising interest rates. These policies contributed to a temporary situation of price stability, confidence in the currency, and a willingness of investors to lend, but at relatively high rates. The high rates set a high bar for business investment, which made sense only as long as expected returns exceeded the cost of borrowing.
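
To make that bar concrete, here is a minimal sketch in Python (my own illustration with invented numbers, not Levy’s) of how a higher borrowing rate can turn the same project from worthwhile to unprofitable:

    # Hypothetical example (not from Levy): a project clears the bar only if
    # the discounted value of its future returns exceeds its up-front cost,
    # with the borrowing rate serving as the discount rate.

    def net_present_value(cost, annual_return, years, rate):
        """Discount a stream of equal annual returns, then subtract the cost."""
        pv = sum(annual_return / (1 + rate) ** t for t in range(1, years + 1))
        return pv - cost

    # A factory costing $1 million that returns $150,000 a year for 10 years:
    for rate in (0.05, 0.08, 0.12):
        npv = net_present_value(1_000_000, 150_000, 10, rate)
        print(f"borrowing at {rate:.0%}: NPV = {npv:,.0f}")

    # borrowing at 5%:  NPV =  158,260  (well worth doing)
    # borrowing at 8%:  NPV =    6,512  (barely worth doing)
    # borrowing at 12%: NPV = -152,467  (not worth doing)

The same factory that attracts investment at 5% is abandoned at 12%, which is the sense in which tight money sets a high bar.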

All was well as long as confidence in future profits remained high but also realistic. But once the boom got going, speculators could easily borrow too much in order to pay too much for assets whose real prospects couldn’t justify their cost. When these unrealistic expectations went unfulfilled and profits didn’t materialize, confidence was shaken, credit dried up, and investment collapsed. In 1931, the Federal Reserve made things worse by further tightening the money supply. By the time Franklin Roosevelt was elected in 1932, the economy had entered a “liquidity trap.” Precautionary liquidity had taken over, and businesses were afraid to invest in production even when they could borrow at lower rates. The Fed brought interest rates down, but it was too late. The economy hit bottom in 1933, with economic output only half of what it had been in 1929 and unemployment over 20%. The most productive factories the world had ever seen couldn’t sustain prosperity if they were idle.

New Deal capitalism

One of the first things the new administration had to do was counter the collapse of confidence that kept the economy in the liquidity trap. Much economic activity had simply come to a halt, as lenders were afraid to lend money they might not get back, businesses were afraid to produce goods that wouldn’t sell, and consumers were afraid to spend what little money they had as incomes fell. Roosevelt’s famous declaration that “the only thing we have to fear is fear itself” was more than just rhetoric. The crisis of confidence went beyond the economy to challenge government as well. As some European countries turned to authoritarian leaders to address the crisis, many Americans questioned whether liberal democracy was up to the task.

Roosevelt believed that it was, and he welcomed the characterization of his policies as “liberal”. In its early days around the time of the Civil War, the Republican Party had been the liberal party, but now Democrats earned that label, leaving many Republicans to play the part of conservative doubters of the New Deal.

Democratic efforts to get the economic crisis under control initiated the “Age of Control.” One of the first things Roosevelt did was adopt a policy he called “definitely controlled inflation” by taking the country off the gold standard. Rather than be inhibited by the supply of gold, the money supply could expand along with economic activity. In fact, monetary expansion could encourage economic activity—at least in the short run—by putting more dollars in the hands of spenders, including the government itself.

Levy distinguishes two different kinds of economic liberalism, regulatory and developmental. While the New Deal was strong on economic regulation, it was weaker on directing the nation’s investment, a weakness that Levy sees as a problem to this day.

The two main objects of New Deal regulation were business practices and income security. The Securities Exchange Act created the SEC to regulate publicly traded corporations and curb the worst abuses associated with financial speculation. It banned insider trading, required regular financial reports, and contained many provisions to prevent fraud. The Social Security Act created not only social insurance for retirees, but also unemployment compensation and aid to poor women and children. The National Labor Relations Act guaranteed the right of workers to organize and bargain collectively. The Fair Labor Standards Act set maximum hours and minimum wages. New agricultural programs supported commodity prices to provide more stable farm incomes. If Americans were more secure in their incomes, they would feel more comfortable buying the goods that the emerging mass-production economy was capable of producing.

Developmental liberalism tried to stimulate investment in two ways. It lent capital to private investors, especially in banking, real estate and agriculture. It also made massive public investments in infrastructure through projects like the Tennessee Valley Authority.

These programs were liberal but not radical. They did not overturn the fundamental assumptions or power structures of capitalism; nor did they bring the Depression to an end, although the economy did improve from 1933 to 1936. Levy’s assessment:

New Deal capitalism was a variety of capitalism because the discretionary power of when and where to invest remained in the hands of the owners of capital. During the 1930s, whether the investment was private (incentivized or not) or public, its combined magnitude was simply insufficient to draw out sufficient economic activity to end the Great Depression. A general lack of initiative and spending remained.

In 1937, the government contributed to a “recession within the Depression” by prematurely trying to balance the budget and tighten monetary policy. Levy also suggests that a new kind of liquidity preference played a role, one that he calls “political liquidity.” Industrial capitalists who opposed the New Deal “threatened not to invest unless their political demands were met, especially for lower taxation on their incomes.”

In 1938, Roosevelt accepted deficit spending as a way to stimulate the economy. John Maynard Keynes had presented the rationale for this in The General Theory of Employment, Interest, and Money (1936). But he also warned, in 1940, “It is, it seems, politically impossible for a capitalistic democracy to organize expenditure on the scale necessary to make the grand experiment which would prove my case—except in war conditions.”

World War II

As Keynes expected, the massive government spending required by World War II was what brought the economy back to full production. It also provided a powerful psychological stimulus, generating popular support for an all-out political and economic effort to win the war. Economic preferences shifted dramatically toward fixed investment and away from any kind of liquidity—precautionary, speculative, or political. Why be shy about investing, when the government provided a willing buyer for all the armaments a factory could turn out? By 1942, the U.S. was winning the “war of the factories,” surpassing both Germany and Japan in the production of munitions.

Big Government liberalism thrived in both its regulatory and developmental aspects. On the regulatory side, government raised taxes, rationed consumer goods like gasoline, and implemented wage and price controls to curb inflation. On the developmental side, military planners told industry what to invest in.

World War II also encouraged a spirit of shared sacrifice and shared rewards. In addition to winning the war itself, Americans could expect a more equal distribution of economic benefits, through measures such as a more progressive income tax, support for organized labor, and the GI Bill of Rights.

The American military-industrial machine not only won the war but, unlike the economies of the other combatants, emerged from it undamaged. Now that we had a fully functional mass-production system up and running, we just had to convert it to peacetime uses.

Postwar prosperity

Levy uses the term “postwar hinge” to refer to the unique connection between domestic politics and international politics at the end of World War II. “At the war’s close, Americans owned three-quarters of all invested capital in the world, and the U.S. economy accounted for nearly 35 percent of world GDP….” Big Government combined with capitalist industry to make the U.S. the most powerful country in the world, the biggest exporter of products, capital, democratic ideas and consumer culture. The U.S. was the newest hegemonic power, although its hegemony was challenged by its next-strongest rival, the Soviet Union.

As a result of the Bretton Woods conference of 1944, the American dollar became the anchor of the global financial system. The dollar was agreed to have a fixed value in relation to gold. Other currencies would have a value in dollars, but could be revalued under certain circumstances. This arrangement institutionalized the dollar as the world’s strongest currency and helped secure the value of investments denominated in dollars.

Although wartime military spending declined, government contributed in a number of ways to the continuation of the private investment boom—not only by maintaining the strength of the dollar, but also by continuing support for income security, maintaining a military establishment during the Cold War, and engaging in Keynesian deficit spending to counter recessions.

At the same time, postwar politics placed definite limitations on government’s role in the economy, especially with respect to developmental liberalism. The owners of capital reassumed control over investment decisions, choosing, for example, to direct investments toward single-family homes and shopping malls in all-white suburbs, while inner cities were allowed to decay. Government cooperated by providing highway construction and racially discriminatory housing loans. Once the Cold War began, conservatives could exploit the fear of communism to defeat liberal proposals for greater government influence. Among the casualties were Harry Truman’s call for national health insurance in his “Fair Deal,” the Taft-Ellender-Wagner public housing bill, and a provision of the original Full Employment Bill, dropped from what became the Employment Act of 1946, that called for supplementing private investment with public investment in order to maintain full employment.

The federal government might tax and redistribute incomes, and it might regulate specific industries, but it remained incapable of acting autonomously and creatively in furtherance of a recognized public interest beyond “national security.” Cold War military spending was the most legitimate form of government expenditure to sustain economic growth…. That the government enjoyed an autonomous arena of action only when targeting benefits toward white male breadwinners, or invoking national security, warped state action at home and abroad…. Surely government planning for long-term economic development on behalf of the public interest was off the table.

The political economy of the postwar era was strong enough to produce a postwar economic boom and raise incomes for millions of white working families. The era became known as a “golden age” of capitalism. Yet it was not sustainable; within a few decades, things started to go seriously wrong again.

Continued


Ages of American Capitalism

June 24, 2021


Jonathan Levy. Ages of American Capitalism: A History of the United States. New York: Random House, 2021.

I found this book to be a remarkable work of economics and American history. A lot of economics consists of ahistorical models intended to describe economies in general but no one time and place in particular. Such abstract models are useful tools, but need historical context to bring them to life. On the other hand, many historical accounts do not demonstrate a very good grasp of economic principles. Jonathan Levy is one of those rare scholars with both a fine command of historical facts and great insights into the workings of capitalism. The fact that he has accomplished this at an early stage of his career is all the more impressive.

I cannot do justice to the entire book (over 900 pages long with notes and index), but I will discuss selected chapters. Here I start with the Introduction, which sets forth Levy’s assumptions about the nature of capital and the capitalist economy.

Capital and capitalism

Levy presents three main theses, the first of which is:

Rather than a physical factor of production, a thing, capital is a process. Specifically, capital is the process through which a legal asset is invested with pecuniary value, in light of its capacity to yield a future pecuniary profit.

The first things that come to mind when one hears the word “capital” may be capital goods like the steam engine, which Levy describes as the most important capital good of the Industrial Revolution. But capital is anything that can generate a return greater than its cost. That includes financial capital like stocks and bonds, intellectual capital like software and data, and human capital like educational credentials. In the antebellum South, slaves were the most valuable capital, more costly in total than southern land or northern factories. Many forms of capital produce marketable goods, but capital assets can also appreciate in value while not producing anything at all. What capital assets always have is some “scarcity value” because some people own more of them than others. Those owners are, of course, the people we call capitalists.

Levy defines a capitalist economy as “one in which economic life is broadly geared around the habitual future expectation that capital assets will earn for their owners a pecuniary reward above their cost.” The term expectation is key. A future reward is always uncertain, since it depends on the “flow of historical events.” What drives capitalism is investment, and what drives investment is confidence in some future. Andrew Carnegie could only sink massive quantities of financial capital into steel mills if he believed in the future demand for steel. In the twentieth century, when the economy produced far more consumer goods, business confidence became more dependent on consumer confidence, which depended in turn on consumer income and its security. People can buy more if they have a reliable income and are confident they can make next month’s rent.

Another important form of confidence is faith in the value of the nation’s currency, especially on the part of financial investors. Lenders would like to know that the dollars they lend will come back to them—with interest or dividends, of course—in dollars that have retained their value. They want the government to maintain the scarcity value of money through relatively tight control of the money supply. A challenge for monetary policy is to maintain the confidence of lenders while allowing the money supply to grow along with the expansion of domestic product and commerce. “Because a capitalist financial system is a perpetual leap of faith, over and over again, confidence becomes the emotional and psychological mainspring of economic activity.” The corollary to that proposition is that every financial crisis is a crisis of confidence. That’s why financial crises have often been called “panics”.

I would add that capitalism may require a modern culture that encourages faith in a worldly future, one imagined as better than the present. That view is in contrast to a static conception of life or a preoccupation with an otherworldly afterlife. (Although that is a secular vision, the historical religions of Judaism and Christianity may have prepared the way with their belief in a future “promised land.”) Interest in a worldly future was a feature of the Enlightenment, and one of its proponents was Adam Smith, a major figure in the Scottish Enlightenment and the father of classical economics.

Profit motives in political and social context

The second of Levy’s three theses is:

Capital is defined by the quest for a future pecuniary profit. Without capital’s habitual quest for pecuniary gain, there is no capitalism. But the profit motive of capitalists has never been enough to drive economic history, not even the history of capitalism.

The pursuit of profit, while central to capitalism, is normally part of some larger project. Henry Ford had a vision not only of selling automobiles, but of helping create a society of mass production, consumption and leisure, a vision he got partly by reading Emerson. He also rather heavy-handedly imposed a vision of the ideal worker, characterized by self-discipline, sobriety, and frugality, and enforced by visits of company representatives to workers’ homes. In an earlier time, southern plantation owners were motivated by white supremacy as well as economic profit. Among the many ill-gotten gains were sexual privileges for white males.

Major developments in capitalist history have been political as well as economic projects. When the Republican Party came to power in 1860, its political agenda included stopping the spread of slavery into the West, thus both striking a blow for free labor and undermining the value of slave capital. Republicans also supported the Homestead Act of 1862, which made 160-acre plots of federal land available for western settlers. Since farmland was also a form of capital, this was a contribution to a democratic “politics of capital.” But that was before the huge concentrations of capital in the latter part of the century. As the nation industrialized and urbanized, and mechanization reduced the need for farm labor, the portion of the population that could own land or businesses declined. Political debate eventually came to focus more on a “politics of income.” The New Deal was a political project with large economic implications, as the Roosevelt administration tried to boost incomes by supporting job creation, labor unions and the ideal of the male breadwinner. The “Reagan Revolution” of 1980 was a very different political and economic project, turning away from the democratic politics of income to provide more support for capitalists in the form of reduced taxes and regulation. These interconnections allow Levy to assert, “The history of capitalism must be economic history but also something more.”

Economic growth and the liquidity problem

Levy describes capitalism as having two different dynamics related to time. The first is a long-term, linear pattern of economic growth, driven by technological improvements and increases in productivity. The second is a cyclical pattern of boom and bust, driven by fluctuations in confidence that affect lending and investment. That brings us to Levy’s third thesis:

The history of capitalism is a never-ending conflict between the short-term propensity to hoard and the long-term ability and inducement to invest. This conflict holds the key to explaining many of the dynamics of capitalism over time, including its periods of long-term economic development and growth, and its repeating booms and busts.

The key concept for understanding these dynamics is liquidity. The capitalist economy needs liquid assets, which are assets that hold value but can also be readily exchanged for other assets. Cash is the best example, but financial assets that are easily traded, like Treasury bills, also qualify. But here’s the other side of the coin, no pun intended. For capitalism to work, someone must invest in “relatively illiquid factors of production,” like factories! Up to a point, liquid and illiquid assets work together in capitalism. One kind of liquidity is transactional liquidity, which means that buyers have the cash to buy the products that a capitalist enterprise produces when it invests in the (illiquid) factory or other capital goods.

In other ways, liquidity can be the enemy of long-term investment. In the case of precautionary liquidity, people hoard capital because of a lack of confidence in the future. They are reluctant to invest because of a fear of losing their money. In the case of speculative liquidity, capitalists maintain a large cash position so they can jump in and out of short-term investments, seeking the quickest return. The day trader buys stocks in the morning and sells them in the afternoon, ready with the cash to repeat the process tomorrow. (That works best when prices are generally rising, but as they say, you should never confuse genius with a bull market.)

Historically, investment booms have usually involved both long-term fixed investment and shorter-term speculative trading. When speculation bids assets up to unrealistic levels, the cycle ends in an economic bust, at which point the loss of confidence discourages investment and encourages precautionary liquidity. At the beginning of a boom, confidence is high, based initially on some realistic expectation of production for profit. But the accumulation of profits finances a speculative frenzy, ending ultimately in a crisis of confidence—a panic—and an economic contraction.

In Chapter 7, for example, Levy describes the railroad boom of the 1860s, which ended in the Panic of 1873. Building railroads for steam-powered trains was a pretty good idea, especially to move agricultural produce from midwestern farms to eastern cities and ports. It was a political as well as economic project, with the federal government chartering railroad corporations and granting them millions of acres of land. Soon, however, railroad entrepreneurs—most notably Jay Gould and Cornelius Vanderbilt—were speculating in railroads as well as building them.

Why go through the time, hassle, and uncertainty of actually building a railroad and running it on a profitable basis when credit was readily available (for these men at least)? If the right rumor gripped the trading floor, they could turn a fast buck through leveraged speculation on a financial asset, without ever having to part with liquidity, and put capital on the ground where it became a fixed, running cost.

Notice that this was “leveraged speculation.” The most lucrative way to speculate is to make money with borrowed money (just as a home buyer does by making only a down payment while immediately gaining the right to any appreciation). But when doubts appear that a boom is sustainable, credit dries up, debts are called in, and panic-selling devastates asset values. In the Panic of 1873, which began with a credit crunch and higher interest rates in Britain, railroad stocks lost 60% of their value, and half of American railroad corporations went bankrupt. That same year, by the way, Mark Twain and Charles Dudley Warner gave the era its name when they published The Gilded Age.
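
A toy calculation (my numbers, not Levy’s) shows why leverage is so attractive on the way up and so ruinous on the way down:

    # Hypothetical illustration: an asset bought mostly with borrowed money.
    # Gains and losses on the asset's full price fall entirely on the small
    # slice of the speculator's own cash.

    def return_on_equity(price_change, down_payment_fraction):
        """Leveraged return: the asset's price change divided by the
        fraction of the price the buyer actually put down."""
        return price_change / down_payment_fraction

    # Put 10% down and borrow the rest:
    print(return_on_equity(+0.20, 0.10))  #  2.0 -> a 20% rise doubles your money
    print(return_on_equity(-0.20, 0.10))  # -2.0 -> a 20% fall wipes out your
                                          #         stake twice over

Ten-to-one leverage turns a modest price swing into a fortune or a margin call, which is why a credit crunch can set off the kind of panic-selling described above.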

Similarly, the “greatest leap forward in productive capacity in world history” occurred when Henry Ford introduced his moving assembly line in 1913—another useful idea, although not much loved by those who had to work on it. But this was quickly followed by the frenzied stock market speculation of the 1920s, the crash, and the Great Depression of the 1930s. Most recently, the housing boom of 2003-2006 turned into the Great Recession of 2007-2009.

The “key economic problem,” according to Levy, is the perennial weakness of investment resulting from the hoarding of capital for purposes of precautionary or speculative liquidity. He sees this as an especially acute problem in our economy today. The assumptions he lays out in this Introduction—a broad definition of capital, a conception of motivation that includes but goes beyond financial profit, a conviction that economics and politics are always intertwined, an appreciation of the importance of expectations and confidence, and the tension between some forms of liquidity and long-term productive investment—enable Levy to tell the story of American capitalism in an exciting and insightful way.

Levy’s main message is that a capitalist economy is not a standalone machine that automatically produces general prosperity if it is left alone, as some economists in the neoclassical tradition still teach. Rather he says:

History does not confirm the belief in the existence of some economic mechanism through which the pattern of capital investment will simply lead to the best possible outcome so long as it is not interfered with. One likely outcome, among others, is that the propensity to hoard will win out, exacerbating inequality and crippling economic possibilities. As the profit motive is not enough, a high inducement to investment must come from somewhere outside the economic system, narrowly conceived. History shows that politics and collective action are usually where it comes from.

In other words, getting the economy to work for all of us is a continuing challenge for society in general, and democratic politics in particular.

Continued


The Economists’ Hour (part 3)

May 28, 2021


Binyamin Appelbaum calls the period from 1969 to 2008 the “Economists’ Hour” because of the unprecedented influence professional economists had over public policy during those years. At the same time, within the economics profession and in public policy discussions, the government activism of Keynesian economics was giving way to the free-market conservatism advocated by Milton Friedman and the Chicago school.

The “Great Moderation”

The latter part of this era, beginning around 1985, goes by another name in economics, the “Great Moderation.” That was a time when both inflation and unemployment were at lower rates than during the stagflation of the 1970s. Unemployment did rise during the recessions of 1990-1991 and 2001, but not as severely as during the recession of 1981-1982, when the Federal Reserve was putting the brakes on the economy to tame inflation.

Not surprisingly, free-market economists attributed the Great Moderation to the success of their recommended policies—lower taxes, less market regulation, and both fiscal and monetary restraint on the part of government. Appelbaum is skeptical, suggesting that other forces were at work. “The ‘peace dividend’ from the end of the Cold War made it easier to reduce federal spending; globalization weighed on wages and prices [that is, global competition made wage and price hikes harder]; new technologies drove a surge in productivity and prosperity.” He also points out what wasn’t so great about those years—relatively slow economic growth, increasing inequality of wealth and income, and reduced public investment in future growth.

The financial crisis

In his assessment, Appelbaum has the benefit of hindsight, since he knows that this period of economic history ended in disaster, the greatest financial collapse since 1929 and the deepest recession since the depression of the 1930s. “The Economists’ Hour did not survive the Great Recession…. In the depths of the Great Recession, only the most foolhardy purists continued to insist that markets should be left to their own devices.”

Appelbaum’s account of what went wrong focuses on developments in the expanding financial services industry. The bank failures of the 1930s had led government to impose stricter regulations, including limits on the interest rates banks could pay depositors or charge borrowers. When other interest rates spiked after the Federal Reserve started tightening the money supply in 1979, pressure from both consumer groups and bankers led Congress to deregulate bank rates. A year earlier, in 1978, the Supreme Court had ruled that credit card companies could charge whatever interest rate was allowed in the state in which they were headquartered, even if they did business nationwide. Financial institutions rushed to locate in states with the most permissive rules, and states competed for their business on that basis.

In addition to relaxing old regulations, governments failed to develop new regulations to keep up with financial innovation. The prevailing view that financial markets were efficient and self-regulating encouraged policymakers to leave them alone. The riskiest innovations were new forms of derivatives—securities whose value depended in some complex way on the fluctuating value of an underlying asset. Instead of buying and selling real estate, investors could buy and sell packages of mortgage loans. They could even invest in a derivative that represented a bet that such packaged loans would either gain or lose value. The gains and losses from derivative investments could be many times the gains and losses from trading in the assets on which they were based.

When Brooksley Born was appointed head of the Commodity Futures Trading Commission by President Clinton, she began to advocate for the regulation of derivatives. She was opposed, however, by Fed Chair Alan Greenspan, Treasury Secretary Robert Rubin, and SEC chair Arthur Levitt. In 2000, Senator Phil Gramm quietly inserted a provision to prohibit such regulation into a broader bill, which passed without much notice.

Signs of the financial trouble ahead appeared early in this period of financial deregulation—or non-regulation. Many savings and loan associations failed between 1986 and 1995 after being bought by financial speculators and swindlers. (The failures cost the taxpayers over $100 billion, since the deposits were federally insured.) Risky investments in derivatives led to the bankruptcy of Orange County, California, in 1994 and the collapse of the hedge fund Long-Term Capital Management in 1998. But it was the housing boom of 2003-2006 that encouraged the most dangerous behavior. In their zeal to profit from the boom, financial institutions engaged in subprime lending on a massive scale, lending to borrowers who didn’t qualify for traditional fixed-rate mortgages but could be sold more complicated loans with the potential for spiking monthly payments. Lenders could disguise the shakiness of the individual loans by selling them off in packages that were assumed to minimize risk, just like a diversified stock portfolio. The packages were then overrated and even insured by companies willing to bet on their continuing value. When the bubble of inflated asset prices finally burst and the 2007-2009 recession began, borrowers who owed more than their houses were now worth defaulted in large numbers, causing massive losses to investors and financial institutions.

The financial crisis and deepening recession forced the federal government to act, both to keep huge financial institutions from failing and to stimulate aggregate economic demand with federal spending. Suddenly, Keynesian ideas were dusted off and given a new look.

Larry Summers, who was installed as head of the National Economic Council [in 2009, in the Obama administration] had said in 2001 that government stimulus spending during an economic downturn was “passé” because its merits had been “disproven.” In 2009 he changed his mind. When a reporter asked Summers to describe the government’s plans, he responded with one word: “Keynes.”

The federal stimulus plan passed by Congress in 2009 was controversial, supported by Democrats but opposed by almost every Republican. Obama himself wavered in his support for stimulus, calling for more “belt-tightening” the following year. Support for government austerity remained strong for several more years, but many economists began to believe that budget restraint was prolonging the recession. Paul Krugman and Robin Wells say in their macroeconomics text, “By 2014, the intellectual debate seemed to have gone mostly against the advocates of austerity.”

The Federal Reserve also became more activist, broadening its focus on fighting inflation to include the goal of creating jobs. Not only did the Fed bring short-term interest rates down to near zero, but it tried to reduce long-term rates as well by buying up Treasury and mortgage bonds. (That has the effect of raising prices on bonds and lowering their rates, since bond prices and rates move in opposite directions.)
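
The parenthetical can be made concrete with a made-up bond (my example, not Appelbaum’s): a bond’s price is just the discounted value of its fixed future payments, so when prevailing rates fall the price rises, and vice versa.

    # Hypothetical 10-year, $1,000 bond paying a 4% annual coupon. Its price
    # is the present value of the coupons plus the face value at maturity,
    # discounted at the prevailing market rate.

    def bond_price(face, coupon_rate, years, market_rate):
        coupon = face * coupon_rate
        coupons = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
        principal = face / (1 + market_rate) ** years
        return coupons + principal

    print(round(bond_price(1000, 0.04, 10, 0.04), 2))  # 1000.0  (priced at par)
    print(round(bond_price(1000, 0.04, 10, 0.02), 2))  # 1179.65 (rates fall, price rises)
    print(round(bond_price(1000, 0.04, 10, 0.06), 2))  # 852.8   (rates rise, price falls)

When the Fed buys bonds in bulk, it bids their prices up, which is the same thing as pushing the corresponding rates down.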

Appelbaum wrote this book before the 2020 pandemic and associated recession. That new crisis led both the Trump and Biden administrations to propose additional stimulus spending, further undermining the free-market position. The leading advocate of free-market economics, Milton Friedman, did not live to see either the 2008 financial crisis or the pandemic. He died in 2006.

Economies, people, and imagined futures

I will finish by citing two of the grander themes of The Economists’ Hour. In his conclusion, Appelbaum says, “If you have taken anything from this book, I hope it is the knowledge that markets are constructed by people, for purposes chosen by people—and they can be changed and rebuilt by people.” We invented the market economy as an awesome wealth-producing machine, but it remains our invention. When we start to think of it as an autonomous machine requiring no creative human intervention, I think we reduce ourselves to cogs within the machine.

A related idea is that we must always keep one eye on whatever future we desire. Appelbaum complains that “the emphasis on growth, now, has come at the expense of the future: tax cuts delivered small bursts of sugar-high prosperity at the expense of spending on education and infrastructure.”

Jonathan Levy adopts a similar perspective in his new economic history, Ages of American Capitalism, which I am currently reading. “Because a capitalist financial system is a perpetual leap of faith, over and over again, confidence becomes the emotional and psychological mainspring of economic activity.” Where do we place our confidence? In the stock market, or a bank, or a technology company, or in world trade? In a country with both a capitalist economy and democratic government, are we wise to put all our confidence in markets and none in our government? Or vice versa? If the economic machine keeps breaking down, or serves the few better than the many, or seems to be pushing us towards environmental disaster, must a democratic people be precluded from trying to actualize a better future through honest political debate and collective decision-making? If a loss of faith in government gave us the “Economists’ Hour,” maybe some recovery of that faith can give us a wiser and more balanced approach to public policy.


The Economists’ Hour (part 2)

May 26, 2021


During the period Binyamin Appelbaum calls “the economists’ hour” (1969-2008), free-market reformers focused their efforts on a number of policy areas. Here I will discuss five of them: monetary policy, taxation, antitrust enforcement, deregulation, and free trade.

Monetary policy

In 1979, new Federal Reserve chair Paul Volcker adopted Milton Friedman’s recommendation to fight inflation by restricting the growth of the money supply. Limitations on the supply of money made interest rates—the price of money—rise precipitously. That in turn discouraged borrowing for business expansion and consumer spending, bringing on the 1981-82 recession. The rate of inflation did drop dramatically, from 13.5% in 1980 to 3.2% in 1983. The next Fed chair, Alan Greenspan, continued to make fighting inflation the priority even after inflation fell below 3% in the 1990s.

By then the economy was doing better, but still not growing at a very rapid pace. President Clinton wanted to increase government spending to stimulate growth, but was advised by economists to take the path of austerity and progress toward a balanced budget. Appelbaum observes that at this time, “there…was little remaining difference between the two political parties in the United States.” Clinton agreed with Republicans that “the era of Big Government is over.”

Americans benefited from lower inflation as consumers, but were hurt as workers by relatively high unemployment and stagnating wages. The biggest winners from tight monetary policy were wealthy lenders, who could lend money at high interest rates and be repaid with dollars that had not lost any purchasing power.

The benefits of low inflation…were concentrated in the hands of the elite. In the United States in 2007, the top 10 percent of households owned 71.6 percent of the nation’s wealth. By punishing workers and rewarding lenders, monetary policy was contributing to the rise of economic inequality.

Appelbaum quotes John Kenneth Galbraith, “What is called sound economics is very often what mirrors the needs of the respectably affluent.”

Taxation

Some of the free-market economists believed they had a way to reduce inflation and unemployment at the same time. University of Chicago economists Robert Mundell and Arthur Laffer made sweeping claims for the benefits of tax cuts, especially tax cuts for the wealthy. The purpose would not be to stimulate consumer demand, as with the Kennedy-Johnson tax cut, but to expand the economy from the supply side by giving the owners of capital the means and the motivation to work harder and expand their businesses. The benefits would then “trickle down” to everyone. The tax cuts could even pay for themselves as the growing economy generated more income and more tax revenue.

This “supply-side economics” was never very well supported by evidence or fully embraced by mainstream economists, but it became a popular party line for Republican politicians. Under President Reagan, the top tax rate was reduced first from 70% to 50%, and then to 28%. From there it fluctuated as political control shifted from one party to the other, ending up at 37% after the Trump tax cut in 2017. (The effective tax rate for high-income taxpayers is lower than these numbers suggest, since they pay the top rate only on the portion of income that exceeds the top bracket threshold.)
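
To illustrate that parenthetical point, here is a sketch with a hypothetical two-bracket schedule (my simplification; the real tax code has many brackets) showing why the effective rate sits well below the top marginal rate:

    # Hypothetical two-bracket schedule: 20% on income up to $500,000 and a
    # 37% top rate above that. Only the excess over the threshold is taxed
    # at the top rate.

    def tax_owed(income, brackets):
        """brackets: list of (threshold, rate) pairs in ascending order;
        each rate applies to the slice of income above its threshold and
        below the next one."""
        owed = 0.0
        for i, (threshold, rate) in enumerate(brackets):
            upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
            if income > threshold:
                owed += (min(income, upper) - threshold) * rate
        return owed

    brackets = [(0, 0.20), (500_000, 0.37)]
    income = 1_000_000
    owed = tax_owed(income, brackets)
    print(f"owed: {owed:,.0f}, effective rate: {owed / income:.1%}")
    # owed: 285,000, effective rate: 28.5% -- well below the 37% top rate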

Milton Friedman did not believe in using tax cuts or government spending as a tool for growing the economy, and he did not expect tax cuts to pay for themselves, as their most enthusiastic supporters claimed. He supported them anyway, however, for a different reason. He “wanted to blow a hole in the federal budget, and then close it with spending cuts.” That would reduce the role of the federal government in the economy, leaving the free market to work its magic.

Ronald Reagan claimed that he could cut taxes, increase military spending, and still balance the federal budget. Instead, when the optimistic predictions of the supply-siders didn’t pan out, the loss of tax revenue left the country with a growing federal deficit. That experience would be repeated when George W. Bush and Donald Trump cut taxes. After Reagan’s 1981 tax cut, private investment spending as a percentage of GDP increased only briefly, and was generally no higher in the 1980s than it had been in the 1970s. The rate of economic growth was actually a little slower. Tax rates were flatter and less progressive, shifting the tax burden a little more from the rich to the middle class and creating more after-tax inequality. The United States became more dependent on China to finance the deficit. The Chinese earned dollars by selling their manufactured goods to American consumers, then lent those dollars back to us by buying treasury bonds.

Appelbaum is not surprised by these results, saying that economic growth depends mainly on productivity growth, which depends on innovation. Low taxes and government austerity may not help; and they can mean reduced investment in future growth if they require spending reductions in such areas as infrastructure or research and development. He characterizes the tax cuts as a “political triumph and an economic failure.”

Antitrust enforcement

Antitrust laws were supposed to keep business monopolies or oligopolies from overcharging consumers, suppressing competition from smaller firms, and undermining democracy with excessive political influence. By the 1960s, many economists were questioning those alleged benefits.

One problem was that an important method for keeping companies small—blocking mergers and acquisitions—wasn’t working very well. Even without mergers, many companies grew big enough to dominate their industries anyway. Economists also observed that big companies were often efficient enough to deliver goods and services at low prices, so that no apparent harm to consumers occurred. Some argued that “economic efficiency should be the sole standard of antitrust policy, which…meant the government mostly should let corporations do as they pleased.”

During the 1970s and 80s, federal judges began to embrace these views. Antitrust laws were not so much repealed as permissively interpreted. President Nixon’s appointment of four conservative justices to the Supreme Court helped. Large corporations financed university seminars that paid judges to learn the new economic views. By 1990, 40 percent of federal judges had attended programs of this kind organized by Henry Manne, a founder of the law and economics movement. (Another book I have reviewed, Nancy MacLean’s Democracy in Chains, tells that story in more detail.)

At the same time that economists were taking a more benign view of corporate power, they were taking a dimmer view of union power. They often argued that union wage demands discourage hiring by raising the price of labor. (One could also argue that corporate power discourages labor force participation by holding down wages.) Appelbaum reports that consolidation in the meatpacking industry didn’t appear to hurt consumers or the ranchers who raised the cattle. But real wages declined by 35 percent “as companies shuttered unionized plants and used the threat of closure to squeeze concessions from workers.” He concludes that the “concentration of the corporate sector is tilting the balance of power between employers and workers, allowing companies to demand more and pay less.”

Deregulation

In some industries, consumer protection had taken the form of regulating monopolies or oligopolies rather than antitrust measures. Sometimes, an industry consisting of many small firms wasn’t very practical.

Even as the United States sought to increase competition across much of the economy in the mid-twentieth century through invigorated enforcement of antitrust laws, it was widely accepted that some industries were “natural monopolies”—sectors in which healthy competition was impossible. Electric companies, for example, could compete only by running multiple lines into the same homes, and all but one of those lines would be wasted. The result would be either too much competition, which was bad for the companies, or too little competition, which was bad for consumers. So governments intervened.

Besides public utilities, transportation was another area in which a small number of large but highly regulated companies was considered an acceptable arrangement. For example, in 1938, the new Civil Aeronautics Authority “issued licenses to sixteen airlines and then refused to let anyone else enter the business for the next four decades.” But was that really good for consumers, or just for the favored companies?

Another example:

In 1977, the eight largest trucking companies were twice as profitable as the average Fortune 500 company. It helped that trucking firms, unlike airlines, got to set their own prices. The industry’s milquetoast regulator, the Interstate Commerce Commission (ICC), allowed ten regional bureaus controlled by trucking firms to hold secret hearings and then issue binding prices. This system was also lucrative for the industry’s employees, represented by the belligerent Teamsters union….

By the 1970s, free-market economists were arguing that publicly regulated industries were worse for consumers than unregulated industries. Even consumer advocate Ralph Nader campaigned for less regulation. Some economists went so far as to argue that regulation in general was a waste of effort, since the regulators so often ended up serving the industries they were supposed to regulate.

The deregulation of the airline and trucking industries began with President Carter and continued under Reagan. The Civil Aeronautics Board closed down at the end of 1984. Consumers benefited, at least initially. Companies like Southwest Airlines and UPS lowered the cost of transporting people and goods. On the other hand, wages for truck drivers and flight attendants fell, while executive compensation skyrocketed. After eight airlines consolidated into four in the early 2000s, the price of airline tickets stopped falling.

Currency exchange and free trade

In 1944, the Bretton Woods conference set up a system of international monetary exchange rates based on the dollar. The American dollar anchored the system by having a constant value in gold, while other national currencies were valued in dollars. The American commitment to redeeming dollars in gold upon demand made the dollar the strongest and most desired currency in the world.

There was a downside for us, however. Suppose that another economy, say Japan, recovers from World War II and grows faster than the US economy. To be more specific, suppose their auto industry goes into high gear while ours is—well—stalling out. In a system of floating exchange rates, we might expect the Japanese yen to gain in value against the dollar, because people need yen to buy those great Japanese cars. But fixed exchange rates don’t allow that. So the dollar remains stronger than it deserves to be, allowing Americans to buy Japanese goods at a kind of discount. But the Japanese may be reluctant to use their dollars to buy American cars or other goods, which sell at a kind of premium. They would rather exchange their dollars for gold, producing a run on gold.
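
Some back-of-the-envelope numbers make the discount and the premium concrete. The ¥360-per-dollar peg was the actual Bretton Woods rate for Japan; the hypothetical floating rate and the car prices are my inventions:

    # Suppose the fixed rate stays at 360 yen per dollar while a floating
    # market would clear at 250 yen per dollar (an invented number).
    FIXED_RATE, FLOAT_RATE = 360, 250  # yen per dollar

    # A 900,000-yen Japanese car, priced for American buyers:
    print(900_000 / FIXED_RATE, 900_000 / FLOAT_RATE)
    # $2,500 under the peg vs. $3,600 floating: Americans buy at a discount.

    # A $3,000 American car, priced for Japanese buyers:
    print(3_000 * FIXED_RATE, 3_000 * FLOAT_RATE)
    # 1,080,000 yen under the peg vs. 750,000 yen floating: a premium that
    # discourages Japanese purchases of American goods.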

Friedman always took the position that free financial markets, not international agreements, should govern currency exchange rates. By the end of the 1960s, most economists agreed. In 1971, President Nixon announced that the United States would no longer guarantee the value of the dollar in gold. The Bretton Woods system fell apart, and floating exchange rates emerged.

Friedman expected that the floating rates would be relatively stable, since they would reflect slow changes in the relative strength of national economies. Others feared that floating rates would be so chaotic as to trigger a collapse in global trade. Neither side got it right. Exchange rates turned out to be more volatile than Friedman had expected, but global trade grew anyway. The new potential for currency trading and speculation did create new risks, however. Some banks failed because they lost their investors’ money in currency trading, and others were caught trying to manipulate exchange rates for their own advantage.

The dollar, which might have been expected to fall when the US left the gold standard, instead continued to rise. Because of the size and strength of the American economy, foreigners saw the dollar as a safe refuge in a world of volatile currencies. High US interest rates during the period of inflation-fighting also encouraged foreigners to invest in dollar-denominated assets. Treasury bonds were valued because the US government had never defaulted on an obligation. And China’s willingness to finance our deficit by holding treasury bonds—as opposed to converting dollars to Chinese yuan—kept their currency cheaper than ours, hurting their own consumers but helping their export-oriented, manufacturing economy. Our economy went in the opposite direction, shifting toward consumption and foreign debt at the expense of manufacturing production and exports. The areas in which jobs were created were those most sheltered from foreign competition, such as health care and retail sales.

Again, there were winners and losers. Consumers used their strong dollars to buy inexpensive foreign goods, and many workers were able to move into the industries that were sheltered from foreign competition. But many others became disenchanted with globalization:

The Georgetown economist Pietra Rivoli argues that opposition to trade is stronger in the United States, in comparison to other developed countries with higher levels of trade, because the social safety net is much weaker. The United States, for example, is the only developed nation that does not provide universal health care. If the people who lose jobs when a factory closes still have health insurance, if training is affordable, if they can find housing in the areas with new jobs and pay for child care, then transitions are manageable. If not, those people are likely to be angrier about globalization—and with ample justification.

These dynamics and their results are the legacy of “the economists’ hour,” and they provide the background for understanding the Great Recession that began in 2007.

Continued


The Economists’ Hour

May 24, 2021


Binyamin Appelbaum. The Economists’ Hour: False Prophets, Free Markets, and the Fracture of Society. New York: Little, Brown and Company, 2019.

Economic journalist Binyamin Appelbaum reports on the prominent role of free-market economists in shaping national policy during the four decades from 1969 to 2008. The period began with the Nixon administration and ended with the financial crisis in the last year of the George W. Bush administration.

Free-market economists like Milton Friedman were influential voices during those years. They were not alone, however. Appelbaum says, “The economists provided ideas and the corporations provided money: underwriting research, endowing university chairs, and funding think tanks like the National Bureau of Economic Research, the American Enterprise Institute, and the Hoover Institution at Stanford University.” In addition, economic conservatives joined with social conservatives to move the Republican Party to the right. They formed a “coalition of the powerful, defending the status quo against threats real and imagined.” For economic conservatives, the threat might be environmental regulations or high taxes. For social conservatives, it might be gay rights or affirmative action. Republican leaders rallied the support of both groups in order to dominate politics during those years. Mainstream economists and moderate Democrats like Bill Clinton went along with much of the new thinking as well.

Appelbaum acknowledges the many economic benefits that resulted as “economists played a leading role in curbing taxation and public spending, deregulating large sectors of the economy, and clearing the way for globalization.” The runaway inflation of the 1970s was tamed; competition increased in some industries; free trade raised incomes in developing countries and brought inexpensive manufactured goods to American consumers. But he also believes that the “market revolution went too far.” Decade by decade, economic growth slowed down, and the benefits of growth went increasingly to the wealthy. Real wages for low-income workers stagnated or declined. The pursuit of short-term benefits put long-term prosperity at risk, as public policy failed to address environmental problems, deteriorating infrastructure, and human capital needs.

The brief era of “activist economics”

When the Great Depression hit in 1929, few economists had the ear of political leaders. Only in 1946 did Congress create the White House Council of Economic Advisers. Four years into the Depression, the British economist John Maynard Keynes published a letter in the New York Times encouraging the recently elected Franklin Roosevelt to stimulate the economy with massive federal spending. Roosevelt met Keynes the following year, but he was reluctant to create very large deficits. New Deal social programs helped reduce unemployment, but wartime spending after 1941 is usually credited with ending the Depression.

Looking back on that experience, economists and policymakers generally accepted the idea that government could steer the economy on a productive path toward full employment and low inflation. The main Keynesian tool would be fiscal policy, which called for spending in excess of tax revenues to stimulate a lagging economy, but curbing spending to cool down an inflationary economy. The prime example of deliberately Keynesian intervention was the tax cut proposed by President Kennedy and signed by President Johnson in 1964. It seemed to work, as unemployment fell from 5.6% in 1964 to 3.5% in 1968.

Opponents of government activism were already having their say, however. “Milton Friedman…wanted to restore the pre-Keynesian consensus that governments could not stimulate economic growth and should not try.” In Capitalism and Freedom in 1962 and A Monetary History of the United States, 1867-1960 (with Anna Jacobson Schwartz in 1963), he argued that the only useful macroeconomic policy was to ensure a slow and steady growth of the money supply. The Federal Reserve’s failure to accomplish this was the main reason for either recession or inflation.

Friedman’s ideas were slow to gain traction. Barry Goldwater endorsed them, and Friedman in turn supported him for President, but Goldwater lost to Lyndon Johnson in a landslide in 1964. The next Republican president, Richard Nixon, was ideologically sympathetic, saying in his 1969 inaugural address, “Let each of us ask—not just what government will do for me, but what can I do for myself?” Nixon also signed into law one of Friedman’s proposals, the all-volunteer army, in 1971. But when he faced a troubled economy, Nixon also relied on activist policies like deficit spending to fight unemployment and wage-and-price controls to curb inflation.

The stagflation crisis

What really turned the tide in Friedman’s favor was the combination of persistent inflation and unemployment in the 1970s. The rate of inflation hit 5.8% in 1970, 11.1% in 1974, and 13.5% in 1980. Some of the possible reasons were Vietnam War spending with insufficient taxes to pay for it, shortages of foreign oil, and a slump in productivity growth. Friedman blamed the Federal Reserve for allowing the money supply to expand too rapidly. Meanwhile, the rate of unemployment was over 5% for almost the entire decade, with higher spikes during recessions.

Keynesian economics had no good answer for simultaneously high inflation and unemployment. Inflation was supposed to indicate an overheated economy with high aggregate demand pushing up prices. Unemployment was supposed to indicate a sluggish economy with low aggregate demand discouraging production. Appelbaum describes the resulting loss of confidence in Keynesian solutions:

Nixon and his successors, Gerald Ford and Jimmy Carter, kept trying the interventionist prescriptions of the Keynesians until even some of the Keynesians threw up their hands. Juanita Kreps, an economist who served as Carter’s commerce secretary, told the Washington Post when she stepped down in 1979 that her confidence in Keynesian economics was so badly shaken that she did not plan to return to her position as a tenured professor at Duke University. “I don’t know what I would teach,” she said.

The stage was now set for a massive political and policy shift, one that would embrace the free market unencumbered by much interference from government. As President Ronald Reagan announced in his 1981 inaugural address, “Government is not the solution to our problem; government is the problem.”

Continued