During the period Binyamin Appelbaum calls “the economists’ hour” (1969-2008), free-market reformers focused their efforts on a number of policy areas. Here I will discuss five of them: monetary policy, taxation, antitrust enforcement, deregulation, and free trade.
In 1979, new Federal Reserve chair Paul Volcker adopted Milton Friedman’s recommendation to fight inflation by restricting the growth of the money supply. Limitations on the supply of money made interest rates—the price of money—rise precipitously. That in turn discouraged borrowing for business expansion and consumer spending, bringing on the 1981-82 recession. The rate of inflation did drop dramatically, from 13.5% in 1980 to 3.2% in 1983. The next Fed chair, Alan Greenspan, continued to make fighting inflation the priority even after inflation fell below 3% in the 1990s.
By then the economy was doing better, but still not growing at a very rapid pace. President Clinton wanted to increase government spending to stimulate growth, but was advised by economists to take the path of austerity and progress toward a balanced budget. Appelbaum observes that at this time, “there…was little remaining difference between the two political parties in the United States.” Clinton agreed with Republicans that “the era of Big Government is over.”
Americans benefited from lower inflation as consumers, but were hurt as workers by relatively high unemployment and stagnating wages. The biggest winners from tight monetary policy were wealthy lenders, who could lend money at high interest rates and be repaid with dollars that had not lost any purchasing power.
The benefits of low inflation…were concentrated in the hands of the elite. In the United States in 2007, the top 10 percent of households owned 71.6 percent of the nation’s wealth. By punishing workers and rewarding lenders, monetary policy was contributing to the rise of economic inequality.
Appelbaum quotes John Kenneth Galbraith: “What is called sound economics is very often what mirrors the needs of the respectably affluent.”
Some of the free-market economists believed they had a way to reduce inflation and unemployment at the same time. University of Chicago economists Robert Mundell and Arthur Laffer made sweeping claims for the benefits of tax cuts, especially tax cuts for the wealthy. The purpose would not be to stimulate consumer demand, as with the Kennedy-Johnson tax cut, but to expand the economy from the supply side by giving the owners of capital the means and the motivation to work harder and expand their businesses. The benefits would then “trickle down” to everyone. The tax cuts could even pay for themselves as the growing economy generated more income and more tax revenue.
This “supply-side economics” was never well supported by evidence or fully embraced by mainstream economists, but it became a popular party line for Republican politicians. Under President Ronald Reagan, the top tax rate was reduced first from 70% to 50%, and then to 28%. From there it fluctuated as political control shifted from one party to the other, ending up at 37% after the Trump tax cut in 2017. (The effective tax rate for high-income taxpayers is lower than these numbers suggest, since they pay the top rate only on the portion of income that exceeds the top bracket threshold.)
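The marginal-bracket arithmetic behind that parenthetical can be made concrete with a small sketch. The two-bracket schedule below is purely illustrative (the thresholds and rates are invented for the example, not actual IRS figures); the point is only that each rate applies to its own slice of income, so the effective rate sits below the top rate.

```python
# Hypothetical two-bracket schedule to illustrate marginal vs. effective rates.
# These thresholds and rates are invented for illustration, not actual tax law.
BRACKETS = [
    (0, 0.22),        # income up to $500,000 taxed at 22%
    (500_000, 0.37),  # income above $500,000 taxed at the 37% top rate
]

def tax_owed(income: float) -> float:
    """Apply each rate only to the slice of income that falls inside its bracket."""
    owed = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > lower:
            owed += (min(income, upper) - lower) * rate
    return owed

income = 1_000_000
owed = tax_owed(income)        # 0.22 * 500,000 + 0.37 * 500,000 = 295,000
effective = owed / income      # 0.295, i.e. 29.5% — well below the 37% top rate
```

Under this toy schedule, a taxpayer earning $1,000,000 faces a 37% marginal rate but an effective rate of only 29.5%, which is the gap the parenthetical describes.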
Milton Friedman did not believe in using tax cuts or government spending as a tool for growing the economy, and he did not expect tax cuts to pay for themselves, as their most enthusiastic supporters claimed. He supported them anyway, for a different reason. He “wanted to blow a hole in the federal budget, and then close it with spending cuts.” That would reduce the role of the federal government in the economy, leaving the free market to work its magic.
Ronald Reagan claimed that he could cut taxes, increase military spending, and still balance the federal budget. Instead, when the optimistic predictions of the supply-siders didn’t pan out, the loss of tax revenue left the country with a growing federal deficit. That experience would be repeated when George W. Bush and Donald Trump cut taxes. After Reagan’s 1981 tax cut, private investment spending as a percentage of GDP increased only briefly, and was generally no higher in the 1980s than it had been in the 1970s. The rate of economic growth was actually a little slower. Tax rates were flatter and less progressive, shifting the tax burden a little more from the rich to the middle class and creating more after-tax inequality. The United States became more dependent on China to finance the deficit. The Chinese earned dollars by selling their manufactured goods to American consumers, then lent those dollars back to us by buying treasury bonds.
Appelbaum is not surprised by these results, saying that economic growth depends mainly on productivity growth, which depends on innovation. Low taxes and government austerity may not help; and they can mean reduced investment in future growth if they require spending reductions in such areas as infrastructure or research and development. He characterizes the tax cuts as a “political triumph and an economic failure.”
Antitrust laws were supposed to keep business monopolies or oligopolies from overcharging consumers, suppressing competition from smaller firms, and undermining democracy with excessive political influence. By the 1960s, many economists were questioning those alleged benefits.
One problem was that an important method for keeping companies small—blocking mergers and acquisitions—wasn’t working very well. Even without mergers, many companies grew big enough to dominate their industries anyway. Economists also observed that big companies were often efficient enough to deliver goods and services at low prices, so that no apparent harm to consumers occurred. Some argued that “economic efficiency should be the sole standard of antitrust policy, which…meant the government mostly should let corporations do as they pleased.”
During the 1970s and 80s, federal judges began to embrace these views. Antitrust laws were not so much repealed as permissively interpreted. President Nixon’s appointment of four conservative justices to the Supreme Court helped. Large corporations financed university seminars that paid judges to learn the new economic views. By 1990, 40 percent of federal judges had attended programs of this kind organized by Henry Manne, a founder of the law and economics movement. (Another book I have reviewed, Nancy MacLean’s Democracy in Chains, tells that story in more detail.)
At the same time that economists were taking a more benign view of corporate power, they were taking a dimmer view of union power. They often argued that union wage demands discourage hiring by raising the price of labor. (One could also argue that corporate power discourages labor force participation by holding down wages.) Appelbaum reports that consolidation in the meatpacking industry didn’t appear to hurt consumers or the ranchers who raised the cattle. But real wages declined by 35 percent “as companies shuttered unionized plants and used the threat of closure to squeeze concessions from workers.” He concludes that the “concentration of the corporate sector is tilting the balance of power between employers and workers, allowing companies to demand more and pay less.”
In some industries, consumer protection took the form of regulating monopolies or oligopolies rather than antitrust enforcement, because a market of many small competing firms simply wasn’t practical.
Even as the United States sought to increase competition across much of the economy in the mid-twentieth century through invigorated enforcement of antitrust laws, it was widely accepted that some industries were “natural monopolies”—sectors in which healthy competition was impossible. Electric companies, for example, could compete only by running multiple lines into the same homes, and all but one of those lines would be wasted. The result would be either too much competition, which was bad for the companies, or too little competition, which was bad for consumers. So governments intervened.
Besides public utilities, transportation was another area in which a small number of large but highly regulated companies was considered an acceptable arrangement. For example, in 1938, the new Civil Aeronautics Authority “issued licenses to sixteen airlines and then refused to let anyone else enter the business for the next four decades.” But was that really good for consumers, or just for the favored companies?
In 1977, the eight largest trucking companies were twice as profitable as the average Fortune 500 company. It helped that trucking firms, unlike airlines, got to set their own prices. The industry’s milquetoast regulator, the Interstate Commerce Commission (ICC), allowed ten regional bureaus controlled by trucking firms to hold secret hearings and then issue binding prices. This system was also lucrative for the industry’s employees, represented by the belligerent Teamsters union….
By the 1970s, free-market economists were arguing that publicly regulated industries were worse for consumers than unregulated industries. Even consumer advocate Ralph Nader campaigned for less regulation. Some economists went so far as to argue that regulation in general was a waste of effort, since the regulators so often ended up serving the industries they were supposed to regulate.
The deregulation of the airline and trucking industries began with President Carter and continued under Reagan. The Civil Aeronautics Board closed down at the end of 1984. Consumers benefited, at least initially. Companies like Southwest Airlines and UPS lowered the cost of transporting people and goods. On the other hand, wages for truck drivers and flight attendants fell, while executive compensation skyrocketed. After eight airlines consolidated into four in the early 2000s, the price of airline tickets stopped falling.
Currency exchange and free trade
In 1944, the Bretton Woods conference set up a system of international monetary exchange rates based on the dollar. The American dollar anchored the system by having a constant value in gold, while other national currencies were valued in dollars. The American commitment to redeeming dollars in gold upon demand made the dollar the strongest and most desired currency in the world.
There was a downside for us, however. Suppose that another economy, say Japan, recovers from World War II and grows faster than the US economy. To be more specific, suppose their auto industry goes into high gear while ours is—well—stalling out. In a system of floating exchange rates, we might expect the Japanese yen to gain in value against the dollar, because people need yen to buy those great Japanese cars. But fixed exchange rates don’t allow that. So the dollar remains stronger than it deserves to be, allowing Americans to buy Japanese goods at a kind of discount. But the Japanese may be reluctant to use their dollars to buy American cars or other goods, which sell at a kind of premium. They would rather exchange their dollars for gold, producing a run on gold.
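The Japan scenario above can be put in numbers. The sketch below uses the actual Bretton Woods peg of 360 yen per dollar, but the car price and the hypothetical floating rate are invented for illustration; the point is only how a peg that holds the dollar artificially strong discounts foreign goods for American buyers.

```python
# Hypothetical illustration of the fixed- vs. floating-rate effect described above.
# The 360 yen/dollar peg is the historical Bretton Woods rate; the car price
# and the 250 yen/dollar floating rate are invented for this example.
CAR_PRICE_YEN = 2_400_000   # sticker price of a Japanese car, in yen

fixed_rate = 360     # yen per dollar under the Bretton Woods peg
floating_rate = 250  # yen per dollar after the yen appreciates to reflect demand

price_fixed = CAR_PRICE_YEN / fixed_rate        # dollar price at the pegged rate
price_floating = CAR_PRICE_YEN / floating_rate  # dollar price once the yen floats up

# Under the peg, Americans buy the car at an artificial discount;
# letting the yen rise would erase roughly this much of it:
discount = price_floating - price_fixed
```

At the pegged rate the car costs about $6,667; at the stronger floating rate it costs $9,600. That roughly $2,900 gap is the “kind of discount” the paragraph describes, and the mirror image of it is why Japanese buyers found American goods selling at a premium.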
Friedman always took the position that free financial markets, not international agreements, should govern currency exchange rates. By the end of the 1960s, most economists agreed. In 1971, President Nixon announced that the United States would no longer guarantee the value of the dollar in gold. The Bretton Woods system fell apart, and floating exchange rates emerged.
Friedman expected that the floating rates would be relatively stable, since they would reflect slow changes in the relative strength of national economies. Others feared that floating rates would be so chaotic as to trigger a collapse in global trade. Neither side got it right. Exchange rates turned out to be more volatile than Friedman had expected, but global trade grew anyway. The new potential for currency trading and speculation did create new risks, however. Some banks failed because they lost their investors’ money in currency trading, and others were caught trying to manipulate exchange rates for their own advantage.
The dollar, which might have been expected to fall when the US left the gold standard, instead continued to rise. Because of the size and strength of the American economy, foreigners saw the dollar as a safe refuge in a world of volatile currencies. High US interest rates during the period of inflation-fighting also encouraged foreigners to invest in dollar-denominated assets. Treasury bonds were valued because the US government had never defaulted on an obligation. And China’s willingness to finance our deficit by holding treasury bonds—as opposed to converting dollars to Chinese yuan—kept their currency cheaper than ours, hurting their own consumers but helping their export-oriented, manufacturing economy. Our economy went in the opposite direction, shifting toward consumption and foreign debt at the expense of manufacturing production and exports. The areas in which jobs were created were those most sheltered from foreign competition, such as health care and retail sales.
Again, there were winners and losers. Consumers used their strong dollars to buy inexpensive foreign goods, and many workers were able to move into the industries that were sheltered from foreign competition. But many others became disenchanted with globalization:
The Georgetown economist Pietra Rivoli argues that opposition to trade is stronger in the United States, in comparison to other developed countries with higher levels of trade, because the social safety net is much weaker. The United States, for example, is the only developed nation that does not provide universal health care. If the people who lose jobs when a factory closes still have health insurance, if training is affordable, if they can find housing in the areas with new jobs and pay for child care, then transitions are manageable. If not, those people are likely to be angrier about globalization—and with ample justification.
These dynamics and their results are the legacy of “the economists’ hour,” and they provide the background for understanding the Great Recession that began in 2007.