Kochland

August 3, 2021


Christopher Leonard. Kochland: The Secret History of Koch Industries and Corporate Power in America. New York: Simon & Schuster, 2019.

Christopher Leonard has done us all a favor by amassing and organizing a great volume of information about Koch Industries. The company has not usually gotten the attention its wealth and power deserve, because it is the closely guarded business of a highly secretive family. But it is a massive company with many subsidiaries producing a wide range of products, and it is the second largest privately held company in the United States. It branched out from its original focus on oil refining and petrochemicals into businesses like agricultural products and building materials.

Koch traders sell everything from fertilizer, to rare metals, to fuel, to abstract derivatives contracts. Koch Industries’ annual revenue is larger than that of Facebook, Goldman Sachs, and US Steel combined.

The profits from Koch’s activities are stunning. Charles Koch and his brother David own roughly 80 percent of Koch Industries. Together the two men are worth $120 billion. Their fortune is larger than that of Amazon CEO Jeff Bezos, or Microsoft founder Bill Gates [as of 2019]. Yet David and Charles Koch did not invent a major new product or revolutionize any industry.

The book is a detailed portrait of one company, but its subtitle reveals how the author sees that company—a prime example of growing corporate power in America over the past half century. Many of the book’s themes are central to recent economic history—the ascendancy of a free-market, antigovernment philosophy, the victories of capital over labor, the reliance on corporate acquisitions and asset speculation for corporate profits, the growing political muscle of big business, and its effective resistance to environmental legislation. The book gives life to these abstractions by showing how one company has accomplished them.

Origins of a corporate powerhouse

Fred Koch founded the company in 1940 in Wichita, Kansas. He had four sons, the oldest of whom, Freddie, was more interested in art than the family business. That made the next son, Charles, the heir apparent, and he took over the management shortly before his father died in 1967. The third son, David, served on the board of directors but devoted himself more to politics, eventually running for vice president as a Libertarian. David’s twin brother William was also on the board, but feuded with his brothers over corporate policy and dividend payments. Charles and David succeeded in getting him fired by the board, and the company bought out both Freddie and William’s ownership shares.

Charles Koch took the lead in consolidating his father’s holdings under the name of Koch Industries. Although it began as mainly a refiner and transporter of oil, the company became more diversified and flexible in order to adapt to increasing economic volatility, such as wild swings in oil prices. Charles created a development group specializing in new acquisitions, which the author compares to a private equity firm, but one within an existing company. “The group would come to embody modern American capitalism in the early twenty-first century, an era when private equity and hedge funds scoured the landscape in search of acquisitions.”

Charles Koch ran his company according to a philosophy he called “Market-Based Management” (MBM) which was formalized and taught to every manager. Although he was well-read in a variety of subjects, his philosophy seemed to rely mostly on what he had learned from his domineering father and from free-market economics in the Austrian tradition of Friedrich Hayek. Fred Koch had been a founding member of the John Birch Society, which was considered the far-right fringe of the Republican Party in the 1950s and 60s. Hayek was a well-known critic of the New Deal, and Leonard says that Hayek “was almost religious when it came to describing what the market could do when left to its own devices. He believed that the market was more important, and more beneficial, than the institution of democracy itself.” Charles Koch saw his company as a kind of internal market that should also reward personal initiative motivated by economic reward. He encouraged the manager of each unit to be on the lookout for new opportunities that could benefit the bottom line. He took his philosophy beyond the company as well, working with Wichita State University to establish the MBM Center there. And on Sunday mornings when many Wichita residents were in church, Charles was in the family library, schooling his own children in “a curriculum that taught them about his systematic view of human behavior and how best to organize human society.”

Growing pains

By the mid-1990s, Koch Industries had developed what Leonard calls a “bias toward acquisitions.” Like private equity firms, the company had discovered that it could make more money by buying existing companies than by developing new products on its own. When it worked best, the acquisition strategy could unlock hidden value in underperforming companies, managing them or reselling them for high enough returns to justify the costs of acquiring them. But Koch Industries wasn’t immediately or consistently successful at this.

In the case of Purina Mills, acquired in 1998, Koch paid much more than Purina appeared to be worth, financing most of the deal with bank loans. Koch tried to characterize the loans as “non-recourse” debt, so that the banks would have no claim on Koch’s assets, only those of the Purina subsidiary. But when a collapse in hog prices threw Purina into bankruptcy, the banks successfully sued, claiming that Koch was not independent enough of Purina to justify non-recourse debt. Koch lost its own $100 million investment in Purina, plus another $60 million.

Koch had many other legal problems. In 1989, a report by the Senate Select Committee on Indian Affairs described the company as “the largest purchaser of Indian oil in the country” but also “the most dramatic example of an oil company stealing by deliberate mismeasurement and fraudulent reporting.” The federal case against Koch was eventually dropped for lack of evidence, but a federal civil suit over the same allegations was more successful.

Violations of environmental regulations were another problem. The company employed environmental engineers, but Koch’s Market-Based Management assigned them only advisory roles, while decision-making authority was reserved for the organization’s profit centers. When too much ammonia showed up in the Pine Bend refinery’s wastewater, Koch maintained production by illegally discharging it into the river anyway or letting it spill onto surrounding land. “Koch Industries racked up a shocking number of criminal charges and civil complaints throughout the 1990s, branding the company as a kind of corporate outlaw.”

Leonard attributes such failures to the downsides of Market-Based Management, especially its single-minded emphasis on corporate growth and profits.

The culture inside Koch Industries…borrowed some of the worst impulses from Wall Street—a hunger for high-profile deals, a desire for giant paydays, short-term thinking—and combined them with Koch Industries’ mandate for growth.

Reinvention and mastery

In 2000, Koch Industries went back to the drawing board and revised its corporate structure and growth strategy, though without abandoning the MBM philosophy. Koch Industries became basically a holding company, owning many smaller firms. The company took pains to segregate those firms from the parent company, so that the latter could avoid the kind of liability that had arisen in the Purina case. Koch Industries would profit from a strategy similar to that of private equity firms—buy a struggling company using mostly borrowed money, but structure the deal so that the debt was the responsibility of the acquired company; then use the company’s cash flow to make the debt payments. If they came out ahead, Koch got the profits. If they didn’t, they could let the acquired company take the fall. “In a matter of just a few years, Koch Industries would execute some of the largest private equity deals in America, with acquisitions worth nearly $30 billion.”

Another lucrative activity was speculation in futures options. If Koch could acquire more accurate information about future conditions than other traders had, it could profit on the difference. Since the demand for energy depends heavily on weather conditions, Koch hired the best meteorologists it could find. One Koch trader speculating on insurance policies made almost five times as much in a year as Koch’s entire pipeline company.

Surprisingly, for those who thought of Koch Industries as a corporate outlaw, Charles Koch began to insist on what he called “10,000 percent compliance,” which meant that everyone in the company would obey 100% of the laws 100% of the time. Sounds great, but what it may also have meant is that the company learned how to shape the law and use it to its advantage, rather than risk breaking it. The Clean Air Act had grandfathered in existing refineries under older pollution standards, so established companies like Koch could keep operating while newer entrants found it too expensive to compete. Deregulation of energy prices opened up new opportunities for profit. Proposed regulations to combat climate change were a threat, but the company could use its considerable lobbying and media campaigns to defeat them.

In the early twenty-first century, Koch Industries was thriving. It seemed to have mastered the financial and political challenges of a complex and volatile economy. The Koch brothers were not simply producing something of obvious value and getting paid for it, like a Henry Ford. As Leonard said, they “did not invent a major new product or revolutionize any industry.” One wonders how much of their vast fortune was earned by making real economic contributions, and how much was a matter of using financial, intellectual and political capital to take advantage of those with less capital. With that in mind, I turn to Koch’s labor practices and political operations in my next posts.

Continued


Ages of American Capitalism (part 5)

July 13, 2021


Continuing with the last of Jonathan Levy’s four ages, the “Age of Chaos,” I turn now to the present century and the period including the Great Recession of 2007-2009, the worst economic crisis since the Great Depression. Someday, Americans may look back and see it as the start of a new era in economy and government. So far, however, Levy observes mostly continuity since 2009, and not the new “democratic politics of capital” he would like to see. More on that later.

The “Great Moderation”

In 2004, Ben Bernanke, a governor of the Federal Reserve who would later become its Chair, used this term to describe the economic stability he believed had been achieved. (If this sounds familiar, I recently described Binyamin Appelbaum’s take on the “Great Moderation” in my third post about The Economists’ Hour.) At the time Bernanke was speaking, there had been only 16 months of recession in the previous 21 years. He credited this achievement especially to sound monetary policy, tight enough to control inflation but flexible enough to alleviate recessions by lowering interest rates as needed. Bernanke’s views expressed the capitalist confidence of the time—not only in the stability of the currency, but in the continued growth of economic demand and corporate profits.

Profits were growing rapidly in the 2000s; the bad news was that few of the economic benefits were reaching the average worker. Labor’s share of the national income was plummeting. Levy attributes that to another “credit-fueled and asset-priced” expansion, “which distributed, logically enough, more money to the property owners of assets, rather than to working people.”

Levy then provides a global perspective on this uneven expansion. Much of the world was experiencing an economic boom, but with some noteworthy imbalances. Manufacturing was booming in developing countries with historically low wages, led by China. Other developing economies prospered by meeting the increasing global demand for commodities like oil or iron. Countries of the new European monetary union were expanding their global financial services. The United States contributed a boom in housing and consumption heavily fueled by debt.

The relationship between the U.S. and China was pivotal. China’s communist leaders chose to save and invest much of the revenue from manufacturing exports. While holding down wages and consumption at home, they invested heavily in the United States, in effect financing the soaring U.S. trade deficit. (The federal budget also went from surplus to deficit as the George W. Bush administration cut taxes but increased military spending after 9/11.)

The Federal Reserve had lowered interest rates as the economy slowed in 2000-2001. Now the combination of lower-cost loans in the U.S. and capital reinvested by the Chinese produced a “liquidity glut.” That fueled a speculative bubble in U.S. assets, especially housing. The moderation described by Bernanke gave way to a period of speculation and volatility, leading in a few years to financial breakdown.

Sources of profit

Levy does not claim that the American economy of the new millennium was based on speculation alone. Real businesses produced real goods and services and earned real revenue. Internet companies—so many of which had failed in the dot-com bust of 2000—were finding ways to become profitable. One way was to collect massive amounts of user data and sell it to marketers, as Google and Facebook did. Another was to gain a huge advantage over the competition by developing an especially powerful marketing platform, as Amazon did. Levy notes that such business concentrations challenge the thinking of the Law and Economics movement, which had weakened antitrust enforcement on “the assumption that short-term, rational profit maximization among firms would always increase competition to the benefit of all consumers.” One reason why labor’s share of national income declined was that big companies in highly consolidated industries had more power to set wages.

The benefits of the new economy were distributed unevenly, not only because of the growing power imbalance between business and labor, but because of the increasing premium placed on education and skills. The wage gap between college-educated and non-college-educated workers widened. Geographical disparities were also evident, as centers of technological innovation like Silicon Valley flourished, while old manufacturing cities and many rural areas declined. Unemployment remained stubbornly high in what was called a “jobless recovery,” since high-tech industries didn’t employ enough people to compensate for the decline in manufacturing employment. What job growth did occur was more in low-wage services.

A serious underlying problem was that the growth in profits was outrunning the growth in investment and productivity. Levy says that “productivity growth in general disappointed because few potentially productivity-enhancing innovations appeared.” Economic rewards flowed to the owners of intellectual capital (“Big Data”) and human capital (education), but not to enough workers. The investments that might have enhanced the productivity of ordinary workers were not, for the most part, forthcoming. But consumption could still grow if those with stagnant wages could compensate by assuming more debt. That’s where the liquidity glut came in, making it easier to extend debt to people at many income levels. That’s how the expansion of the 2000s turned into the great housing boom of 2003-2006.

U.S. housing prices shot up. Through a “wealth effect,” capital gains on leveraged property ownership could translate into new income for American homeowners. The housing stock thus became a new personal income flow. The age’s capitalism of asset price appreciation had found a new asset class to concentrate on, and many ordinary homeowners got the chance to join the game of credit-fueled appreciation. As in credit cycles before, it worked only so long as confidence was maintained and prices kept going up. That was what the Great Moderation had come to depend on.

In booming cities, residential construction and home prices surged because of increased demand and short supply. But they could flourish in more depressed areas too because of riskier subprime mortgage loans (often with initially low but potentially very high rates). President Bush boasted about the “ownership society,” where ownership of a wealth-producing asset was open to all. The financial services industry constructed a $4 trillion pyramid of mortgage-backed securities on shaky ground. The securities became farther and farther removed from the real financial state of the borrowers and the affordability of the loans. Big banks created various classes of mortgage-backed securities, combined them in complicated ways until rating agencies underestimated their true risk, and even had them backed by insurance companies that also misjudged them.

Levy regards the economic expansion of the 2000s as a “wasted opportunity to make broad-based investments in economic life.” Too many dollars flowed into speculative real estate investments, based on the assumption that home prices would continue rising and workers with stagnant wages could make the payments on their subprime, adjustable mortgages. Meanwhile, “the alarm kept sounding that man-made climate change required long-term fixed investments in a new energy system to capture and reduce carbon emissions.” And climate change was not the only pressing national need being neglected.

The Great Recession

I have discussed the 2008 financial crisis and the associated recession many times, most recently in my summary of The Economists’ Hour, part 3. Levy’s interpretation is based on his understanding of the dynamics of capitalism, including the long-term, linear trend of technological advance and the shorter-term cycles of confidence and credit. The crash of 2008 marked the end of a particularly speculative credit cycle, when a liquidity glut suddenly gave way to a liquidity shortage.

The expansion of the 2000s came to depend heavily on the housing boom, which depended in turn on the extension of credit to home buyers whose low incomes limited their ability to repay the kinds of subprime, adjustable-rate mortgages they were getting. The risks were disguised by complex and overrated securities based on those mortgages. As long as buyers kept buying and confidence in rising home values remained high, the boom could continue. When housing prices peaked in 2006 and loan defaults increased, the bubble burst. Mortgage-backed securities suddenly lost value, and investment banks whose balance sheets were loaded with them could no longer raise cash either by selling them or borrowing against them. The collapse of Lehman Brothers in September 2008 triggered a massive contraction of credit.

Nervous, precautionary hoarding among the global owners of capital broke out on a massive scale. Capitalism regressed to where it had been during the Great Depression of the 1930s—mired in a liquidity trap. Across the board, spending of all kinds, whether for investment or for consumption, dropped off. Employment collapsed.

The “ownership society” celebrated by President Bush, in which Americans would prosper together by owning rapidly appreciating assets, had failed. “Due to collapsed housing prices, between 2007 and 2010 median wealth declined 44 percent—back, adjusted for inflation, to where it had been in 1969.” (That large a drop may sound hard to believe, but a family with a $300,000 home and a $250,000 mortgage has only $50,000 in equity, which drops by 44% if the house loses just $22,000 in market value. Many families have little net worth outside of their home.)
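The leverage arithmetic in that parenthetical can be checked directly. Here is a minimal sketch using the review’s hypothetical numbers (the function name and figures are illustrative, not from the book):

```python
# Toy illustration: leverage amplifies percentage losses in home equity.
def equity_loss_pct(home_value, mortgage, price_drop):
    """Percentage of a homeowner's equity wiped out by a given price drop."""
    equity_before = home_value - mortgage
    equity_after = (home_value - price_drop) - mortgage
    return 100 * (equity_before - equity_after) / equity_before

# A $22,000 drop on a $300,000 home (about a 7% price decline) erases
# 44% of this family's $50,000 equity.
loss = equity_loss_pct(300_000, 250_000, 22_000)
print(round(loss))  # prints 44
```

The same mechanism runs the other way during a boom, which is why leveraged homeownership felt so lucrative while prices were rising.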

What was different in this financial panic was that the federal government quickly intervened to restore liquidity. The standard procedure of cutting interest rates to ease borrowing and expand the money supply was not enough. In addition, the Federal Reserve arranged and subsidized the buyout of investment bank Bear Stearns by JP Morgan. It made a large loan to AIG, the largest insurer of mortgage-backed securities. The Troubled Asset Relief Program (TARP) authorized the Treasury to buy “toxic assets” that companies could not otherwise sell. (Actually, Treasury injected the cash mainly by purchasing non-voting stock in the companies.) In 2009, the new Obama administration got Congress to pass the American Recovery and Reinvestment Act, which stimulated the economy with tax cuts, aid to states, infrastructure projects and government research programs. In 2010, the Federal Reserve adopted its policy of “quantitative easing,” buying up long-term Treasury and mortgage bonds in order to bring down long-term interest rates. (The higher the demand of lenders for bonds, the easier it is for borrowers to borrow at low rates.)
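The inverse relationship between bond demand and interest rates in that last parenthetical can be illustrated with a toy zero-coupon bond (my example, not from the book; the prices are made up):

```python
# Sketch of the mechanism behind quantitative easing: when demand bids a
# bond's price up, the implied yield to the buyer falls.
def annual_yield(face_value, price, years):
    """Annualized yield of a zero-coupon bond bought at `price`
    and repaying `face_value` after `years` years."""
    return (face_value / price) ** (1 / years) - 1

# A 10-year bond repaying $1,000:
low_demand = annual_yield(1000, 744, 10)    # cheap bond, roughly 3% yield
high_demand = annual_yield(1000, 905, 10)   # bid-up bond, roughly 1% yield
assert low_demand > high_demand
```

When the Fed became a huge new buyer of long-term bonds, it pushed prices up and long-term rates down, making it cheaper for everyone else to borrow at comparable maturities.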

Although these measures alleviated the immediate crisis, economic recovery was slow. Little was done to prevent ten million homeowners from losing their homes to foreclosure. Nevertheless, a new conservative movement, the Tea Party, formed around the complaint that the Obama administration was doing too much to help “freeloaders” and not enough for “hardworking Americans.” Rather than rallying around Obama as earlier generations of Americans had rallied around Roosevelt, a substantial segment of the electorate still wanted less government, not more.

The limits of reform

On the other hand, Barack Obama was no Franklin Roosevelt either. He filled his administration with leaders associated with the moderate Clinton administration—economic thinkers who had trouble thinking beyond the restoration of financial stability. The Wall Street Reform and Consumer Protection Act (“Dodd-Frank”) tightened regulation on the big banks and securities ratings agencies. But unlike Roosevelt, Obama didn’t have the benefit of many years of more sweeping proposals—in FDR’s case from the Populist and Progressive movements—and a public that was willing to try them. Even Obama’s rather moderate proposal for subsidized health insurance stirred up ferocious opposition.

Levy regards the Obama years as another lost opportunity. Because government could borrow money at near-zero interest rates, it was a great time to consider bigger public initiatives:

[M]any productive opportunities for spending cried out: to repair public infrastructure on dilapidated roads and bridges, to lay the foundations of a “green” energy grid, to invest in productivity-enhancing technology, or to support early childhood education to reverse the drastic effects of education gaps in the future labor market, to name some obvious candidates.

After Levy wrote those words, Joe Biden proposed such an ambitious agenda. But what Levy observed in the aftermath of the Great Recession was a “Great Repetition,” another expansion led more by asset speculation than by productive investment. This time it was an expansion of corporate debt that led the way.

Levy observes that each transition from one age of capitalism to another has required some form of state action. The victory of the Republican Party in 1860 and the Civil War ushered in the Age of Capital, as the nation transitioned from an agrarian economy based on land and slaves to a manufacturing economy based on steam-driven machinery. Then in 1932, the New Deal ushered in the Age of Control, as government tried to guide the economy with regulation, income supports, and fiscal or monetary policies to counter extremes of the business cycle. The “Reagan Revolution” of 1980 reacted against the reliance on government in the Age of Control and initiated the Age of Chaos.

A new age of capitalism—which Levy does not name and which does not yet exist—would require more than an income politics focused on distributing the benefits of capitalism. It would require a “democratic politics of capital,” giving citizens a voice in directing capitalist investment toward socially useful ends. Traditionally, government has directed investment primarily to wage war and maintain a military-industrial complex. Otherwise it has left major investment decisions to the private sphere and the owners of capital, who waste too much capital on short-term speculation. Now other urgent national needs may call for an expanded conception of public investment. Levy would probably like Biden’s concept of “human infrastructure.”

Readers who are not put off by the length of this book will find it historically informative and intellectually challenging. I highly recommend it.


Ages of American Capitalism (part 4)

July 5, 2021


Now I turn to the last of Jonathan Levy’s four ages of capitalism, the “Age of Chaos” that began in 1980. Here is his overview:

What most distinguishes the Age of Chaos is a shift in what has always been capitalism’s core dynamic: the logic of investment, as it works through production, exchange, and consumption. Since 1980, a preference for liquidity over long-term commitment has dominated capital investment as never before. Fast-moving money, rapid investment and disinvestment, across various asset classes, as well as in and out of various companies, has not only overturned the old methods of production—its logic has often threatened to overwhelm other economic patterns. In short, the liquidity of capital has made for a chaotic age dominated by the vagaries of appreciating assets.

For Levy’s assumptions about liquidity, I refer the reader to my first post on his book, especially the section “Economic growth and the liquidity problem.” The gist of it is that the capitalist economy needs liquidity in some form, but too much of certain kinds of liquidity is a problem. Transactional liquidity—money to buy things—is essential, but too much speculative liquidity—money held for short-term trading—generates booms and busts in asset prices without increasing long-term investment, improving productivity, or generating sustainable prosperity.

This post discusses the two chapters that concern the 1980s and 1990s respectively, “Magic of the Market” and “The New Economy.”

The 1980s and the Reagan administration

Levy presents a balanced and insightful description of the interplay between Reaganomics and the economic developments of the 1980s. Reaganomics did not live up to either the hopes of its proponents or the fears of its detractors. Conservatives hoped that if the government would just get out of the way, the “magic of the market”—one of Reagan’s favorite phrases—would restore the postwar prosperity interrupted by the economic shocks and stagflation of the 1970s. But Reagan was unable to revive the manufacturing-based economy with its high productivity, broad-based income growth and trade surpluses. On the other hand, liberals feared that Reaganomics would undo New Deal accomplishments in regulation and income security, taking the country back to the bad old days of predatory robber barons and exploited workers. But many key elements of New Deal liberalism were here to stay, such as Social Security, securities regulation and progressive taxation. And Reagan himself wholeheartedly embraced one aspect of Big Government—higher spending on the military-industrial complex.

Arthur Laffer’s controversial “supply-side economics” had proposed that tax cuts, especially for the wealthy, would stimulate so much private investment and economic growth that they would pay for themselves by expanding the tax base. When that turned out not to be true, Reagan was forced to curb his tax cutting in order to deal with growing budget deficits. He did not succeed in ending the deficit spending that had become chronic in the “Age of Control.”

There was no going back to an earlier time, because a new kind of economy was emerging. Reaganomics didn’t create it, although it did influence it by favoring capital and weakening labor and government. But something else shaped it more decisively—the spike in interest rates engineered by the Federal Reserve under Jimmy Carter’s appointee, Paul Volcker, in order to tame inflation. It worked, but it also triggered the 1981-1982 recession and altered global investment patterns. As inflation fell and the economy recovered from recession, confidence in the value of the dollar returned. It was no longer based on the dollar’s fixed value in gold, however, but on the high interest lenders could earn on dollar-denominated assets. International capital flowed toward the United States, creating a strange new kind of global dominance—a “far more novel U.S. global hegemony.” Postwar America had been a net exporter of goods and capital, like powerful countries before it. Now capital “ran uphill,” and the world’s richest country became a debtor nation. The strong dollar helped Americans buy foreign goods, while making it harder for foreign consumers to buy our goods. But foreigners were happy to take the dollars they accumulated by selling to Americans and lend them back to us by buying American bonds earning high rates of interest. Foreigners helped finance both the trade deficit and the government deficit. As capital flowed in this direction, the U.S. economy had a surplus of credit, while poorer countries had a shortage of credit, reflected in the Latin American debt crisis of 1982.

Earlier, Levy discussed the highly speculative capitalism of the 1920s, which set the stage for the crash of 1929 and the Great Depression of the 1930s. Underlying causes included the manufacturing boom already in progress, confidence in the currency following the postwar return to the gold standard, and the quest for high profit margins to compensate for the high cost of borrowing. Levy sees a similar pattern in the 1980s:

Confidence high, to hurdle over the high interest rate, investors resorted to debt to leverage up short-term speculative profits in stocks, bonds, and commercial real estate especially. Speculative investment was back, as the dynamic factor in economic life, joining hands with an insatiable American consumerism.

In one respect though, the 1980s were very different from the 1920s. While the 1920s expansion was associated with the boom in manufacturing investment, the 1980s expansion occurred despite the disinvestment in manufacturing and the lack of any new surge in fixed investment overall. “As profit making shifted toward short-term finance, tellingly the macroexpansion of the 1980s remains the only one on record in which there was a declining share of fixed investment in GDP.” What did occur was a wave of speculative trading and investment in existing companies. There were takeovers by “corporate raiders” and “leveraged buyouts” financed by high-interest “junk bonds.” There were acquisitions of savings and loans by shady owners who took them into bankruptcy at public expense. And if the Ford Motor Company was the iconic firm of the early twentieth century, Donald Trump’s real estate company was a poster child for 1980s speculation:

Trump, leveraging his real estate assets and no less his celebrity, built his Manhattan real estate and Atlantic City casino empire on debt, funded by a “sprawling network of seventy-two banks”…. Trump was emblematic of a larger trend. He was a business concern with very little underlying income generation, relative to his assets, which he purchased through bank debt. When his assets increased in price, he used them as collateral for more loans, which became his income, given that his actual businesses usually lost money in the end. “Truthful hyperbole” was what Trump branded the business model in his ghostwritten autobiography The Art of the Deal (1987).

Reagan’s “capital-friendly policies,” especially tax cuts for the wealthy, contributed to the expansion, but maybe not as he intended. Much of the newly available capital went into speculation rather than productive investment. Some forms of deregulation, such as more permissive rules governing savings and loans and relaxed enforcement of antitrust laws, gave speculators more freedom to operate.

Sources of income also shifted in the 1980s, as income growth became less dependent on productivity gains and more dependent on asset appreciation. The people who gained the most were those who owned tangible assets like homes or financial assets like stocks and bonds. “The financial appreciation of the asset—through its sale (capital gains) or its capacity to be leveraged in credit markets—generated the pecuniary income.” After-tax income for the wealthy increased even more, because of tax cuts. Meanwhile, average hourly compensation for workers remained flat, but their households could increase their spending by taking on more debt.

The expansion that began after the 1981-1982 recession lasted until 1990. When the Federal Reserve raised interest rates in the late 1980s, fearing inflation, firms and households became more reluctant to borrow. Consumer spending declined, real estate values dropped, and the market for junk bonds collapsed. Donald Trump went bankrupt, although he would not stay down for long.

The 1990s and the Clinton administration

The collapse of the Soviet Union in 1991 was a “remarkably optimistic moment” for capitalism. There was even talk of the “end of history,” since capitalist democracy seemed to have emerged victorious over its greatest economic and political rival. Americans expected the United States to remain the strongest economy in the world, dominating what was becoming an increasingly global economy. Prosperity would depend more than ever on the free movement of capital, people and goods across national boundaries. All three, however, tended to flow toward the U.S., as the country continued becoming more a consumer of manufactured goods than a producer, and more a borrower of capital than a lender. The largest American company, Walmart, was not a producer, but a retailer, mostly selling goods made elsewhere.

The good news about the 1990s was that the total rate of fixed investment increased in spite of the continued disinvestment in old manufacturing industries. This was mainly due to new investments in information technology. Productivity growth accelerated and wages improved a bit, while inflation remained under control. The silicon microprocessor was a general-purpose technology adaptable to a wide variety of uses. Silicon Valley emerged as the leading center of the electronic revolution.

A silicon chip is a physical thing, much smaller but just as tangible as an industrial machine. Recall, however, Levy’s broader definition of capital—the “process through which a legal asset is invested with pecuniary value, in light of its capacity to yield a future pecuniary profit.” In the information age, intangible assets like data, social networks and technical skills meet that definition. Intellectual, social and human capital became more vital wealth-producing assets. Society became fascinated by the Internet as the “information highway,” the infrastructure for the free flow of ideas. In that respect, although the forms of capital were different, the emerging economy had something in common with the early manufacturing economy.

Back then, at the dawning auto-industrial society, financial activity sponsored new long-term fixed investments, just as the 1990s saw new long-term investments at the dawning of an Internet-based economy. (By contrast, the 1980s had been a moment of speculative disinvestment, with little creation.)

As promising as the “New Economy” was, it was emerging within a society with unresolved social issues, in particular the unfinished movements for racial, gender and economic equality. Some forms of human capital were highly valued, such as technical skills, while others were overlooked or devalued. Places in the vanguard of economic development, like Silicon Valley, had their divisions between hi-tech jobs for white men and lower-wage service jobs for women and minorities. Many other places didn’t have enough jobs of any kind, although they had increasing rates of incarceration.

President Clinton came into office in 1993 with a liberal agenda that included health care reform and pro-worker labor laws. But in the 1994 midterm elections, Republicans took control of Congress for the first time since 1954, campaigning on a very conservative manifesto, the “Contract with America.” Clinton couldn’t pass liberal legislation, but what he could do was embrace the wealth-creating potential of the “New Economy”:

During the 1990s, by contrast with the more improvisational Reagan administration, the “New Democrats” of President Bill Clinton articulated a coherent political-economic settlement for the new age. Clinton went all in on a finance- and technology-driven, center-left vision of “globalization.”

Clinton accepted the conservative proposition, “The era of Big Government is over,” and counted on the free flow of capital and trade to create prosperity for all. He approved the North American Free Trade Agreement (NAFTA) in 1994, deregulation of telecommunications in 1996, and deregulation of banking in 1999. (The Glass-Steagall Act of 1933, which kept the same firm from engaging in both banking and investing, was repealed that year, mainly for the benefit of Citigroup.) Clinton also promoted the welfare reform bill that made public assistance more temporary and attached work requirements to aid for single parents.

President Clinton’s fiscal policy was also conservative, accepting the need for fiscal austerity and balanced budgets. The theory was that reduced government borrowing would “free up capital for long-term fixed investment.” In 1993, before Republicans took control of Congress, Democrats passed a deficit reduction measure that included both spending cuts and tax increases. As a result, by the late 1990s, the federal budget was in surplus for the first time since 1969. (Every Republican opposed Clinton’s deficit reduction plan, claiming that the budget could be balanced with spending cuts alone. But no Republican administration since Eisenhower has ever achieved that.)

Clouds on the horizon

The years from 1992 to 2000 marked an unusually long period of sustained economic expansion. It was based on high confidence, often justified by long-term investments in emerging technologies and resulting productivity gains. However, assets like company stocks appreciated even faster than productivity and profits. In 1996, Fed chair Alan Greenspan warned that the stock market was beginning to show signs of “irrational exuberance.” Capital continued to flow into the U.S., and the quest of capital for high returns fed another speculative boom. As the federal government reduced its borrowing, more capital flowed into corporate stocks and bonds and less into Treasury bonds.

Although the Internet had great business potential, how it would actually fulfill that potential was not yet clear. In 1995, Netscape’s initial public offering valued the company at roughly $3 billion, although it had no operating profits. So began the so-called “dot-com” boom, in which investors rushed to finance enterprises whose future returns were questionable, to say the least. “Capital had never before moved so quickly into a new asset class.” When E*Trade went online in 1996, stock-trading joined pornography and social networking as popular Internet pastimes. The Nasdaq exchange, which listed many of the Internet companies, reached a price-earnings ratio of 175, compared to a typical ratio of 10 to 20. Then it lost half of its value in 2000 when the speculative bubble burst. But that was nothing compared to the boom-and-bust cycle that would appear early in the new millennium.

Continued


Ages of American Capitalism (part 3)

July 1, 2021


Here I discuss two chapters of Jonathan Levy’s book that describe the end of what he calls the “Age of Control” and the transition to the “Age of Chaos” that began with the election of Ronald Reagan in 1980. Chapter 17 is called “Ordeal of a Golden Age,” and Chapter 18 is “Crisis of Industrial Capital.”

The culmination of postwar liberalism

Readers who are too young to remember the 1950s may not appreciate just how much more structured life was than it is now: A one-size-fits-all family structure predominated, with clearly defined gender roles linking a male breadwinner to an economically dependent female homemaker. Corporations were run by rigid bureaucracies staffed by conforming “organization men.” Management and labor had an understanding in which workers could bargain for better wages and working conditions, but management made the major business decisions. Strict boundaries separated the for-profit, nonprofit and public sectors of the economy. Racial segregation was deeply entrenched.

Underlying these structures were the fixed-capital investments of the mass-production manufacturing economy, which generated strong profits for capital and decent incomes for mostly white, male breadwinners. The system was also supported by the liberal state that regulated business, took action to counter recessions, and provided some measure of income security. All this structure came as a relief after the upheavals of Depression and war from 1929 to 1945. In contrast, the period from 1945 to 1970 was a time of relative peace and prosperity, although it was also a time of Cold-War jitters accentuated by the Cuban missile crisis and the Vietnam War.

However, the system—structured as it was—did not work equally well for everyone. It worked better for male breadwinners than working women, better for suburbanites than inner-city dwellers, better for residents of the industrialized Midwest than for residents of Appalachia. Industrial America played an especially cruel trick on Afro-descendant Americans, drawing them into the industrial inner cities only to become trapped there as capital and good jobs flowed to the segregated postwar suburbs.

Things came to a head in the 1960s, when a sustained economic boom raised expectations and highlighted the dissatisfactions of the relatively disadvantaged. Protest movements by minorities and women challenged the existing social structure. When the liberal Kennedy and Johnson administrations addressed the issues with civil rights legislation and antipoverty programs, they shattered the postwar consensus. Americans had broadly supported efforts to maintain full employment and income security in the aggregate. Maintaining “aggregate demand” for mass-produced consumer goods was at the heart of Keynesian economics. But actions to assist economically distressed subpopulations were a bridge too far for many Americans. While liberals saw problems of inequality as defects of social structure, conservatives tended to blame them on personal deviations from conventional structure, such as the failure of poor men to marry their sexual partners and assume the breadwinner role. The rising urban crime and social unrest of the 1960s led to calls for “law and order” as well as calls for liberal social programs.

Despite its many economic successes, the postwar social system was now failing politically. And it was starting to falter economically as well.

The crisis of industrial capitalism

Postwar economic growth was built mainly on the strength of the manufacturing sector. By the late 1960s, however, manufacturing profits were falling, at least partly due to greater international competition. In 1971, for the first time in the twentieth century, the U.S. imported more goods than it exported. That meant that foreigners were accumulating more dollars than they wanted to spend on American goods. The Federal Reserve’s reduction of interest rates to fight the recession of 1969-1970 also made it less rewarding to hold dollars. That posed a problem for the international monetary system created at Bretton Woods, since the fixed price of the dollar in relation to gold kept its value from fluctuating in response to its reduced demand. Facing a run on gold by foreigners who preferred gold to dollars, the U.S. abandoned the gold standard. A new era of floating exchange rates began.

Another problem for the U.S. economy was the rising cost of raw materials on world markets, especially the higher price of oil. This was exacerbated by the OPEC oil embargo in retaliation for American support for Israel in the 1973 war.

In the early 1970s, the rate of productivity growth slowed, suggesting that technological innovations in production were not occurring as rapidly as before. In addition, the productivity gains that were occurring were no longer translating into wage gains for workers.

The recession of 1973-1975 was, at the time, the worst since the Depression. In 1975, both the unemployment rate and the rate of inflation exceeded 8 percent, an unusual condition that came to be known as “stagflation.” Levy gives a number of reasons for the high inflation. Economists continue to debate their relative importance:

  • the Federal Reserve’s low interest rates, which encouraged borrowing and allowed too much money to chase too few goods
  • expansionary fiscal policy (including government spending on the Vietnam War and social programs), which raised aggregate demand relative to supply
  • rising commodity prices caused by the pressure of demand on supply
  • the slow rate of productivity growth, especially in the expanding service sector
  • expectations of continued economic instability, which discouraged long-range planning and investment and undermined productivity growth

Stagflation created a difficult choice between measures to fight unemployment and measures to fight inflation. The conventional Keynesian way of fighting unemployment—deficit spending by government—could worsen inflation by overstimulating aggregate demand.

As great cities of the industrial era struggled, a new kind of economy was emerging in the Sunbelt. Levy focuses on Houston, whose economy was based primarily on oil, petrochemicals, and real estate. It was a booming, sprawling, rapidly suburbanizing city with no urban plan and no zoning ordinances. Its manufacturing labor force was relatively small and nonunionized, but its expanding service economy created many jobs for women. This set up a “postindustrial positive feedback loop”: employed women needed to buy services they might otherwise have provided themselves, and those services in turn provided employment for women. The 1950s marriage with the economically secure breadwinner and the non-employed homemaker was no longer the norm. Houston was a heavily “privatized” city, with low taxes and limited public services. Levy also calls it a “liquid city”—with the pun definitely intended:

Houston was a liquid city because it sat on wetlands and always flooded, and also because of its great economic premise, oil. But its pattern of development uncannily embodied some of the themes of speculative liquidity preference: an energetic restlessness, the convertibility of once seemingly unlike things, markets for everything, and a busy present with no heed for the long term.

In that sense, Houston was in the vanguard of the “Age of Chaos” to come. The more conservative economic ideas that would come to dominate that era were also born in the 1960s and 70s. The monetarist school under the leadership of Milton Friedman rejected Keynesian economics and sought to limit the government’s role in the economy mainly to managing the money supply. The Law and Economics movement questioned whether anything could be accomplished with government regulation that couldn’t be accomplished through market competition. The rational expectations school explained how government policies could be undone by people’s conscious reactions to those policies. For example, interest-rate reductions by the Fed could lead lenders to expect more inflation, which induced them to demand higher interest rates on their loans. The mathematical analysis often got complicated, but the conclusion was simple: “In the abstract, markets were efficient and just—which just happened to agree with what the CEOs of the Business Roundtable already knew in their guts rather than from any mathematical model.” Economists were undermining the intellectual rationale that had justified government influence over markets during the “Age of Control.”

The administration of Jimmy Carter was already moving away from liberal economic policies before he lost the 1980 election to Ronald Reagan. Congress had passed major new pieces of regulatory legislation—the Clean Air Act, the Consumer Product Safety Act, and the Occupational Safety and Health Act—as recently as 1970. But by 1978, “the drift now was not to write better market regulations: even among liberals, it was to throw in the towel on market regulation altogether.” Airlines, trucking, and banking were among the industries that saw regulation relaxed. By the end of his administration, Carter was placing a higher priority on fighting inflation and balancing the budget than on spending to create jobs. “For the first time since the early 1930s, austerity was back in economic policy making.” Perhaps Carter’s most economically consequential act was to appoint Paul Volcker as chair of the Federal Reserve. Volcker’s tight control over the money supply brought the rate of inflation down dramatically, but allowed interest rates to rise to astronomical levels, crushing borrowing and bringing on the 1981-1982 recession.

In retrospect, the Age of Control had succeeded, for a time, in creating conditions under which capital would remain productively invested in a mass-production economy. Government had taken many measures to inspire confidence—confidence that the currency would retain its value, that stock investors would have accurate information about a company they were buying, that workers could bargain for a share of the value added by their rising productivity, that retired employees could have a decent income, that depositors could put their money safely into a bank, and that government would borrow and spend to combat recessions. When faced with new economic conditions, like increased global competition for manufacturing jobs, rising oil prices, and demands for inclusion by historically disadvantaged groups, liberal government was overwhelmed. The Age of Control ended, and government increasingly left it to the capitalists and their free markets to make a new economy.

Continued


Ages of American Capitalism (part 2)

June 29, 2021


Jonathan Levy divides American economic history into four ages:

  1. The Age of Commerce (1660-1860)
  2. The Age of Capital (1860-1932)
  3. The Age of Control (1932-1980)
  4. The Age of Chaos (1980- )

Here I discuss the Age of Control, which is the era of the New Deal, World War II, and the postwar prosperity.

The Great Depression

“Rarely if ever before had an industrial economy been so poised on the brink of a great leap forward in wealth-generating enterprise. But it had stalled in mid-leap.” The run-up to the economic crash of 1929 is a prime example of one of Levy’s general observations, that major investment booms involve both long-term fixed investment that drives real economic growth and speculative bubbles that end badly. In this case, much of the new investment was in “Fordist” mass production of consumer goods like cars and home appliances. The electric assembly line represented the “largest surge in labor productivity ever recorded.” Unfortunately, high productivity does not translate directly into sustained economic expansion and lasting prosperity. The story of economic history must include the cyclical fluctuations in confidence and credit as well as the linear trend in technological innovation and rising productivity.

At the beginning of the 1920s, investor confidence was high. But one reason for the high confidence also contained the potential for a boom-and-bust cycle. During World War I, European governments had gone off the gold standard and expanded the money supply in order to finance the war effort. Now they returned to the gold standard and restricted the money supply to fight inflation. The U.S. Federal Reserve went along by raising interest rates. These policies contributed to a temporary situation of price stability, confidence in the currency, and a willingness of investors to lend, but at relatively high rates. The high rates set a high bar for business investments, which only made sense as long as the returns exceeded the costs of borrowing.

All was well as long as confidence in future profits remained high but also realistic. But once the boom got going, speculators could easily borrow too much in order to pay too much for assets whose real prospects couldn’t justify their cost. When these unrealistic expectations went unfulfilled and profits didn’t materialize, confidence was shaken, credit dried up, and investment collapsed. In 1931, the Federal Reserve made things worse by further tightening the money supply. By the time Franklin Roosevelt was elected in 1932, the economy had entered a “liquidity trap.” Precautionary liquidity had taken over, and businesses were afraid to invest in production even when they could borrow at lower rates. The Fed brought interest rates down, but it was too late. The economy hit bottom in 1933, with economic output only half of what it had been in 1929 and unemployment over 20%. The most productive factories the world had ever seen couldn’t sustain prosperity if they were idle.

New Deal capitalism

One of the first things the new administration had to do was counter the collapse of confidence that kept the economy in the liquidity trap. Much economic activity had simply come to a halt, as lenders were afraid to lend money they might not get back, businesses were afraid to produce goods that wouldn’t sell, and consumers were afraid to spend what little money they had as incomes fell. Roosevelt’s famous declaration that “the only thing we have to fear is fear itself” was more than just rhetoric. The crisis of confidence went beyond the economy to challenge government as well. As some European countries turned to authoritarian leaders to address the crisis, many Americans questioned whether liberal democracy was up to the task.

Roosevelt believed that it was, and he welcomed the characterization of his policies as “liberal.” In its early days around the time of the Civil War, the Republican Party had been the liberal party, but now Democrats earned that label, leaving many Republicans to play the part of conservative doubters of the New Deal.

Democratic efforts to get the economic crisis under control initiated the “Age of Control.” One of the first things Roosevelt did was adopt a policy he called “definitely controlled inflation” by taking the country off the gold standard. Rather than be inhibited by the supply of gold, the money supply could expand along with economic activity. In fact, monetary expansion could encourage economic activity—at least in the short run—by putting more dollars in the hands of spenders, including the government itself.

Levy distinguishes two different kinds of economic liberalism, regulatory and developmental. While the New Deal was strong on economic regulation, it was weaker on directing the nation’s investment, a weakness that Levy sees as a problem to this day.

The two main objects of New Deal regulation were business practices and income security. The Securities Exchange Act created the SEC to regulate publicly traded corporations and curb the worst abuses associated with financial speculation. It banned insider trading, required regular financial reports, and contained many provisions to prevent fraud. The Social Security Act created not only social insurance for retirees, but unemployment compensation and aid to poor women and children. The National Labor Relations Act guaranteed the right of workers to organize and collectively bargain. The Fair Labor Standards Act set maximum hours and minimum wages. New agricultural programs supported commodity prices to provide more stable farm incomes. If Americans were more secure in their incomes, they would feel more comfortable buying the goods that the emerging mass-production economy was capable of producing.

Developmental liberalism tried to stimulate investment in two ways. It lent capital to private investors, especially in banking, real estate and agriculture. It also made massive public investments in infrastructure through projects like the Tennessee Valley Authority.

These programs were liberal but not radical. They did not overturn the fundamental assumptions or power structures of capitalism; nor did they bring the Depression to an end, although the economy did improve from 1933 to 1936. Levy’s assessment:

New Deal capitalism was a variety of capitalism because the discretionary power of when and where to invest remained in the hands of the owners of capital. During the 1930s, whether the investment was private (incentivized or not) or public, its combined magnitude was simply insufficient to draw out sufficient economic activity to end the Great Depression. A general lack of initiative and spending remained.

In 1937, the government contributed to a “recession within the Depression” by prematurely trying to balance the budget and tighten monetary policy. Levy also suggests that a new kind of liquidity preference played a role, one that he calls “political liquidity.” Industrial capitalists who opposed the New Deal “threatened not to invest unless their political demands were met, especially for lower taxation on their incomes.”

In 1938, Roosevelt accepted deficit spending as a way to stimulate the economy. John Maynard Keynes had presented the rationale for this in The General Theory of Employment, Interest, and Money (1936). But he also warned, in 1940, “It is, it seems, politically impossible for a capitalistic democracy to organize expenditure on the scale necessary to make the grand experiment which would prove my case—except in war conditions.”

World War II

As Keynes expected, the massive government spending required by World War II was what brought the economy back to full production. It also provided a powerful psychological stimulus, generating popular support for an all-out political and economic effort to win the war. Economic preferences shifted dramatically toward fixed investment and away from any kind of liquidity—precautionary, speculative, or political. Why be shy about investing, when the government provided a willing buyer for all the armaments a factory could turn out? By 1942, the U.S. was winning the “war of the factories,” surpassing both Germany and Japan in the production of munitions.

Big Government liberalism thrived in both its regulatory and developmental aspects. On the regulatory side, government raised taxes, rationed consumer goods like gasoline, and implemented wage and price controls to curb inflation. On the development side, military planners told industry what to invest in.

World War II also encouraged a spirit of shared sacrifice and shared rewards. In addition to winning the war itself, Americans could expect a more equal distribution of economic benefits, through measures such as a more progressive income tax, support for organized labor, and the GI Bill of Rights.

The American military-industrial machine not only won the war, but unlike the economies of other combatants, remained undamaged by the war. Now that we had a fully functional mass-production system up and running, we just had to convert it to peacetime uses.

Postwar prosperity

Levy uses the term “postwar hinge” to refer to the unique connection between domestic politics and international politics at the end of World War II. “At the war’s close, Americans owned three-quarters of all invested capital in the world, and the U.S. economy accounted for nearly 35 percent of world GDP….” Big Government combined with capitalist industry to make the U.S. the most powerful country in the world, the biggest exporter of products, capital, democratic ideas and consumer culture. The U.S. was the newest hegemonic power, although its hegemony was challenged by its next-strongest rival, the Soviet Union.

As a result of the Bretton Woods conference of 1944, the American dollar became the anchor of the global financial system. The dollar was agreed to have a fixed value in relation to gold. Other currencies would have a value in dollars, but could be revalued under certain circumstances. This arrangement institutionalized the dollar as the world’s strongest currency and helped secure the value of investments denominated in dollars.

Although wartime military spending declined, government contributed in a number of ways to the continuation of the private investment boom—not only maintaining the strength of the dollar, but continuing support for income security, maintaining a military establishment during the Cold War, and engaging in Keynesian deficit spending to counter recessions.

At the same time, postwar politics placed definite limitations on government’s role in the economy, especially with respect to developmental liberalism. The owners of capital reassumed control over investment decisions, choosing, for example, to direct investments toward single-family homes and shopping malls in all-white suburbs, while inner cities were allowed to decay. Government cooperated by providing highway construction and racially discriminatory housing loans. Once the Cold War began, conservatives could exploit the fear of communism to defeat liberal proposals for greater government influence. Among the casualties were Harry Truman’s call for national health insurance in his “Fair Deal,” the Taft-Ellender-Wagner public housing bill, and a provision of the original Full Employment Bill, calling for public investment to supplement private investment in order to maintain full employment, that was stripped from what became the Employment Act of 1946.

The federal government might tax and redistribute incomes, and it might regulate specific industries, but it remained incapable of acting autonomously and creatively in furtherance of a recognized public interest beyond “national security.” Cold War military spending was the most legitimate form of government expenditure to sustain economic growth…. That the government enjoyed an autonomous arena of action only when targeting benefits toward white male breadwinners, or invoking national security, warped state action at home and abroad…. Surely government planning for long-term economic development on behalf of the public interest was off the table.

The political economy of the postwar era was strong enough to produce a postwar economic boom and raise incomes for millions of white working families. The era became known as a “golden age” of capitalism. Yet it was not sustainable enough to last more than a few decades before things started to go seriously wrong again.

Continued