Power and Progress (part 3)

August 28, 2025


Acemoglu and Johnson argue that the link between technological progress and general prosperity is not automatic. It depends on other variables, especially how well new technologies sustain the demand for labor and how much workers share the benefits of rising productivity. Having supported their argument with historical examples, they now apply it to the more recent economy, especially the period since 1980.

The “graveyard of shared prosperity”

At the height of the postwar boom, believers in the “productivity bandwagon” expected further technological breakthroughs to raise productivity and wages, continuing and even surpassing the postwar prosperity. Digital technologies, like the mainframe computers already in use in the 1960s, looked very promising. As the digital revolution took off, the rate of innovation soared.

“Digital technologies, even more so than electricity…are general purpose, enabling a wide range of applications.” They have the potential not only to replace human labor with smart machines, but also to complement and enhance human labor. During my university career, I used computers to enhance my teaching, research and administrative work in numerous ways, but they never replaced me in any of those roles. Many manufacturing workers weren’t so lucky, as new technologies were used more to replace labor than to augment it. As the authors put it, “Digital technologies became the graveyard of shared prosperity.” I would emphasize the word “shared” in that claim, since no one disputes that digital technologies have created great riches for some and modest gains for others.

The authors attribute much of the decline of shared prosperity to a more conservative vision of progress that developed in the 1960s and 70s and became dominant after the “Reagan revolution” of 1980. In this vision, the path to prosperity started at the top, with wealthy investors, high-profit corporations, and well-rewarded shareholders. If left alone by government, they would create more wealth and income for all. But to maximize investment, the rich needed low taxes; and to maximize profits, corporations needed low taxes, minimal regulation, and low labor costs. “Many American managers came to see labor as a cost, not as a resource…This meant reducing the amount of labor used in production through automation.”

Americans may place most of the blame for lost manufacturing jobs on foreign competitors like China, but automation is responsible for more job losses and downward mobility. While foreign workers and immigrants did take many of the low-wage manufacturing jobs, automation destroyed more of the jobs that had been paying good wages.

The workers who remained in manufacturing were more productive, but the demand for additional workers fell. In addition, total factor productivity grew at a much slower rate after 1980 than in the previous four decades. Median wages grew even more slowly, less than 0.45% per year.

Inequality increased in a number of ways:

[T]he share of the richest 1 percent of US households in national income rose from around 10 percent in 1980 to 19 percent in 2019…Throughout most of the twentieth century, about 67-70 percent of national income went to workers, and the rest went to capital (in the form of payments for machinery and profits). From the 1980s onward, things started getting much better for capital and much worse for workers. By 2019, labor’s share of national income had dropped to under 60 percent…

What income did go to labor was divided more unevenly across educational levels, with college-educated workers gaining some ground, while less educated workers saw actual declines in real earnings. Rather than train less educated workers, employers more often replaced them with fewer but more educated workers. Along with the destruction of manufacturing jobs came the decline of unions and the reduced power of workers to fight for good wages and job training.

The value of the five biggest corporations—Google, Facebook, Apple, Amazon and Microsoft—grew to about 20 percent of GDP, twice as much as the value of the five biggest corporations at the height of the Gilded Age in 1900.

Artificial intelligence

Acemoglu and Johnson see artificial intelligence making matters worse, since so many employers are using it to replace human labor rather than augment it. Rather than ask how machines can be useful to workers, proponents of new technologies ask how machines can equal or surpass human workers. Taken to its extreme, this ambition becomes the AI enthusiasts’ goal: a general machine intelligence that can make any decision as well as a human. From a business standpoint, it is the ultimate way of cutting labor costs, by replacing educated as well as less-educated labor.

So far, the results have been a lot of what the authors call “so-so automation,” with only modest gains in productivity. The reason, they think: “Humans are good at most of what they do, and AI-based automation is not likely to have impressive results when it simply replaces humans in tasks for which we accumulated relevant skills over centuries.”

What makes us think that the way to prosperity is to devalue the human capacities of the workers who are trying to prosper? That may generate short-term profits for the owners of the machines, but not shared and sustained prosperity. The authors warn that “infatuation with machine intelligence encourages mass-scale data collection, the disempowerment of workers and citizens, and a scramble to automate work, even when this is no more than so-so automation—meaning that it has only small productivity benefits.”

The threat to democracy

The part of the book I found most disturbing was the chapter, “Democracy Breaks.” It describes what some have called a new “digital dictatorship,” most evident in China. With the help of some of the world’s largest AI companies, the Chinese government has turned the data-crunching capacities of new technologies into tools of mass surveillance and control. The aim is to monitor, rate, and sanction the behavior of any citizen. Forty years after Orwell’s imaginary 1984, Big Brother is watching more efficiently than ever. Other authoritarian governments—Russia, Iran, Saudi Arabia, Hungary, and even India—are developing similar capabilities.

In the United States, “The NSA cooperated with Google, Microsoft, Facebook, Yahoo!, various other internet service providers, and telephone companies such as AT&T and Verizon to scoop up huge amounts of data about American citizens’ internet searches, online communications and phone calls.”

Digital media have also played a role in polarizing Americans and debasing civil discourse. Media companies whose business model was based on selling ads, such as Facebook, wanted to keep their users as engaged as possible. “Any messages that garnered strong emotions, including of course hate speech and provocative misinformation, were favored by the platform’s algorithms because they triggered intense engagement from thousands, sometimes hundreds of thousands, of users.”

The hope that digital media would—like the printing press of an earlier era—empower citizens and strengthen democracy has not been fulfilled. The underlying problem, according to Acemoglu and Johnson, is that technology companies prefer a more “technocratic approach, which maintains that many important decisions are too complex for regular people.” In the economy, that encourages the devaluation and replacement of human laborers and a flow of economic rewards to the rich. In government, it enables the surveillance and control of citizens and a flow of political power to authoritarian leaders.

Redirecting technology

In their final chapter, Acemoglu and Johnson describe a three-pronged formula for redirecting technology: “altering the narrative and changing norms…cultivating countervailing powers…[and] policy solutions.”

The new narrative would reject “trickle-down economics” and shift the emphasis back to shared prosperity. It would encourage decision-makers to address the wellbeing of ordinary people, instead of assuming that what’s good for corporate profits or large fortunes is good for everybody. Hopefully it would influence how business managers think and what they learn in business school.

Countervailing power against self-serving technocrats and corporations can come from many directions—government, civic organizations, and online communities. Now that blue-collar manufacturing workers are a smaller part of the labor force, organized labor should grow to embrace many occupations. Workers should organize on a broader level than the plant or the firm and play a major role in national politics.

Here are some of the policy changes they recommend:

  • Subsidize socially beneficial technologies, especially those that augment human labor rather than replace it
  • Support research on such technologies, especially in education and health care
  • Break up technology companies that have become too monopolistic
  • Reform tax policies that favor investments in equipment over hiring of workers
  • Increase tax incentives for worker training
  • Repeal the law that exempts internet platforms from accountability for the content posted on them
  • Tax platforms that rely on advertising, so as to favor those with alternative revenue streams such as subscriptions or nonprofit contributions
  • Raise the minimum wage, but do not provide a Universal Basic Income

The authors regard a Universal Basic Income as “defeatist,” since it “fully buys into the vision of the business and tech elite that they are the enlightened, talented people who should generously finance the rest.” What they support instead is a new vision committed to seeing the value and productive potential in all of us and investing accordingly.


Power and Progress (part 2)

August 24, 2025


In general, over the last millennium, technological advances have raised living standards. But this broad generalization obscures important historical variations. At least two conditions must be met if new technologies are to contribute to widespread prosperity. First, they must sustain labor demand by augmenting and not just replacing human labor. And second, high labor demand must generate high wages. According to Acemoglu and Johnson’s Power and Progress, directing new technologies toward these ends is a social choice, and the distribution of social power affects how that choice is made.

Even when couched in appeals to the common good, new technologies do not benefit everybody automatically. Often, it is those whose vision dominates the trajectory of innovation who benefit most.

An important implication of this argument is that the path to prosperity leads through democratization as well as through technological innovation. The middle part of the book supports their argument with many historical examples.

“Cultivating Misery”

The history of agriculture reveals that a positive relationship between technology and prosperity is mostly a modern urban phenomenon. For much of history, people who have worked the land have received little benefit from their own increased productivity. In medieval Europe, the problem was not a low demand for labor, but the power of landowners over workers. England after the Norman Conquest “was a dark age for English peasants because the Norman feudal system ensured that higher productivity would accrue to the nobility and the religious elite.” Farming methods gradually improved, but a coercive social system enabled the elites to claim the surplus product, while keeping the peasants at a subsistence level.

Beginning in the fourteenth century, this social order was disrupted by the high mortality rates of the Black Death. For a time, the demand for labor exceeded the supply, putting the surviving workers in a stronger bargaining position.

By the eighteenth century, agricultural laborers faced a new threat. The expansion of commercial agriculture led landowners to reorganize their holdings, throwing out peasants who worked the land for their own subsistence and replacing them with fewer workers producing commodities for the growing market. In the name of progress, “It was acceptable to strip the poor and uneducated from their customary rights and common lands because the new arrangements would allow the deployment of modern technology, hence improving efficiency and producing more food.”

Through much of history, therefore, agricultural systems have “cultivated misery.” When masses of farmworkers were in demand, they were usually dominated by more powerful landowners. When technology improved productivity, they were either worked harder so that others could profit, or else thrown off the land.

Acemoglu and Johnson worry that if the latest technologies replace too many workers and empower the few rather than the many, “our future begins to look disconcertingly like our agricultural past.”

Industrialization

The shrinking demand for farm labor would not have been an obstacle to prosperity if good jobs awaited the displaced peasants in the manufacturing sector. But in the early days of manufacturing, the factory system offered an alternative form of misery.

The Industrial Revolution was preceded by what the authors call a “middling sort of revolution.” By the mid-eighteenth century, a rising class of innovators, inventors and entrepreneurs was starting to reshape the economy. Innovations like the steam engine and the spinning frame appeared at this time. Just as important was a social transformation that weakened the power of the landed aristocrats and modestly expanded democracy.

As the rising entrepreneurs reorganized production and applied new technologies, productivity rose rapidly, especially in the textile industry. But the authors’ theory explains why this “progress” did not initially improve living conditions for the workers. The first reason was low labor demand. Because early industrialization emphasized the mechanization of existing tasks, notably spinning and weaving, the factory system created new jobs by destroying old ones.

The second reason was the power imbalance between entrepreneurs building capital and impoverished workers desperate for work. As the rising middle class expanded their wealth and political influence, their vision of progress increasingly dominated public discussion. The “industrial entrepreneurs’ choices of technology, organization, growth strategy, and wage policies enriched themselves while denying their workers the benefits of productivity increases—until the workers themselves had enough political and social power to change things.”

The result was that early factory workers—despite their high productivity—were made to work very long hours under dismal working conditions for very low wages. They were also crowded into urban factory districts plagued by coal-dust pollution, poor sanitation, unclean water, and related diseases.

Obsessed with how industrialization created new wealth—for them—the members of this rising class had little sympathy for those who earned too little to share in the benefits of their own productivity.

Conditions improved in the second half of the nineteenth century. New technologies like railroads and the telegraph created more jobs than they destroyed. Workers began organizing to exert countervailing power against employers. Social critics and reformers scandalized by social conditions began to challenge the dominant vision of progress. Governments took a few steps to improve public health and other urban conditions. With labor demand and labor power rising along with productivity, real wages could increase.

In the United States, conditions were generally better than in Europe because land was more abundant but labor was more scarce. That combination put workers, especially skilled workers, in a position to command a higher wage.

Rising real wages in Western Europe and America did not stop the rich from getting richer even faster, so that economic inequality increased during the Gilded Age. It also increased globally. At a time when the fruits of technological progress were starting to benefit more Europeans and Americans, colonialism impeded that process in many other places. The large flow of manufactured textiles from Britain to colonial India destroyed indigenous textile jobs, retarded industrialization, and confined a greater proportion of Indian labor to rural occupations.

A formula for prosperity

The time and place best characterized by a “productivity bandwagon” was the mid-twentieth century in Western Europe and the United States, especially the three decades after World War II. It had all the elements of an economically successful application of technology: sustained growth in productivity, high labor demand in expanding occupations, and institutional structures supporting a more egalitarian distribution of power.

In the twentieth century, the proportion of the workforce needed in agriculture dropped sharply, but that was offset by a rising demand for labor in manufacturing and services. This was due to a better balance between labor replacement and labor augmentation. “The reduction in labor requirements driven by automation was offset, sometimes more than one for one, with other aspects of technology that created opportunities for workers.” Large-scale manufacturing needed not only blue-collar workers to run the assembly lines, but engineers to create them, technicians to repair them, and white-collar workers for managerial, clerical and sales jobs.

Operating the machinery of modern manufacturing required some skill, but the skills were not too hard to learn. Union contracts stipulated that employers would train their union employees. Rapid expansion of formal education provided qualifications for higher-level jobs.

Public policy supported broad-based prosperity in several ways: protecting the right of workers to organize and bargain collectively, spending tax dollars on public works and income support, and regulating business to place limits on corporate power.

The results were spectacular. Real wage growth averaged almost 3 percent per year for both more educated and less educated workers. The income distribution became more egalitarian, with labor’s share of national income rising and the share of the richest 1% falling.

Acemoglu and Johnson emphasize how exceptional the link between technology and prosperity was during this period:

In the long sweep of history, the decades that followed the end of World War II are unique. There has never been, as far as anyone knows, another epoch of such rapid and shared prosperity.

Even as they celebrate the accomplishments of the twentieth century, the authors are careful to acknowledge those who were left behind. Black Americans and immigrants were excluded from many of these gains. As individual earners, women were too, although they benefited indirectly as wives and daughters of upwardly mobile men.

Despite these failures, we can understand why so many of the people who lived in that era—including many economists—came to accept the “productivity bandwagon” as a normal and natural phenomenon. That may have left them unprepared to appreciate the challenges of our new technological era. That is the issue for the last four chapters of the book.

Continued


Power and Progress

August 21, 2025


Daron Acemoglu and Simon Johnson. Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. New York: Hachette Book Group, 2024.

Daron Acemoglu and Simon Johnson won the 2024 Nobel Prize in Economics for their research on how political and economic institutions shape national prosperity. In this book, they tackle the relationship between technological innovation and prosperity.

No one doubts that new technologies have the potential to boost productivity and raise living standards. How and when they actually accomplish this is a more difficult question.

In the introduction and first three chapters, the authors lay out their general theory of technology and progress, considering the role of variations in labor demand, wages, and social power. The next four chapters discuss how these variations have played out in various historical situations, ranging from the failure of innovation to benefit farmworkers and early manufacturing workers before the late nineteenth century, to the more widespread prosperity of the mid-twentieth century. Armed with insights from economic theory and history, the authors then address the more recent revolution in digital technology. Readers who follow the argument all the way through should come away with a better understanding of our current technological age and its discontents. I know I did.

The productivity bandwagon

The conventional wisdom in economics, as well as a lot of public discussion, is that technological advances raise productivity, and higher productivity raises living standards. The authors cite Gregory Mankiw’s popular undergraduate textbook, which says that “almost all variation in living standards is attributable to differences in countries’ productivity.”

But the productivity gains from new technologies can only raise living standards if they improve real wages. What about labor-saving technologies that lower the demand for labor, causing unemployment and lower wages? Mankiw acknowledges the problem, but minimizes it by claiming that “most technological progress is instead labor-augmenting.” Most workers find some way to work with new technologies, and their increased productivity enables them to command a higher wage.

Acemoglu and Johnson call this optimistic view the “productivity bandwagon.” They argue to the contrary:

There is nothing in the past thousand years of history to suggest the presence of an automatic mechanism that ensures gains for ordinary working people when technology improves… New techniques can generate shared prosperity or relentless inequality, depending on how they are used and where new innovative effort is directed.

Rather than accept a broad generalization about technology and prosperity, the authors want to study historical variations and identify the key variables involved. The stories that people tell themselves about technology—including the ones economists tell—can both reflect and affect the historical variations. Writing in the Great Depression, John Maynard Keynes coined the term “technological unemployment.” He could imagine “the means of economising the use of labour outrunning the pace at which we can find new uses for labour.” More recently, robotics and artificial intelligence are raising that possibility again, but the productivity bandwagon remains a popular narrative. Economic elites who profit from the application of new technologies are especially fond of it.

Variations in labor demand

Acemoglu and Johnson maintain that technological advances may or may not increase the demand for labor, depending on whether they are labor-augmenting or just labor-saving.

A classic example of technology that augmented labor, increased labor demand, and raised wages is the electrified assembly line introduced by Henry Ford. It not only raised the productivity of the existing autoworkers; it also enabled auto manufacturers to employ additional workers productively. (Economists call that variable the “marginal productivity of labor.”) By producing more cars at lower cost, car companies created a mass market for what had been a luxury item. In addition, they created additional jobs in related industries, such as auto repair, highway construction and tourism.

The effects of today’s robotics on automobile manufacturing may be very different. Carmakers can make just as many cars with less human labor, so labor productivity goes up. But demand for additional labor may go down, if factories are already turning out as many cars as their market can absorb. The marginal productivity of labor then falls, and the connection between technology and prosperity is weakened.
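To pin down the “marginal productivity of labor” idea from the last two paragraphs, here is a minimal textbook sketch; the symbols Q (output), L (labor), and w (the real wage) are my shorthand, not the authors’.

```latex
% Marginal productivity of labor (MPL): the extra output Q gained from one
% more unit of labor L, holding other inputs fixed.
\[
  \mathrm{MPL} = \frac{\partial Q}{\partial L}
\]
% In a competitive labor market, firms keep hiring until the real wage w
% equals MPL:
\[
  w = \mathrm{MPL}
\]
% Labor-augmenting technology raises MPL and pulls wages up with it; automation
% that merely substitutes machines for workers in existing tasks can leave MPL,
% and hence wages, flat or falling even as output per worker rises.
```

In these terms, Ford’s assembly line raised the marginal productivity of labor, so hiring and wages rose; in the saturated-market robotics case, it falls even though measured output per worker goes up.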

That is not to say that automation is always bad news for workers. That depends on the balance of labor-saving and labor-augmentation:

For most of the twentieth century, new technologies sometimes replaced people with machines in existing tasks but also boosted worker effectiveness in some other tasks while also creating many new tasks. This combination led to higher wages, increased employment, and shared prosperity.

The problem then is not just automation but excessive automation, especially if it is not really very productive in the fullest sense of the word. In economics “total factor productivity” refers to the relationship between economic output and all inputs, including capital as well as labor. Replacing workers with machines has costs as well as benefits, since machines cost money too, and displaced humans might have contributed something that machines cannot. The authors use the term “so-so automation” to refer to replacement of workers without much productivity gain. In that case, the classic gains of the earlier automobile boom—lower costs, expanded markets, rising labor demand, and widespread prosperity—do not occur.
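A compact way to see why “so-so automation” can raise labor productivity without doing much for total factor productivity is the standard Cobb-Douglas decomposition. The notation below (output Y, capital K, labor L, capital share α, and A for TFP) is a textbook illustration of mine, not a formula from the book.

```latex
% Cobb-Douglas production with total factor productivity A:
\[
  Y = A\,K^{\alpha}L^{1-\alpha}
  \qquad\Longrightarrow\qquad
  A = \frac{Y}{K^{\alpha}L^{1-\alpha}}
\]
% "So-so automation": labor L falls, capital K rises, and output Y is roughly
% unchanged. Labor productivity Y/L rises mechanically, but A -- output
% relative to all inputs combined -- improves little or not at all.
```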

Variations in wages

Even if new technologies are labor-enhancing, higher wages do not necessarily follow. They have not followed in societies where workers have been coerced to work without pay, or forbidden to leave their employer in search of better pay. The cotton gin enhanced the productivity of cotton workers in the Old South and expanded the areas where cotton could be profitably cultivated. But “the greater demand for labor, under conditions of coercion, translated not into higher wages but into harsher treatment, so that the last ounce of effort could be squeezed out of the slaves.”

In modern, free-market labor systems, wages are freer to rise along with labor demand. However, “wages are often negotiated rather than being simply determined by impersonal market forces.” A dominant employer may set wages for a multitude of workers, while the workers are too disorganized to bargain from strength. It was only in 1871 in Britain and 1935 in the United States that workers gained the legal right to organize and bargain collectively. Opponents of organized labor have continued to find ways of discouraging labor unions to this day. The share of national income going to labor rather than capital was highest when unions were strongest, in the 1950s.

Variations in power

Acemoglu and Johnson argue that the effects of technology depend on “economic, social, and political choices,” and that “choice in this context is fundamentally about power.”

What societies do with new technologies depends on whose vision of the future prevails. The most powerful segments of society have more say than others, although they can be contested by countervailing forces, especially in democratic societies where masses of workers vote. Although plenty of evidence points to the self-serving behavior of elites, they must at least appear to be promoting the common good for their views to be persuasive.

The technological choices a society makes can serve either to reinforce the power of elites or to empower larger numbers of workers. This is especially true of general-purpose technologies with many applications. In the twentieth century, the benefits of electricity helped power a more egalitarian, broadly middle-class society. We cannot yet say the same about the digital technologies of the present century. The authors apply their theory, buttressed by historical evidence, to explain why.

Recall the subtitle of the book: “Our Thousand-Year Struggle Over Technology and Prosperity.” Making technology work for all of us has always been a struggle, and one that is related to the struggle for true democracy. Looking at it that way is more realistic and enlightening than seeing only a “productivity bandwagon” rolling smoothly toward mass prosperity.

Continued


The Power to Destroy (part 4)

May 29, 2024


Since the late 1970s, a powerful antitax movement has advocated a reduction in the portion of national income that goes to fund government. What has it accomplished?

Tax cuts, then and now

To answer that question, I found that I had to distinguish between two very different periods of tax-cutting. The first began with the Reagan tax cuts of 1981, and the second with the Bush tax cuts of 2001.

Most tax-cutters wanted to both reduce tax rates and shrink the size of government by cutting spending. (A few argued that reducing the federal budget would not be necessary, because the tax cuts would pay for themselves.) What is interesting about the period from 1981 to 2000 is that the antitax forces had more success in restraining spending than in lowering taxes: government outlays fell from 20.7 percent of GDP in 1980 to 17.5 percent in 2000, while government revenue actually rose from 18.1 percent to 19.8 percent. As a result, the budget went from a deficit of 2.6 percent of GDP to a surplus of 2.3 percent. Why? Because when faced with the high deficits resulting from the Reagan tax cuts, political leaders got serious about balancing the budget. They accomplished this both by raising taxes and cutting spending.
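As a quick check on those figures (a simple accounting identity, not anything taken from Graetz), the budget balance as a share of GDP is just outlays minus revenue, and the numbers above are consistent with it:

```latex
% Budget balance identity, with all quantities expressed as shares of GDP:
\[
  \mathrm{deficit} = \mathrm{outlays} - \mathrm{revenue}
\]
% 1980:  20.7 - 18.1 =  2.6 percent of GDP (a deficit)
% 2000:  17.5 - 19.8 = -2.3 percent of GDP (i.e., a 2.3 percent surplus)
```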

The fiscal trends in the second period, 2001 to 2023, were the opposite: The Bush and Trump tax cuts really did bring federal revenue down, while federal spending went up. That’s because the country cut taxes in both good times and bad, while also spending to address the national emergencies of the War on Terror, the Global Financial Crisis, and the Covid pandemic. The budget went from a 2.3 percent surplus in 2000 to a 6.2 percent deficit in 2023.

For much of the nation’s history, taxes were increased to fund wars and new entitlement spending, such as for Social Security and Medicare. In the twenty-first century, however, commitments not to raise taxes meant that massive spending to fund the wars following the terrorist attacks on September 11, 2001, to add prescription drug coverage to Medicare, to keep the economy afloat during the global financial crisis, and to respond to the Covid pandemic all went unfunded.

Low taxes and the economy

The effect of low taxes and federal deficits on the economy is a difficult and controversial topic. A tax cut is not a controlled experiment, since the economy is changing in other ways all the time. Some ideas do stand the test of time. The old Keynesian idea that tax cuts and deficit spending can stimulate the economy during recessions remains alive and well, thanks to the experience of the Great Recession of 2007-2009 and the Covid recession of 2020. The idea that low taxes accelerate growth any time finds less support, since the economy grew faster in the high-tax postwar era and in the 1990s than in the recent period of lower taxes. Graetz says that “the most robust economic growth since 1978 occurred during years following Bill Clinton’s 1993 tax increases. Tax reductions at the top have spurred neither great increases in domestic investment nor bursts of increased productivity.”

Lower taxes have been a contributing factor to economic inequality. The main beneficiaries of tax cuts have been the wealthy, who gain disproportionately from cuts in income taxes, estate taxes, capital gains taxes, and (as shareholders) corporate taxes. They also have ways of living off their existing wealth while paying hardly any taxes at all.

Rich Americans borrow cheaply against their stock and bond portfolios to fund their lifestyles without paying any income taxes. Increases in the values of assets are not taxed as income until the assets are sold, and the tax law forgives any income tax on a lifetime of gains if assets are held until death.

Low taxes and government

Low taxes and high deficits may hamper the government’s ability to respond to new crises, such as climate change, global political threats like Russian or Chinese expansionism, demographic changes increasing the costs of Social Security and Medicare, economic dislocations caused by new technologies, and new demands for human capital development. The national debt is now so large that the country spends about as much on interest payments as it does on national defense.

This year’s presidential election is, among other things, a choice between two different fiscal approaches. President Biden wants to raise taxes on individuals with incomes over $400,000, raise the corporate rate from 21 percent to 28 percent, and create a wealth tax on people with over $100 million in assets. Former President Trump wants to make permanent his 2017 tax cuts favoring the wealthy.

I began with John Marshall’s observation, “The power to tax involves the power to destroy” and Graetz’s addendum, “So, it turns out, does the power not to tax.” I agree with Graetz’s suggestion that low taxes may now be a bigger threat to national greatness than high taxes.


The Power to Destroy (part 3)

May 28, 2024


In 2000, George W. Bush won the presidency by the narrowest of margins in the Electoral College. Graetz calls this “a fateful turning point for antitax advocates.” They would not achieve their most radical goals of abolishing income taxes or replacing the progressive income tax with a “flat tax” where all incomes were taxed at the same rate. But they would succeed in lowering tax rates again, especially for the wealthy.

Bush—from surplus to deficit

“Bush and his team designed his tax plan to include an across-the-board reduction in income tax rates that provided the greatest benefits to the top but also cut income taxes for everyone.” Two tax bills accomplished Republican aims, the Economic Growth and Tax Relief Reconciliation Act of 2001, and the Jobs and Growth Tax Relief Reconciliation Act of 2003. Together they lowered the top bracket rate from 39.6 percent to 35 percent, reduced the lowest rate from 15 percent to 10 percent, further reduced estate taxes, increased the child tax credit and extended it to higher-income families, reduced capital gains taxes, and taxed dividends at the low capital gains rates instead of ordinary income rates. The projected cost of the tax cuts was reduced by scheduling some of them to expire at the end of 2010.

The Bush administration was not deterred in its tax cutting by having to spend money on homeland security and Middle East wars following the September 11, 2001 terrorist attacks. The administration also added an unfunded prescription drug benefit to Medicare. By 2004, federal revenue had fallen to 15.6 percent of GDP, the lowest since the 1950s. The surplus that existed when Bush took office had turned into a deficit of 3.4 percent. This time, deficits would turn out to be a normal feature of federal budgets.

Near the end of Bush’s two terms, the Global Financial Crisis and ensuing Great Recession led Congress to enact economic stimulus legislation, which helped the economy but made the deficit worse. The measures included tax rebates of $300 per individual taxpayer and $300 for children, which phased out for higher-income taxpayers. The Keynesian idea of boosting the economy from the bottom up was coming back in style, but Republicans still preferred tax breaks to new domestic spending programs.

Obama—economic stimulus and resistance

Barack Obama became president in 2009 during the worst economic recession since the Great Depression. His American Recovery and Reinvestment Act tried to stimulate the economy with tax cuts, aid to the states, and infrastructure projects. It included the “Making Work Pay” credit that temporarily reduced taxes for working families by up to $800 in 2009 and 2010. In those two years, the deficit soared to about 9 percent of GDP.

Obama also wanted to fulfill his campaign promise of making health insurance more affordable. In 2010, Democrats narrowly passed the Patient Protection and Affordable Care Act (popularly known as Obamacare). It required taxpayers to carry health insurance, expanded Medicaid coverage, and subsidized insurance for other low-income taxpayers. It relied for funding on a combination of taxes on high-income individuals, health insurers, pharmaceutical companies, and medical device manufacturers.

Republicans steadfastly opposed Obama’s fiscal agenda. Their priority was extending the Bush tax cuts.

The antitax movement entered a new phase with the emergence of the Tea Party. The triggering event was Obama’s proposal to assist homeowners facing foreclosure. The collapse of the housing market had left many homeowners owing more than the market value of their home. Critics revived the old claim that Democrats were spending the money of hard-working taxpayers to assist undeserving deadbeats.

In the 2010 midterms, Republicans gained 63 seats in the House, the largest increase in over sixty years. They had a lot of bargaining power too, because Obama needed their cooperation to raise the debt ceiling to accommodate the growing national debt. In return, Obama had to agree to cut spending.

The Bush tax cuts were scheduled to expire at the end of 2010, but allowing taxes to rise was politically very difficult in the face of Republican opposition and a struggling economy. The Bush cuts were extended for two more years. Then in 2012, the American Taxpayer Relief Act made most of the cuts permanent, except those affecting the top 1% of taxpayers. The top tax rate was raised from 35 percent back to what it was under President Clinton, 39.6 percent. The government gave up most of the revenue it could have had by letting the tax cuts expire. Instead, it relied on austerity on the spending side to bring down the deficit to around 3% by the end of the Obama administration. Graetz concludes, “Despite Republicans’ moaning, they had won the war over taxes. Republicans remained steadfast against tax increases at any time.”

Trump—another round of tax cuts

Donald Trump was an unconventional Republican who appealed more to “America first” nationalism than to traditional free-market economics. His most well-known goals were restricting immigration, reducing foreign imports and bringing back American manufacturing jobs. Nevertheless, he followed the new Republican orthodoxy that called for tax cuts regardless of fiscal and economic circumstances.

Donald Trump’s calls for tax cuts were typical for a Republican president, but the cuts he proposed were exceptionally large for a growing economy already facing large deficits. Singing from the classic antitax songbook, Trump insisted his tax cuts would not add to federal deficits or debt.

Trump claimed publicly that his tax plan would raise the taxes of wealthy people such as himself, while privately assuring his rich friends that he would lower their taxes. When he revealed his actual tax plan after taking office, neither economists nor the general public were enthusiastic. Since the Republicans now held majorities in both houses of Congress, he could pass it entirely without Democratic votes, and he did. The Tax Cuts and Jobs Act of 2017 cut both personal and corporate taxes. It also doubled the amounts exempt from estate taxes. The bill held down the projected cost by making the personal tax cuts “temporary” (from 2018 to 2025), but Republicans were confident that a future administration would be compelled to continue them. Wealthy taxpayers got the largest cuts, both in absolute dollars and as a percentage of income. The top bracket rate was cut from 39.6% to 37%.

President Trump tried to create American jobs by putting tariffs on certain imported goods, especially from China. These had two downsides that made them unpopular with economists: importers passed along their increased costs to consumers, and China retaliated with tariffs on our agricultural exports. “Trade experts estimated the steel and aluminum tariffs cost American consumers and businesses about $900,000 for every job they saved or created.”

Under Trump, the deficit increased from 3.1 percent of GDP in 2016 to 4.6 percent in 2019. That was before a new national economic crisis induced a new spurt in federal spending. This time it was the Covid pandemic, which brought much of the country’s economic activity to a halt.

Biden—tax benefits for the non-rich

Like Obama before him, Biden inherited a distressed economy, which he tried to stimulate with both tax rebates and new spending.

With narrow control of both houses of Congress, Democrats managed to pass the American Rescue Plan Act of 2021. It contained new spending to fight Covid, especially to fund vaccinations. It also included tax credits, 70% of which went to families with less than $91,000 of income. It raised the child tax credit from $2,000 to $3,600 for each child under six, and to $3,000 for older children. Congressional Republicans, who for forty years had rarely met a tax cut they didn’t like, unanimously opposed this bill.

The Inflation Reduction Act of 2022 also passed without any Republican support. Its main goals were to promote clean energy production and hold down the costs of health care and health insurance. It was funded mainly with a 15 percent minimum tax on corporations (aimed at corporations that had been paying little or no taxes at all), and additional revenue expected from an increase in IRS funding. (Better enforcement of the tax code with more agents and auditing normally brings in more money than it costs.) Republicans especially hated the IRS funding, and tried unsuccessfully to repeal it.

Biden wanted to raise taxes on the wealthy, but never succeeded in doing so. The deficit spiked to 12.1 percent of GDP in 2021, but went down to 5.3 percent after the American Rescue Plan’s temporary spending and tax credits ended. As of now, expenditures have exceeded revenue every year since 2001.

In my last post on this book, I will offer some reflections on what the antitax movement of the last half century has accomplished, and what it has meant for the country.

Continued