The Distribution of National Income

February 20, 2017


The Washington Center for Equitable Growth has issued a new, very informative report on income inequality. Its authors, Thomas Piketty, Emmanuel Saez, and Gabriel Zucman, are trying to improve on the way economists have measured inequality in the past:

One major problem is the disconnect between macroeconomics and the study of economic inequality. Macroeconomics relies on national accounts data to study the growth of national income while the study of inequality relies on individual or household income, survey and tax data. Ideally all three sets of data should be consistent, but they are not. The total flow of income reported by households in survey or tax data adds up to barely 60 percent of the national income recorded in the national accounts, with this gap increasing over the past several decades.

Why is there such a discrepancy between national income accounting and personal reporting? The main reason is that when people report their income on a survey or a tax return, they are thinking of income actually received in cash. But some forms of national income accrue to individuals whether or not they ever see the cash. Employers contribute to workers’ pension plans or subsidize their health insurance. Corporations make money on behalf of shareholders that they retain for investment rather than distribute as dividends. This report aims to apportion the entire national income among individuals, accounting for all forms of compensation for workers and all returns on capital assets, whether taken in cash or not.

For purposes of analysis and discussion, the researchers divided the US population into three broad groups: the top tenth, the next two-fifths, and the bottom half. The unit of analysis was the individual adult, aged 20 or older. Most of the analysis split a married couple’s income equally between the spouses, for example assigning each spouse $40,000 if one earned $50,000 and the other $30,000. That makes sense if couples are sharing their purchasing power. The authors also did a separate analysis of gender inequality using individual earnings. There they found that, overall, men had 1.75 times as much work income as women, without controlling for hours worked or types of jobs. That ratio has been falling steadily since the 1960s, when it was over 3.0.
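To make the splitting rule concrete, here is a minimal Python sketch. The mean earnings in the second half are invented purely to illustrate what a 1.75 ratio means; they are not figures from the report.

    # Equal-split convention: assign each spouse half of the couple's
    # combined income, regardless of who earned it.
    def equal_split(income_a, income_b):
        half = (income_a + income_b) / 2
        return half, half

    print(equal_split(50_000, 30_000))   # (40000.0, 40000.0)

    # The gender analysis uses individual earnings, not the split:
    # a ratio of 1.75 means men averaged 75% more work income than women.
    mean_men, mean_women = 70_000, 40_000   # illustrative numbers only
    print(mean_men / mean_women)            # 1.75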

Pre-tax income

To appreciate the degree of income inequality the researchers found, consider the familiar analogy of dividing a pie. Imagine that you bake a large pie for a party of ten, dividing it into ten equal slices. But the first guest to dig in takes five slices! The next four guests take one slice each, leaving only one slice to be divided among the remaining five diners. In percentage terms, one-tenth of the people got 50% of the pie, the next two-fifths got 40%, and the remaining half got only 10%.

The real numbers for 2014 (the last year reported) are not far from that. The top tenth got 47.0% of the national income; the next two-fifths got 40.5%, and the bottom half got 12.5%. The average (mean) income for the groups was $304,000 per person for the top 10%, $65,400 for the next 40%, and $16,200 for the bottom 50%. (If some of the numbers sound large, remember that income is being defined very inclusively.)
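These shares and group means fit together arithmetically: each group’s share of national income is its population fraction times its mean income, divided by the overall mean. A quick Python sketch using the report’s 2014 figures reproduces the published shares:

    # Population fraction and mean pre-tax income per adult, 2014
    groups = {
        "top 10%":    (0.10, 304_000),
        "next 40%":   (0.40,  65_400),
        "bottom 50%": (0.50,  16_200),
    }

    # Overall mean income = sum over groups of fraction * group mean
    overall_mean = sum(frac * mean for frac, mean in groups.values())

    for name, (frac, mean) in groups.items():
        print(f"{name}: {frac * mean / overall_mean:.1%}")
    # top 10%: 47.0%   next 40%: 40.5%   bottom 50%: 12.5%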

One advantage of these particular dividing points is that they clearly distinguish between one group whose share of national income is roughly proportional to its size (the middle two-fifths) and two groups whose shares are disproportionately large (the top tenth) or disproportionately small (the bottom half).

In addition to the enormous differences in shares, the three groups differed in how much of their income they derived from returns on capital as opposed to their own labor. The top tenth got 43.0% of their income from capital, compared to 17.9% for the next two-fifths and 5.1% for the bottom half. Ironically, in a country that prides itself on its work ethic, the most meager rewards go to those who have to rely the most on their labor.

Trends in inequality

In order to study trends over time, the researchers compared two 34-year periods, 1946-1980 and 1980-2014. The first period includes the postwar economic boom. The second period begins with the year Ronald Reagan was elected president, although I don’t know how much that affected its selection as a dividing point. The authors do suggest that changes in public policy were at least partly responsible for the increase in inequality that has occurred since 1980.

The period after World War II was a time of rapid economic growth and broad-based increases in income. Pre-tax income (adjusted for inflation) increased 79% for the top tenth, 105% for the next two-fifths, and 102% for the bottom half over those 34 years. Because the increase was less for the top tenth than the other groups, the distribution became a little more egalitarian. The share of national income going to the top tenth declined from 37.2% to 34.2%.

The period since 1980 has been a time of both slower economic growth and very unevenly distributed gains. Pre-tax income increased 121% for the top tenth, 42% for the next two-fifths, and only 1% (!) for the bottom half. The rich got richer and the poor got left behind. As a result, the distribution of national income became noticeably less egalitarian. The share of the top tenth rose from 34.2% to 47.0%, while the share of the lower half dropped from 19.9% to 12.5%. That top share is similar to what the rich were getting back in the 1920s, before the Great Depression. Over the course of the past century, income inequality has gone down and then come back up. At the highest income levels, the resurgence of inequality has been even more dramatic. Average income for the top 1% increased only 47% during the postwar era, lagging well behind general economic growth; but it rose 205% after 1980, far exceeding general growth. For the top 0.01%, where the average income is over $28 million, the increase has been 454%.
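The shift in shares follows directly from the growth figures. With the population fractions fixed, each group’s new share is its old share scaled by its income growth and then renormalized. A Python sketch using the 1980 shares (the middle group’s 45.9% is inferred from the other two) recovers the 2014 numbers:

    # 1980 income shares and real per-adult income growth, 1980-2014
    shares_1980 = {"top 10%": 0.342, "next 40%": 0.459, "bottom 50%": 0.199}
    growth      = {"top 10%": 1.21,  "next 40%": 0.42,  "bottom 50%": 0.01}

    # Scale each share by (1 + growth), then renormalize to sum to 1
    scaled = {g: s * (1 + growth[g]) for g, s in shares_1980.items()}
    total = sum(scaled.values())
    for g, s in scaled.items():
        print(f"{g}: {s / total:.1%}")
    # top 10%: 47.0%   next 40%: 40.5%   bottom 50%: 12.5%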

Although global trends such as outsourcing and automation have produced gains for capital at the expense of workers, the authors point out that not all countries have experienced the same extremes of inequality as the United States. Economic growth has been slower in France, but the lower half of the French population has shared in that growth as the American lower half has not. As a result, “While the bottom 50 percent of incomes were 11 percent lower in France than in the United States in 1980, they are now 16 percent higher.” America’s self-image as a unique land of opportunity is no longer secure.

Income redistribution?

Pre-tax income does not tell the whole story, however. The taxation of income provides some potential for redistribution, as those with higher incomes are taxed in order to provide some benefits to those with lower incomes. In my next post, I will discuss the report’s comparison of pre- and post-tax income to see how taxes and government benefits are distributed, and what effect they have on income inequality.

To be continued


A Measure of Fairness (part 2)

February 26, 2015


In A Measure of Fairness, Pollin, Brenner, Wicks-Lim and Luce report their research on two kinds of wage laws: state minimum wage laws, and municipal laws that set a living wage higher than the federal and state minimums.

In 2007, Congress mandated that the federal minimum wage rise to $7.25 an hour by 2009. Twenty-nine states and the District of Columbia have raised their minimum wages higher than that; seven states and D.C. have a minimum of at least $9.00 (see map and data).

Municipal laws that set a wage higher than both the federal and state minimums are usually narrow in scope, applying only to businesses with municipal contracts. San Francisco and Santa Fe are two cities with broader living-wage laws.

The authors identify two different ways of defining a reasonable living wage, one focusing more on benefits and the other on costs:

First, what is a wage rate that is minimally adequate in various communities, in the sense that it enables workers earning that minimum wage and the family members depending on the income produced by this worker to lead lives that are at least minimally secure in a material sense? What wage rate, correspondingly, can allow for a minimally decent level of dignity for such workers and their families?
The second, equally legitimate, question…asks, How high can a minimum wage threshold be set before it creates excessive cost burdens for businesses, such that the “law of unintended consequences” becomes operative?

High on the list of unintended consequences would be job losses if businesses chose to lay off workers or leave a city or state rather than accept higher wage costs.

The authors also identify two ways of studying these issues: prospective research that tries to anticipate the consequences of proposed laws, and retrospective research assessing the actual consequences of existing laws. Except for the last section, the findings described below are from prospective studies.

Benefits to workers and families

Who benefits from wage laws? The answer might seem obvious, but some critics have questioned the need for such laws on the grounds that the lowest-wage workers are rarely major breadwinners, but rather younger workers whose wages will probably rise before long anyway. The authors find that the laws primarily benefit the people they are intended to benefit: low-income workers who are “well into their long-term employment trajectories,” with a high proportion of primary breadwinners and other major contributors to family income. In addition, the laws have important ripple effects, tending to raise the wages of workers who are already a little above the legal minimum. For example, the authors estimated that 20% of the people of Arizona, counting workers and members of their families, would receive some income benefit from a proposed minimum-wage increase.

Several of the research reports are from studies of a proposed city-wide minimum of $10.75 for Santa Monica. It was passed by the city council in 2001 but repealed by the voters in 2002. In order to evaluate its probable effect on incomes, the authors gave careful consideration to poverty thresholds and basic economic needs. First, they drew on research by the National Research Council on more realistic poverty thresholds than those established by the federal government. “The commission’s report…presented eight separate studies using different methodologies for coming up with alternative poverty measures. If we simply calculate the average of these eight alternative poverty lines, this average is 42 percent above the official poverty line.” Considering that the cost of living in the Los Angeles area is about 25% above the national average, they decided to use 160% of the federal poverty line as the poverty threshold for their research.

By that standard, a family consisting of one adult and two children would need an income over $21,475 to escape poverty, which corresponds to a full-time hourly wage of $10.32. A family with two adults and two children would need an income of $27,030, corresponding to a full-time hourly wage of $13.00 with only one adult employed. (All figures were in 1999 dollars, so would have to be somewhat higher today.)
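The hourly figures are just the annual thresholds divided by a full-time work year of 2,080 hours (52 weeks at 40 hours), as a quick Python check shows:

    FULL_TIME_HOURS = 52 * 40   # 2,080 hours per year

    def required_wage(annual_need, earners=1):
        """Hourly wage each full-time earner needs to hit the annual target."""
        return annual_need / (FULL_TIME_HOURS * earners)

    print(round(required_wage(21_475), 2))   # 10.32 -> one adult, two children
    print(round(required_wage(27_030), 2))   # 13.0  -> two adults, one employed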

The authors also drew on research by the California Budget Project, which constructed a “basic needs” budget for Los Angeles and other California regions. The CBP described this as “more than a ‘bare bones’ existence, yet covers only basic expenses, allowing little room for ‘extras’ such as college savings or vacations.” By that standard, a family with one adult and two children would need an income of $37,589, or a wage of $18.07 an hour. A family with two adults and two children would need a little less, $31,298, or a wage of $15.05, if one adult stayed home and provided child care. With both adults employed full-time, however, they would need $45,683 because of child care and other costs, but each job would only have to pay $10.98 an hour to generate that income.

To assess the impact of the proposed $10.75 hourly wage, the authors construct two very specific “prototypical family types.” The first is a three-person family whose primary breadwinner earns $8.00 an hour and contributes 70% of the family income. A raise to $10.75 increases the family income from $19,430 to $24,105, an increase of 24.1%. This takes the family from 10% below the adjusted Los Angeles poverty line to 12% above it. It also takes the family from 48% below the CBP “basic needs” budget to only 36% below it.

The second prototypical family is a four-person family with a low-wage worker earning $8.30 an hour and contributing 50% of the family income. A raise to $10.75 increases the family income from $29,880 to $34,290, an increase of 14.8%. (The other adult earner is not assumed to have an hourly rate low enough to be covered by the minimum-wage increase.) This takes the family from 12% above the adjusted Los Angeles poverty line to 29% above it. It also takes the family from 35% below the CBP “basic needs” budget to only 25% below it.

However, some of the increased income from higher wages would be offset by higher taxes and lost tax credits. (It wouldn’t be offset by loss of food stamps or medical benefits, since neither prototypical family was poor enough to qualify for those in the first place.) The authors calculate that the offsets amount to 40% of the income gains for the first family and 27% of the income gains for the second family.
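The arithmetic behind these figures can be reconstructed by backing the breadwinner’s annual hours out of the published numbers (about 1,700 hours for the first family and 1,800 for the second, on my reading). This Python sketch reproduces the book’s results under that assumption:

    def family_raise(old_wage, new_wage, family_income, earner_share, offset):
        """Effect of a wage raise on family income, before and after offsets.

        Annual hours are backed out of the published figures rather than
        assumed; 'offset' covers higher taxes and lost tax credits.
        """
        hours = earner_share * family_income / old_wage
        new_income = hours * new_wage + (1 - earner_share) * family_income
        gain = new_income - family_income
        return new_income, gain, gain * (1 - offset)

    # Family 1: $8.00/hour breadwinner providing 70% of $19,430; 40% offset
    print(family_raise(8.00, 10.75, 19_430, 0.70, 0.40))
    # -> (~24,105, ~4,675 gross gain, ~2,805 net gain): +24.1% before offsets

    # Family 2: $8.30/hour worker providing 50% of $29,880; 27% offset
    print(family_raise(8.30, 10.75, 29_880, 0.50, 0.27))
    # -> (34,290, 4,410 gross gain, ~3,219 net gain): +14.8% before offsets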

Costs to business

Most legally mandated wage increases are not dramatic, and their impact is limited by the number of workers whose wages are already at or near the new minimum. Typical of the research reported here is the authors’ finding that a Santa Fe living-wage ordinance would increase average costs relative to business revenue by about 1%. The impact is often two or three times greater for businesses with more low-wage workers, especially in the food service and hotel industries.

Affected businesses can handle the added labor cost in many different ways. Perhaps the most obvious is to raise prices. Although that poses some risk of lost business, the damage is limited if the price increases are small, if competitors are raising their prices too, if consumers care more about quality than price, and, as some research indicates, if consumers prefer to patronize businesses that treat their employees well. In addition, some businesses, especially retail businesses operating in poor neighborhoods, may gain business because better-paid workers have more money to spend.

Another way that businesses absorb higher labor costs is through increased productivity. Higher wages tend to reduce turnover, which reduces the costs incurred in recruiting, selecting, hiring and training new workers. Based on their research in Santa Fe, the authors suggest that 40% of the cost of higher wages can be recovered in higher productivity.
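Putting those two findings together gives a rough sense of the net burden; the Python lines below simply restate the authors’ estimates rather than adding any new data:

    gross_cost = 0.01    # wage costs rise by ~1% of revenue for the average
                         # firm (two to three times that in food service/hotels)
    recaptured = 0.40    # share recovered through lower turnover and
                         # other productivity gains

    net_cost = gross_cost * (1 - recaptured)
    print(f"net cost: {net_cost:.1%} of revenue")   # net cost: 0.6% of revenue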

Businesses can also absorb higher labor costs by redistributing income within the firm. This can be done in a rather subtle fashion, simply by letting low-wage workers have a larger share of productivity gains, while holding higher incomes steadier. Perhaps that is only fair, considering that the country has been doing the opposite for some time: “The fact that the minimum wage has been falling in inflation-adjusted dollars while productivity has been rising means that profit opportunities have soared while low-wage workers have gotten nothing from the country’s productivity bounty.” If paying a higher wage forces a business to accept slightly lower profits, the damage to its competitive position is limited by the fact that its competitors may be facing the same problem.

Two more drastic responses to increased costs are to lay off workers or relocate to another city or state. The businesses most likely to relocate are those facing a substantial increase in labor costs whose customer base is not tied to a specific location. But many of the businesses that rely on low-wage labor, such as restaurants and hotels, have strong ties to a particular place.

The authors’ summary of their New Orleans research is typical of their conclusions:

Our results suggest that the New Orleans firms should be able to absorb most, if not all, of the increased costs of the proposed minimum wage ordinance through some combination of price and productivity increases or redistribution within the firm. This result flows most basically from the main finding of our survey research–that minimum wage cost increases will amount to about 0.9 percent of operating budgets for average firms in New Orleans and no more than 2.2 percent of operating budgets for the city’s restaurant industry, which is the industry with the highest cost increase. This also suggests that the incentive for covered firms to lay off low-wage employees or relocate outside the New Orleans city limits should be correspondingly weak.

Retrospective studies

In a few cases, the researchers were able to evaluate the effects of wage increases that had already been in effect for some time. Mark Brenner and Stephanie Luce studied the effects of wage ordinances in Boston, Hartford and New Haven covering businesses with city contracts. Critics had predicted that fewer companies would bid on city contracts, and the reduction in competition would result in higher costs for the city. In fact, there wasn’t much difference: The number of bidders went down in New Haven, but went up in Hartford and stayed the same in Boston. Businesses did not lay off workers, but adjusted to the higher wages mainly by accepting lower profit margins.

Brenner, Wicks-Lim and Pollin did a study comparing states with and without minimum-wage laws higher than the federal minimum. They found no adverse effects of higher minimum wages on employment.

Wicks-Lim and Pollin studied the effects of Santa Fe’s citywide minimum wage on job opportunities for low-wage workers. Aaron Yelowitz had reported that unemployment rose once other factors were statistically controlled. Wicks-Lim and Pollin found that employment actually held steady, but that the rate of unemployment was higher than expected only because more people came into the labor market looking for work. They came “precisely because there were more jobs and better jobs in Santa Fe than elsewhere.” Pollin also reminds us that the United States used to have a higher minimum wage (in inflation-adjusted dollars) in the 1960s than it has today, with no apparent damage to employment or productivity.

In general, this book supports the conclusion that raising wages for low-income workers brings at least modest benefits to workers, while imposing modest costs on employers and consumers. For workers, the benefits are partly offset by higher taxes and reduced benefits for the poor. For employers, the costs are partly offset by price increases, higher productivity, and redistribution of compensation among different levels of workers. Living-wage initiatives are one effective way of addressing extreme income inequality and poverty. They are not a cure-all, however, and other measures like progressive taxation and direct public assistance remain important as well.


A Living Wage (Glickman, part 2)

February 11, 2015


The previous post discussed how Lawrence Glickman links the demand for a living wage to the historical transformation of the US economy. As independently owned and managed farms and businesses became less common, American workers had to rethink their hostility to working for wages. Increasingly they pinned their hopes for freedom and independence on better wages instead of on control over their own labor. The labor movement called for a high American standard of consumption supported by a living wage, at least for white males.

An idea whose time had come

By the end of the nineteenth century, this idea was gaining support beyond the labor movement itself. “Religious reformers were the first group outside of the labor movement to call for a living wage, beginning with Pope Leo XIII’s encyclical of 1891.” The Pope declared it a “dictate of nature more imperious and more ancient than any bargain between man and man.” In 1906, the Catholic priest and social activist John Ryan published A Living Wage, which also drew a distinction between prevailing market wages and ethical wages based on natural moral law. Prevailing wages were partly determined by the relative power of capital and labor, and so were unlikely to reflect the true value of workers or their work. [On a personal note, my father told me that when he studied economics at a Catholic college in the 1930s, his ethics professor argued for the moral responsibility of employers to pay a living wage.]

States began passing minimum-wage laws in 1912, and the platform of the Democratic Party began calling for a federal minimum wage in 1916. From organized labor’s point of view, the minimum wage was exactly that: a minimum, only the lowest point on the “spectrum of conceivable living wages.” But it was a start.

Opposition to higher wages was still intense in the early 1900s. The opposing arguments were partly economic, based on the idea that the wage set by the market represented the real value of labor. But this easily became a moral argument: since the worker’s labor was only worth what the market said it was worth, any demand for more was an immoral attempt to get something for nothing. Another argument was that any interference with the labor market violated the principle of freedom of contract. A market wage represented an agreement between two free parties, but a legally mandated minimum deprived employers of their right to bargain freely with workers. It was this last argument that most impressed the Supreme Court when it declared minimum-wage laws unconstitutional in 1923. [It probably didn’t occur to the justices to worry about whether the individual worker really had much freedom to bargain with a powerful employer.]

By the 1930s, support for a living wage was growing stronger, as economists, politicians and the general public came to associate low wages with economic depression. President Roosevelt stated it bluntly in 1938: “We suffer primarily from a failure of consumer demand because of lack of buying power.” With the productive capacity of the nation expanding, failing to raise wages would leave workers unable to buy enough goods to keep the factories humming and the labor force employed.

I don’t recall Glickman telling this part of the story, but the Supreme Court reversed itself in 1937 and upheld a state minimum-wage law. Interestingly, this happened because a certain Justice named Roberts departed from his usual practice of voting with the four most conservative justices (perhaps setting a precedent for the Affordable Care Act!). This decision marked the end of an era in which the Court had generally resisted government efforts to regulate industry.

In 1938, Congress passed the Fair Labor Standards Act, which set a national minimum wage and a standard work week of forty hours beyond which overtime must be paid. Perhaps just as important, although not discussed by Glickman, the National Labor Relations Act of 1935 gave workers the right to organize and bargain collectively. A combination of union organizing and government support helped millions of workers achieve a higher standard of living and move into the middle class.

An idea whose time has gone?

Because Glickman is concerned primarily with the rise of the living wage as an idea, he ends his story in the early twentieth century with its partial implementation. He does not address the question of why the struggle for higher wages became so much harder in the latter part of the century, or why the very term “living wage” went out of fashion. I’ll just mention a few of the many developments that impeded or even reversed the progress that workers had been making:

  1. Runaway inflation not only eroded the buying power of wages, but it also reduced popular support for wage increases, since high wages could be blamed for inflation.
  2. Globalization undermined the argument for a distinctly “American standard” of wages. Employers could justify low wages in order to keep their companies globally competitive, or replace well-paid US workers with lower-paid foreign workers.
  3. New technologies reduced the demand for unskilled labor and the market price of that labor.
  4. Globalization and automation led to job losses in the highly unionized manufacturing sector, while employment in the less unionized service sector expanded. The decline of unions reduced the workers’ political clout too, since the labor-friendly Democratic Party had relied heavily on unions for its grass-roots organizing.
  5. The decline of the patriarchal, male-breadwinner family undermined the argument for a “family wage.” In theory, wages could be lower if families were accustomed to relying on multiple earners, but households with only one earner lost ground.
  6. The struggle for higher wages focused increasingly on racial minorities and women, who had been largely excluded from high wages in the past. Many white males felt threatened, however, and became more interested in holding on to what they had than in advancing the cause of labor in general. They abandoned their traditional Democratic allegiance and voted Republican in large numbers, especially in the South, helping ensure that public policy would tilt toward business and away from labor. The labor movement eventually paid a price for once thinking of the living wage as only a white man’s wage.

A new interest?

The recent attention focused on economic inequality suggests that the issue of just wages may once again move to center stage. At the low end of the income scale, increases in the minimum wage have failed to keep up with inflation. In today’s dollars, the original minimum of about $4 an hour rose to over $10 in the 1960s, but has since fallen back to $7.25. Meanwhile, incomes in the middle have largely stagnated while incomes at the top have increased dramatically (especially after-tax incomes, because of large tax cuts for the wealthy).
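The inflation adjustment behind those figures is a simple price-index ratio: real value = nominal value × (CPI now / CPI then). The nominal minimums and index levels in this Python sketch are approximate figures of my own, not from the post, but they land close to the numbers quoted above:

    def to_current_dollars(nominal, cpi_then, cpi_now):
        # Standard CPI deflation: scale by the ratio of price levels
        return nominal * cpi_now / cpi_then

    CPI_2015 = 237.0    # approximate CPI-U levels; illustrative only
    print(round(to_current_dollars(0.25, 14.1, CPI_2015), 2))   # 1938: ~$4.20
    print(round(to_current_dollars(1.60, 34.8, CPI_2015), 2))   # 1968: ~$10.90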

Thomas Piketty observes that in the US recently, “income from labor is about as unequally distributed as has ever been observed anywhere,” with the top tenth of workers receiving 35% of the total and the bottom half of workers only 25%. Income from investments is even more uneven, since the share of wealth controlled by the richest tenth has risen to over 70% in recent decades. The distribution of total income is a combination of the distributions of labor income and investment income. The share of total income going to the top tenth fluctuated in the range of 30-35% between 1950 and 1980, but has risen to 45-50% since 2000. In effect, about 15% of the national income has been transferred to the top tenth from everybody else. (See my discussion of Piketty’s Capital in the Twenty-First Century, especially part 3.)
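As a rough illustration of that combination, suppose labor makes up about three-quarters of national income (a stylized figure of my own, not from Piketty or the post) and that the same households sit at the top of both distributions (an oversimplification, since the two rankings do not coincide exactly). Then the top tenth’s total share comes out in the ballpark reported:

    labor_share_of_income = 0.75   # stylized; roughly 2/3 to 3/4 historically
    top10_labor_share     = 0.35   # top tenth's share of labor income
    top10_capital_share   = 0.70   # top tenth's share of wealth/capital income

    # Weighted combination (assumes the same people top both rankings)
    top10_total = (labor_share_of_income * top10_labor_share
                   + (1 - labor_share_of_income) * top10_capital_share)
    print(f"{top10_total:.0%}")    # ~44%, near the 45-50% range cited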

In our new Gilded Age, with the rich living in increasing luxury while so many others can barely scrape by at all, the stage may be set for a new national discussion of a living wage.


Good Jobs, Bad Jobs (part 2)

September 27, 2012


In my first post about Arne Kalleberg’s Good Jobs, Bad Jobs, I described the author’s general framework for understanding the increasing polarization of work. He considers many factors: the social and economic forces that have transformed the economy (globalization, new technologies), the composition of the labor force (by education, race, gender, etc.), the mediating role of other institutions (government, financial institutions, unions), and the organization of work itself (“high-road” and “low-road” employment strategies). Now I want to take a closer look at work polarization, starting with changes in the distribution of occupations.

Kalleberg ranks broad occupational groups according to measures of job quality. He confirms that occupations in the middle range have lost workers, while occupations at the high and low ends have added workers. For example, a lot of the losses have involved semiskilled machine operators and administrative support workers, where new technologies have reduced the need for labor. My Dad’s first job after college was as a statistical clerk, armed with only a slide rule and an adding machine. When I taught statistics a generation later, my students and I had computers to do the busywork. When I later became an independent financial planner, my desktop computer provided all the administrative support I needed.

Remember the old IBM slogan, “Machines should work; people should think”? Many optimistic social scientists of the 1960s and 70s expected technological change to improve the quality of work, eliminating the drudgery and expanding the creativity. Workers could take the benefits of their high productivity as higher wages and/or more time off the job. Part of this vision has come true. High-end occupations–managerial, professional and technical–have expanded. But they haven’t expanded enough to employ all the workers displaced by technological obsolescence or outsourcing. In the long run, new technologies may create as many jobs as they destroy; in the short run, destroying a good job may be easier than creating one. Capitalists are job destroyers as much as job creators, as the current political debate over Bain Capital illustrates. Getting the same work done using fewer American workers can be easy with new technologies and a global supply of labor. Doing something new and paying someone a good wage to do it is more challenging. It may involve taking more risk, making an investment in worker training, and finding a market for a new good or service. Much of the work worth doing has social–not just individual–benefits, so the demand may not be there without some financial commitment from the taxpayers.

The phenomenal growth of low-wage service jobs results partly from the growth in the number of workers who are available for such jobs. This in turn results from a combination of labor-force characteristics (so many workers who lack the skills required for higher-level work) and work organization (too many companies adopting “low-road,” cost-cutting approaches to labor instead of “high-road” investments in human capital). These factors are reinforced by the demand for cheap services in a society with so many low-income households. Of the ten occupations with the largest projected job growth from 2006 to 2016, seven are low-wage sales or service jobs (such as retail salespersons, food preparation and service workers, home health care aides, and janitors).

Kalleberg’s findings on wages are consistent with those of other researchers: more workers at the high and low ends and fewer in the middle. Between 1973 and 2009, real (inflation-adjusted) wages for workers at the 95th percentile increased from $39 to $55 per hour for men, and from $24 to $41 per hour for women. At the median (50th percentile), wages increased slightly for women ($11 to $14) but stagnated for men (around $18). At the 20th percentile, wages also increased a little for women ($8 to $9), but fell for men ($12 to $10). And at the bottom, the minimum wage of $7.25 is not indexed to inflation, and so it has declined in real value since its peak in the 1960s.

Wage disparities have increased the correlation between educational level and income. The economic advantage of a college education has increased, although the cost, and the debt one incurs, have too. The United States ranks relatively high on average education and other measures of skill, but it also ranks high on the proportion of workers in very low-paying jobs. Other countries have had more success in maintaining decent wages. Kalleberg says,

Institutions matter for wage-setting: inequality tended to be greater in liberal market economies such as the United States, Britain, and Canada, which have relatively weak unions and decentralized patterns of wage-setting, whereas inequality was relatively low in countries such as France and Germany, which have relatively centralized wage-setting mechanisms and stronger unions.

This more sociological view contrasts with the economic theory of skill-biased technical change (SBTC), which attributes wage disparities primarily to individual worker qualifications, without much regard for institutional variables.

Kalleberg also finds increasing inequality in the availability of benefits, especially health insurance and defined-benefit pension plans. The United States is unusual in its reliance on employers to provide such benefits at their discretion, rather than making them rights of citizenship. As stable employment relations and the social contract between management and labor have broken down, this system provides fewer reliable benefits, especially for workers at the low end. The proportion of workers covered by defined-benefit pension plans has been cut in half since 1980. Defined contribution plans like 401(k)s shift the risk of poor investment performance to the individual worker.

One particularly discouraging aspect of job polarization is the effect it has had on racial inequality. Whatever progress we have made in combating racial discrimination has been counteracted by the economic forces and decisions that have tended to polarize the labor force. Occupational segregation by race declined a bit in the 1970s, but stopped declining after that. Many of the jobs that had helped other ethnic groups move into the middle class–especially manufacturing jobs with modest skill requirements but good wages–are no longer available, and what good jobs there are have higher educational requirements. To make matters worse, minorities (and women) who do have college educations still don’t obtain their proportional share of the good jobs, suggesting that discrimination remains a problem at the high end of the occupational spectrum.


The Lost Decade of the Middle Class

August 27, 2012


The Pew Research Center has just released its report, “The Lost Decade of the Middle Class: Fewer, Poorer, Gloomier.” It shows that the American middle class has not only been seriously hurt by the recent recession, but has been losing ground for some time.

The report divides households into three tiers: upper, middle and lower. The middle-income tier includes all adults whose annual household income is at least two-thirds of the national median income, but no more than twice that median. (Since larger households need more money to live a middle-class lifestyle, incomes were adjusted for family size before assigning households to tiers.) By that definition, 51% of households are middle-income, while 20% are upper-income and 29% are lower-income.
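A minimal Python sketch of the classification rule follows; the square-root equivalence scale used to adjust for household size is a common convention that I am assuming here, not necessarily Pew’s exact method:

    import math

    def income_tier(household_income, household_size, adjusted_median):
        """Classify a household as lower-, middle-, or upper-income.

        Size adjustment uses the square-root equivalence scale -- an
        assumption here; Pew's exact adjustment may differ.
        """
        adjusted = household_income / math.sqrt(household_size)
        if adjusted < (2 / 3) * adjusted_median:
            return "lower"
        if adjusted <= 2 * adjusted_median:
            return "middle"
        return "upper"

    # Illustrative: a three-person household earning $60,000, against a
    # hypothetical size-adjusted national median of $40,000
    print(income_tier(60_000, 3, 40_000))   # middle (adjusted ~ $34,641)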

Between 2000 and 2010, the median income of the middle-income tier fell by 5%, and its median wealth (assets minus liabilities) fell 28%. Households at this economic level have a lot of their wealth in housing, so they were hit pretty hard by the bust in home prices.

But the story of the middle class’s changing fortunes involves more than the recent recession. Deeper changes have been at work for a long time. The report describes these changes in the four decades since 1971:

  • The percentage of households classified as middle-income has dropped from 61% to 51%. Meanwhile, lower-income households have gone from 25% to 29% of the total, and upper-income households have gone from 14% to 20%. This is consistent with the often-noted increase in economic inequality and the “hollowing out of the middle class.”
  • The share of the national income going to middle-income households has dropped from 62% to 45%. Meanwhile, the share going to lower-income households has declined slightly, from 10% to 9%, while the share going to upper-income households has increased from 29% to 46%. We know from other studies that the share going to the super-rich has increased the most.
  • Prior to the 1980s, income gains were much more similar across income groups than they have been since. The gains were much larger in the 1950s and 1960s than in the 1970s, but in both cases they were experienced by households at many different levels. In the 1980s and 90s, however, gains for middle- and lower-income households were much smaller than those for upper-income households. The 2000-2010 decade was the first since World War II in which median income actually fell. So in general, income gains have been slowing down for a long time, except in the upper-income tier.

The report also studied the opinions of a large sample of adults, focusing especially on the 49% who classified themselves as “middle class.” (That percentage is similar to the 51% classified as middle-income by the researchers, but note that the 49% doesn’t include those who classified themselves as “lower middle class” or “upper middle class.”) 85% of these self-described middle-class people said that it’s harder to maintain a middle-class standard of living than it was ten years ago. Although 60% of them said that they were better off than their parents were at the same age, only 43% expected their children to be better off than they are. They were most likely to place the blame for the nation’s economic problems on Congress, banks and financial institutions, large corporations, and the Bush administration, in that order.

The Pew report is more descriptive than explanatory, but we may well ask why the middle class is becoming smaller and more financially insecure. The problems obviously go deeper than the recent financial crisis. I suspect that they also go deeper than some of the most common explanations, such as the downward pressure of globalization on wages because corporations can easily seek out the cheapest labor on the planet. We shouldn’t be so preoccupied with impersonal global forces that we forget the many human decisions that shape investment and employment. Among the issues I hope to explore in future posts are how much we’re willing to invest in education and training, and how well we utilize the talents of our population to create things of greater economic value.