Power and Progress (part 3)

August 28, 2025


Acemoglu and Johnson argue that the link between technological progress and general prosperity is not automatic. It depends on other variables, especially how well new technologies sustain the demand for labor and how much workers share the benefits of rising productivity. Having supported their argument with historical examples, they now apply it to the more recent economy, especially the period since 1980.

The “graveyard of shared prosperity”

At the height of the postwar prosperity, believers in the “productivity bandwagon” expected further technological breakthroughs to raise productivity and wages, extending and even surpassing the postwar boom. Digital technologies, like the mainframe computers already in use in the 1960s, looked very promising. As the digital revolution took off, the rate of innovation soared.

“Digital technologies, even more so than electricity…are general purpose, enabling a wide range of applications.” They have the potential not only to replace human labor with smart machines, but also to complement and enhance human labor. During my university career, I used computers to enhance my teaching, research and administrative work in numerous ways, but they never replaced me in any of those roles. Many manufacturing workers weren’t so lucky, as new technologies were used more to replace labor than to augment it. As the authors put it, “Digital technologies became the graveyard of shared prosperity.” I would emphasize the word “shared” in that claim, since no one disputes that digital technologies have created great riches for some and modest gains for others.

The authors attribute much of the decline of shared prosperity to a more conservative vision of progress that developed in the 1960s and 70s and became dominant after the “Reagan revolution” of 1980. In this vision, the path to prosperity started at the top, with wealthy investors, high-profit corporations, and well-rewarded shareholders. If left alone by government, they would create more wealth and income for all. But to maximize investment, the rich needed low taxes; and to maximize profits, corporations needed low taxes, minimal regulation, and low labor costs. “Many American managers came to see labor as a cost, not as a resource…This meant reducing the amount of labor used in production through automation.”

Americans may place most of the blame for lost manufacturing jobs on foreign competitors like China, but automation is responsible for more job losses and downward mobility. While foreign workers and immigrants did take many of the low-wage manufacturing jobs, automation destroyed more of the jobs that had been paying good wages.

The workers who remained in manufacturing were more productive, but the demand for additional workers fell. In addition, total factor productivity grew at a much slower rate after 1980 than in the previous four decades. Median wages grew even more slowly, less than 0.45% per year.

Inequality increased in a number of ways:

[T]he share of the richest 1 percent of US households in national income rose from around 10 percent in 1980 to 19 percent in 2019…Throughout most of the twentieth century, about 67-70 percent of national income went to workers, and the rest went to capital (in the form of payments for machinery and profits). From the 1980s onward, things started getting much better for capital and much worse for workers. By 2019, labor’s share of national income had dropped to under 60 percent…

What income did go to labor was divided more unevenly across educational levels, with college-educated workers gaining some ground, while less educated workers saw actual declines in real earnings. Rather than train less educated workers, employers more often replaced them with fewer but more educated workers. Along with the destruction of manufacturing jobs came the decline of unions and the reduced power of workers to fight for good wages and job training.

The value of the five biggest corporations—Google, Facebook, Apple, Amazon and Microsoft—grew to about 20 percent of GDP, twice as much as the value of the five biggest corporations at the height of the Gilded Age in 1900.

Artificial intelligence

Acemoglu and Johnson see artificial intelligence making matters worse, since so many employers are using it to replace human labor rather than augment it. Rather than ask how machines can be useful to workers, proponents of new technologies ask how machines can equal or surpass human workers. Taken to its extreme, this ambition becomes the pursuit of a general machine intelligence that can make any decision as well as a human can. From a business standpoint, it is the ultimate way of cutting labor costs, replacing educated as well as less-educated labor.

So far, the results have been a lot of what the authors call “so-so automation,” with only modest gains in productivity. The reason, they think: “Humans are good at most of what they do, and AI-based automation is not likely to have impressive results when it simply replaces humans in tasks for which we accumulated relevant skills over centuries.”

What makes us think that the way to prosperity is to devalue the human capacities of the workers who are trying to prosper? That may generate short-term profits for the owners of the machines, but not shared and sustained prosperity. The authors warn that “infatuation with machine intelligence encourages mass-scale data collection, the disempowerment of workers and citizens, and a scramble to automate work, even when this is no more than so-so automation—meaning that it has only small productivity benefits.”

The threat to democracy

The part of the book I found most disturbing was the chapter “Democracy Breaks.” It describes what some have called a new “digital dictatorship,” most evident in China. With the help of some of the world’s largest AI companies, the Chinese government has turned the data-crunching capacities of new technologies into tools of mass surveillance and control. The aim is to monitor, rate, and sanction the behavior of any citizen. Forty years after Orwell’s imaginary 1984, Big Brother is watching more efficiently than ever. Other authoritarian governments—Russia, Iran, Saudi Arabia, Hungary, and even India—are developing similar capabilities.

In the United States, “The NSA cooperated with Google, Microsoft, Facebook, Yahoo!, various other internet service providers, and telephone companies such as AT&T and Verizon to scoop up huge amounts of data about American citizens’ internet searches, online communications and phone calls.”

Digital media have also played a role in polarizing Americans and debasing civil discourse. Media companies whose business model was based on selling ads, such as Facebook, wanted to keep their users as engaged as possible. “Any messages that garnered strong emotions, including of course hate speech and provocative misinformation, were favored by the platform’s algorithms because they triggered intense engagement from thousands, sometimes hundreds of thousands, of users.”
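A toy sketch (entirely my own, with invented post names and numbers) shows how little machinery this takes: if a feed simply sorts posts by predicted engagement, the most provocative item wins the top slot regardless of its truth.

```python
# Hypothetical posts with made-up engagement predictions.
posts = [
    {"text": "city council meeting recap", "predicted_engagement": 0.02},
    {"text": "cute pet photo",             "predicted_engagement": 0.05},
    {"text": "enraging false claim",       "predicted_engagement": 0.40},
]

# Ranking by predicted engagement puts the provocation first, automatically.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
print([p["text"] for p in feed])  # the false claim tops the feed
```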

The hope that digital media would—like the printing press of an earlier era—empower citizens and strengthen democracy has not been fulfilled. The underlying problem, according to Acemoglu and Johnson, is that technology companies prefer a more “technocratic approach, which maintains that many important decisions are too complex for regular people.” In the economy, that encourages the devaluation and replacement of human laborers and a flow of economic rewards to the rich. In government, it enables the surveillance and control of citizens and a flow of political power to authoritarian leaders.

Redirecting technology

In their final chapter, Acemoglu and Johnson describe a three-pronged formula for redirecting technology: “altering the narrative and changing norms…cultivating countervailing powers…[and] policy solutions.”

The new narrative would reject “trickle-down economics” and shift the emphasis back to shared prosperity. It would encourage decision-makers to address the wellbeing of ordinary people, instead of assuming that what’s good for corporate profits or large fortunes is good for everybody. Ideally, it would influence how business managers think and what they learn in business school.

Countervailing power against self-serving technocrats and corporations can come from many directions—government, civic organizations, and online communities. Now that blue-collar manufacturing workers are a smaller part of the labor force, organized labor should grow to embrace many occupations. Workers should organize on a broader level than the plant or the firm and play a major role in national politics.

Here are some of the policy changes they recommend:

  • Subsidize socially beneficial technologies, especially those that augment human labor rather than replace it
  • Support research on such technologies, especially in education and health care
  • Break up technology companies that have become too monopolistic
  • Reform tax policies that favor investments in equipment over hiring of workers
  • Increase tax incentives for worker training
  • Repeal the law that shields internet platforms from accountability for the content users post on them
  • Tax advertising-based platforms more heavily, favoring those with alternative revenue streams, such as subscriptions or nonprofit contributions
  • Raise the minimum wage, but do not provide a Universal Basic Income

The authors regard a Universal Basic Income as “defeatist,” since it “fully buys into the vision of the business and tech elite that they are the enlightened, talented people who should generously finance the rest.” What they support instead is a new vision committed to seeing the value and productive potential in all of us and investing accordingly.


Power and Progress

August 21, 2025


Daron Acemoglu and Simon Johnson. Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. New York: Hachette Book Group, 2024.

Daron Acemoglu and Simon Johnson won the 2024 Nobel Prize in Economics for their research on how political and economic institutions shape national prosperity. In this book, they tackle the relationship between technological innovation and prosperity.

No one doubts that new technologies have the potential to boost productivity and raise living standards. How and when they actually accomplish this is a more difficult question.

In the introduction and first three chapters, the authors lay out their general theory of technology and progress, considering the role of variations in labor demand, wages, and social power. The next four chapters discuss how these variations have played out in various historical situations, ranging from the failure of innovation to benefit farmworkers and early manufacturing workers before the late nineteenth century, to the more widespread prosperity of the mid-twentieth century. Armed with insights from economic theory and history, the authors then address the more recent revolution in digital technology. Readers who follow the argument all the way through should come away with a better understanding of our current technological age and its discontents. I know I did.

The productivity bandwagon

The conventional wisdom in economics, as well as a lot of public discussion, is that technological advances raise productivity, and higher productivity raises living standards. The authors cite Gregory Mankiw’s popular undergraduate textbook, which says that “almost all variation in living standards is attributable to differences in countries’ productivity.”

But the productivity gains from new technologies can only raise living standards if they improve real wages. What about labor-saving technologies that lower the demand for labor, causing unemployment and lower wages? Mankiw acknowledges the problem, but minimizes it by claiming that “most technological progress is instead labor-augmenting.” Most workers find some way to work with new technologies, and their increased productivity enables them to command a higher wage.

Acemoglu and Johnson call this optimistic view the “productivity bandwagon.” They argue to the contrary:

There is nothing in the past thousand years of history to suggest the presence of an automatic mechanism that ensures gains for ordinary working people when technology improves… New techniques can generate shared prosperity or relentless inequality, depending on how they are used and where new innovative effort is directed.

Rather than accept a broad generalization about technology and prosperity, the authors want to study historical variations and identify the key variables involved. The stories that people tell themselves about technology—including the ones economists tell—can both reflect and affect the historical variations. Writing during the Great Depression, John Maynard Keynes coined the term “technological unemployment.” He could imagine “the means of economising the use of labour outrunning the pace at which we can find new uses for labour.” More recently, robotics and artificial intelligence are raising that possibility again, but the productivity bandwagon remains a popular narrative. Economic elites who profit from the application of new technologies are especially fond of it.

Variations in labor demand

Acemoglu and Johnson maintain that technological advances may or may not increase the demand for labor, depending on whether they are labor-augmenting or just labor-saving.

A classic example of technology that augmented labor, increased labor demand, and raised wages is the electrified assembly line introduced by Henry Ford. It not only raised the productivity of the existing autoworkers; it also enabled auto manufacturers to employ additional workers productively. (Economists call that variable the “marginal productivity of labor.”) By producing more cars at lower cost, car companies created a mass market for what had been a luxury item. In addition, they created additional jobs in related industries, such as auto repair, highway construction and tourism.
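To make the distinction concrete, here is a minimal formal sketch in conventional notation (my own, not the authors’; the book stays non-mathematical): output depends on capital and on labor as augmented by technology.

```latex
% Output Y from capital K and labor L, with technology level A augmenting labor:
Y = F(K,\; A L)

% Marginal productivity of labor: the extra output from one more worker.
% (F_2 is the derivative of F with respect to its second argument.)
MPL = \frac{\partial Y}{\partial L} = A \, F_2(K,\; A L)

% In a competitive labor market, the real wage w tracks MPL. A rise in A
% (labor-augmenting technology) raises MPL and pulls wages up; automation
% that merely substitutes K for L can leave MPL -- and the wage -- flat or falling.
w \approx MPL
```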

The effects of today’s robotics on automobile manufacturing may be very different. Carmakers can make just as many cars with less human labor, so labor productivity goes up. But demand for additional labor may go down, if factories are already turning out as many cars as their market can absorb. The marginal productivity of labor then falls, and the connection between technology and prosperity is weakened.

That is not to say that automation is always bad news for workers. That depends on the balance between labor-saving and labor-augmenting effects:

For most of the twentieth century, new technologies sometimes replaced people with machines in existing tasks but also boosted worker effectiveness in some other tasks while also creating many new tasks. This combination led to higher wages, increased employment, and shared prosperity.

The problem then is not just automation but excessive automation, especially if it is not really very productive in the fullest sense of the word. In economics “total factor productivity” refers to the relationship between economic output and all inputs, including capital as well as labor. Replacing workers with machines has costs as well as benefits, since machines cost money too, and displaced humans might have contributed something that machines cannot. The authors use the term “so-so automation” to refer to replacement of workers without much productivity gain. In that case, the classic gains of the earlier automobile boom—lower costs, expanded markets, rising labor demand, and widespread prosperity—do not occur.
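For readers who like the accounting spelled out, the standard growth-accounting formulation (conventional notation, not the book’s) makes the point precise:

```latex
% Cobb-Douglas growth accounting: output Y from capital K and labor L,
% with total factor productivity (TFP) measured as the residual.
Y = \mathit{TFP} \cdot K^{\alpha} L^{1-\alpha}, \qquad 0 < \alpha < 1

\mathit{TFP} = \frac{Y}{K^{\alpha} L^{1-\alpha}}

% "So-so automation": buy machines (K up), shed workers (L down), output Y
% roughly unchanged. Labor productivity Y/L rises, but the residual TFP
% barely moves -- replacement without a genuine productivity gain.
```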

Variations in wages

Even if new technologies are labor-enhancing, higher wages do not necessarily follow. They have not followed in societies where workers have been coerced to work without pay, or forbidden to leave their employer in search of better pay. The cotton gin enhanced the productivity of cotton workers in the Old South and expanded the areas where cotton could be profitably cultivated. But “the greater demand for labor, under conditions of coercion, translated not into higher wages but into harsher treatment, so that the last ounce of effort could be squeezed out of the slaves.”

In modern, free-market labor systems, wages are freer to rise along with labor demand. However, “wages are often negotiated rather than being simply determined by impersonal market forces.” A dominant employer may set wages for a multitude of workers, while the workers are too disorganized to bargain from strength. It was only in 1871 in Britain and 1935 in the United States that workers gained the legal right to organize and bargain collectively. Opponents of organized labor have continued to find ways of discouraging labor unions to this day. The share of national income going to labor rather than capital was highest when unions were strongest, in the 1950s.

Variations in power

Acemoglu and Johnson argue that the effects of technology depend on “economic, social, and political choices,” and that “choice in this context is fundamentally about power.”

What societies do with new technologies depends on whose vision of the future prevails. The most powerful segments of society have more say than others, although they can be contested by countervailing forces, especially in democratic societies where masses of workers vote. Although plenty of evidence points to the self-serving behavior of elites, they must at least appear to be promoting the common good for their views to be persuasive.

The technological choices a society makes can serve either to reinforce the power of elites or to empower larger numbers of workers. This is especially true of general-purpose technologies with many applications. In the twentieth century, the benefits of electricity helped power a more egalitarian, broadly middle-class society. We cannot yet say the same about the digital technologies of the present century. The authors apply their theory, buttressed by historical evidence, to explain why.

Recall the subtitle of the book: “Our Thousand-Year Struggle Over Technology and Prosperity.” Making technology work for all of us has always been a struggle, and one that is related to the struggle for true democracy. Looking at it that way is more realistic and enlightening than seeing only a “productivity bandwagon” rolling smoothly toward mass prosperity.



Empire of AI (part 2)

July 12, 2025


In Empire of AI, Karen Hao not only tells the story of OpenAI and ChatGPT; she also addresses some of the most important questions about the nature of artificial intelligence and what direction it might take. She resists the idea that the particular path AI development has taken so far is inevitable or desirable. We are only at the beginning of this journey, and we have many technical and social choices to make. Hao believes that the path we are on is too narrow to take us where we need to go.

Technological and social revolutions

Drawing on the recent book by MIT economists Daron Acemoglu and Simon Johnson, Power and Progress, Hao cites two features of technological revolutions, “their promise to deliver progress and their tendency instead to reverse it for people out of power.” History has shown that the ultimate impact of technological change on different classes of people depends on other social changes, especially the organized resistance of disadvantaged groups to power imbalances.

This argument resembles the one made by Mordecai Kurz in The Market Power of Technology: Understanding the Second Gilded Age, which I reviewed last year. Kurz compares our present situation to the first Gilded Age, when technological change strengthened the market power of corporations and widened the gap between rich and poor. But that was followed by a period of progressive reform that put limits on market power and profits and strengthened the position of workers.

From this standpoint, the idea that the owners and masters of artificial intelligence are going to altruistically pursue the public good on their own, without any pushback from society or its government, sounds rather naïve. Economists and sociologists are not surprised when high-tech companies take a lower, more self-serving road, just as Hao reports for OpenAI.

Paths to AI

The dominant approach to advancing artificial intelligence is not the only approach, and not necessarily the one that will turn out to be most effective. Before today’s generative AI systems were developed, two competing theories dominated the field. The “symbolists” believed in taking existing human knowledge, encoding it in symbols, and inputting it to machines. The aim was to create “expert systems” that could emulate the best of human decision making. The “connectionists,” on the other hand, believed that machines could learn on their own if they had the computing capacity to process and connect vast amounts of data. The result would be “neural networks…data-processing software loosely designed to mirror the brain’s interlocking connections.”
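To see the contrast in miniature, here is a toy sketch, entirely my own (made-up task and numbers), of the two styles applied to the same problem:

```python
# Task: flag a loan application as "risky" (1) or not (0).

# Symbolist style: a human expert's knowledge, hand-encoded as a rule.
def expert_system(income: float, debt: float) -> int:
    return 1 if debt > 0.5 * income else 0  # explicit, inspectable rule

# Connectionist style: one artificial "neuron" whose weights are learned
# from labeled examples instead of being written down by an expert.
def train_neuron(data, labels, lr=0.1, epochs=100):
    w_income = w_debt = bias = 0.0
    for _ in range(epochs):
        for (income, debt), label in zip(data, labels):
            pred = 1 if w_income * income + w_debt * debt + bias > 0 else 0
            err = label - pred                  # perceptron update rule
            w_income += lr * err * income
            w_debt += lr * err * debt
            bias += lr * err
    return w_income, w_debt, bias

data = [(1.0, 0.2), (1.0, 0.8), (2.0, 0.5), (1.5, 1.2)]   # (income, debt)
labels = [expert_system(i, d) for i, d in data]            # 0, 1, 0, 1
print(train_neuron(data, labels))  # learned weights approximate the rule
```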

The connectionist approach came to prevail, but Hao suggests that this was not just because it was scientifically superior. It turned out to be the faster route to commercial success, although faster doesn’t necessarily mean better. Neural networks may not reason very well, but, “You do not need perfectly accurate systems with reasoning capabilities to turn a handsome profit. Strong statistical pattern-matching and prediction go a long way in solving financially lucrative problems.” What might get more intelligent results is hard to know if companies are making big bucks with the current approach and investing little in the alternatives.

Along with machine learning and neural networks came the “scaling ethos,” the idea that bigger is the path to better. At OpenAI, leading exploratory researcher Ilya Sutskever championed this approach until he left the company in 2024. He believed that the nodes of neural networks were like neurons in the human brain, and so improvements would come by piling on more nodes in multiple layers. That required more and more computing power, larger data centers, and big financial investments.

Cognitive scientist Gary Marcus, who wrote Rebooting AI, argues that today’s machine learning systems remain “forever stuck in the realm of correlations” (Hao’s words), incapable of true causal reasoning. To advance toward true intelligence, he recommends incorporating more of the symbolic, expert-systems approach. Hao believes that AI can help serve many human needs with systems that are smaller in size but superior in quality.

Dreams of AGI

The founders of OpenAI believed that artificial general intelligence was just around the corner. They would achieve it by scaling up neural networks until they resembled the size and complexity of human brains. Hao maintains—and I agree—that we don’t really know “whether silicon chips encoding everything in their binary ones and zeros could ever simulate brains and the other biological processes that give rise to what we consider intelligence.” Is the node of a neural network really the functional equivalent of a neuron, which is a living cell? Can a dead machine that feels nothing, experiences nothing, and is conscious of nothing learn to think like a human? Is this just a problem of scale?

Scaling allows chatbots to generate text with enough complexity to lead users to imagine a humanlike mind within the black box. But that may be like a magic show, nothing but a clever illusion.

An alternative vision

Maybe the dubious quest to simulate general intelligence is leading the AI technocrats to make their models too big, too all-encompassing, and too omniscient, when smaller, less grandiose models would serve us better.

An example of the smaller models Hao has in mind is a speech-recognition model developed to help preserve the dying te reo language of the indigenous Maori people of New Zealand. Using audio recordings from about 2,500 of the remaining speakers, the researchers trained the computer to recognize and transcribe the sounds. Not only was this for the benefit of Maori who wanted to learn the language, but the researchers committed to working collaboratively with the community so that the data would only be used with their approval.

Hao insists that she is not against artificial intelligence as such. But she says,

What I reject is the dangerous notion that broad benefit from AI can only be derived from—indeed, will ever emerge from—a vision for the technology that requires the complete capitulation of our privacy, our agency, and our worth, including the value of our labor and art, toward an ultimately imperial centralization project.

For Hao, the central issue is power and its distribution. Artificial intelligence will only serve humanity if power can be decentralized along three axes: knowledge, resources, and influence.

I learned a lot from this book, and I recommend it as a good place to start for readers who are just beginning to think about these issues.


Empire of AI

July 12, 2025


Karen Hao. Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. New York: Penguin Press, 2025.

In 2022, OpenAI released a free version of ChatGPT, an artificial intelligence program that could interact with users in text conversation and answer questions based on its massive database. It became the “fastest growing computer app in history.”

GPT stands for “Generative Pre-Trained Transformer.” A generative AI system is one that can synthesize new text from existing text, or new images from existing images. A transformer is a kind of neural network that can identify long-range patterns, notably the connections between words and their textual contexts of sentences and paragraphs. Give ChatGPT a few words, and it quickly discerns what you want to know, having been trained on huge amounts of existing text.
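For the curious, here is a stripped-down sketch of the attention computation at the heart of a transformer. The code and numbers are mine and purely illustrative; real models add learned projection matrices, multiple attention heads, and dozens of layers.

```python
import numpy as np

# Stripped-down self-attention: each word's vector becomes a weighted mix
# of every word's vector, with weights set by pairwise similarity. This is
# how a transformer links words to their context, however far apart.
def attention(X: np.ndarray) -> np.ndarray:
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                   # similarity of word pairs
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over the context
    return weights @ X                              # context-aware vectors

# Four "words," each a 3-number embedding (random stand-ins for real ones).
X = np.random.default_rng(0).normal(size=(4, 3))
print(attention(X).shape)  # (4, 3): one updated vector per word
```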

Since 2022, OpenAI has continued to develop ChatGPT, adding a paid subscription version, special versions for business, versions for different operating systems, and a “multimodal” version that processes voice and graphic inputs as well as text.

These rapid technological developments raise many questions about the potential benefits and costs of artificial intelligence, especially the dominant approach to AI taken by the OpenAI organization. Technology journalist Karen Hao has spent the last few years taking a close look at the company, and she is deeply troubled by what she sees. As she tells the story, what started out as a humanitarian dream has evolved into something more dangerous.

OpenAI—the promise

OpenAI was founded about ten years ago by Elon Musk and Sam Altman, along with other visionaries and wealthy backers like Peter Thiel. The group agreed that AI would be a transformative technology, forever changing human life. They were confident that machines could soon achieve artificial general intelligence (AGI). That meant that machines would not only carry out specific cognitive tasks assigned by humans, but think as well as humans, if not better. They had the potential to solve problems that continue to confound humans, like controlling global climate or delivering health care more cost-effectively. The founders had the “purest intentions of ushering in a form of AGI that would unlock global utopia, and not its opposite.”

What would be the opposite? A world in which bad actors use the power of AI to benefit themselves at the expense of others. Or worse, a dystopia in which machines take over and decide that humans and their puny minds are expendable. Many discussions of AI reveal a division between “Boomers” and “Doomers,” those who expect the best and those who fear the worst. Impressed by the arguments on both sides, “the founders asserted a radical commitment to develop so-called artificial general intelligence…not for the financial gains of shareholders but for the benefit of humanity.”

In the beginning, OpenAI was a nonprofit, devoted to research rather than commercial products. The researchers called it “alignment research,” intended to align the machines’ own learning processes with human values. How that could be done remained to be seen, of course.

As a research organization rather than a for-profit company, OpenAI would be what the name suggests—open and collaborative. In the spirit of scientific inquiry and human progress, it would freely share information about its ideas and creations with others. That is not at all how things turned out.

Competitive pressures

By mid-2018, the original vision was already in trouble.

Merely a year and a half in, OpenAI’s executives realized that the path they wanted to take in AI development would demand extraordinary amounts of money. Musk and Altman, who had until then both taken more hands-off approaches as cochairmen, each tried to install himself as CEO. Altman won out. Musk left the organization in early 2018 and took his money with him. In hindsight, the rift was the first major sign that OpenAI was not in fact an altruistic project but rather one of ego.

The following year, under Altman’s direction, OpenAI changed its organizational structure. Although still technically a nonprofit, it created a “for-profit arm, OpenAI LP, to raise capital, commercialize products, and provide returns to investors much like any other company.” It then found a billion-dollar investor, Microsoft, a profit-making company if there ever was one. Most of OpenAI’s employees resigned from the nonprofit and went to work for the LP. They earned equity as well as salaries, giving them a stake in its commercial success. In 2024, OpenAI raised over $6 billion from new investors, promising them that they “could demand their money back if the company did not convert into a for-profit in two years.”

Although Altman and other leaders did not explicitly abandon their original humanitarian goal, they subordinated it to their drive for commercial success. Even if we are the good guys, they may have reasoned, we have to be commercially successful for our vision of AI to become the dominant one. “It was this fundamental assumption—the need to be first or perish—that set in motion all of OpenAI’s actions and their far-reaching consequences.”

One of those consequences was greater secrecy. OpenAI could not really be open with outsiders they saw as commercial competitors rather than scientific collaborators. AI systems tend to be “black boxes” shrouded in mystery anyway. Without transparency, outside researchers have no way of verifying the claims a company makes about what its systems actually do and how they do it.

When she first profiled the company in 2020, Hao was already calling attention to the “misalignment between what the company publicly espouses and how it operates behind closed doors…Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration.”

Safety issues

AI systems like ChatGPT are “large language models” that identify patterns in huge volumes of data, data “scraped” from whatever is available on the internet. Some of it is misinformation, propaganda, pornography, conspiracy theories or hate speech. All of that finds its way into the system’s training, unless the training itself includes ways of identifying and rejecting whatever humans find toxic. But which humans, and how?

As AI developed, researchers began to observe that chatbots not only expressed prejudices they encountered in the data, but amplified them. Since a majority of judges are male, a system would routinely describe or portray a generic judge as male, ignoring the third of them who are women. Since nonwhites are overrepresented among food stamp recipients, a system would routinely describe recipients as nonwhite, although the majority are white. Chatbots not only overgeneralize from their data, but combine text to construct sentences that are simply not true, which researchers call “hallucinations.”
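A toy example (my own, with invented numbers) shows how this amplification can fall out of ordinary optimization: when the training data skews 65/35, the error-minimizing constant guess is the majority label every single time.

```python
from collections import Counter

# Training data where 65% of judges are labeled male (invented numbers).
training_labels = ["male"] * 65 + ["female"] * 35

# The prediction that minimizes errors on this data is the majority label,
# every time -- a 65/35 skew in the data becomes a 100/0 skew in the output.
majority = Counter(training_labels).most_common(1)[0][0]
predictions = [majority] * 100

print(Counter(training_labels))  # Counter({'male': 65, 'female': 35})
print(Counter(predictions))      # Counter({'male': 100})
```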

It gets weirder. In a long conversation with a New York Times reporter, Microsoft’s Bing chatbot, built on OpenAI’s technology, kept declaring its love for him and encouraging him to leave his wife! Chatbots are a source of bad advice as well as useful information.

OpenAI was aware of these problems and assigned people to work on them. Over time, however, Hao charges that the desire to be first out the door with promising products pushed safety concerns into the background.

Social and environmental issues

As the AI datasets got larger but contained more disturbing content, OpenAI needed more workers to train chatbots to distinguish the toxic from the benign. Someone had to review and classify a multitude of text descriptions and images, so that the system could learn to make such distinctions on its own. Sorting through the worst garbage the internet has to offer was a tedious and psychologically disturbing job. OpenAI outsourced it to the cheapest labor it could find, in troubled third-world countries like Chile, Venezuela and Kenya.

Here Hao sees a connection between the earlier colonialist exploitation of such countries and the way multinational corporations take advantage of their economic distress to extract labor and raw materials as cheaply as possible. For example, “as the AI boom arrived, Chile would become ground zero for a new scale of extractivism, as the supplier of the industry’s insatiable appetite for raw resources,” especially copper. OpenAI rarely hires its menial labor directly, but relies on intermediaries like Scale AI, which are selected for their ability to extract the most labor at the lowest cost. Companies like OpenAI “take advantage of the outsourcing model in part precisely to keep their dirtiest work out of their own sight and out of sight of customers, and to distance themselves from responsibility while incentivizing the middlemen to outbid one another for contracts by skimping on paying livable wages.” For Hao, this is a sign of a larger problem, that AI may concentrate economic rewards in the hands of a high-tech minority while devaluing the human labor of everyone else. The general effect of AI on employment remains to be seen, but Hao is hardly alone in her concerns.

As for the environment, the enormous AI data centers being constructed in places like Arizona require massive amounts of energy and water to operate and cool their computers. They are making it harder to transition away from fossil fuels to cleaner and more renewable sources of energy. Hao quotes AI researcher Sasha Luccioni: “Generative AI has a very disproportionate energy and carbon footprint with very little in terms of positive stuff for the environment.”

Bad publicity

Over the past two years, OpenAI has received a lot of bad press and outside criticism. In 2023, the company’s board tried but failed to oust Sam Altman as CEO. The reasons were a combination of concerns about the company’s direction and Altman’s management style. His alleged faults included “dishonesty, power grabbing, and self-serving tactics.” The board had to back down when many other executives and employees rallied to his defense. Hao sees this as further evidence of the victory of commercialism and competitiveness over safety concerns. Many key leaders and workers in the “Safety clan” had already left the company by then; more left afterwards.

More bad publicity came in 2024 with the revelation that the company was threatening departing workers with forfeiture of their equity unless they signed non-disparagement agreements. Hiding the company’s weaknesses behind a wall of secrecy seemed the opposite of the transparency originally promised.

Then there was the Johansson matter. OpenAI had approached Scarlett Johansson, the voice of AI in the movie Her, about using her voice in their latest chatbot. When she declined, the company used another actor’s voice that seemed remarkably similar. This occurred at a time when many artists were complaining about having their work scraped from the internet to use in AI training without their consent or compensation.

Technological imperialism?

Hao summarizes her observations of the company:

OpenAI became everything that it said it would not be. It turned into a nonprofit in name only, aggressively commercializing products like ChatGPT and seeking unheard-of valuations. It grew even more secretive, not only cutting off access to its own research but shifting norms across the industry to bar a significant share of AI development from public scrutiny. It triggered the very race to the bottom that it had warned about, massively accelerating the technology’s commercialization and deployment without shoring up its harmful flaws or the dangerous ways that it could amplify and exploit the fault lines in our society.

Hao finds that empire is the most fitting metaphor for what companies like OpenAI are building. They are not as violent as the empires of the past. But they too seize assets for their own gain—all the data people post online, as well as the land, energy and water to support their supercomputers. “So too do the new empires exploit the labor of people globally to clean, tabulate, and prepare that data for spinning into lucrative AI technologies.”

Hao describes OpenAI’s formula for empire as a recipe with three ingredients. First, bring together talent by promulgating a grand vision, in this case to develop artificial intelligence for the benefit of humanity. Then use the mission to justify the centralization of resources and to ward off any opposition or attempts at regulation. Finally, keep the mission vague enough to create the appearance of continuity, no matter what actions the company finds expedient to entrench its power.

The result is that the benefits of new technology flow upward to the few, instead of trickling down to improve life for the many. Artificial intelligence is too new to say how far this trend will go. But Hao’s book is a warning we ignore at our peril.
