Empire of AI

Karen Hao. Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. New York: Penguin Press, 2025.

In 2022, OpenAI released a free version of ChatGPT, an artificial intelligence program that could converse with users in text and answer questions based on patterns learned from the massive body of text it was trained on. It became the “fastest growing computer app in history.”

GPT stands for “Generative Pre-trained Transformer.” A generative AI system is one that can synthesize new text from existing text, or new images from existing images. A transformer is a kind of neural network that can identify long-range patterns, notably the connections between words and the surrounding context of sentences and paragraphs. Give ChatGPT a few words, and it quickly discerns what you want to know, having been trained on huge amounts of existing text.
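
To make the mechanism concrete, here is a minimal sketch of the self-attention step at the heart of a transformer, written in toy Python with random weights. It illustrates the idea only; it is not OpenAI’s implementation, and real models stack many such layers with billions of learned parameters.

```python
import numpy as np

def softmax(x):
    """Normalize scores into weights that sum to 1."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    # Each row of `tokens` is one word's vector representation.
    Q = tokens @ Wq                          # what each word is looking for
    K = tokens @ Wk                          # what each word offers
    V = tokens @ Wv                          # the information each word carries
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # relevance of every word to every other
    return softmax(scores) @ V               # each word becomes a context-aware blend

# Toy data: 5 words, each an 8-dimensional vector; the weights are random
# stand-ins for what training on huge amounts of text would actually learn.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(tokens, Wq, Wk, Wv).shape)  # (5, 8)
```

Because every word attends to every other word in the sequence, the network can link a pronoun to a name several sentences back, which is what makes those long-range patterns tractable.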

Since 2022, OpenAI has continued to develop ChatGPT, adding a paid subscription tier, special versions for business, versions for different operating systems, and a “multimodal” version that processes voice and image inputs as well as text.

These rapid technological developments raise many questions about the potential benefits and costs of artificial intelligence, especially the dominant approach to AI taken by the OpenAI organization. Technology journalist Karen Hao has spent the last few years taking a close look at the company, and she is deeply troubled by what she sees. As she tells the story, what started out as a humanitarian dream has evolved into something more dangerous.

OpenAI—the promise

OpenAI was founded in 2015 by Elon Musk and Sam Altman, along with other visionaries and wealthy backers like Peter Thiel. The group agreed that AI would be a transformative technology, forever changing human life. They were confident that machines could soon achieve artificial general intelligence (AGI), meaning that machines would not only carry out specific cognitive tasks assigned by humans but think as well as humans, if not better. Such machines had the potential to solve problems that continue to confound us, like stabilizing the global climate or delivering health care more cost-effectively. The founders had the “purest intentions of ushering in a form of AGI that would unlock global utopia, and not its opposite.”

What would be the opposite? A world in which bad actors use the power of AI to benefit themselves at the expense of others. Or worse, a dystopia in which machines take over and decide that humans and their puny minds are expendable. Many discussions of AI reveal a division between “Boomers” and “Doomers,” those who expect the best and those who fear the worst. Impressed by the arguments on both sides, “the founders asserted a radical commitment to develop so-called artificial general intelligence…not for the financial gains of shareholders but for the benefit of humanity.”

In the beginning, OpenAI was a nonprofit, devoted to research rather than commercial products. The researchers called their work “alignment research,” intended to align a machine’s own learning processes with human values. How that could be done was yet to be seen, of course.

As a research organization rather than a for-profit company, OpenAI would be what the name suggests—open and collaborative. In the spirit of scientific inquiry and human progress, it would freely share information about its ideas and creations with others. That is not at all how things turned out.

Competitive pressures

By mid-2018, the original vision was already in trouble.

Merely a year and a half in, OpenAI’s executives realized that the path they wanted to take in AI development would demand extraordinary amounts of money. Musk and Altman, who had until then both taken more hands-off approaches as cochairmen, each tried to install himself as CEO. Altman won out. Musk left the organization in early 2018 and took his money with him. In hindsight, the rift was the first major sign that OpenAI was not in fact an altruistic project but rather one of ego.

The following year, under Altman’s direction, OpenAI changed its organizational structure. Although still technically a nonprofit, it created a “for-profit arm, OpenAI LP, to raise capital, commercialize products, and provide returns to investors much like any other company.” It then found a billion-dollar investor, Microsoft, a profit-making company if there ever was one. Most of OpenAI’s employees resigned from the nonprofit and went to work for the LP. They earned equity as well as salaries, giving them a stake in its commercial success. In 2024, OpenAI raised over $6 billion from new investors, promising them that they “could demand their money back if the company did not convert into a for-profit in two years.”

Although Altman and other leaders did not explicitly abandon their original humanitarian goal, they subordinated it to their drive for commercial success. Even if we are the good guys, they may have reasoned, we have to be commercially successful for our vision of AI to become the dominant one. “It was this fundamental assumption—the need to be first or perish—that set in motion all of OpenAI’s actions and their far-reaching consequences.”

One of those consequences was greater secrecy. OpenAI could not really be open with outsiders it saw as commercial competitors rather than scientific collaborators. AI systems tend to be “black boxes” shrouded in mystery anyway. Without transparency, outside researchers have no way of verifying a company’s claims about what its systems actually do and how they do it.

When she first profiled the company in 2020, Hao was already calling attention to the “misalignment between what the company publicly espouses and how it operates behind closed doors…Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration.”

Safety issues

AI systems like ChatGPT are “large language models” that identify patterns in huge volumes of data, data “scraped” from whatever is available on the internet. Some of it is misinformation, propaganda, pornography, conspiracy theories or hate speech. All of that finds its way into the system’s training, unless the training itself includes ways of identifying and rejecting whatever humans find toxic. But which humans, and how?

As AI developed, researchers began to observe that chatbots not only expressed prejudices they encountered in the data, but amplified them. Since a majority of judges are male, a system would routinely describe or portray a generic judge as male, ignoring the roughly one-third who are women. Since nonwhites are overrepresented among food stamp recipients, a system would routinely describe recipients as nonwhite, although the majority are white. Chatbots not only overgeneralize from their data, but also combine text to construct sentences that are simply not true, which researchers call “hallucinations.”
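
A toy calculation (mine, not the book’s) shows how that amplification works: a model that has learned a two-to-one skew in its data, and always emits the most probable word, turns a statistical majority into a certainty.

```python
import numpy as np

# Simulate training data in which two-thirds of judges are described as male.
rng = np.random.default_rng(1)
examples = rng.choice(["male", "female"], size=1000, p=[2 / 3, 1 / 3])

# The model "learns" the empirical frequencies in its training data...
freqs = {w: float(np.mean(examples == w)) for w in ("male", "female")}
print(freqs)  # roughly {'male': 0.67, 'female': 0.33}

# ...but greedy decoding always emits the most probable word, so a 67%
# majority in the data becomes a 100% majority in the output.
print(max(freqs, key=freqs.get))  # 'male', every single time
```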

It gets weirder. In a long conversation with a New York Times reporter, Microsoft’s Bing chatbot, built on OpenAI’s technology, kept declaring its love for him and encouraging him to leave his wife! Chatbots are a source of bad advice as well as useful information.

OpenAI was aware of these problems and assigned people to work on them. Hao charges, however, that over time the desire to be first out the door with promising products pushed safety concerns into the background.

Social and environmental issues

As the AI datasets got larger but contained more disturbing content, OpenAI needed more workers to train chatbots to distinguish the toxic from the benign. Someone had to review and classify a multitude of text descriptions and images, so that the system could learn to make such distinctions on its own. Sorting through the worst garbage the internet has to offer was a tedious and psychologically disturbing job. OpenAI outsourced it to the cheapest labor it could find, in troubled third-world countries like Chile, Venezuela and Kenya.
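
In code, the basic idea looks something like the sketch below, a hypothetical toy assuming scikit-learn and invented examples rather than OpenAI’s actual pipeline: human judgments become labels, and a classifier learns to extend those judgments to text it has never seen.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical stand-ins for the material human reviewers labeled; real
# datasets run to hundreds of thousands of far more disturbing examples.
texts = [
    "a friendly greeting between neighbors",
    "a recipe for vegetable soup",
    "a graphic threat of violence",
    "a message full of racial slurs",
]
labels = ["benign", "benign", "toxic", "toxic"]

# The human judgments become training data; the fitted classifier then
# flags similar content automatically.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["a threat of violence against a coworker"]))
# likely ['toxic'], given the overlapping vocabulary
```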

Here Hao sees a connection between the earlier colonialist exploitation of such countries and the way multinational corporations take advantage of their economic distress to extract labor and raw materials as cheaply as possible. For example, “as the AI boom arrived, Chile would become ground zero for a new scale of extractivism, as the supplier of the industry’s insatiable appetite for raw resources,” especially copper. OpenAI rarely hires its menial labor directly, but relies on intermediaries like Scale AI, which are selected for their ability to extract the most labor at the lowest cost. Companies like OpenAI “take advantage of the outsourcing model in part precisely to keep their dirtiest work out of their own sight and out of sight of customers, and to distance themselves from responsibility while incentivizing the middlemen to outbid one another for contracts by skimping on paying livable wages.”

For Hao, this is a sign of a larger problem: that AI may concentrate economic rewards in the hands of a high-tech minority while devaluing the human labor of everyone else. The general effect of AI on employment remains to be seen, but Hao is hardly alone in her concerns.

As for the environment, the enormous AI datacenters being constructed in places like Arizona require massive amounts of energy and water to operate and cool their computers. They are making it harder to transition away from fossil fuels to cleaner and more renewable sources of energy. Hao quotes climate scientist Sasha Luccioni, “Generative AI has a very disproportionate energy and carbon footprint with very little in terms of positive stuff for the environment.”

Bad publicity

Over the past two years, OpenAI has received a lot of bad press and outside criticism. In 2023, the company’s board tried but failed to oust Sam Altman as CEO. The reasons were a combination of concerns about the company’s direction and Altman’s management style. His alleged faults included “dishonesty, power grabbing, and self-serving tactics.” The board had to back down when many other executives and employees rallied to his defense. Hao sees this as further evidence of the victory of commercialism and competitiveness over safety concerns. Many key leaders and workers in the “Safety clan” had already left the company by then; more left afterwards.

More bad publicity came in 2024 with the revelation that the company was threatening departing workers with forfeiture of their equity unless they signed non-disparagement agreements. Hiding the company’s weaknesses behind a wall of secrecy seemed the opposite of the transparency originally promised.

Then there was the Johansson matter. OpenAI had approached Scarlett Johansson, the voice of the AI assistant in the movie Her, about using her voice in its latest chatbot. When she declined, the company used another actor’s voice that sounded remarkably similar. This occurred at a time when many artists were complaining about having their work scraped from the internet for AI training without their consent or compensation.

Technological imperialism?

Hao summarizes her observations of the company:

OpenAI became everything that it said it would not be. It turned into a nonprofit in name only, aggressively commercializing products like ChatGPT and seeking unheard-of valuations. It grew even more secretive, not only cutting off access to its own research but shifting norms across the industry to bar a significant share of AI development from public scrutiny. It triggered the very race to the bottom that it had warned about, massively accelerating the technology’s commercialization and deployment without shoring up its harmful flaws or the dangerous ways that it could amplify and exploit the fault lines in our society.

Hao finds that empire is the most fitting metaphor for what companies like OpenAI are building. They are not as violent as the empires of the past. But they too seize assets for their own gain—all the data people post online, as well as the land, energy and water to support their supercomputers. “So too do the new empires exploit the labor of people globally to clean, tabulate, and prepare that data for spinning into lucrative AI technologies.”

Hao describes OpenAI’s formula for empire as a recipe with three ingredients. First, bring together talent by promulgating a grand vision, in this case to develop artificial intelligence for the benefit of humanity. Then use the mission to justify the centralization of resources and to ward off any opposition or attempts at regulation. Finally, keep the mission vague enough to create the appearance of continuity, no matter what actions the company finds expedient to entrench its power.

The result is that the benefits of new technology flow upward to the few, instead of trickling down to improve life for the many. Artificial intelligence is too new to say how far this trend will go. But Hao’s book is a warning we ignore at our peril.
