Empire of AI (part 2)

July 12, 2025


In Empire of AI, Karen Hao not only tells the story of OpenAI and ChatGPT; she also addresses some of the most important questions about the nature of artificial intelligence and what direction it might take. She resists the idea that the particular path AI development has taken so far is inevitable or desirable. We are only at the beginning of this journey, and we have many technical and social choices to make. Hao believes that the path we are on is too narrow to take us where we need to go.

Technological and social revolutions

Drawing on the recent book by MIT economists Daron Acemoglu and Simon Johnson, Power and Progress, Hao cites two features of technological revolutions, “their promise to deliver progress and their tendency instead to reverse it for people out of power.” History has shown that the ultimate impact of technological change on different classes of people depends on other social changes, especially the organized resistance of disadvantaged groups to power imbalances.

This argument resembles the one made by Mordecai Kurz in The Market Power of Technology: Understanding the Second Gilded Age, which I reviewed last year. Kurz compares our present situation to the first Gilded Age, when technological change strengthened the market power of corporations and widened the gap between rich and poor. But that was followed by a period of progressive reform that put limits on market power and profits and strengthened the position of workers.

From this standpoint, the idea that the owners and masters of artificial intelligence will altruistically pursue the public good on their own, without any pushback from society or its government, sounds rather naïve. Economists and sociologists are not surprised when high-tech companies take a lower, more self-serving road, just as Hao reports for OpenAI.

Paths to AI

The dominant approach to advancing artificial intelligence is not the only approach, and not necessarily the one that will turn out to be most effective. Before today’s generative AI systems were developed, two competing theories dominated the field. The “symbolists” believed in taking existing human knowledge, encoding it in symbols, and inputting it to machines. The aim was to create “expert systems” that could emulate the best of human decision making. The “connectionists,” on the other hand, believed that machines could learn on their own if they had the computing capacity to process and connect vast amounts of data. The result would be “neural networks…data-processing software loosely designed to mirror the brain’s interlocking connections.”
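To make the contrast concrete, here is a toy sketch of my own (not from the book, with a made-up task and data): the symbolic approach encodes an expert's rule directly, while the connectionist approach lets a single artificial "neuron" learn its own weights from labeled examples.

```python
# Toy illustration of the two paradigms on the same trivial task:
# deciding whether a message is "urgent".

# Symbolist approach: a human expert writes the rule explicitly.
def symbolic_is_urgent(message: str) -> bool:
    # Encoded human knowledge: certain keywords signal urgency.
    keywords = {"asap", "immediately", "urgent", "now"}
    return any(word in keywords for word in message.lower().split())

# Connectionist approach: a single "neuron" learns a weight per word
# from labeled examples, with no hand-written rule.
def train_tiny_network(examples, epochs=200, lr=0.1):
    weights, bias = {}, 0.0
    for _ in range(epochs):
        for text, label in examples:
            words = text.lower().split()
            score = bias + sum(weights.get(w, 0.0) for w in words)
            prediction = 1.0 if score > 0 else 0.0
            error = label - prediction          # adjust only when wrong
            bias += lr * error
            for w in words:
                weights[w] = weights.get(w, 0.0) + lr * error
    return weights, bias

def learned_is_urgent(message, weights, bias):
    score = bias + sum(weights.get(w, 0.0) for w in message.lower().split())
    return score > 0

if __name__ == "__main__":
    training_data = [
        ("please reply asap", 1), ("fire drill now", 1),
        ("lunch next week sometime", 0), ("no rush on this", 0),
    ]
    w, b = train_tiny_network(training_data)
    print(symbolic_is_urgent("Need this immediately"))    # True: the rule fires
    print(learned_is_urgent("please reply asap", w, b))   # True: the pattern was learned
```

The learned version ends up keying on roughly the same words the expert wrote down, but only because they happen to correlate with the labels in its training data, which is the pattern-matching strength and the reasoning weakness of the connectionist approach in miniature.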

The connectionist approach came to prevail, but Hao suggests that this was not just because it was scientifically superior. It turned out to be the faster route to commercial success, although faster doesn’t necessarily mean better. Neural networks may not reason very well, but, “You do not need perfectly accurate systems with reasoning capabilities to turn a handsome profit. Strong statistical pattern-matching and prediction go a long way in solving financially lucrative problems.” It is hard to know what might produce more intelligent results when companies are making big bucks with the current approach and investing little in the alternatives.

Along with machine learning and neural networks came the “scaling ethos,” the idea that bigger is the path to better. At OpenAI, chief scientist Ilya Sutskever championed this approach until he left the company in 2024. He believed that the nodes of neural networks were like neurons in the human brain, and so improvements would come by piling on more nodes in multiple layers. That required ever more computing power, larger data centers, and big financial investments.
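To see why the scaling ethos gets expensive so quickly, here is a back-of-the-envelope sketch of my own (with hypothetical layer sizes, not any real model’s) of how the parameter count of a plain fully connected network grows as it is made wider and deeper; memory and compute costs grow roughly in step with that count.

```python
def dense_network_params(layer_sizes):
    """Count weights and biases in a plain fully connected network.

    Each layer of size n feeding a layer of size m contributes
    n * m weights plus m biases.
    """
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total

# Hypothetical configurations: same input and output sizes,
# with progressively more and wider hidden layers.
small  = [512, 256, 10]
medium = [512, 1024, 1024, 10]
large  = [512] + [4096] * 8 + [10]

for name, sizes in [("small", small), ("medium", medium), ("large", large)]:
    print(f"{name:>6}: {dense_network_params(sizes):,} parameters")
```

Even in this toy accounting, widening and deepening the network multiplies the parameter count by roughly a thousand, which hints at why scaling real models demands ever-larger data centers and capital.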

Cognitive scientist Gary Marcus, who wrote Rebooting AI, argues that today’s machine learning systems remain “forever stuck in the realm of correlations” (Hao’s words), incapable of true causal reasoning. To advance toward true intelligence, he recommends incorporating more of the symbolic, expert-systems approach. Hao believes that AI can serve many human needs with systems that are smaller in size but superior in quality.

Dreams of AGI

The founders of OpenAI believed that artificial general intelligence was just around the corner. They would achieve it by scaling up neural networks until they resembled the size and complexity of human brains. Hao maintains—and I agree—that we don’t really know “whether silicon chips encoding everything in their binary ones and zeros could ever simulate brains and the other biological processes that give rise to what we consider intelligence.” Is the node of a neural network really the functional equivalent of a neuron, which is a living cell? Can a dead machine that feels nothing, experiences nothing, and is conscious of nothing learn to think like a human? Is this just a problem of scale?

Scaling allows chatbots to generate text with enough complexity to lead users to imagine a humanlike mind within the black box. But that may be like a magic show, nothing but a clever illusion.

An alternative vision

Maybe the dubious quest to simulate general intelligence is leading the AI technocrats to make their models too big, too all-encompassing, and too omniscient, when smaller, less grandiose models would serve us better.

An example of the smaller models Hao has in mind is a speech-recognition model developed to help preserve the dying te reo language of the indigenous Maori people of New Zealand. Using audio recordings from about 2,500 of the remaining speakers, the researchers trained the computer to recognize and transcribe the sounds. Not only does this benefit Maori who want to learn the language; the researchers also committed to working collaboratively with the community so that the data would be used only with its approval.

Hao insists that she is not against artificial intelligence as such. But she says,

What I reject is the dangerous notion that broad benefit from AI can only be derived from—indeed, will ever emerge from—a vision for the technology that requires the complete capitulation of our privacy, our agency, and our worth, including the value of our labor and art, toward an ultimately imperial centralization project.

For Hao, the central issue is power and its distribution. Artificial intelligence will only serve humanity if power can be decentralized along three axes: knowledge, resources, and influence.

I learned a lot from this book, and I recommend it as a good place to start for readers who are just beginning to think about these issues.