
Empire of AI: Inside the Reckless Race for Total Domination by Karen Hao - Precise, insightful, troubling
Author: Karen Hao
ISBN-13: 978-0241678923
Publisher: Allen Lane
Guideline Price: £25
Fewer than three years ago, almost nobody outside of Silicon Valley, excepting perhaps science fiction enthusiasts, was talking about artificial intelligence or throwing the snappy short form, AI, into household conversations.
But then came ChatGPT, a chatbot quietly released for public online access by the San Francisco AI research company OpenAI in late November 2022.
ChatGPT – GPT stands for Generative Pre-trained Transformer, the underlying architecture for the chatbot – was to be made available as a 'low-key research preview', and employees took bets on how many might try it out in the coming days – maybe thousands? Possibly even tens of thousands?
They figured that, like OpenAI's previous release in 2021, the visual art-generating AI called Dall-E (a play on the surrealist artist Salvador Dalí and Wall-E, the Pixar film's eponymous robot), it would get a swift blast of attention, then interest would wane.
To prepare, OpenAI's infrastructure team configured the company's servers to handle 100,000 simultaneous users, assuming that would be more than enough. Instead, the servers started to crash as waves of users spiked in country after country. People woke up, read about ChatGPT in their news feeds and rushed to try it out. Within just five days, ChatGPT had a million users; within two months, that number had swelled to 100 million.
No one in OpenAI 'truly fathomed the societal phase shift they were about to unleash', says Karen Hao in Empire of AI, her meticulously detailed profile of the company and its controversial leader Sam Altman. Hao, an accomplished journalist long on the AI beat, says that even now, company engineers are baffled at ChatGPT's snap ascendancy.
But why should it be so inexplicable? While Dall-E also amazed, it was fundamentally a tool for making art. Although it could construct bizarre and beautiful things (while exploiting the work of actual artists it was trained on), it wasn't chatty. ChatGPT, in thrilling contrast, hovered on the edge of embodying what people largely think a futuristic computer should be. You could converse with it, have it write an essay or code a piece of software, ask for advice, even joke with it, and it responded in an amiably conversational and, most of the time, usefully productive way.
Dall-E felt like a computer programme. ChatGPT teased the possibility of the kind of sentient, thoughtful artificial intelligence that we easily recognise, given that this presentation has been honed over decades of films, TV series and science fiction novels. We've been trained to expect it – and to create it. While ChatGPT is definitely not sentient, it astonished because it seemed as if it might be, and OpenAI has continued to ramp up the expectation that an AI model might soon be, if not fully sentient, then smarter than human. No surprise, really, that Hao writes that 'ChatGPT catapulted OpenAI from a hot start-up well known within the tech industry into a household name overnight'.
As big as that moment was, there's so much significant backstory for the 'hot start-up' that the tale of the game-changing release of ChatGPT doesn't materialise until a third of the way into Empire of AI.
With precision and insight, Hao documents the challenges and decisions faced and resolved – or often more crucially, not resolved – in the years before ChatGPT turned OpenAI into one of the most disturbingly powerful companies in the world. Then, she takes us up to the end of 2024, as valid concerns have further ballooned over OpenAI and Altman's bossy and ruthless championing of a costly, risky, environmentally devastating and billionaire-enriching version of AI.
In this convincing telling, AI is evolving into the design and control of an exclusive and dangerous club to which very few belong, but for which many, especially the world's poorest and most vulnerable, are materially exploited and economically capitalised. Hence, truly, the 'empire' of AI.
OpenAI, which leads in this space, was founded in 2015 by Altman – who then ran the storied Valley start-up incubator Y Combinator – and by Elon Musk. Both (apparently) shared a deep concern that AI could prove an existential risk, but recognised it could also be a transformative, world-changing breakthrough for humanity (take your pick), and therefore should be developed cautiously and ethically within the framework of a non-profit company with a strong board. (This split between 'doomers', who see AI as an existential risk, and 'boomers', who think it so beneficial we should let development rip, still divides the AI community.)
Now that the world knows Altman and Musk quite a bit better, their heart-warming regard for humanity seems improbable, and so it's turned out to be. Hao says that fissures appeared from the start between those in OpenAI prioritising safety and caution and those eager to develop and, eventually, commercialise products so powerful they perhaps heralded the pending arrival of AI that will outthink and outperform humans, called AGI or artificial general intelligence.
Altman increasingly chose the 'move fast, break things' approach even as he withdrew OpenAI from outside scrutiny. Interestingly, several of OpenAI's earliest and problematical top-level hires were former employees of Stripe, the fintech firm founded by Ireland's Collison brothers. Despite having such top industry people, OpenAI 'struggled to find a coherent strategy' and 'had no idea what it was doing'.
What it did decide to do was to travel down a particular AI development path that emphasised scale, using breathtakingly expensive chips and computing power and requiring huge water-cooled data centres. Costs soared, and OpenAI needed to raise billions in funding, a serious problem for a non-profit since investors want a commercial return.
Cue the restructuring of the company in 2019 into a bizarre, two-part vehicle with a 'capped-profit' arm whose cap was largely meaningless and a non-profit side, and the need for a CEO, a job that went to Altman and not Musk.
Microsoft came on board as a major partner too; Bill Gates was wowed by OpenAI's latest AI model months before the release of ChatGPT.
As dramatic as the ChatGPT launch turned out to be, Hao makes the strategic choice to open the book with a zoom-in on OpenAI's other big drama, the sudden firing in November 2023 of Altman by its tiny board of directors. The board said Altman had lied to them at times and was untrustworthy. After a number of twists and turns, Altman returned, the board departed, and OpenAI has since become increasingly defined as a profit-focused behemoth that has stumbled into numerous controversies while tirelessly pushing a version of AI development that maintains its staggeringly pricey leadership position.
This, then, is Hao's framing device for looking at a company headed by an undoubtedly charismatic and gifted individual but one who has trailed controversy and whose documented non-transparency raises serious concerns. In tracing the company's early history, Hao sets out its many conflicts and problems, and Altman's willingness to drive development and growth in ways that veer far from its original ethical founding.
For example, at first OpenAI adhered to a principle of using only clean data for training its models – that is, vast data sets that exclude the viler pits of internet discussion, racism, conspiracy rabbit holes, pornography or child sexual abuse material (CSAM). But as OpenAI scaled up its models, it needed ever more data, any data, and rowed back, using what noted Irish-based cognitive scientist Abeba Birhane – referenced several times in the book – has exposed as 'data swamps'. That's even before you consider AI's inaccuracies, 'hallucinations' of made-up certainty, and data privacy and protection encroachments.
For a time, Hao veers away from a strict OpenAI pathway, drawing on her extensive past reporting and on-the-ground research to reveal how AI is built off appallingly cheap labour drawn from some of the poorest parts of the world, because AI isn't all digital wizardry. It's people being paid pennies in Kenya to identify objects in video or perform gruelling content moderation to remove CSAM. It's gigantic, water use-intensive data centres built in poorer communities despite years-long droughts, and environmentally damaging mining and construction. It's cultural loss, as data training sets valorise dominant languages and experiences.
In the face of these data colonialism realities, using an AI chatbot to answer a frivolous question – requiring 10 times the computing energy and resources of an old-style search – is increasingly grotesque.
Unfortunately, the book went to print before Hao could consider the groundbreaking impact of DeepSeek, the new Chinese AI firm whose lower-cost models challenge OpenAI and the massive-scale mantra, and whose arrival has rocked AI, its largely Valley-based development model and global politics. It would have been fascinating to get her take. But never mind. Hao knits all her threads here into a persuasive argument that AI doesn't have to be the Valley version of AI, and OpenAI's way shouldn't be the AI default, or perhaps, pursued at all.
The truth is, no one understands how AI works, or why, or what it might do, especially if it does reach AGI. Humanity has major decisions to make, and Empire of AI is convincing on why we should not allow companies such as OpenAI and Microsoft, or people such as Altman or Musk, to make those decisions for us, or without us.
Further reading
Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass
by Mary L Gray and Siddharth Suri (Harper Business, 2019). What looks like technology – AI, web services – often only works due to the task-based, uncredited labour of an invisible, poorly paid, easily-exploited global 'ghost' workforce.
Supremacy: AI, ChatGPT and the Race that Changed the World
by Parmy Olson (Macmillan Business, 2024). A different angle on the startling debut of OpenAI's ChatGPT, with the focus here on the emerging race between Microsoft and Google to capitalise on generative AI and dominate the market.
The Singularity Is Near: When Humans Transcend Biology
by Ray Kurzweil (Duckworth reissue, 2024). The hugely influential 2005 classic that predicts a coming 'singularity' when humans will be powerfully enhanced by AI. Kurzweil also published a follow-up last year, The Singularity Is Nearer: When We Merge with AI.

The study of 357 small firms, conducted by Amárach Research, says the most common applications of AI are automating simple tasks (66pc) and data analytics (44pc). Adoption is high in professional services and finance, where firms report a growth in enthusiasm for AI's potential to improve accuracy, speed up processes and, most importantly, cut costs. However, the report reveals that the adoption of AI remains largely surface level, with the majority of small businesses using AI for basic functions such as content generation and reporting rather than innovation, product development or key decision making. The main barriers to a deeper sense of AI integration, according to the respondents in the report, are a lack of expertise, time constraints, and the absence of a concise business strategy. SFA director David Broderick said: 'AI is the defining technology of our time, and it will fundamentally shift how business is done. 'While the survey shows small businesses are interested and curious about it, AI adoption remains shallow among small firms as it is mostly confined to content generation and simple data analysis, rather than innovation, product development or decision-making. 'Therefore, many businesses have not yet explored its full potential.' The SFA is calling on the Government to unlock the National Trading Fund (NTF) to support upskilling in digital and AI capability. Mr Broderick also urged enhancements to the Grow Digital Voucher scheme and R&D tax credit access to encourage more firms to venture into the more advanced AI functions. Three-quarters of the respondents reported having either implemented or having a plan to implement AI further, apart from the retail sector which produced the highest number of firms that show no interest in any additional use of AI. As AI continues to change the global economical landscape, the SFA warns that Ireland's small business sector must not be left behind, and must work in conjunction with the Government and training bodies.