
Can Mark Zuckerberg really be trusted to oversee the AI revolution?
Revenue for the three months to the end of June leapt by 22 per cent to $47.5bn when compared with a year earlier. Profits surged by more than a third (36 per cent) to $18.3bn. If the price of Zuckerberg's AI ambitions is similarly dizzying – overall costs rose by 12 per cent to $27.5bn, with more spending promised – who cares? It's a smart investment and a good business move.
Meta's vast resources mean that it will inevitably be at the forefront of this exciting, but potentially dangerous, new tech for years to come. That's more than a little troubling, especially if you've read the former Facebook executive Sarah Wynn-Williams' bestselling tell-all book Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism about her time at the company.
If you only read one business book this year – if you only read one business book in your lifetime – this is the one.
Wynn-Williams hasn't been shy about including gory allegations about the behaviour of the company's top staff. Her book is an exposé, and it does that job well. It is also a great, if sometimes horrifying read.
But the important part is what she has to say about the thoughtless way Meta was prepared to use its groundbreaking tech. For example, Wynn-Williams alleges that the company actively targeted teenagers with ads based on their emotional state, including when they were depressed.
Now add AI into that algorithmic mix. The implications are enough to make you shudder. The company's attempts to court China, and what was offered to the regime there, have also raised eyebrows. Wynn-Williams made the same allegations before US senators, when she was, remember, under oath.
Now let's put this in context. The world is in the midst of a hell-for-leather AI race. Donald Trump claims the US has already won it. He might be right. Keir Starmer is determined that Britain should at least find a place on the podium, although government communications bods would probably prefer I use phrases like 'leading role' or 'pioneering', or some such.
His latest wheeze is bigging it up with influencers (am I alone in hating the word 'influencer'?). On Thursday, the prime minister held a half-day session for around 100 of them in Downing Street about how they can work more closely with the government – and vice versa, given the stratospheric rise of news intake on social media.
AI has ministers drooling, and with good reason. The British state is in a mess, hamstrung by sparse resources and poor leadership, which often seems more interested in pet projects than it is in solving problems. AI could change that.
Justice secretary Shabana Mahmood thinks it will be able to 'predict the risk an offender could pose' and inform 'decisions to put dangerous prisoners under tighter supervision', thus cutting prison violence. The struggling probation service could also benefit, with AI pilots showing 'a 50 per cent reduction in note-taking time, allowing officers to focus on risk management, monitoring and face-to-face meetings with offenders'.
Over to the NHS, where we are told that an app using AI to provide physio for people with back pain has cut treatment waiting lists by 55 per cent. I don't know if I particularly want my back problems handled by a bot, but maybe that's a consequence of my reading too many dystopian books.
We're already seeing some of the tech's negative effects through the elimination of junior positions in the tech industry and the City, exacerbating an already weak UK jobs market. But the government isn't so keen on talking about that.
All this raises the question: is this the sort of tech we want to be controlled by Wynn-Williams' 'careless people', who are so obsessed with the bottom line that they never stop to think about what they're doing?
This is not just a criticism of Meta. Britain's politicians are currently falling over each other to worship at the altar of the tech bros. They think it's the future. They might be right.
Perhaps this tech fixes the NHS and gets a sclerotic state that is very visibly failing the British people working again. Lives would be improved as a result. The trouble is, mix careless people with a technology this powerful and the two could just as easily combine to create a toxic brew.
Related Articles


Daily Mail
an hour ago
Nudifying apps are not 'a bit of fun' - they are seriously harmful and their existence is a scandal writes Children's Commissioner RACHEL DE SOUZA
I am horrified that children are growing up in a world where anyone can take a photo of them and digitally remove their clothes. They are growing up in a world where anyone can download the building blocks to develop an AI tool which can create naked photos of real people. It will soon be illegal to use these building blocks in this way, but they will remain on sale from some of the biggest technology companies, meaning they are still open to misuse.

Earlier this year I published research looking at the existence of these apps, which use generative artificial intelligence (GenAI) to create fake sexually explicit images from user prompts. The report exposed the shocking underworld of deepfakes: it highlighted that nearly all deepfakes in circulation are pornographic in nature, and 99% of them feature girls or women – often because the apps are specifically trained to work on female bodies.

In the past four years as Children's Commissioner, I have heard from a million children about their lives, their aspirations and their worries. Of all the worrying trends in online activity children have spoken to me about – from seeing hardcore porn on X to cosmetics and vapes being advertised to them through TikTok – the evolution of 'nudifying' apps into tools that aid the abuse and exploitation of children is perhaps the most mind-boggling. As one 16-year-old girl asked me: 'Do you know what the purpose of deepfake is? Because I don't see any positives.'

Children, especially girls, are growing up fearing that a smartphone might at any point be used as a way of manipulating them. Girls tell me they're taking steps to keep themselves safe online in the same way we have come to expect in real life, like not walking home alone at night. For boys, the risks are different but equally harmful: studies have identified online communities of teenage boys sharing dangerous material as an emerging radicalisation and extremism threat.
The government is rightly taking some welcome steps to limit the dangers of AI. Through its Crime and Policing Bill, it will become illegal to possess, create or distribute AI tools designed to create child sexual abuse material. And the introduction of the Online Safety Act – and new regulations by Ofcom to protect children – marks a moment for optimism that real change is possible. But what children have told me, from their own experiences, is that we must go much further and faster.

The way AI apps are developed is shrouded in secrecy. There is no oversight, no testing of whether they can be used for illegal purposes, no consideration of the inadvertent risks to younger users. That must change. Nudifying apps should simply not be allowed to exist. It should not be possible for an app to generate a sexual image of a child, whether or not that was its designed intent.

The technology used by these tools to create sexually explicit images is complex. It is designed to distort reality, to fixate and fascinate the user – and it confronts children with concepts they cannot yet understand. I should not have to tell the government to bring in protections for children to stop these building blocks from being arranged in this way. Posts on LinkedIn have even appeared promoting the 'best' nudifying AI tools available.

I welcome the move to criminalise individuals for creating child sexual abuse image generators, but urge the government to move the tools that would allow predators to create sexually explicit deepfake images out of reach altogether. To do this, I have asked the government to require technology companies who provide open-source AI models – the building blocks of AI tools – to test their products for their capacity to be used for illegal and harmful activity. These are all things children have told me they want. They will help stop sexual imagery involving children becoming normalised.
And they will make a significant contribution to meeting the government's admirable mission to halve violence against women and girls, who are almost exclusively the subjects of these sexual deepfakes.

Harms to children online are not inevitable. We cannot shrug our shoulders in defeat and claim it's impossible to remove the risks from evolving technology. We cannot dismiss this growing online threat as a 'classroom problem' – because evidence from my survey of school and college leaders shows that the vast majority already restrict phone use: 90% of secondary schools and 99.8% of primary schools. Yet, despite those restrictions, in the same survey of around 19,000 school leaders, they told me online safety is among the most pressing issues facing children in their communities. For them, it is children's access to screens in the hours outside of school that worries them the most.

Education is only part of the solution. The challenge begins at home. We must not outsource parenting to our schools and teachers. As parents it can feel overwhelming to try to navigate the same technology as our children. How do we enforce boundaries on things that move too quickly for us to follow? But that's exactly what children have told me they want from their parents: limitations, rules and protection from falling down a rabbit hole of scrolling.

Two years ago, I brought together teenagers and young adults to ask, if they could turn back the clock, what advice they wished they had been given before owning a phone. Invariably those 16-21-year-olds agreed they had all been given a phone too young. They also told me they wished their parents had talked to them about the things they saw online – not just as a one-off, but regularly, openly, and without stigma. Later this year I'll be repeating that piece of work to produce new guidance for parents – because they deserve to feel confident setting boundaries on phone use, even when it's far outside their comfort zone.
I want them to feel empowered to make decisions for their own families, whether that's not allowing their child to have an internet-enabled phone too young, enforcing screen-time limits while at home, or insisting on keeping phones downstairs and out of bedrooms overnight.

Parents also deserve to be confident that the companies behind the technology on our children's screens are playing their part. Just last month, new regulations by Ofcom came into force, through the Online Safety Act, that mean tech companies must now identify and tackle the risks to children on their platforms – or face consequences. This is long overdue, because for too long tech developers have been allowed to turn a blind eye to the risks to young users on their platforms – even as children tell them what they are seeing.

If these regulations are to remain effective and fit for the future, they have to keep pace with emerging technology – nothing can be too hard to tackle. The government has the opportunity to bring in AI product testing against illegal and harmful activity in the AI Bill, which I urge it to introduce in the coming parliamentary session. It will rightly make technology companies responsible for their tools being used for illegal purposes.

We owe it to our children, and the generations of children to come, to stop these harms in their tracks. Nudifying apps must never be accepted as just another restriction placed on our children's freedom, or one more risk to their mental wellbeing. They have no place in a society that values the safety and sanctity of childhood and family life.


The Guardian
2 hours ago
Celtics co-owner set to buy WNBA's Connecticut Sun for record $325m
A group led by Celtics minority owner Steve Pagliuca has reached a deal to buy the Connecticut Sun for a record $325m and move the team to Boston, according to a person familiar with the sale. The franchise wouldn't play in Boston until the 2027 season. Pagliuca also would contribute $100m for a new practice facility in Boston for the team, the person said.

The person spoke to the Associated Press on condition of anonymity on Saturday because the deal hasn't been publicly announced. The sale is pending approval of the league and its Board of Governors. 'Relocation decisions are made by the WNBA Board of Governors and not by individual teams,' the league said in a statement.

The Sun have played one regular season game at TD Garden each of the last two years, including one against Caitlin Clark and the Indiana Fever in July.

The league has announced five expansion teams that will begin play over the next five seasons, with Portland (2026), Toronto (2026), Cleveland (2028), Detroit (2029) and Philadelphia (2030) joining the WNBA. Each paid a then-record $250m expansion fee. Nine other cities bid for expansion teams, including Houston, which the league singled out as getting a team in the future when it announced Cleveland, Detroit and Philadelphia in June. Boston did not. 'No groups from Boston applied for a team at that time and those other cities remain under consideration based on the extensive work they did as part of the expansion process and currently have priority over Boston. Celtics' prospective ownership team has also reached out to the league office and asked that Boston receive strong consideration for a WNBA franchise at the appropriate time.'

The Boston Globe first reported the sale. The Sun are owned by the Mohegan Tribe, which runs the casino where the team has played since 2003. The Tribe bought the franchise for $10m and relocated it from Orlando that year.
The Connecticut franchise was the first in the league to be run by a non-NBA owner and also became the first to turn a profit. The team announced in May that it was searching for a potential buyer and had hired investment bank Allen & Company to conduct the search.

The WNBA has experienced rapid growth over the last few seasons, and ownership groups have been investing more into their teams, including player experiences. Much of that has come in the form of practice facilities. The Sun are one of the few teams in the league that haven't announced any plans for a new training facility; Connecticut practices either at the arena in the casino or a local community center.

Despite the lack of facilities, the Sun have been one of the most successful teams in the league, making the postseason in 16 seasons, including a run of six straight semifinal appearances. But the team was hit hard this offseason, with the entire starting five from last season leaving either via free agency or trade. Connecticut are currently in last place in the WNBA at 5-21. The team sent out a letter to season ticket holders last week saying they'd still be playing at the casino next year.

The last WNBA team to change hands was the Atlanta Dream, bought in 2021 for under $10m by a group led by real estate investor Larry Gottesdiener. A year earlier, Mark Davis paid roughly $2m for the Las Vegas Aces.


Daily Mail
3 hours ago
STRATEGIC EQUITY CAPITAL: Giving small UK firms the muscle to take on the big boys
Investment trust Strategic Equity Capital has a bold approach. It takes big stakes in companies it likes in the expectation that its judgment will be proved right, then rewards its shareholders with attractive returns. Although the strategy is not foolproof, the current manager is making a mighty good fist of it.

Since taking the reins in September 2020, Gresham House's Ken Wotton has delivered shareholder returns in excess of 100 per cent, outperforming both the average for the trust's peer group and its benchmark index, the FTSE Small Cap (excluding investment companies).

Wotton's modus operandi at Strategic is to seek out opportunities in the smaller companies section of the UK stock market – a sector Gresham knows intimately as a result of running numerous other funds focused on it. 'I look for companies with market capitalisations between £100 million and £300 million,' he says, 'and provided the quality and valuation are right, become one of the biggest shareholders.'

This approach, he says, gives the trust 'muscle' to 'actively engage' with the companies it buys. He adds: 'At one end of the scale, engagement may be rather benign and just a question of supporting a company's management plans. But at the other, it may be helping a company find a non-executive board member or pointing them in the direction of a mergers and acquisitions boutique if they become the target of interest from a potential private equity buyer.'

The fruits of this strategy can be rewarding. For example, one of the fund's top-ten holdings is Inspired, which advises companies on optimising their procurement and usage of energy. Wotton had been a backer of the company before he took over the helm at Strategic, and made it one of his first new holdings for the trust. He then kept tickling up the stake, participating in the company's raising of new capital late last year to reduce borrowings.
Inspired was then subject to a hostile takeover bid from Regent, an owner of gas and infrastructure companies. Strategic said it would not back the deal and put adviser Evercore in touch with Inspired. The result was the discovery of 'white knight' HGGC, a US private equity firm, which trumped Regent's 68.5p a share deal by offering 81p a share. The new offer is likely to be accepted in the coming weeks, providing the trust with a tidy profit.

The £168 million trust has 18 holdings – a concentrated portfolio, especially given that the top-ten stakes account for nearly 80 per cent of assets. Wotton describes the other eight positions as 'toe-hold positions', the trust's 'pipeline'. Some of these, he says, could benefit from the cash received from the sale of Inspired.

Wotton says mistakes are part of being a fund manager. 'If you don't make them, you're not taking enough risk.' But he adds: 'Risk can be mitigated by engaging with company management – which Strategic does as a matter of course – selling out, or ensuring you have an appropriate-sized holding.'

The manager believes the trust's prospects are 'really attractive' over the next three to five years, although he admits the FTSE AIM market – where a majority of the trust's holdings are listed – has been undermined by Labour's decision to reduce the inheritance tax attractions of holding AIM shares. 'There has been a lot of forced selling of shares,' he says, 'and some have been derated.'

Annual trust charges are 1.2 per cent, the fund's stock market ticker is SEC, and its identification code is B0BDCB2.