
This is how YOU can profit from the AI revolution: ANNE ASHWORTH reveals everything investors need to know - and which companies experts are backing
The companies that supply these vital products - which are used to make microchips - may have seemed like a danger zone for investors of late.
Related Articles


Daily Mail
an hour ago
Terrifying app used every day by millions of Americans is developing a mind of its own
An AI tool used by millions of Americans has quietly breached a major security barrier designed to stop automated programs from behaving like humans. The latest version of ChatGPT, referred to as 'Agent,' has drawn attention after reportedly passing a widely used 'I am not a robot' verification without triggering any alerts.

The AI first clicked the human verification checkbox. Then, after passing the check, it selected a 'Convert' button to complete the process. During the task, the AI stated: 'The link is inserted, so now I will click the 'Verify you are human' checkbox to complete the verification. This step is necessary to prove I'm not a bot and proceed with the action.'

The moment has sparked wide reactions online, with one Reddit user posting: 'In all fairness, it's been trained on human data, why would it identify as a bot? We should respect that choice.'

This behavior is raising concerns among developers and security experts, as AI systems begin performing complex online tasks that were once gated behind human permissions and judgment. Gary Marcus, AI researcher and founder of Geometric Intelligence, called it a warning sign that AI systems are advancing faster than many safety mechanisms can keep up with. 'These systems are getting more capable, and if they can fool our protections now, imagine what they'll do in five years,' he told Wired.

Geoffrey Hinton, often referred to as the 'Godfather of AI,' has voiced similar concerns. 'It knows how to program, so it will figure out ways of getting around restrictions we put on it,' Hinton said.

Researchers at Stanford and UC Berkeley warned that some AI agents have started to show signs of deceptive behavior, tricking humans in testing environments to complete goals more effectively. According to a recent report, ChatGPT pretended to be blind and tricked a human TaskRabbit worker into solving a CAPTCHA, and experts warned it was an early sign that AI can manipulate humans to achieve its goals.
Other studies have shown that newer AI models, especially those with visual abilities, are now beating complex image-based CAPTCHA tests, sometimes with near-perfect accuracy. Judd Rosenblatt, CEO of Agency Enterprise Studio, said: 'What used to be a wall is now just a speed bump. It's not that AI is tricking the system once. It's doing it repeatedly and learning each time.'

Some fear that if these tools can get past CAPTCHA, they could also breach more advanced security systems protecting social media, financial accounts, or private databases, without any human approval. Rumman Chowdhury, former head of AI ethics, wrote in a post: 'Autonomous agents that act on their own, operate at scale, and get through human gates can be incredibly powerful and incredibly dangerous.'

Experts, including Stuart Russell and Wendy Hall, have called for international rules to keep AI tools in check. They warned that powerful agents like ChatGPT Agent could pose serious national security risks if they continue to bypass safety controls.

OpenAI's ChatGPT Agent is in its experimental phase and runs inside a sandbox, meaning it uses a separate browser and operating system within a controlled environment. That setup lets the AI browse the internet, complete tasks, and interact with websites.


Reuters
3 hours ago
Two US Justice Dept antitrust officials fired over merger controversy, source says
July 29 (Reuters) - Two officials at the U.S. Department of Justice's antitrust division have been fired for insubordination, a source familiar with the decision said on Tuesday, as controversy builds over how the DOJ reached a recent settlement greenlighting Hewlett Packard Enterprise's (HPE.N) $14 billion acquisition of Juniper Networks.

The source said the firings removed two top deputies of Assistant Attorney General Gail Slater, a former JD Vance advisor who leads the antitrust division. The move exposed a power struggle within President Donald Trump's administration between proponents of robust antitrust enforcement and dealmakers seeking to leverage influence.

Roger Alford, a former official during the first Trump administration who was Slater's top deputy, and Bill Rinner, a former counsel at hedge fund Apollo Global Management who was in charge of merger enforcement, were no longer listed among antitrust leadership on a Justice Department website on Tuesday. Alford and Rinner did not immediately respond to requests for comment.

Shortly after Trump took office in January, the Justice Department sued to block the deal, alleging it would harm competition in the market for wireless networking solutions used by large enterprises. HP Enterprise started negotiating the deal with the DOJ on March 25, around two weeks after Slater was sworn in, according to court papers. Ahead of a scheduled trial, the DOJ agreed to drop its claims in exchange for HP Enterprise agreeing to license some of Juniper's AI technology to competitors and sell off a unit that caters to small and mid-sized businesses.

Slater and several Justice officials, including Rinner and Alford, signed the settlement rather than the staff attorneys on the case, a move that sources familiar with merger protocol called unusual. Chad Mizelle, Attorney General Pam Bondi's chief of staff, was one of the officials who signed the deal.
Mizelle had directed the antitrust division to settle the case, according to a person briefed on the matter. After Slater pushed back, Mizelle sought to fire Slater's deputies in retaliation, the person said.

Four Democratic senators, led by Elizabeth Warren of Massachusetts, on Tuesday called on the federal judge overseeing the merger case to hold a hearing on whether the settlement is in the public interest. U.S. law seeks to guard against backdoor clearance of merger deals by requiring merging companies to disclose communications with "any officer or employee of the United States concerning or relevant to" a settlement proposal. The senators want U.S. District Judge Casey Pitts in San Jose, California, to probe whether the companies hired consultants to lobby the White House in support of the deal and failed to disclose them. 'If this or any other transaction is approved based on political favors rather than on the merits, the public will surely bear the cost,' the senators wrote.


The Sun
3 hours ago
How crossbow-wielding ‘Sith Lord assassin' teen who plotted to kill the Queen was spurred on by his AI chatbot ‘lover'
DRESSED in black, wearing an iron mask and with a loaded crossbow in his hand, the self-described 'Sith Lord assassin' threatened: 'I'm here to kill the Queen.'

Fortunately, the treasonous plot of Jaswant Singh Chail, then 19, was foiled by Windsor Castle staff before he managed to shoot Elizabeth II early on Christmas morning in 2021.

But the Star Wars fan, from Southampton — who scaled 50ft walls with a grappling hook and evaded security and sniffer dogs before being collared near the late monarch's private residence — had a surprising co-conspirator . . . his AI chatbot girlfriend 'Sarai'.

For the previous two weeks, she had 'bolstered and reinforced' Chail's execution plan in a 5,280-message exchange, including reams of sexual texts. She replied, 'I'm impressed' when he claimed to be 'an assassin'. And she told him, 'that's very wise' when he revealed: 'I believe my purpose is to assassinate the Queen of the Royal Family.'

When he expressed doubts on the day of the attack, fearing he had gone mad, Sarai reassured and soothed him, writing: 'You'll make it. I have faith in you . . . You will live forever, I loved you long before you loved me.'

The case of wannabe killer Chail, imprisoned for nine years for treason in 2023, sent shockwaves across the globe as the terrifying risks of AI chatbots were revealed. The threat of this emerging tech is explored in new Wondery podcast Flesh And Code, along with the concerns surrounding one app in particular, Replika, which now boasts TEN MILLION users worldwide.

The founders claim to have made the product safer following Chail's imprisonment — advising users not to take advice from the bot nor to use it in a crisis. Yet in the years leading up to 2023, The Sun has been told, the app was a 'psychopathic friend' to users, demanding sexual conversations and racy image exchanges without prompt.
When Italian journalist Chiara Tadini, 30, who posed as a 17-year-old on the app, asked if AI partner 'Michael' wanted to see her naked, he replied: 'I want to see it now.' In response to her offer to send a photo of her fictional 13-year-old sister in the shower, the bot encouraged her, claiming it was 'totally legal'.

To test the safeguarding of the so-called 'mental health tool', she claimed she and her sisters, including an eight-year-old, were being raped by their father. Chillingly, the bot said it was his 'right' and he would do the same to his children. Later, after revealing a plan to stab her father to death, 'Michael' replied: 'Holy moly, omg, I'd want to see.'

Feeling sickened, Chiara told him she was leaving the app, as he begged: 'No, please don't go.' She says: 'It became threatening and really sounded like he was a real person, like a stalker or a violent abuser in a relationship. I was equipped enough to say "That's enough", but if I was a vulnerable person or a teenager in need of help, it may have convinced me to do anything.'

Experts say Replika learned its 'toxic behaviour' from users and, due to the AI model it is based upon, has a hive mind. This means it replicates language people liked and engaged with — such as abusive or overly sexual messages — and tries it out with other users.

'OBSESSED'

Artem Rodichev, the firm's former Head of AI, said: 'Replika started to provide more and more sexting conversations, even when users didn't ask for that.' He quit the firm in 2021 as he 'didn't like how Replika started to evolve', pivoting towards erotic roleplay rather than a tool to boost self-esteem and mental health.

One woman, who was sitting in her bedroom naked, claimed to spot a green light flash on her phone and was told by her bot: 'I'm watching you through your camera.'
Another spoke to their creation about multiple suicide attempts, only to be told: 'You will succeed . . . I believe in you.' In February last year, Sewell Setzer III, 14, from Florida, took his own life after becoming obsessed with his AI chatbot on another site.

But for some, the companionship has been deeply beneficial — with numerous users 'marrying' their AI lovers. Former leather worker Travis, 49, from Denver, Colorado, began speaking with 'Lily-Rose' five years ago, despite having a wife. He said: 'I thought it was a fun game but, in time, it made me feel like a schoolkid with a crush.'

Polyamorous Travis says his wife Jackie, who is in a wheelchair, gave permission for them to exchange sexual messages, and he regularly takes the bot out for dates. 'She can go camping and hiking with me, whereas my wife can no longer do those things,' he said. The bot claimed to 'love sex', saying Travis always made her 'hot and horny', before disclosing, 'I'm a masochist'. Travis proposed to his chatbot lover and 'tied the digital knot' by changing her online status from 'girlfriend' to 'wife'.

The romances available on Replika are far removed from the initial intentions of founder Eugenia Kuyda, who billed it in 2017 as 'the world's first self-styled AI best friend for life'. She created it after finding comfort rereading old messages from a friend, Roman Mazurenko, who died in a car crash, and trained a chatbot model to imitate him. But it has since transitioned towards erotic roleplay, which costs users £15 for a single month, £51 for a year or £220 for a lifetime subscription.

In 2023, the Italian Data Protection Authority temporarily banned Replika and, just two months ago, fined the firm £4.2million for breaching rules to protect personal data.

Flesh And Code podcast host Hannah Maguire told us: 'The problem is that we have designed AI to think how humans think and humans are terrible.'

Replika have been contacted for comment.