
Israel's Cato Networks raises $359 million, valued at more than $4.8 billion
JERUSALEM, June 30 (Reuters) - Israeli cybersecurity firm Cato Networks said on Monday it raised $359 million in a late stage private funding round that it said values the company at more than $4.8 billion.
New investors, including Vitruvian Partners and ION Crossover Partners, as well as existing investors, including Lightspeed Venture Partners, Acrew Capital, and Adams Street Partners, participated in the financing round.
The latest investment brings the company's total funding raised to more than $1 billion, Cato said, adding that its mission is to "redefine enterprise security for the digital and AI era."
Related Articles


The Guardian
Trump expected to sign executive order to lift some sanctions on Syria
Donald Trump is expected to issue an executive order to lift some financial sanctions on Syria in a move that the White House says will help stabilise the country after the ousting of Bashar al-Assad.

The US was expected on Monday to 'terminate the United States' sanctions programme on Syria', a White House spokesperson said, cancelling a 2004 declaration that froze Syrian government property and limited exports to Syria over Damascus's chemical weapons programme.

Some sanctions will remain on Syria, including those mandated through Congress under the Caesar Syria Civilian Protection Act of 2019 that targeted funds for reconstruction and natural gas development, as well as the US declaration of Syria as a state sponsor of terrorism.

White House officials said that the executive order would maintain pressure on the former leader Assad and his entourage. 'The order will remove sanctions on Syria while maintaining sanctions on the former president, Assad, his associates, human rights abusers, drug traffickers, persons linked to chemical weapons activities, Islamic State and their affiliates, and Iranian proxies,' said White House press spokesperson Karoline Leavitt during a briefing on Monday.

The move was widely anticipated after Trump briefly met with Syria's new leader Ahmed al-Sharaa, who led forces that overthrew Assad in December. Sharaa has complained that the sanctions against Syria had made it difficult to stabilise his fragile transition government, citing issues with paying civil servant wages and funding reconstruction. Trump pledged in May to lift all sanctions on Syria following Assad's removal from power.


Telegraph
Google strikes landmark nuclear fusion deal
Google will start harnessing power from a cutting-edge nuclear fusion company as it seeks to use cleaner energy for its artificial intelligence (AI) data centres.

The search giant has agreed to buy 200 megawatts (MW) of power starting in the 2030s from US fusion start-up Commonwealth Fusion Systems, which is planning a nuclear fusion plant. Google will also take part-ownership of the business, which previously raised $1.8bn (£1.3bn) in 2021 from investors including Bill Gates and Tiger Global, making it the best-funded private fusion business.

Silicon Valley giants like Google have been hunting for new sources of clean energy as they seek to power AI data centre infrastructure. The race to build more powerful AI tools requires vast computing power from thousands of energy-intensive processors. A breakthrough in nuclear fusion would create a source of practically limitless clean energy, which could fuel the AI boom.

Commonwealth, a spin-out from the Massachusetts Institute of Technology, is building a tokamak – a type of fusion reactor – that it calls 'Sparc'. It is planning to build power plants that can each generate 400MW of power, roughly the same as a typical natural gas plant, powering 280,000 homes.

'The world wants fusion'

Scientists have spent decades attempting to crack fusion power, which mirrors the nuclear reactions that take place at the centre of the sun. Unlike nuclear fission, where atoms are split to release energy, fusion sees atoms forced together.

Bob Mumgaard, Commonwealth's chief executive, said the deal with Google was a 'strong signal that the world wants fusion' and that it would support 'bringing fusion power to the grid at scale'. Despite this, high-profile projects attempting to demonstrate the technology, such as ITER in France, have been long-delayed and are running billions of pounds over budget.
While the financial terms of Google's deal were not disclosed, Axios reported earlier this year that Commonwealth was in the process of raising as much as $1bn.

It is not the first time a tech company has agreed to buy as-yet unproven fusion power. Microsoft previously agreed to buy fusion power from US start-up Helion, which is backed by OpenAI founder Sam Altman. Helion is aiming to have a fusion plant by 2028.

Amazon, Microsoft, Google and Facebook have all been in talks over nuclear power deals. Microsoft agreed to re-open a nuclear power plant at Three Mile Island in Pennsylvania to power its AI technology. Facebook also signed a deal that saved a nuclear plant in Illinois from closure. As well as conventional nuclear power, it has been considering investments in a new wave of advanced 'small modular reactors'.

Last week, the UK announced £2.5bn of funding over the next five years to develop fusion power. As part of Labour's Industrial Strategy, the UK will work on building a tokamak prototype by 2040.


The Guardian
‘Hey man, I'm so sorry for your loss': should you use AI to text?
Earlier this spring, Nik Vassev heard a high school friend's mother had died. Vassev, a 32-year-old tech entrepreneur in Vancouver, Canada, opened up Claude AI, Anthropic's artificial intelligence chatbot. 'My friend's mom passed away and I'm trying to find the right way to be there for him and send him a message of support like a good friend,' he typed.

Vassev mostly uses AI to answer work emails, but also for personal communications. 'I just wanted to get a second opinion about how to approach that situation,' he says. 'As guys, sometimes we have trouble expressing our emotions.'

Claude helped Vassev craft a note: 'Hey man, I'm so sorry for your loss. Sending you and your family lots of love and support during this difficult time. I'm here for you if you need anything …' it read. Thanks to the message, Vassev's friend opened up about their grief. But Vassev never revealed that AI was involved. People 'devalue' writing that is AI-assisted, he acknowledges. 'It can rub people the wrong way.'

Vassev learned this lesson because a friend once called him out for relying heavily on AI during an argument: 'Nik, I want to hear your voice, not what ChatGPT has to say.' That experience left Vassev chastened. Since then, he's been trying to be more sparing and subtle, 'thinking for myself and having AI assist', he says.

Since late 2022, AI adoption has exploded in professional contexts, where it's used as a productivity-boosting tool, and among students, who increasingly use chatbots to cheat. Yet AI is becoming the invisible infrastructure of personal communications, too – punching up text messages, birthday cards and obituaries, even though we associate such compositions with 'from the heart' authenticity. Disclosing the role of AI could defeat the purpose of these writings, which is to build trust and express care.
Nonetheless, one person anonymously told me that he used ChatGPT while writing his father of the bride speech; another wished OpenAI had been around when he had written his vows because it would have 'saved [him] a lot of time'. Online, a Redditor shared that they used ChatGPT to write their mom's birthday card: 'She not only cried, she keeps it on her side table and reads [it] over and over, every day since I gave it to her,' they wrote. 'I can never tell her.'

Research about transparency and AI use mostly focuses on professional settings, where 40% of US workers use the tools. However, a recent study from the University of Arizona concluded that 'AI disclosure can harm social perceptions' of the disclosers at work, and similar findings apply to personal relationships.

In one 2023 study, 208 adults received a 'thoughtful' note from a friend; those who were told the note was written with AI felt less satisfied and 'more uncertain about where they stand' with the friend, according to Bingjie Liu, the lead author of the study and an assistant professor of communication at Ohio State University.

On subreddits such as r/AmIOverreacting or r/Relationship_advice, it's easy to find users expressing distress upon discovering, say, that their husband used ChatGPT to write their wedding vows. ('To me, these words are some of the most important that we will ever say to each other. I feel so sad knowing that they weren't even his own.')

AI-assisted personal messages can convey that the sender didn't want to bother with sincerity, says Dr Vanessa Urch Druskat, a social and organizational psychologist and professor specializing in emotional intelligence. 'If I heard that you were sending me an email and making it sound more empathetic than you really were, I wouldn't let it go,' she says. 'There's a baseline expectation that our personal communications are authentic,' says Druskat. 'We're wired to pick up on inauthenticity, disrespect – it feels terrible,' she says.
But not everyone draws the same line when it comes to how much AI involvement is tolerable or what constitutes deceit by omission. Curious, I conducted an informal social media poll among my friends: if I used AI to write their whole birthday card, how would they feel? About two-thirds said they would be 'upset'; the rest said it would be fine. But if I had used AI only in a supplementary role – say, some editing to hit the right tone – the results were closer to 50-50.

Using AI in personal messages is a double gamble: first, that the recipient won't notice, and second, that they won't mind. Still, there are arguments for why taking the risk is worthwhile, and why a hint of AI in a Hinge message might not be so bad. For instance, AI can be helpful for bridging communication gaps rooted in cultural, linguistic or other forms of diversity. Plus, personal messages have never been totally spontaneous and original. People routinely seek advice from friends, therapists or strangers about disagreements, delicate conversations or important notes. Greeting cards have long come with pre-written sentiments (although Mother's Day founder Anna Jarvis once scolded that printed cards were 'lazy').

Sara Jane Ho, an etiquette expert, says she has used ChatGPT 'in situations where I've been like: 'Change this copy to make it more heartfelt.' And it's great copy.' Ho argues that using ChatGPT to craft a personal message actually shows 'a level of consideration'. Expressing sensitivity helps build relationships, and it makes sense that people who struggle with words would appreciate assistance. Calculators are standard digital tools; why not chatbots?

'I always say that the spirit of etiquette is about putting others at ease,' she says. 'If the end result is something that is nice for the other person and that shows respect or consideration or care, then they don't need to see how the sausage is made.' I asked Ho what she would say to a person upset by an AI-assisted note.
'I'd ask them: 'Why are you so easily offended?'' Ho says. Plus, she says using AI is convenient and fast. 'Why would you make yourself walk someplace if you have a car?' she asks.

Increasingly, people are drifting through digitized lives that reject 'the very notion that engagement should require effort', perceiving less value in character building and experiences like 'working hard' and 'learning well', author and educator Kyla Scanlon argued in an essay last month. This bias toward effortlessness characterizes the emotional work of relationships as burdensome, even though it helps create intimacy.

'People have sort of conditioned themselves to want a completely seamless and frictionless experience in their everyday lives 100% of the time,' says Josh Lora, a writer and sociologist who has written about AI and loneliness. 'There are people who Uber everywhere, who Seamless everything, who Amazon everything, and render their lives completely smooth.'

Amid this convenience-maxxing, AI figures as an efficient way out of relational labor, or the small mistakes, tensions and inadequacies in communication, says Lora. We use language to be understood and to co-create a sense of self. 'So much of our experience as people is rendered in the struggle to make meaning, to self actualize, to explain yourself to another person,' Lora says.

But when we outsource that labor to a chatbot, we lose out on developing self-expression, nuanced social skills, and emotional intelligence. We also lose out on the feelings of interpersonal gratitude that arise from taking the time to write kindly to our loved ones, as one 2023 study from the University of California, Riverside, found.

Many people already approach life as a series of objectives: get good grades, get a job, earn money, get married. In that mindset, a relationship can feel like something to manage effectively rather than a space of mutual recognition. What happens if it stops feeling worth the effort?
Summer (who requested a pseudonym for privacy), a 30-year-old university tutor, said she became best friends with Natasha (also a pseudonym) while pursuing their respective doctoral degrees. They lived four hours apart, and much of their relationship unfolded in long text message exchanges, debating ideas or analyzing people they knew.

About a year ago, Natasha began to use ChatGPT to help with work tasks. Summer said Natasha quickly seemed deeply enamoured with AI's speed and fluency. (Researchers have warned the technology can be addictive, to the detriment of human social engagement.)

Soon, subtle tone and content changes led Summer to suspect Natasha was using AI in their personal messages. (Natasha did not respond to a request for comment.) After six years of lively intellectual curiosity, their communication dwindled. Occasionally, Natasha asked Summer for her opinion on something, then disappeared for days. Summer felt like she was the third party to a deep conversation happening between her best friend and a machine. 'I'd engage with her as a friend, a whole human being, and she'd engage with me as an obstacle to this meaning-making machine of hers,' Summer tells me.

Summer finally called Natasha to discuss how AI use was affecting their friendship. She felt Natasha was exchanging the messy imperfections of rambling debate for an emotionally bankrupt facsimile of ultra-efficient communication. Natasha didn't deny using chatbots, and 'seemed to always have a reason' for continuing despite Summer's moral and intellectual qualms.

Summer 'felt betrayed' that a close friend had used AI as 'an auxiliary' to talk to her. 'She couldn't find the inherent meaning in us having an exchange as people,' she says. To her, adding AI into relationships 'presupposes inadequacy' in them, and offers a sterile alternative: always saying the right thing, back and forth, frictionless forever. The two women are no longer friends.
'What you're giving away when you engage in too much convenience is your humanity, and it's creepy to me,' Summer says.

Dr Mathieu Corteel is a philosopher and author of a book grappling with the implications of AI (only available in French) as a game we have all entered without 'knowing the rules'. Corteel is not anti-AI, but believes that overreliance on it alienates us from our own judgment, and by extension, humanity – 'which is why I consider it as one of the most important philosophical problems we are facing right now', he says.

If a couple, for example, expressed love through AI-generated poems, they would be skipping crucial steps of meaning-making to create 'a combination of symbols' absent of meaning, he says. You can interpret meaning retrospectively, reading intent into an AI's output, 'but that's just an effect', he says. 'AI is unable to give meaning to something because it's outside of the semantics produced by human beings, by human culture, by human interrelation, the social world,' says Corteel.

If AI can churn out convincingly heartfelt words, perhaps even our most intimate expressions have always been less special than we had hoped. Or, as the tech theorist Bogna Konior recently wrote: 'What chatbots ultimately teach us is that language ain't all that.'

Corteel agrees that language is inherently flawed; we can never fully express our feelings, only try. But that gap between feeling and expression is where love and meaning live. The very act of striving to shrink that distance helps define those thoughts and feelings. AI, by contrast, offers a slick way to bypass that effort. Without the time it takes to reflect on our relationships, the struggle to find words, the practice of communicating, what are we exchanging?

'We want to finish quickly with everything,' says Corteel. 'We want to just write a prompt and have it done. And there's something that we are losing – it's the process. And in the process, there's many important aspects.
It is the co-construction of ourselves with our activities,' he says. 'We are forgetting the importance of the exercise.'