
This Week in AI & Cybersecurity: Key Global Moves and Breakthroughs
You're reading Entrepreneur India, an international franchise of Entrepreneur Media.
From high-stakes talent wars among Big Tech companies to India's push for indigenous cybersecurity innovation, this week saw significant developments in artificial intelligence and digital defence. Here are five major updates in AI and cybersecurity.
Meta Continues Talent Poaching Spree from Apple to Build AGI
Meta has made two more senior-level hires from Apple's AI team for its newly launched Superintelligence Labs division. Mark Lee has already joined, while Tom Gunter is expected to follow. Both worked under Ruoming Pang, Apple's former head of Foundation Models, who recently accepted a multi-million-dollar compensation package from Meta. This intensifies the rivalry among AI giants, as Meta aggressively builds toward Artificial General Intelligence (AGI), having already brought on board researchers from OpenAI, Anthropic, and Google.
Perplexity AI Hits USD 18 Billion Valuation Amid Funding Surge
Generative AI search startup Perplexity AI has secured an additional USD 100 million in funding, pushing its valuation to USD 18 billion. This is an extension of an earlier round that valued the company at USD 14 billion. Founded in 2022, Perplexity has experienced rapid valuation jumps, underlining continued investor interest in core internet services reimagined through AI, particularly in competition with traditional search engines like Google.
DeepMind Unveils Gemini Robotics On-Device Model for Real-World AI Tasks
Google DeepMind announced the release of Gemini Robotics On-Device, a foundation model for vision-language-action (VLA) tasks that operates directly on robotic hardware with low latency. Trained using ALOHA robots, the model adapts to new tasks such as food preparation or playing cards with as few as 50–100 demonstrations. The on-device model achieved task success rates of over 60 per cent, marking a notable step toward deploying multimodal AI in real-world physical environments.
IIT Kanpur's C3iHub Launches Cohort VII to Incubate Cybersecurity Startups
The C3iHub at IIT Kanpur has opened applications for its seventh startup incubation cohort, offering funding support of up to INR 30 lakh (approximately USD 35,900) per startup over two years. The initiative targets Indian startups tackling challenges such as mobile forensics, LLM security, and supply chain risks. Following its recent upgrade to a Technology Translational Research Park, C3iHub continues to strengthen India's cybersecurity capabilities through innovation and academic-industry collaboration.
Commenting on the launch, Prof. Manindra Agrawal, Director, IIT Kanpur, said, "At IIT Kanpur, we are committed to advancing India's cybersecurity landscape through cutting-edge research, deep-tech innovation, and impactful entrepreneurship. C3iHub's seventh startup cohort reflects this vision by nurturing ventures working at the forefront of cybersecurity. Recently upgraded as Technology Translational Research Park, C3i hub has been delivering solutions that address emerging national and global challenges, while also contributing to India's self-reliance."
Google's AI Agent 'Big Sleep' Stops Cyber Exploit Before Deployment
Google CEO Sundar Pichai announced that the company's AI agent Big Sleep proactively detected and neutralised a cyber exploit before it could be executed. Though details remain sparse, this marks a shift in cyber defence from reactive to pre-emptive AI action, potentially paving the way for widespread adoption of AI agents in enterprise and national security systems.
"New from our security teams: Our AI agent Big Sleep helped us detect and foil an imminent exploit. We believe this is a first for an AI agent — definitely not the last — giving cybersecurity defenders new tools to stop threats before they're widespread," Pichai posted on X (formerly Twitter).