Latest news with #GoogleDeepMind


CNET
8 hours ago
- Science
- CNET
Google AI Model Helps Us See the Planet as We Never Have Before
It's a view of Mother Earth as we've never seen her, and it just might help us solve some of our most existential issues: Google has launched a new AI model called AlphaEarth Foundations, which combines images and measurements from satellites and other sources to create current, accurate digital representations of lands and waters. With all this data, scientists and researchers can monitor problems like water scarcity, deforestation and crop health, among others. Google says AlphaEarth's AI modeling has already been helpful. "Our partners are already seeing significant benefits, using the data to better classify unmapped ecosystems, understand agricultural and environmental changes, and greatly increase the accuracy and speed of their mapping work," the Google DeepMind blog said Wednesday. Satellites deliver a treasure trove of data every day, but all this information varies in its modalities -- such as satellite, radar, simulations and laser mapping -- and in how current it is. AlphaEarth can integrate all that data: it "weaves all this information together to analyze the world's land and coastal waters in sharp, 10x10 meter squares." AlphaEarth also creates summaries for each of these squares that "require 16 times less storage space than those produced by other AI systems that we tested," dramatically reducing the cost of planetary-scale analysis, Google said. Scientists "no longer have to rely on a single satellite passing overhead."
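To put the storage claim in perspective, here is a rough back-of-envelope sketch. It assumes the per-square summaries are the 64-dimensional embeddings mentioned elsewhere in this digest and that each dimension is stored in a single byte; the one-byte quantization is purely an assumption, not a published figure.

```python
# Back-of-envelope estimate of storage for one annual planetary snapshot.
# Assumptions (not published figures): 64 dimensions per 10x10 m square,
# 1 byte per dimension after quantization.
land_area_km2 = 149e6             # Earth's land area, roughly
pixels_per_km2 = 1e6 / (10 * 10)  # 10x10 m squares per square kilometre
n_pixels = land_area_km2 * pixels_per_km2

bytes_per_pixel = 64 * 1          # 64 dims x 1 byte each (assumed)
total_tb = n_pixels * bytes_per_pixel / 1e12
print(f"~{total_tb:.0f} TB per annual snapshot")  # roughly 95 TB
```

Even under these generous assumptions the totals run to tens of terabytes per year, which is why a 16x reduction relative to other systems matters at planetary scale.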

The Hindu
18 hours ago
- Science
- The Hindu
Google DeepMind's AlphaEarth AI model maps the planet like a 'virtual satellite'
Google DeepMind has unveiled a new AI model called AlphaEarth Foundations that can map the world's land and coastal waters like a 'virtual satellite.' This observation data can help scientists make decisions around serious issues like food security, deforestation, urban expansion and water resources. The model integrates data into a unified digital representation or 'embedding,' which is easily processed by computer systems. The team also released a dataset of annual embeddings in Google Earth Engine to promote research and real-world use. In a blog post, the team described how the model works. 'First, it combines volumes of information from dozens of different public sources — optical satellite images, radar, 3D laser mapping, climate simulations, and more. It weaves all this information together to analyse the world's land and coastal waters in sharp, 10x10 meter squares, allowing it to track changes over time with remarkable precision,' it noted. Google DeepMind said the AI model addresses two big issues with mapping geospatial data: data overload and inconsistency. The researchers also claimed that the AI model delivered a 24% lower error rate than other leading AI models and required 16 times less storage.
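Because the annual embeddings are published through Google Earth Engine, one natural use is similarity search: sample the embedding at a known location and score every other pixel against it. The sketch below uses the Earth Engine Python API; the collection ID, the unit-length property of the embeddings, and the sample coordinates are illustrative assumptions, not details confirmed by the articles.

```python
# A minimal similarity-search sketch with the Earth Engine Python API.
# The collection ID below is assumed, as is the claim that embeddings
# are unit-length (which makes a dot product act like cosine similarity).
import ee

ee.Initialize()

# Load one annual mosaic of the 64-band embedding dataset (assumed ID).
embeddings = (
    ee.ImageCollection('GOOGLE/SATELLITE_EMBEDDING/V1/ANNUAL')
    .filterDate('2024-01-01', '2025-01-01')
    .mosaic()
)

# Sample the 64-dimensional embedding at a reference point (coordinates
# are arbitrary, chosen purely for illustration).
point = ee.Geometry.Point([-78.5, -0.2])
ref = embeddings.reduceRegion(ee.Reducer.first(), point, scale=10)

# Dot product of every pixel's embedding with the reference vector:
# multiply band-wise by a constant image, then sum across bands.
ref_image = ee.Image.constant(ref.values(embeddings.bandNames()))
similarity = embeddings.multiply(ref_image).reduce(ee.Reducer.sum())
```

The resulting single-band image scores how closely each 10x10 meter square resembles the reference location; thresholding it gives a quick "find more places like this one" map.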


India Today
19 hours ago
- Business
- India Today
Mark Zuckerberg says Meta Superintelligence now in sight but it won't be fully open source
Meta has been spending like there's no tomorrow, snapping up AI startups, building vast data centres, and poaching some of the brightest minds in the field. And if you ask Mark Zuckerberg, it's finally starting to pay off. In a memo posted on Wednesday ahead of Meta's latest earnings report, the CEO set out his vision for a new era of artificial intelligence that goes beyond the chatbots and digital assistants we've seen so far. 'Over the last few months we have begun to see glimpses of our AI systems improving themselves,' Zuckerberg wrote. 'The improvement is slow for now, but undeniable. Developing superintelligence is now in sight.' He offered no precise definition of 'superintelligence', nor any benchmarks for when it would arrive, but he did admit that such a breakthrough 'would pose novel safety concerns'. 'We'll need to be rigorous about mitigating these risks and careful about what we choose to open source,' he added.

Zuckerberg further explained, "This is distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, and then humanity will live on a dole of its output. At Meta, we believe that people pursuing their individual aspirations is how we have always made progress expanding prosperity, science, health, and culture. This will be increasingly important in the future as well." "The intersection of technology and how people live is Meta's focus, and this will only become more important in the future," he wrote. He used the memo to draw a line between Meta's ambitions and those of its competitors. 'Personal superintelligence', as he calls it, will be designed to empower individuals rather than to automate away work. 'The rest of this decade seems likely to be the decisive period for determining the path this technology will take, and whether superintelligence will be a tool for personal empowerment or a force focused on replacing large swaths of society,' he wrote.

This is not the first time he has highlighted Meta's different philosophy. While rivals such as OpenAI, Google DeepMind and xAI keep their models firmly locked away, Meta has made openness a key selling point, particularly with its Llama models. In a 2024 letter, he wrote, 'Starting next year, we expect future Llama models to become the most advanced in the industry.' However, there's a caveat. Speaking on a podcast last year, he admitted that this commitment to openness is not absolute. 'If at some point however there's some qualitative change in what the thing is capable of, and we feel like it's not responsible to open source it, then we won't.' In other words, open source may not always be the default. And critics already argue that Llama is not fully open in the strictest sense because Meta hasn't released the training data behind its models.

So why is Meta happy to share what others guard? As Zuckerberg explained last year, Meta's core business is advertising, not licensing AI: 'Releasing Llama doesn't undercut our revenue, sustainability, or ability to invest in research like it does for closed providers.' His longer-term plan is starting to come into focus. Instead of selling API access to its models, Meta hopes to build 'personal superintelligence' into its own hardware, from augmented reality glasses to virtual reality headsets.
'Personal devices like glasses that understand our context because they can see what we see, hear what we hear, and interact with us throughout the day will become our primary computing devices,' Zuckerberg wrote. As for whether future models will remain open, Meta is hedging its bets. A spokesperson stated, 'Our position on open source AI is unchanged. We plan to continue releasing leading open source models. We haven't released everything we've developed historically and we expect to continue training a mix of open and closed models going forward,' reported TechCrunch.
Yahoo
a day ago
- Business
- Yahoo
Zuckerberg signals Meta won't open source all of its ‘superintelligence' AI models
Meta CEO Mark Zuckerberg shared his vision on Wednesday for 'personal superintelligence,' the idea that people should be able to use AI to achieve their personal goals. Smuggled into the letter is a signal that Meta is shifting how it plans to release AI models as it pursues 'superintelligence.' 'We believe the benefits of superintelligence should be shared with the world as broadly as possible,' wrote Zuckerberg. 'That said, superintelligence will raise novel safety concerns. We'll need to be rigorous about mitigating these risks and careful about what we choose to open source.' That wording about open source is significant. Zuckerberg has historically positioned Meta's Llama family of open models as the company's key differentiator from competitors like OpenAI, xAI, and Google DeepMind. Meta's goal has been to create open AI models that were as good as or better than those closed models. In a 2024 letter, Zuckerberg wrote, 'Starting next year, we expect future Llama models to become the most advanced in the industry.' Zuckerberg has previously left himself room to maneuver on this commitment. 'If at some point however there's some qualitative change in what the thing is capable of, and we feel like it's not responsible to open source it, then we won't,' he said in a podcast last year. And while many say Llama doesn't fit the strict definition of open source AI — partly because Meta hasn't released its massive training datasets — Zuckerberg's words point to a possible change in priority: Open source may no longer be the default for Meta's cutting-edge AI. There's a reason why Meta's rivals keep their models closed. Closed models give companies more control over monetizing their products. Zuckerberg pointed out last year that Meta's business isn't reliant on selling access to AI models, so 'releasing Llama doesn't undercut our revenue, sustainability, or ability to invest in research like it does for closed providers.' Meta, of course, makes most of its money from selling internet advertising. Still, that stated viewpoint on open models was before Meta started to feel like it was falling behind competitors, and executives became obsessed with beating OpenAI's GPT-4 model while developing Llama 3. Cut to June 2025, when Meta began its public AGI sprint in earnest by investing $14.3 billion in Scale AI, acquiring Scale's founder and CEO, and restructuring its AI efforts under a new unit called Meta Superintelligence Labs. Meta has spent billions of dollars to acquire researchers and engineers from top AI firms and build out new data centers. Recent reports indicate that all that investment has led Meta to pause testing on its latest Llama model, Behemoth, and instead focus efforts on developing a closed model. With Zuckerberg's mission for introducing 'personal superintelligence' to the world — a decided shift from the rivals he says are working on 'automating all valuable work' — his AI monetization strategy is taking shape. It's clear from Zuckerberg's words today that Meta plans to deliver 'personal superintelligence' through its own products like augmented reality glasses and virtual reality headsets. 'Personal devices like glasses that understand our context because they can see what we see, hear what we hear, and interact with us throughout the day will become our primary computing devices,' Zuckerberg wrote in Wednesday's letter. 
When asked about Meta potentially keeping its most advanced models closed, a Meta spokesperson said that the company remains committed to open source AI and also expects to train closed source models in the future. 'Our position on open source AI is unchanged,' the spokesperson said. 'We plan to continue releasing leading open source models. We haven't released everything we've developed historically and we expect to continue training a mix of open and closed models going forward.' This article was updated with more information about Mark Zuckerberg's stance on open AI models.


Time of India
a day ago
- Science
- Time of India
Google DeepMind launches AI model that works like ‘virtual satellite'
Google DeepMind, Google's AI unit, has introduced an AI model, AlphaEarth Foundations, that integrates a massive cache of Earth observation data and generates a data representation that can help scientists and researchers understand and monitor our planet. The company says that the newest AI model functions like a 'virtual satellite,' providing insights into global changes. According to Google DeepMind, the model accurately and efficiently characterises the planet's entire terrestrial land and coastal waters by integrating huge amounts of Earth observation data into a unified digital representation, or "embedding," that computer systems can easily process. This allows the model to provide scientists with a more complete and consistent picture of our planet's evolution.

How AlphaEarth Foundations AI model works

According to the company, the data has been taken by satellites that capture information-rich images and measurements, providing scientists and experts with a nearly real-time view of our planet. The data is impactful; however, its complexity, multimodality and refresh rate create a new challenge of connecting different datasets and making use of them all effectively. The AI model visualises the rich details of the world by assigning the colours red, green and blue to three of the 64 dimensions (a rough code sketch of this follows below). For instance, in Ecuador, AlphaEarth Foundations can 'see through persistent cloud cover' to detail agricultural plots. In Antarctica, it maps complex surfaces in clear detail, an area notoriously difficult for irregular satellite imaging. It can even reveal variations in Canadian agricultural land use that are invisible to the naked eye. Google says it tested AlphaEarth Foundations, consistently finding it to be the most accurate when compared against traditional methods and other AI mapping systems. To accelerate research and unlock use cases, Google is releasing a collection of AlphaEarth Foundations' annual embeddings as the Satellite Embedding dataset in Google Earth Engine. The company is currently using the model to generate annual embeddings and believes its utility could be further amplified by combining it with general reasoning LLM agents like Gemini in the future.
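As a rough illustration of that red-green-blue trick, the sketch below selects three of the 64 embedding bands and maps them to RGB using the Earth Engine Python API. The collection ID, the band names, and the display range are assumptions for illustration; choosing different bands simply highlights different structure in the data.

```python
# A minimal visualization sketch (Earth Engine Python API). The
# collection ID, band names and value range below are assumptions,
# not identifiers confirmed by the article.
import ee

ee.Initialize()

# One annual mosaic of the 64-band embedding dataset (assumed ID).
embeddings = (
    ee.ImageCollection('GOOGLE/SATELLITE_EMBEDDING/V1/ANNUAL')
    .filterDate('2024-01-01', '2025-01-01')
    .mosaic()
)

# Pick any three of the 64 dimensions and assign them to red, green
# and blue; which bands to choose is a free stylistic choice.
rgb = embeddings.select(['A01', 'A16', 'A09']).visualize(
    min=-0.3, max=0.3  # assumed display range for embedding values
)
```

Rendering the result in a notebook typically goes through a helper such as geemap's Map.addLayer; the point is only that three embedding dimensions, not raw spectral bands, drive the colours on screen.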