
Some AI Prompts Can Cause 50 Times More CO2 Emissions Than Others
Whether it's writing an email or planning a vacation, about a quarter of Americans say they interact with artificial intelligence several times a day, while another 28% say they use it about once a day.
But many people may be unaware of the environmental impact of their searches. A request made using ChatGPT, for example, consumes 10 times the electricity of a Google search, according to the International Energy Agency. In addition, data centers, which are essential for powering AI models, represented 4.4% of all the electricity consumed in the U.S. in 2023, and by 2028 they're expected to consume approximately 6.7% to 12% of the country's electricity. That share is likely only going to increase: the number of data centers worldwide has risen from 500,000 in 2012 to over 8 million as of September 2024.
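To put the 10-times figure in perspective, here is a quick back-of-the-envelope sketch. The per-query energy values are assumptions chosen for illustration (ballpark estimates often cited elsewhere), not numbers from this article.

```python
# Rough illustration of the scale behind the IEA comparison above. The
# per-query energy figures are assumptions for this sketch, not numbers
# taken from this article.

GOOGLE_SEARCH_WH = 0.3   # assumed watt-hours per conventional web search
CHATGPT_QUERY_WH = 3.0   # assumed watt-hours per ChatGPT request (~10x)

daily_queries = 1_000_000
search_kwh = daily_queries * GOOGLE_SEARCH_WH / 1_000
chatgpt_kwh = daily_queries * CHATGPT_QUERY_WH / 1_000

print(f"1M searches:   ~{search_kwh:,.0f} kWh")
print(f"1M AI queries: ~{chatgpt_kwh:,.0f} kWh ({chatgpt_kwh / search_kwh:.0f}x)")
```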
A new study, published in the journal Frontiers in Communication, aims to draw more attention to the issue. Researchers analyzed the number of 'tokens' (the smallest units of data that a language model uses to process and generate text) required to produce responses, and found that certain prompts can release up to 50 times more CO2 emissions than others.
Different AI models use different numbers of parameters, the internal variables that a model learns during training and then uses to produce results; models with more parameters often perform better. The study examined 14 large language models (LLMs) ranging from 7 billion to 72 billion parameters, asking each the same 1,000 benchmark questions across a range of subjects.
Reasoning-enabled models, which can handle more complex tasks, created an average of 543.5 'thinking' tokens per question (additional units of data that reasoning LLMs generate before producing an answer). More concise models, by comparison, required just 37.7 tokens per question. The more tokens used, the higher the emissions, regardless of whether the answer was correct.
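To see how that token gap could translate into an emissions gap, here is a minimal sketch that assumes emissions scale roughly linearly with the number of tokens generated. The per-token emission factor is a hypothetical placeholder, and the reasoning model's answer length is assumed to match the concise model's; neither comes from the study.

```python
# Minimal sketch: assumes CO2 emissions scale roughly linearly with the
# number of tokens a model generates per answer. The per-token emission
# factor is a hypothetical placeholder, not a figure from the study.

EMISSIONS_PER_TOKEN_G = 0.01  # hypothetical grams of CO2e per generated token

def estimate_emissions_g(thinking_tokens: float, answer_tokens: float) -> float:
    """Estimate grams of CO2e for one answer from its token counts."""
    return (thinking_tokens + answer_tokens) * EMISSIONS_PER_TOKEN_G

# Average token counts reported in the study; the reasoning model's
# answer-token count is assumed equal to the concise model's for simplicity.
reasoning = estimate_emissions_g(thinking_tokens=543.5, answer_tokens=37.7)
concise = estimate_emissions_g(thinking_tokens=0, answer_tokens=37.7)

print(f"Reasoning model: ~{reasoning:.2f} g CO2e per question")
print(f"Concise model:   ~{concise:.2f} g CO2e per question")
print(f"Ratio: ~{reasoning / concise:.1f}x")
```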
The subject matter also affected the amount of emissions produced. Questions on straightforward topics, like high school history, produced up to six times fewer emissions than subjects that required lengthy reasoning, like abstract algebra or philosophy.
Currently, many models come with an inherent 'accuracy-sustainability trade-off,' the researchers say. The model they deemed most accurate, the reasoning-enabled Cogito, produced three times more CO2 emissions than similarly sized models that generated more concise answers. The challenge in the current landscape of AI models, then, is optimizing for both energy efficiency and accuracy. 'None of the models that kept emissions below 500 grams of CO₂ equivalent achieved higher than 80% accuracy on answering the 1,000 questions correctly,' first author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences, said in a press release.
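One way to picture that trade-off: filter a set of models by an emissions budget and check the best accuracy that survives the cut. The models and numbers below are invented for illustration; only the 500-gram threshold comes from the study.

```python
# Hypothetical numbers, clearly not from the study, just to illustrate the
# trade-off Dauner describes: filter models by an emissions budget, then
# check the best accuracy that survives the cut.

models = {
    # name: (accuracy on 1,000 questions, grams CO2e emitted)
    "concise-A": (0.68, 180),
    "concise-B": (0.74, 420),
    "reasoning-A": (0.84, 1350),
    "reasoning-B": (0.91, 1980),
}

budget_g = 500
within_budget = {n: (acc, g) for n, (acc, g) in models.items() if g < budget_g}
best = max(acc for acc, _ in within_budget.values())
print(f"Best accuracy under {budget_g} g CO2e: {best:.0%}")  # stays below 80%
```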
It's not just the types of questions asked or the accuracy of the answers; the models themselves also make a difference in emissions. For DeepSeek R1 (70 billion parameters) to answer 600,000 questions would create CO2 emissions equal to a round-trip flight from London to New York, while Qwen 2.5 (72 billion parameters) can answer more than three times as many questions, about 1.9 million, with similar accuracy and the same emissions.
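As a back-of-the-envelope check, the total emissions budget (one flight's worth of CO2) cancels out of the comparison, so the article's question counts alone give the per-question ratio:

```python
# Back-of-the-envelope comparison using the article's figures. The total
# budget (one round-trip London-New York flight's worth of CO2) cancels
# out, so only the ratio between the two models matters here.

FLIGHT_BUDGET = 1.0  # one flight's worth of CO2, arbitrary units

deepseek_questions = 600_000   # questions DeepSeek R1 (70B) answers per budget
qwen_questions = 1_900_000     # questions Qwen 2.5 (72B) answers per budget

deepseek_per_q = FLIGHT_BUDGET / deepseek_questions
qwen_per_q = FLIGHT_BUDGET / qwen_questions

print(f"DeepSeek R1 emits ~{deepseek_per_q / qwen_per_q:.1f}x more CO2 "
      f"per question than Qwen 2.5 at similar accuracy.")  # ~3.2x
```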
The researchers hope that users might be more mindful of the environmental impact of their AI use. 'If users know the exact CO₂ cost of their AI-generated outputs, such as casually turning themselves into an action figure,' said Dauner, 'they might be more selective and thoughtful about when and how they use these technologies.'

Related Articles
Yahoo
2 hours ago
2 Top Stocks Down 40% to Buy With $1,000
Reddit's data could be a valuable asset that Wall Street is overlooking. Marvell is well-positioned to benefit from data centers for AI workloads.

Buying shares of competitively positioned companies that are experiencing robust growth for their products can put you on the road to financial freedom. Sometimes the market gives you the opportunity to buy quality stocks at big discounts that can set you up for outstanding results. If you have $1,000 you don't need for at least five years, there are a few growth stocks Wall Street is currently sleeping on that could deliver great returns over the next few years.

Reddit (NYSE: RDDT) is a popular online platform built around discussion threads on an endless number of topics. Over 400 million people visit Reddit on a weekly basis, which has driven strong growth in the company's advertising revenue, the primary means it monetizes its platform. The stock is down 39% from its recent highs, which can be attributed to two things. First, it was due for a correction after climbing to a high price-to-sales multiple of around 25; it now trades at a lower multiple of 19. Second, Wall Street has been concerned about Alphabet's Google launching new artificial intelligence (AI) features in Search. Google's AI Overviews, for example, takes content from Reddit and summarizes it in Google Search results. This could lead to less traffic going directly to Reddit's platform and limit its revenue growth prospects.

However, Reddit continued to report extremely strong growth in the first quarter. Revenue grew 61% year over year, with 108 million daily active users. Advertisers continue to invest in Reddit's platform, given the high engagement from these users, not to mention that many people visiting Reddit are researching a product to buy, making it more likely they will click on an ad. All the discussions and comments across Reddit's communities are not only driving strong advertising growth but also opening up new growth opportunities. In fact, Reddit is starting to make a significant amount of money licensing its data to companies building AI models. Its "other" revenue grew 66% year over year in Q1, representing about 9% of its quarterly revenue. This growth in data licensing signals a competitive advantage for Reddit not fully reflected in the stock price, which makes the stock a compelling buy after the recent dip.

There is substantial investment pouring into data center infrastructure (e.g., advanced chips and networking systems) to lay the groundwork for an AI-driven economy. Marvell Technology (NASDAQ: MRVL) is riding this wave, yet the stock is down 41% from its recent high, setting up a buying opportunity ahead of a potential bull run. Marvell is a leader in supplying custom chip solutions and networking products for data centers. Its data center business totaled 76% of its revenue last quarter and also, coincidentally, grew 76% year over year. The chipmaker has benefited greatly from its partnership with Amazon Web Services, the leading cloud services provider for enterprises. In late 2024, it signed a new five-year deal to supply AWS with custom AI chips and networking products, which are needed for faster data transfer in AI workloads. Marvell also has a partnership with Nvidia to integrate its chips in Nvidia's NVLink Fusion, a game-changing product that brings together custom chip solutions from multiple suppliers on a single platform.
This could spell more demand for Marvell's accelerator processing units (XPUs). These agreements with AWS and Nvidia significantly bolster Marvell's long-term prospects. The stock looks expensive, trading at high multiples of sales and earnings, but keep in mind that its margins are improving on growing demand. Adjusted earnings more than doubled year over year to $0.62 in the first quarter. Wall Street analysts expect 46% annualized earnings growth over the next few years, which could support significant upside in the stock.

Before you buy stock in Reddit, consider this: The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Reddit wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years. Consider when Netflix made this list on December 17, 2004... if you invested $1,000 at the time of our recommendation, you'd have $713,547!* Or when Nvidia made this list on April 15, 2005... if you invested $1,000 at the time of our recommendation, you'd have $966,931!* Now, it's worth noting Stock Advisor's total average return is 1,062%, a market-crushing outperformance compared to 177% for the S&P 500. Don't miss out on the latest top 10 list, available when you join Stock Advisor.

*Stock Advisor returns as of June 23, 2025

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. John Ballard has positions in Nvidia. The Motley Fool has positions in and recommends Alphabet, Amazon, and Nvidia. The Motley Fool recommends Marvell Technology. The Motley Fool has a disclosure policy.

2 Top Stocks Down 40% to Buy With $1,000 was originally published by The Motley Fool


Tom's Guide
7 hours ago
These simple prompt tweaks made my AI videos look way better — try them yourself
As chatbots continue to get smarter and gain the ability to do more, AI video generators are evolving just as fast. And while many are still in beta or invite-only, one thing is clear: prompt quality makes a huge difference. After testing several platforms (including Google Veo 3, the premium video generator currently available for $249/month through Google Ultra), I've learned that the way you write your prompts completely changes the results. Here's how I leveled up my AI videos, and the tricks you can use to make your own clips look sharper, more cinematic and more professional.

To keep things fair, all prompts were tested using Veo 2, Google's lower-tier video generator that doesn't include audio. To try it yourself, you'll need to join the waitlist for Google Labs' VideoFX tool, where Veo 2 is currently available. You can also access Veo 3 through Google AI Studio or Canva Pro. These prompts can be used on any of the best AI video generators, including Midjourney, Firefly and Sora.

A common mistake many users make is starting with adjectives like 'a stunning cinematic video of a beach at sunset.' Sounds nice, but vague. Instead, I always find better success when I lead with structure: 'wide establishing shot of a beach at golden hour, camera slowly panning left.' The more you write like a director, the more polished your results will look.

Veo seems to respond well to cinematic language. Prompts like 'low angle tracking shot of a child running through tall grass' or 'drone shot pulling away from a mountain cabin at sunrise' generated far more dynamic visuals than basic descriptions. If you're not familiar with filmmaking terms, think in shots: wide, medium, close-up, overhead. While I am not a camera operator, I have written enough screenplays and been on enough television and movie sets to have these memorized. A few basic filmmaking terms I frequently use in my prompts: wide establishing shot, tracking shot, drone shot, slow-motion close-up, overhead shot and first-person POV.

Instead of asking for a whole story in one sentence, break your prompt into visual beats. For example:

Beat 1: Wide shot of a skateboarder riding downhill at sunset
Beat 2: Slow motion close-up of wheels hitting the pavement
Beat 3: Camera follows from behind as sparks fly from the board

Even if your preferred AI video generator doesn't yet support full scene transitions, this approach helps guide the system toward more intentional storytelling.

Movement is key. I started adding phrases like 'camera tilts up,' 'fog rolling in,' or 'wind rustling trees.' These made a surprising difference in realism and cinematic feel. So did mood cues like 'soft lighting,' 'overcast skies,' or 'neon glow.'

AI video is still new, and results can be unpredictable. I ran each prompt multiple times to compare outputs, tweaking a word here and there. Veo's consistency is noticeably better than other tools I've tried. It doesn't always nail it, but when it does, the results can look like they came from a professional video shoot.

The prompts below were used with Veo 3, so they also have sound.

Landscape prompt: Wide establishing shot of a mountain range at golden hour, camera slowly panning left. Fog rolls between the peaks, soft ambient lighting, cinematic depth of field.

Sports prompt: Slow-motion shot of a skateboarder jumping off a ramp at sunset. Camera follows from low angle as dust kicks up. Warm lighting, dynamic energy.

Food reel prompt: Overhead shot of a steaming bowl of ramen being served in a Tokyo street market. Close-up of chopsticks lifting noodles, steam rising. Warm lighting, vibrant details.

Action POV prompt: First-person POV of a cyclist riding down a forest trail. Leaves fly past, sun flickers through trees. GoPro-style realism, immersive movement.

If your AI videos look generic or 'like AI,' your prompt may be to blame. With a few smart tweaks, focusing on structure, camera angles and specific action, you can get more polished, more cinematic results. Google Veo is one of the most capable tools I've tested, but like any AI, it's only as good as what you feed it. Want to try it yourself? Start small. Pick a scene, describe it like a shot list and watch what happens next. Let me know in the comments how everything came out!
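For readers who like to script their workflows, here is a hypothetical sketch of how the shot-list approach could be templated in code. The helper and its field names are illustrative; no specific video generator's API is assumed.

```python
# Hypothetical helper showing one way to template the shot-list approach
# described above. The function and field names are illustrative; no
# specific video generator's API is assumed.

def build_prompt(beats: list[str], camera: str, mood: str) -> str:
    """Join visual beats with camera and mood cues into one prompt string."""
    shot_list = " ".join(f"Beat {i + 1}: {beat}." for i, beat in enumerate(beats))
    return f"{shot_list} {camera}. {mood}."

prompt = build_prompt(
    beats=[
        "Wide shot of a skateboarder riding downhill at sunset",
        "Slow-motion close-up of wheels hitting the pavement",
        "Camera follows from behind as sparks fly from the board",
    ],
    camera="Low-angle tracking shot, cinematic depth of field",
    mood="Warm lighting, dynamic energy",
)
print(prompt)
```

Keeping beats as separate strings makes it easy to rerun the same scene with one beat tweaked, which mirrors the retry-and-tweak workflow described above.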

Engadget
7 hours ago
Android 16 will protect users from fake cell towers and potential spying threats
It turns out that your smartphone could be an overlooked vulnerability that puts you at risk of being tracked. To combat this, Google is rolling out a new security feature in Android 16 that will warn users if their device is using a fake or insecure mobile network, or if that network requests identifying information about a connected device. However, these features likely won't be available until the next generation of Android devices, as first reported by Android Authority. Because current Android devices lack the hardware to support them, the first compatible device may be the Pixel 10, expected to debut later this summer.

The feature is designed to counteract cell site simulators: devices that act like a cell tower and trick nearby devices into connecting to them. Once connected, these simulators can glean sensitive information, like the location of a smartphone. Cell site simulators are better known by their commercial nickname, Stingray, and have reportedly been used by agencies like U.S. Immigration and Customs Enforcement, as well as Customs and Border Protection.

The upcoming security features are rolling out as part of the latest Android OS update, which was released earlier this month. Compatible devices will have the option to toggle "network notifications" on or off, which will warn you if your device connects to an unencrypted network or when the connected network requests your phone's unique identifiers. On top of that, there's another option that lets you turn on "2G network protection" to avoid the less secure mobile network type.