
Latest news with #MunichUniversityofAppliedSciences

ChatGPT isn't great for the planet. Here's how to use AI responsibly.

Washington Post

19-06-2025

  • Washington Post


If you care about the environment, it can be hard to tell how you should feel about using AI models such as ChatGPT in your everyday life. The carbon cost of asking an AI model a single text question can be measured in grams of CO2 — something like 0.0000001 percent of an average American's annual carbon footprint. A query or two, or 1,000, won't make a huge dent over the course of a year. But those little costs start to add up when you multiply them across 1 billion people peppering AI models with requests for text, photos and video.

The data centers that host these models can devour more electricity than entire cities. Predictions about their rapid growth have pushed power companies to extend the lives of coal plants and build new natural gas plants. Keeping those computers cool uses freshwater — about one bottle's worth for every 100 words of text ChatGPT generates.

That doesn't mean you have to shun the technology entirely, according to computer scientists who study AI's energy consumption. But you can be thoughtful about when and how you use AI chatbots. 'Use AI when it makes sense to use it. Don't use AI for everything,' said Gudrun Socher, a computer science professor at Munich University of Applied Sciences. For basic tasks, you may not need AI — and when you do use it, you can choose smaller, more energy-efficient models.

For simple questions — such as finding a store's hours or looking up a basic fact — you're better off using a search engine or going directly to a trusted website than asking an AI model, Socher said. A Google search takes about one-tenth the energy of a ChatGPT query, according to a 2024 analysis from Goldman Sachs — although that may change as Google makes AI responses a bigger part of search. For now, a determined user can avoid Google's default AI-generated summaries by switching to the 'Web' search tab, one of the options alongside images and news.
Adding '-ai' to the end of a search query also seems to work. Other search engines, including DuckDuckGo, give you the option to turn off AI summaries.

If you have a thornier problem, especially one that involves summarizing, revising or translating text, then it's worth using an AI chatbot, Socher said. For some tasks, using AI might actually generate less CO2 than doing the work yourself, according to Bill Tomlinson, a professor of informatics at the University of California at Irvine. 'The real question isn't: Does [AI] have impact or not? Yes, it clearly does,' Tomlinson said. 'The question is: What would you do instead? What are you replacing?'

An AI model can spit out a page of text or an image in seconds, while typing or digitally illustrating your own version might take an hour on your laptop. In that time, a laptop and a human worker will cause more CO2 pollution than an AI prompt, according to a paper Tomlinson co-authored last year. Tomlinson acknowledged there are many other reasons you might not choose to let AI write or illustrate something for you — including worries about accuracy, quality and plagiarism — but he argued it could lower emissions if you use it to save labor and laptop time.

Not all AI models are equal: You can choose between bigger models that use more computing power to tackle complicated questions and smaller ones designed to give shorter, quicker answers using less power. ChatGPT, for instance, allows paying users to toggle between its default GPT-4o model, the bigger and more powerful GPT-4.5 model, and the smaller o4-mini model. Socher said the mini is good enough for most situations. But there is a trade-off among size, energy use and accuracy, according to Socher, who tested the performance of 14 AI language models from Meta, Alibaba, DeepSeek and a Silicon Valley start-up called Deep Cogito in a paper published Thursday.
(Socher and her co-author, Maximilian Dauner, couldn't test popular models such as OpenAI's ChatGPT or Google's Gemini because those companies don't share their code publicly.)

Socher and Dauner asked the AI models 500 multiple-choice and 500 free-response questions on high school math, world history, international law, philosophy and abstract algebra. Bigger models gave more accurate answers but used several times more energy than smaller models. If you have a request for an AI chatbot that involves grappling with complicated or theoretical concepts — such as philosophy or abstract algebra — it's worth the energy cost to use a bigger model, Socher said. But for simpler tasks, such as reviewing a high school math assignment, a smaller model might get the job done with less energy.

No matter what model you use, you can save energy by asking the AI to be concise when you don't need long answers — and by keeping your own questions short and to the point. Models use more energy for every extra word they process. 'People often mistake these things as having some sort of sentience,' said Vijay Gadepally, a senior scientist at the MIT Lincoln Laboratory who studies ways to make AI more sustainable. 'You don't need to say 'please' and 'thank you.' It's okay. They don't mind.'

Using AI doesn't just mean going to a chatbot and typing in a question. You're also using AI every time an algorithm organizes your social media feed, recommends a song or filters your spam email. 'We may not even realize it … because a lot of this is just hidden from us,' Gadepally said. If you're not a ChatGPT power user, these behind-the-scenes algorithms probably represent the bulk of your AI usage — and there's not much you can do about them other than using the internet less. It's up to the companies integrating AI into every aspect of our digital lives to find ways to do it with less energy and less damage to the planet.
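The article's "little costs add up" point is simple arithmetic, and a short sketch makes the scales concrete. The figures below are illustrative assumptions, not values from the article's sources: a few grams of CO2 per text prompt and a roughly 16-tonne average annual US footprint.

```python
# Back-of-envelope estimate of chatbot query emissions.
# GRAMS_PER_QUERY and US_ANNUAL_FOOTPRINT_G are assumed, illustrative values.

GRAMS_PER_QUERY = 3.0          # assumed CO2 per text prompt, in grams
US_ANNUAL_FOOTPRINT_G = 16e6   # assumed average US footprint: ~16 tonnes of CO2

def share_of_footprint(queries_per_day: float) -> float:
    """Fraction of an average annual footprint used by daily AI queries."""
    annual_grams = queries_per_day * 365 * GRAMS_PER_QUERY
    return annual_grams / US_ANNUAL_FOOTPRINT_G

# One person's daily query is a rounding error in their footprint...
individual_share = share_of_footprint(1)

# ...but a billion users at 10 queries a day sum to millions of tonnes a year.
fleet_tonnes = 1e9 * 10 * 365 * GRAMS_PER_QUERY / 1e6  # grams -> tonnes
```

Under these assumptions, the individual share is a few thousandths of a percent, while the aggregate runs to roughly ten million tonnes a year, which is the asymmetry the article describes.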

Can you choose an AI model that harms the planet less?

Time of India

19-06-2025

  • Science
  • Time of India


From uninvited results at the top of your search engine queries to offering to write your emails and helping students do homework, generative artificial intelligence is quickly becoming part of daily life as tech giants race to develop the most advanced models and attract users.

All those prompts come with an environmental cost: A report last year from the Energy Department found AI could help increase the portion of the nation's electricity supply consumed by data centers from 4.4% to 12% by 2028. To meet this demand, some power plants are expected to burn more coal and natural gas. And some chatbots are linked to more greenhouse gas emissions than others.

A study published Thursday in the journal Frontiers in Communication analyzed different generative AI chatbots' capabilities and the planet-warming emissions generated from running them. Researchers found that chatbots with bigger "brains" used exponentially more energy and answered questions more accurately -- up until a point. "We don't always need the biggest, most heavily trained model to answer simple questions. Smaller models are also capable of doing specific things well," said Maximilian Dauner, a doctoral student at the Munich University of Applied Sciences and lead author of the paper. "The goal should be to pick the right model for the right task."

The study evaluated 14 large language models, a common form of generative AI often referred to by the acronym LLMs, by asking each a set of 500 multiple-choice and 500 free-response questions across five different subjects. Dauner then measured the energy used to run each model and converted the results into carbon dioxide equivalents based on a global average. In most of the models tested, questions in logic-based subjects, like abstract algebra, produced the longest answers -- which likely means they used more energy to generate compared with fact-based subjects, such as history, Dauner said.
AI chatbots that show their step-by-step reasoning while responding tend to use far more energy per question than chatbots that don't. The five reasoning models tested in the study did not answer questions much more accurately than the nine other studied models. The model that emitted the most, DeepSeek-R1, offered answers of comparable accuracy to those that generated a fourth of the emissions.

There is key information not captured by the study, which only included open-source LLMs: Some of the most popular AI programs made by large tech corporations, such as OpenAI's ChatGPT and Google's Gemini, were not included in the results. And because the paper converted the measured energy to emissions based on a global CO2 average, it offered only an estimate; it did not indicate the actual emissions generated by using these models, which can vary hugely depending on which country the data center running them is in.

"Some regions are going to be powered by electricity from renewable sources, and some are going to be primarily running on fossil fuels," said Jesse Dodge, a senior research scientist at the Allen Institute for AI who was not affiliated with the new research. In 2022, Dodge led a study comparing the difference in greenhouse gas emissions generated by training an LLM in 16 different regions of the world. Depending on the time of year, some of the most emitting areas, like the central United States, had roughly three times the carbon intensity of the least emitting ones, such as Norway.

But even with this limitation, the new study fills a gap in research on the trade-off between energy cost and model accuracy, Dodge said. "Everyone knows that as you increase model size, typically models become more capable, use more electricity and have more emissions," he said.
Reasoning models, which have been increasingly trendy, are likely further bumping up energy costs because of their longer answers. "For specific subjects an LLM needs to use more words to get to a more accurate response," Dauner said. "Longer answers and those that use a reasoning process generate more emissions."

Sasha Luccioni, the AI and climate lead at Hugging Face, an AI company, said that subject matter is less important than output length, which is determined by how the model was trained. She also emphasized that the study's sample size is too small to create a complete picture of emissions from AI. "What's relevant here is not the fact that it's math and philosophy, it's the length of the input and the output," she said.

Last year, Luccioni published a study that compared 88 LLMs and also found that larger models generally had higher emissions. Her results also indicated that AI text generation -- which is what chatbots do -- used 10 times as much energy as simple classification tasks like sorting emails into folders. Luccioni said that these kinds of "old school" AI tools, including classic search engine functions, have been overlooked as generative models have become more widespread.

Most of the time, she said, the average person doesn't need to use an LLM at all. Dodge added that people looking for facts are better off just using a search engine, since generative AI can "hallucinate" false information. "We're reinventing the wheel," Luccioni said. People don't need to use generative AI as a calculator, she said. "Use a calculator as a calculator." This article originally appeared in The New York Times.
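The study's conversion step, turning measured energy into CO2 equivalents via a grid carbon intensity, can be sketched in a few lines, and doing so also shows Dodge's point that the same workload emits very differently by region. The intensity figures below are illustrative placeholders, not values from the study or from Dodge's research.

```python
# Convert a model's measured energy use into CO2-equivalent emissions.
# All grid intensities here are assumed, illustrative values (g CO2 per kWh);
# real intensities vary by region and time of year.

GRID_INTENSITY_G_PER_KWH = {
    "global_average": 480.0,  # assumed world-average grid intensity
    "norway": 30.0,           # mostly hydro: assumed low-carbon grid
    "central_us": 650.0,      # fossil-heavy: assumed high-carbon grid
}

def emissions_g(energy_kwh: float, region: str = "global_average") -> float:
    """CO2-equivalent grams for a given energy draw on a given grid."""
    return energy_kwh * GRID_INTENSITY_G_PER_KWH[region]

# The same 2 kWh of inference looks very different depending on location:
for region in GRID_INTENSITY_G_PER_KWH:
    print(region, emissions_g(2.0, region), "g CO2e")
```

This is why the paper's global-average figure is only an estimate: multiplying by a single average intensity hides a gap of an order of magnitude or more between the cleanest and dirtiest grids.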

Steering technology towards better vehicle dynamics

Time of India

14-06-2025

  • Automotive
  • Time of India


Until autonomous cars become mainstream, which some argue is a really long time away, if it happens at all, the steering wheel isn't going away and the vehicle steering system will continue to see technology development. 'I don't think that the steering system is disappearing in the next 30 years,' says Dr. Peter Pfeffer, CEO of MdynamiX and Professor of Automotive Engineering at Munich University of Applied Sciences. A joystick could also replace the steering wheel, but at the system level there's a change happening: software replacing hardware. Steer-by-wire technology is set to be the next big thing in steering system engineering. 'A lot of OEMs have large projects to get the steer-by-wire in mass production. Some of them stopped the game because of the economic pressure, but some others are still working,' says Dr. Pfeffer.

ADAS and vehicle dynamics

With the ADAS (Advanced Driver Assistance System) trend increasingly gaining traction, vehicle driveability, handling and comfort -- vehicle dynamics, in other words -- are attracting more attention. That's also leading to more hardware-in-the-loop (HIL) testing. HIL test benches, linked to driving simulators, can help gauge and calibrate steering feel and brake feel on the driving simulator. 'This is a new trend, and a lot of people say that then driving simulator makes really sense, because then we can calibrate the systems like steering system, brake system, lane keeping system and so on, and make the evaluation of the tyres too,' says Dr. Pfeffer, who is also a global expert in steering systems and vehicle dynamics.

The Pfeffer Steering System designed by Dr. Pfeffer is a widely used conceptual framework for engineering vehicle dynamics, with a special focus on steering behaviour and its effect on driveability. It was first developed as a "very large research project" for BMW, which focused on strong driving dynamics as a brand attribute, captured in its tagline 'Sheer Driving Pleasure'.
The German luxury car major's goal was to find objective targets for steering feel; it wanted to develop steering feel with virtual methods. 'And here one part was to make this evaluation of the steering system with test drivers, subjective evaluations, and the other side was the objective evaluation. And the next step was that we want to produce these objective values out of the simulation tools. And this was the driver of this development,' explains Dr. Pfeffer.

Good steering feel, or feedback, is key to better driveability and a better driving experience. Will a drive-by-wire system be able to match the experience of a mechanically linked steering system? Yes, according to Dr. Pfeffer. He says, 'We made such a car, in partnership. It was also used for racing, and the feedback from race drivers was very, very positive, but this is not in mass production. It was just some first prototypes.'

Autonomous, ADAS tech will see gradual progression

Autonomous driving is one of the key global megatrends, but its progression has significantly lagged behind others such as electrification and connected vehicles. Assisted driving in the form of ADAS technology is paving the path for it, but the requisite technology maturity will take time. 'When you look back, the first ACC (Adaptive Cruise Control) was launched around 2000, and now 70 or 80% of cars in Germany are equipped with this system. It has taken more than 20 years (for the tech) to get to a maturity level, and the same will be for lane keeping tech, which started later. So it takes time,' says Dr. Pfeffer, who is confident that autonomous driving will 'of course' become a reality.

Role of industry-academia collaboration

In an increasingly technology-intensive, disruptive era, automotive industry players have to innovate and develop technologies faster than ever before. In such a scenario, the need for industry-academia collaboration may be stronger than ever. Dr. Pfeffer, who many years ago also had a stint at Audi as a chassis and NVH engineer, believes it's important for academia to be in tune with the trends and needs of the industry landscape. 'The big benefit of having worked in industry is that you know what the industry is needing, and you are not teaching stuff which is not for any use in the industry,' says Dr. Pfeffer, who also points out that in Germany industry work experience is essential for applying for a professorship.

Academicians-led enterprise

Dr. Pfeffer and his core teammates at MdynamiX are also an interesting example of academicians who are entrepreneurs at the same time. Peter Pfeffer, Bernhard Schick, Stefan Sentpali and Markus Krug, all professors, came together in 2014 to form MdynamiX, an engineering firm specialising in ADAS/AD, UX, vehicle dynamics, steering and brakes, and NVH. Why did a group of professors form an engineering firm when they could have done developmental projects in their labs? 'Because there are so many PhD theses, tons of papers written, there's so much knowledge in this. But development engineers don't have time to read the PhD for the whole day or so. And so to make it easier to use them, we said, okay, we have to develop easy to use products and software out of the best ideas,' says Dr. Pfeffer. MdynamiX also runs an academy where automotive engineers are trained to use such methods.

Given the growing ADAS trend in India and the opportunities arising from it, the Munich-based MdynamiX has also established an India presence through a joint venture with Delhi-based Automotive Test Systems. To get more insights and discuss vehicle dynamics topics with Dr. Peter Pfeffer, be at the 6th ETAuto Tech Summit, where the technologist and academician will participate as a keynote speaker.
