
Latest news with #GPT-4

Opinion: Wanna help save the planet? Stop asking AI dumb questions

The Star

12 hours ago



It takes huge amounts of energy to power artificial intelligence – so much energy that it's looking less and less likely that the US will meet its goals for reducing greenhouse gas emissions. (If we still have any such goals under President Donald Trump.) What's less known is that AI also consumes copious amounts of water to cool all that IT equipment. To generate a 100-word email, a chatbot using GPT-4 requires 519 millilitres of water – roughly equivalent to an 18-ounce bottle of water. That doesn't sound like much, but multiplied by the number of users, it's significant.

It also takes far more than 100 words for AI to respond to our most pressing questions, such as:

– What are three excuses for skipping dinner at my (fill in the blank's) house tonight?
– Can you rewrite this email to make me sound smarter?
– How do you make a mojito?
– Does this outfit look good on me?

If you are wondering about that last query, yes, there are folks who rely on ChatGPT for wardrobe advice. Some check in with Chat on a daily basis by uploading a photo of themselves before they leave the house, just to make sure they look presentable. These superusers often spring for a US$20-per-month (RM84) subscription to ChatGPT Plus, which provides priority access, among other perks. Chat can also help you write a dating profile, plan a trip to Mexico City, manage your finances, give you relationship advice, tell you what shampoo to use and what colour to paint your living room. Another plus: ChatGPT never talks down to you. Even the most outlandish queries get a polite, ego-boosting response like this: 'That's a thoughtful and important question. Here's a grounded response.'

Google vs ChatGPT

But again, it's hard to get around the fact that AI is hard on the planet. The New York Times reports that Amazon is building an enormous AI data centre in Indiana that will use 2.2 gigawatts of electricity – enough to power a million homes. And according to a report from Goldman Sachs, 'a ChatGPT query needs nearly 10 times as much electricity to process as a Google search.' So we could save energy by opting for Google search, except Google is getting into the AI business, too. Have you noticed those 'AI overviews' at the top of search results? Those come at an environmental cost. 'Embedding generative AI in such a widely used application is likely to deepen the tech sector's hunger for fossil fuels and water,' writes Scientific American staffer Allison Parshall.

The good news is that there is a way to block those pesky AI overviews; YouTube has tutorials that will walk you through it. In further good news, there are smart people looking for ways to make AI more environmentally friendly, but that could take a while. In the meantime, should we conserve water and energy by letting AI focus on important tasks like diagnosing breast cancer, predicting floods and tracking icebergs? Maybe stop running to ChatGPT every time we have a personal problem?

Should I feel guilty, for example, if I ask Chat how to stop my cats from scratching the couch? Not according to Chat. 'No, guilt isn't productive unless it's leading you to positive action,' Chat told me. 'Instead, awareness is more productive.' But if you do worry about the planet, Chat recommends using AI 'with purpose' rather than as entertainment. No need to swear it off entirely. 'The focus should be on conscious consumption rather than abstinence,' Chat says.
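To make the "multiply by the number of users" point above concrete, here is a back-of-the-envelope sketch. The per-email water figure comes from the article; the daily email volume is an assumption chosen purely for illustration.

# Back-of-the-envelope sketch (not the author's calculation): 519 ml per
# 100-word email is the figure cited above; the daily email count is an
# assumed volume used only to show how quickly the total scales.
ml_per_email = 519            # from the article (GPT-4, one 100-word email)
emails_per_day = 10_000_000   # assumed volume, for illustration only
litres_per_day = ml_per_email * emails_per_day / 1000
print(f"{litres_per_day:,.0f} litres of cooling water per day")
# -> 5,190,000 litres per day under this assumed volume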
Lower 'brain engagement'

That sounds reasonable, except a recent MIT study offers evidence that the longer we use AI, the less conscious we become. Using an EEG to measure the brain activity of 54 subjects, researchers found that those who used ChatGPT to write SAT essays had lower 'brain engagement' than two other groups – one was allowed to use Google search and the other relied solely on brain power to complete the essays. 'Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study,' Time magazine reported.

Granted, this is only one small study. But to be on the safe side, I'm going to lay off Chat for a while. Maybe I'll hit Google with that cat question. There is, however, one thing Google can't tell me: Does that dress I ordered online look OK on me or should I send it back? Tell me what you think, Chat. And please, be brutally honest. – The Sacramento Bee/Tribune News Service

Google Launches Gemma 3n to Deliver Smarter, Offline AI to Mobile Devices and Laptops

International Business Times

18 hours ago



Technology giant Google is upping its AI game, giving its competitors a tough time by launching back-to-back AI models. The world's top search engine company has now introduced Gemma 3n, an AI model that runs directly on smartphones, tablets, and laptops with no internet connection required. Supporting text, images, audio, and video, it brings AI capabilities to devices you can hold in your hand. As an open-weight model, Gemma 3n can be inspected and adapted by developers – representing a move toward privacy-friendly, on-device AI.

Google has shaken up the industry with Gemma 3n, an advanced new AI model that delivers on the promise of bringing a high-end AI experience to regular consumer hardware like phones and laptops. Instead of relying on the cloud servers that traditional AI systems use, Gemma 3n operates locally – without an internet connection – providing both speed and privacy. This shift reflects a broader trend in AI, away from large, centralized server models toward small, efficient personal-device models. Gemma 3n is also multimodal: it accepts not only text but also images, audio, and video as input. This opens up new possibilities for real-time translation, speech recognition, image analysis, and much more, without sending any data to the cloud.

What makes Gemma 3n unique is its open-weight design. Unlike proprietary systems such as OpenAI's GPT-4 or Google's own Gemini, open-weight models allow developers to download and run the model on their own machines. This enables more flexible customization, rapid innovation, and greater control over privacy. Gemma 3n comes in two model sizes: a 5-billion-parameter model that can run with as little as 2 GB of RAM and an 8-billion-parameter model that runs effectively with about 3 GB of RAM. Despite their small size, both models deliver performance comparable to older, larger models.

Google also built several smart tools into Gemma 3n to help it work well. A new architecture, MatFormer, helps the model adapt to different devices by using resources more flexibly. Per-Layer Embeddings and KV Cache Sharing further accelerate speed and shrink memory usage, especially for longer video and audio tasks. The model's audio skills rely on Google's Universal Speech Model, which assists with on-device transcription and translation. The vision encoder uses the MobileNet-V5 architecture for video processing at up to 60 fps, even on smartphones.

Google has made the model available to developers and researchers through services like Hugging Face, Amazon SageMaker, Kaggle, and Google AI Studio. This fosters innovation and application development across sectors, from healthcare and education to mobile apps and security tools.
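For readers curious what "open weights you can download and run yourself" looks like in practice, here is a minimal sketch using the Hugging Face transformers library. The model identifier is an assumption, not something the article specifies; check the official Gemma 3n model card for the exact name, and note that memory requirements will vary by device.

# Minimal sketch: running an open-weight model locally with Hugging Face
# transformers. The model id is an assumed placeholder; substitute the id
# published on the official Gemma 3n model card.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3n-E2B-it",  # assumed/hypothetical identifier
)

# Inference happens entirely on the local machine; no prompt or output
# is sent to a cloud service.
result = generator(
    "Summarise the benefits of on-device AI in one sentence.",
    max_new_tokens=40,
)
print(result[0]["generated_text"])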

What is Mu? Microsoft Launches Small AI Model That Runs Fast and Private on Your PC

International Business Times

4 days ago



Technology giant Microsoft has moved a step ahead in making the AI competition more interesting by launching a new AI tool, Mu, a small yet powerful language model designed to run directly on your computer. Unlike most AI tools, which operate on remote cloud servers, Mu uses your PC's Neural Processing Unit (NPU) for all of its processing, which translates into faster performance, improved privacy, and no need for an internet connection to accomplish tasks.

Mu is built for Copilot+ PCs and currently resides in the Windows 11 Settings app. It enables users to change system settings – such as screen brightness or battery saver – by simply typing or speaking natural language commands. No more digging through menus. For instance, if you say "turn on battery saver," the setting is activated immediately.

Mu has just 330 million parameters, far smaller than other language models such as GPT-4, developed by OpenAI, a research lab known as a pioneer in natural language processing and AI. Mu performs surprisingly well despite having considerably fewer parameters. It can generate 200 words per second on devices like the Surface Laptop 7. In tests on Qualcomm's Hexagon NPU, Mu processed tasks up to five times faster than many other models on the market, making it one of the fastest in its category for real-time tasks.

What sets Mu apart is its emphasis on privacy. All computation occurs on the device itself. That means your commands and data don't leave your PC, a significant benefit for individuals and businesses worried about the safety of their data. Microsoft also trained Mu to handle hundreds of system-level tasks, helping users with simple and easy-to-understand instructions. It is based on the company's Phi family of AI models, which are designed for high efficiency in limited computing environments. This makes Mu a good fit for devices with limited processing power.

Currently, Mu is being tested through the Windows Insider Program. Users in the Dev and Beta Channels with supported hardware can already try it out. Microsoft plans to broaden the use of Mu to more PCs with AMD and Intel chips. Mu is not flashy, yet it represents a smart shift in how people use their computers. It is like a quiet little helper that simply does its job when needed – quick, private, and efficient. As AI becomes a bigger part of our everyday lives, small tools like Mu might have the most meaningful impact.
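To illustrate the general idea of turning a typed or spoken request into a structured settings change – this is not Microsoft's actual API, and every name below is a hypothetical stand-in – a toy sketch of the intent-mapping step might look like this:

# Illustrative sketch only: a rule-based stand-in for how an on-device model
# might map a natural-language command to a structured settings action.
# None of these setting or function names are real Windows APIs.
from dataclasses import dataclass

@dataclass
class SettingsAction:
    setting: str   # e.g. "battery_saver", "screen_brightness"
    value: str     # e.g. "on", "70%"

def interpret_command(text: str) -> SettingsAction | None:
    """Toy stand-in for the model's command-to-action mapping."""
    text = text.lower()
    if "battery saver" in text:
        return SettingsAction("battery_saver", "off" if "off" in text else "on")
    if "brightness" in text:
        digits = "".join(ch for ch in text if ch.isdigit())
        return SettingsAction("screen_brightness", f"{digits or '50'}%")
    return None

print(interpret_command("turn on battery saver"))
# SettingsAction(setting='battery_saver', value='on')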

Musk's Lawyer Claims He Doesn't Use a Computer—His Own Posts Say Otherwise

Hans India

4 days ago



A new twist has emerged in the ongoing legal battle between Elon Musk and OpenAI, as the billionaire's lawyer made a bold claim: Musk 'does not use a computer.' The statement, included in a recent court filing and first reported by Wired, is now drawing skepticism—especially in light of Musk's own social media posts that seem to directly contradict it.

The unusual claim came as part of a response to OpenAI's demands for additional documentation in its legal standoff with Musk and his AI startup, xAI. OpenAI has asked for files and data that could potentially be stored on a computer. But Musk's legal team insisted that searching a computer was unnecessary—because, they argued, he simply doesn't use one.

However, Musk's digital footprint suggests otherwise. In a widely seen post from December 2024, Musk shared a photo of his laptop while testing Starlink's in-flight streaming capabilities mid-flight. The image featured a gaming laptop with a prominent Dogecoin sticker. Musk wrote that the sticker was a gift from a fan in Germany and added that it was 'too cool to lose.' Then, on June 1, 2025, Musk again referred to the same laptop, saying, 'Still using my ancient PC laptop with the @DOGE sticker made long ago by a fan.' The laptop was identified as a Gigabyte Aero, a model known for high performance and favoured by PC gamers, suggesting he uses it at least occasionally for either work or recreation.

Musk has even weighed in on tech setup frustrations. In February 2024, he tweeted about his experience buying a new laptop and being required to create a Microsoft account just to finish setting it up. 'This is messed up,' he wrote in the post, expressing concern that this gave Microsoft's AI access to his machine. The following day, he tagged Microsoft CEO Satya Nadella in another post, urging the company to restore the option to skip that setup step.

These online posts have triggered fresh scrutiny of the court claim that Musk doesn't use a computer. Whether this assertion was a legal strategy meant to narrow the scope of discovery, or an earnest statement, remains unclear. But for many, it appears to strain credibility.

At the heart of the legal fight is Musk's March 2024 lawsuit against Sam Altman, OpenAI, and Microsoft. Musk accuses the defendants of transforming OpenAI from its original nonprofit mission into a profit-driven enterprise heavily influenced by Microsoft. He argues that GPT-4—central to OpenAI's success—was never intended to be commercialised to its current scale.

Musk's legal team has said it searched his phone and email for responsive records, but drew the line at searching any computers. Critics argue the contradiction between that claim and Musk's public tech habits only deepens the drama. Whether this curious detail will influence the legal proceedings is yet to be seen. But in a case already filled with boardroom intrigue, philosophical clashes over AI ethics, and high-stakes corporate maneuvering, the question of whether Elon Musk uses a computer is proving to be more than just a technicality.

Elon Musk does not use a computer, his lawyer claims

India Today

4 days ago



The latest development in the ongoing legal drama between Elon Musk and OpenAI brings another interesting twist. The billionaire's lawyer claimed in a recent court filing that Musk 'does not use a computer'. This was first reported by Wired. The statement was reportedly made in a filing submitted in response to accusations from OpenAI that Musk and his AI startup, xAI, had failed to comply with discovery obligations in the lawsuit Musk filed earlier this year.

The claim, however, is now being questioned after multiple posts by Musk himself appeared to contradict it. On X, the platform formerly known as Twitter and now owned by Musk, he has repeatedly referred to using a laptop. In one post from December 2024, Musk shared an image of what he described as his laptop, stating that he was testing Starlink's in-flight streaming capabilities while playing the game Diablo. In the photo, the laptop carries a large Dogecoin-themed sticker across its lid – something Musk appears fond of, noting that the sticker was given to him by a fan in Germany and calling it too cool to lose. In another post dated 1 June 2025, Musk doubled down on his affection for the same machine, writing, 'Still using my ancient PC laptop with the @DOGE sticker made long ago by a fan.' The laptop, identified in the photo as a Gigabyte Aero – popular among PC gamers – suggests that he at least occasionally uses it for personal or professional tasks.

Even earlier, in February 2024, Musk posted about buying a new PC laptop. In the now-viral tweet, he complained that Microsoft's setup process forced users to create an account before accessing the device, effectively giving its AI access to his computer. 'This is messed up,' Musk wrote, asking others if they'd seen the same restriction. The very next day, he tagged Microsoft CEO Satya Nadella in another post, asking the company to restore the option to skip Microsoft account login during setup.

Given these public posts, Musk's legal team's claim that he doesn't use a computer is raising eyebrows. The context behind this strange contradiction lies in Musk's ongoing lawsuit against Sam Altman, OpenAI, and Microsoft. In the suit, filed in March 2024, Musk accuses the trio of steering OpenAI away from its original nonprofit mission and turning it into a for-profit operation dominated by Microsoft's commercial interests. At the heart of the lawsuit is Musk's claim that OpenAI's GPT-4 was never meant to be monetised at the scale it has reached.

The legal filing claiming Musk doesn't use a computer came after OpenAI asked for documentation and records that could include emails or files created on a computer. Musk's legal team responded by stating that it had searched his phone and email, and insisted that there was no need to search a computer – because, they claimed, he doesn't use one. Whether this statement was made to limit document production or as a genuine claim remains unclear. But it adds another layer of intrigue to the Musk vs Altman saga, which has already been full of surprising turns – from power struggles and secret board meetings to dramatic blog posts and corporate U-turns.
