Latest news with #GeminiAI

RNZ News
10 hours ago
- RNZ News
'Pretty damn average': Google's AI Overviews underwhelm
Most searches online are done using Google. Traditionally, they've returned long lists of links to websites carrying relevant information. Depending on the topic, there can be thousands of entries to pick from or scroll through.

Last year Google started incorporating its Gemini AI tech into its searches. Google's AI Overviews now insert Google's own summary of what it has scraped from the internet ahead of the usual list of source links in many searches. Some sources say Google is now working towards replacing the lists of links with its own AI-driven search summaries.

RNZ's Kathryn Ryan is not a fan. "Pretty damn average I have to say, for the most part," she said on Nine to Noon last Monday during a chat about AI upending the business of digital marketing.

But Kathryn Ryan is not the only one underwhelmed by Google's Overviews. Recently, online tech writers discovered you can trick the feature into treating made-up sayings as meaningful idioms in common usage. The Sydney Morning Herald's puzzle compiler David Astle, writing under the headline 'Idiom or Idiot?', reckoned Google's AI wasn't about to take his job making cryptic crosswords anytime soon.

"There is a strange bit of human psychology which says that we expect a very high bar from machines in a way that we don't from humans," the BBC's head of technology forecasting Laura Ellis told Mediawatch last month. "But if you've got a machine making a mistake, where does that accountability fall? We've just not tested this out yet."

UK Sky News deputy political editor Sam Coates tried to make ChatGPT accountable after it made up an entire episode of his own politics podcast when he used it to help archive transcripts recently. "AI had told a lie that it had got the transcript. And rather than back down it invented an entire fake episode without flagging that it's fake." When challenged on this, the technology insisted Coates had created the episode himself.
When ChatGPT can't find an answer or the right data to draw on, it can 'hallucinate': simply make up a misleading response.

"ChatGPT is gaslighting me. No such thing exists. It's all a complete fake," Coates spluttered. After turning ChatGPT off and on again in 'conversation mode', it did eventually own up. "It said: 'Look, you're absolutely right to challenge that. I can't remember the exact time that you uploaded.' And then: 'What I can confirm is that I did it and you're holding me to account,'" Coates told viewers.

He went on to challenge ChatGPT about its hallucinations getting worse. "The technology is always improving, and newer versions tend to do a better job at staying accurate," ChatGPT replied. But Coates, armed with data that suggested the opposite, asked ChatGPT for specific stats. The response: "According to recent internal tests from OpenAI, the newer models have shown higher hallucination rates. For instance, the model known as o3 had about a 33 percent hallucination rate, while the o4-mini model had around 48 percent."

"I get where you're coming from, and I'm sorry for the mixed messages. The performance of these models can vary."

When Coates aired his experience as a warning for journalists, some reacted with alarm. "The hallucination rate of advanced models... is increasing. As journos, we really should avoid it," said Sunday Times writer and former BBC diplomatic editor Mark Urban.

But some tech experts accused Coates of misunderstanding and misusing the technology. "The issues Sam runs into here will be familiar to experienced users, but it illustrates how weird and alien Large Language Model (LLM) behaviour can seem for the wider public," said Cambridge University AI ethicist Henry Shevlin. "We need to communicate that these are generative simulators rather than conventional programmes," he added.

Others were less accommodating on social media.
"All I am seeing here is somebody working in the media who believes they understand how technology works - but [he] doesn't - and highlighting the dangers of someone insufficiently trained in technology trying to use it."

"It's like Joey from Friends using the thesaurus function on Word."

Mark Honeychurch is a programmer and long-serving stalwart of the NZ Skeptics, a non-profit body promoting critical thinking and calling out pseudoscience. The Skeptics' website says they confront practices that exploit a lack of specialist knowledge. That's what many people use Google for: answers to things they don't know or don't understand. Honeychurch described putting Overviews to the test in a recent edition of the Skeptics' podcast Yeah, Nah.

"The AI looked like it was bending over backwards to please people. It's trying to give an answer that it knows the customer wants," Honeychurch told Mediawatch.

Honeychurch asked Google for the meaning of 'Better a skeptic than two geese.'

"It's trying to do pattern-matching and come out with something plausible. It does this so much that when it sees something that looks like an idiom that it's never heard before, it sees a bunch of idioms that have been explained and it just follows that pattern."

"It told me a skeptic is handy to have around because they're always questioning - but two geese could be a handful and it's quite hard to deal with two geese."

"With some of them, it did give me a caveat that this doesn't appear to be a popular saying. Then it would launch straight into explaining it. Even if it doesn't make sense, it still gives it its best go, because that's what it's meant to do."

In time, would Google's AI detect the recent articles pointing out this flaw - and learn from them?

"There's a whole bunch of base training where (AI) just gets fed data from the internet as base material. But on top of that, there's human feedback.
"They run it through a battery of tests and humans can basically mark the quality of answers. So you end up refining the model and making it better.

"By the time I tested this, it was warning me that a few of my fake idioms don't appear to be popular phrases. But then it would still launch into trying to explain them to me anyway, even though they weren't real."

Things got more interesting - and alarming - when Honeychurch tested Google Overviews with real questions about religion, alternative medicine and skepticism.

"I asked why you shouldn't be a skeptic. I got a whole bunch of reasons that sounded plausible, about losing all your friends and being the boring person at the party who's always ruining stories. When I asked why you should be a skeptic, all I got was a message saying it cannot answer my question."

He also asked why one should be religious - and why not. And why we should trust alternative medicines - and why we shouldn't.

"The skeptical, the rational, the scientific answer was the one Google's AI just refused to give."

"For the flip side of why I should be religious, I got a whole bunch of answers about community and a feeling of warmth and connecting to my spiritual dimension. I also got a whole bunch about how sometimes alternative medicine may have turned out to be true, so you can't just dismiss it."

"But we know why we shouldn't trust alternative medicine. It's alternative, so it's not been proven to work. There's a very easy answer."

But it's not one Overviews seemed willing or able to give. Google does answer the neutral question 'Should I trust alternative medicine?' by saying there is "no simple answer" and "it's crucial to approach alternative medicine with caution and prioritise evidence-based conventional treatments."

So is Google trying not to upset people with answers that might concern them?

"I don't want to guess too much about that.
It's not just Google but also OpenAI and other companies doing human feedback to try and make sure that it doesn't give horrific answers or say things that are objectionable."

"But it's always in conflict with the fact that this AI is just trained to give you that plausible answer. It's trying to match the pattern that you've given in the question."

Journalists use Google, just like anyone in a hurry who needs information quickly. Do journalists need to ensure they don't rely on the Overviews summary right at the top of the search page?

"Absolutely. This is AI use 101. If you're asking a technical question, you really need to be well enough versed in the subject to judge whether the answer is good or not."


News18
15 hours ago
- Business
- News18
Google's Gemini CLI AI Agent Wants To Help Developers Code And Solve Problems: Know More
Gemini CLI is an open-source AI agent from Google designed to help developers code, solve problems and even automate their tasks.

Google is building more AI agents so that people can automate tasks and let AI do most of the heavy lifting. The latest in this series is Gemini CLI, an agent designed to help developers write and execute code, generate images and solve complex problems on the fly. Gemini CLI is open source and offers easy, lightweight access to the AI chatbot for free on Windows, Mac and even Linux systems. You can also customise or instruct Gemini to work according to your needs and the structure you follow in your work.

AI agents are being pitched as the next generation of AI tools, promising to simplify tasks and handle the multi-tasking so people can focus on the important work. Gemini CLI is doing just that, and being open source makes it even more dynamic in the long run: Google is offering the source code for the AI agent, giving developers room to improve it, keep it secure and even upgrade the underlying code.

Gemini CLI is available for free if you have a personal Google account and a free Gemini Code Assist licence. These give you access to the Gemini 2.5 Pro model and the 1-million-token context window that comes with the package. Developers can also choose advanced plans with more features, such as Google AI Studio and other enterprise solutions. The agent can also ground your prompts in Google Search, fetching web pages and using them as source material. There is a lot on offer here, and Google is hoping the new AI agent shows you the right, automated way to get work done.

First Published: June 26, 2025, 10:48 IST
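The per-project customisation mentioned above is handled through plain-text context files that the agent reads on startup. Here is a minimal sketch of one, following the `GEMINI.md` convention described in the open-source project's documentation; treat the exact file name and behaviour as an assumption rather than a spec, and check the project's README for current details:

```markdown
# Project context for Gemini CLI (GEMINI.md, placed in the repo root)

## Coding conventions the agent should follow
- Use TypeScript with strict mode enabled.
- Prefer small, pure functions; add unit tests for every new module.

## Things the agent should avoid
- Do not modify files under `vendor/` or regenerate lockfiles.
```

Because the file is ordinary markdown kept in the repository, the whole team shares the same instructions without any per-user setup.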


Hans India
2 days ago
- Hans India
Google Resumes Rollout of AI-Powered 'Ask Photos' with Faster Search for Simpler Queries
Google is once again expanding access to its AI-powered Ask Photos feature in Google Photos after briefly pausing its rollout earlier this month. The company says it has enhanced the experience, particularly for simple search queries, by making the tool faster and more responsive. Ask Photos, powered by Google's advanced Gemini AI models, allows users to search for specific photos using natural language queries. Whether you're trying to locate a picture from a beach vacation or find all the snapshots of your dog, the AI can analyze image content and metadata to surface results based on your questions. However, the initial rollout faced some challenges. A team member from Google Photos recently acknowledged on X (formerly Twitter) that the feature had some performance issues. 'It isn't where it needs to be, in terms of latency, quality and UX,' they wrote, prompting the company to reassess and refine the tool before expanding it further. Responding to early user feedback, Google has now made changes aimed at speeding up basic search responses. In a blog post shared on Thursday, the company said it 'heard your feedback' about wanting quicker results for straightforward queries. The update allows Ask Photos to provide instant results for simple keywords like 'beach' or 'dogs,' with the more advanced Gemini AI continuing to refine and enhance results in the background for complex questions. 'You'll now see results right away while Gemini models continue to work in the background to find the most relevant photos or information for more complex queries,' Google explained. The change marks a step forward in how users interact with their personal photo libraries. By blending powerful AI with user-centric enhancements, Google aims to make photo searches not only more intelligent but also more intuitive and efficient. 
In addition to performance improvements, Google announced that Ask Photos is now 'opening up beyond early access,' meaning more users in the United States will soon be able to experience the feature. The rollout is targeted at 'eligible users,' though the company has not specified what criteria determine eligibility. A GIF shared by Google illustrates the updated feature in action—showing how it interprets a user's typed question and quickly pulls relevant images from the user's photo archive. While still evolving, this update demonstrates Google's commitment to integrating AI into everyday tools, making large libraries of personal data more accessible and manageable through conversational search. As AI continues to shape the way we organize digital memories, features like Ask Photos hint at a future where finding a single image in thousands can be as easy as asking a friend. More updates on international availability and feature refinement are expected as Google continues to gather user insights and performance metrics from the ongoing U.S. rollout.


Hindustan Times
2 days ago
- Hindustan Times
Stop saying 'Can you' to ChatGPT: being polite could limit accuracy
Jun 27, 2025 11:50 AM IST

Do you often use terms like 'please', 'can you', 'thank you' and other polite phrases with ChatGPT or another AI chatbot? If so, you may be confusing, and limiting, the advanced capabilities of AI models. A few months back, it was reported that such phrases are costing OpenAI millions of dollars.

A generative AI chatbot is itself a revolutionary advancement: it combines natural language understanding with conversational context, which lets chatbots like ChatGPT and Gemini interact with humans without requiring users to learn a technical language. However, there are some rules of interaction to follow to make the most of generative AI tools. Here is how being polite to a chatbot can restrict its capabilities.

In the professional world, being polite is basic human etiquette, and we often tend to speak to an AI chatbot the way we would to a person. However, using terms like 'thank you' and 'please' could cost an AI company millions of dollars. Reportedly, these polite phrases consume more energy, as the chatbot takes additional time to process the prompt. It is therefore suggested that users write short, clear prompts, which take less processing time and eventually reduce energy consumption.

Additionally, words like 'can' and 'could' should also be restricted; they are considered the 'most dangerous', according to a BGR report. For instance, using phrases like 'Can you' in prompts could confuse the AI and result in an entirely different answer.
These terms also make AI chatbots less accurate compared with a direct prompt. So, if you want to make the most of ChatGPT, here are some words to avoid while having a conversation with the AI chatbot.

Words to avoid while using ChatGPT

- Avoid being indecisive. Skip words like 'maybe' and 'it is possible' if you want a straightforward answer; use direct terms like 'explain', 'write', 'create' and 'summarise' instead.
- Avoid filler words like 'just', 'really', 'actually', 'basically' and 'kind of' if you want ChatGPT to be concise. These terms add no value to the AI's understanding and reduce the directness of a prompt.
- Avoid vague qualifiers like 'a lot', 'many', 'recently' or 'often', as they may not help ChatGPT generate useful responses. These words are subjective and can lead the AI to misinterpret the prompt.
- Avoid phrases like 'I'm sorry', which signal low confidence. Since AI tries to mirror human tonality, it may respond in a similar tone, which could affect accuracy.
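The advice above can be sketched as a small prompt-tidying helper. The word list mirrors the article's examples; the function itself is just an illustration of the idea, not part of any AI product or API:

```python
import re

# Phrases the article suggests dropping: hedges, fillers and politeness tokens.
FILLERS = [
    "please", "can you", "could you", "maybe", "it is possible",
    "just", "really", "actually", "basically", "kind of", "i'm sorry",
]

def tidy_prompt(prompt: str) -> str:
    """Remove hedging/filler phrases so the prompt leads with a direct verb."""
    result = prompt
    for filler in FILLERS:
        # Case-insensitive removal of the whole phrase.
        result = re.sub(r"\b" + re.escape(filler) + r"\b", "", result,
                        flags=re.IGNORECASE)
    # Collapse leftover whitespace and strip stray commas, then re-capitalise.
    result = re.sub(r"\s+", " ", result).strip(" ,")
    return result[0].upper() + result[1:] if result else result

print(tidy_prompt("Can you please just summarise this article?"))
# → "Summarise this article?"
```

A real chatbot does not need this preprocessing to function; the point is simply that the shorter, direct form is what the article recommends typing in the first place.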


The Verge
2 days ago
- The Verge
Google is rolling out its AI-powered ‘Ask Photos' search again – and it has a speed boost
After quietly pausing the rollout of Google Photos' AI-powered 'Ask Photos' search tool, Google is now expanding access once again and making some improvements to the feature. Google's Gemini AI models power Ask Photos, letting you ask complex questions to help find your photos. But earlier this month, a member of the Google Photos team said on X that the feature 'isn't where it needs to be, in terms of latency, quality and UX.' In a blog post published Thursday, Google said that it has 'heard your feedback' that the feature should 'return more photos faster for simple searches, like 'beach' or 'dogs.'' Google says 'you'll now see results right away while Gemini models continue to work in the background to find the most relevant photos or information for more complex queries.' The company adds that the feature is now 'opening up beyond early access' and is beginning to roll out to more 'eligible users' in the US.