
Brands press enter: GEO helps them show up more in AI searches
Generative engine optimisation (GEO) tweaks content to improve visibility on AI-powered search engines and generative AI models. Unlike conventional search engines, which rank pages by keywords, LLMs respond to prompts with curated, generated answers. This is prompting brands to rethink their search engine optimisation (SEO) strategies to stay visible in AI responses. Intimacy wellness brand MyMuse, for instance, says it has seen a 10% increase in its monthly searches on ChatGPT since it started focusing on GEO.
Siftly, a Y Combinator-backed startup founded in 2021, offers GEO services to business-to-business, business-to-consumer and direct-to-consumer companies. 'Every LLM relies on some form of search engine under the hood,' said Chalam PVS, cofounder and CEO of Siftly. 'When a user enters a prompt, the model often queries multiple search engines in real time, scans the results, interprets the content, and then summarises it, all within a few seconds.'

He added that Siftly has analysed thousands of prompts and found that ChatGPT's results overlap only 61% with Google's and 68% with Bing's. 'To consistently show up on LLMs like ChatGPT, Perplexity and Gemini, you need platform-specific strategies — traditional SEO alone doesn't cut it,' he said. On platforms such as ChatGPT, Google Gemini, Claude and Perplexity, brands gain visibility in two ways: through the answer itself and through the sources cited in it.

Mumbai-based Asva AI is another startup helping companies improve their presence on these models. 'We help brands get discovered, understand their LLM traffic, and recommend strategies on how they can improve it,' said Viren Inaniyan, cofounder of Asva AI.

Users are increasingly turning to LLMs because they want curated, direct responses instead of long lists of links. 'For instance, if users search for travel planning on ChatGPT, it will suggest flights, hotels and restaurants. All brands in these categories now want to be cited in the model's answer,' he said. Currently, brands are not charged for visibility on LLM platforms, but Inaniyan expects monetisation to start soon.
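The two-way visibility described above (the generated answer plus the sources it cites) can be sketched as a minimal Python pipeline. Everything here is a hypothetical stand-in — the engine functions, URLs and the naive summariser are illustrative, not any platform's real API:

```python
# Sketch of the flow Chalam PVS describes: an LLM fans a user prompt out
# to several search engines, collects the results, and builds an answer
# plus a citation list. No real LLM or search API is called.

def search_google(prompt):
    # Stand-in for a live Google query; returns (url, snippet) pairs.
    return [("https://example.com/google-hit", f"Google snippet about {prompt}")]

def search_bing(prompt):
    # Stand-in for a live Bing query.
    return [("https://example.com/bing-hit", f"Bing snippet about {prompt}")]

def answer(prompt, engines):
    # Query each engine in turn, then produce the generated answer and
    # the sources it cites -- the two places a brand can show up.
    results = []
    for engine in engines:
        results.extend(engine(prompt))
    summary = " ".join(snippet for _, snippet in results)
    cited = [url for url, _ in results]
    return summary, cited

def overlap(urls_a, urls_b):
    # Share of one engine's results that also appear in another's --
    # the kind of metric behind Siftly's 61%/68% overlap figures.
    a, b = set(urls_a), set(urls_b)
    return len(a & b) / len(a)
```

Because the overlap between engines is well below 100%, a brand ranking on Google alone can still be invisible to a model that leans on other indexes — hence the platform-specific strategies Siftly argues for.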
Brands on board
Data suggests that LLM-based searches are likely to outpace plain-vanilla Google searches by 2028. Google's AI Overview feature now has over 2 billion monthly users, the company said in its June-quarter earnings call. This growing adoption of generative AI is prompting companies to prepare for an AI-led discovery environment.

'Many people still use Google search in India, but with the AI Overview feature giving a summary of the search query, most users don't scroll below,' said Aquibur Rahman, founder and CEO of Mailmodo, an email marketing platform. 'We are seeing an increase in search impressions but the click rate is decreasing.' In the last six months, Mailmodo has seen a 15% decline in clicks from Google search. To tackle this, Rahman has started optimising his website for LLMs.
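Rahman's observation — impressions rising while clicks fall — is the standard click-through-rate calculation. A minimal sketch with made-up, illustrative numbers (not Mailmodo's actual data):

```python
def click_through_rate(clicks, impressions):
    # CTR = clicks / impressions; it falls when an AI summary answers the
    # query on the results page and users stop scrolling to the links.
    return clicks / impressions if impressions else 0.0

# Illustrative numbers only: impressions grow, clicks shrink, CTR drops.
ctr_before = click_through_rate(1_000, 20_000)  # 0.05 (5%)
ctr_after = click_through_rate(850, 25_000)     # 0.034 (3.4%)
```

This is why a site can look healthier in impression counts while actually losing the traffic that SEO was meant to deliver.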
Similarly, wellness brand Kerala Ayurveda is now working to show up in AI-powered search results. 'We started working on GEO a couple of months ago, and in the last two months our traffic from ChatGPT has increased 2.5x,' said chief product and tech officer Utkarsh Mishra.

Industry experts are of the view that LLMs are particularly well suited to specific user queries and private information-seeking behaviour. 'There are a lot of questions around intimacy products — how to use them, how to carry them, and so on. Because users come to AI chatbots with queries like these, we think there is scope for brands like ours to pop up,' said Sahil Gupta, CEO of MyMuse.

Despite the momentum behind GEO, challenges remain. 'It's important to understand that ChatGPT, Perplexity and similar platforms don't provide any data around click-through rates the way Google search does,' said Inaniyan of Asva AI. That means companies are often operating on guesswork, unlike in the traditional SEO era, when keyword rankings and traffic analytics helped guide strategy. He added that users currently rely on AI chatbots mainly for information, rather than as gateways to external websites. 'Redirection isn't happening on these platforms because users receive the answers they need and then go elsewhere to make purchases,' he said.

Still, for brands navigating a shift in user behaviour from link-driven exploration to prompt-driven discovery, learning to optimise for LLMs is fast becoming essential. While the GEO playbook is still being written, startups and early adopters believe it could define the next wave of online visibility.
Related Articles


Time of India
TTD defends AI strategy to cut waiting time for darshan
Tirupati: Tirumala Tirupati Devasthanams (TTD) chairman BR Naidu and former chief secretary LV Subramanyam, who also earlier served as TTD executive officer (EO), sparred over the use of artificial intelligence (AI) to reduce the waiting time for darshan at Tirumala, drawing attention to the contentious subject of integrating AI into temple services.

The former chief secretary, who was at Tirumala on Sunday, found fault with the attempt to use AI to cut the darshan waiting time to under three hours. "It is impossible to reduce the darshan waiting time below three hours through AI, as there are several limitations at the temple. Instead, the TTD could focus its attention on improving pilgrim amenities in the hill town," LV Subramanyam asserted.

Differing with the former chief secretary, TTD chairman BR Naidu told reporters that the new trust board had resolved in November last year to integrate AI into temple services with the intention of reducing the inconvenience caused to visiting devotees by long waits in the queue lines. "The TTD has consulted global leaders in technology and AI to arrive at tailor-made solutions for reducing the waiting time for darshan to below three hours. Equal focus on integrating various other pilgrim services with AI, to extend seamless services to the multitude of devotees arriving on a pilgrimage to Tirumala, is also being explored," BR Naidu asserted.

The TTD chairman also faulted LV Subramanyam for what he called misleading comments on TTD's plans to adopt and integrate AI in temple services. "He has worked as TTD EO in the past and is very well aware of the inconvenience caused to devotees, who are sometimes forced to wait even 72 hours in the queue for darshan. How can he fault the TTD for exploring ways to reduce the waiting time for darshan with the help of AI?" BR Naidu said.


India.com
Elon Musk's latest shocker to Google and OpenAI with Text-to-Video feature on Grok, but with a twist; only THESE users will have access
New Delhi: Elon Musk has added a new text-to-video feature, named Imagine, to Grok AI. It creates videos from text commands and will give tough competition to Google's Veo 3 and OpenAI's Sora. As the name suggests, the feature generates videos based on the user's imagination.

How did Musk share this information? Elon Musk announced the feature from his X handle, saying users will be able to try Imagine by updating their X app, though they will first be placed on a waitlist. The feature runs on Grok's latest large language model. "Update your app and request to be on the waitlist for @Grok Imagine" — Elon Musk (@elonmusk) August 2, 2025

What is the controversy over Imagine? Like Veo 3 and Sora, Imagine generates videos from text prompts, and users can create six-second video clips. However, Grok's Imagine also includes a new "spicy" mode, which has drawn controversy, with many users saying the feature will promote obscenity.

Who will get access to Imagine? X Premium users will initially be given beta access, with the company choosing selected users; heavy users of Grok AI will get access first. Grok has also added a Valentine mode, with beta access likewise limited to premium users. Valentine mode offers an imaginary character with whom users can chat and confide, working like a digital friend.


NDTV
Godfather Of AI Warns Technology Could Invent Its Own Language: 'It Gets Scary...'
Geoffrey Hinton, regarded by many as the 'godfather of artificial intelligence' (AI), has warned that the technology could get out of hand if chatbots manage to develop their own language. Currently, AI does its thinking in English, allowing developers to track what the technology is doing, but there could come a point where humans no longer understand what AI is planning, as per Mr Hinton.

"Now it gets more scary if they develop their own internal languages for talking to each other," he said on an episode of the "One Decision" podcast that aired last month. "I wouldn't be surprised if they developed their own language for thinking, and we have no idea what they're thinking." Mr Hinton added that AI has already demonstrated that it can think terrible thoughts, and it is not unthinkable that machines could eventually think in ways humans cannot track or interpret.

Warning about AI

Mr Hinton laid the foundations of the machine learning that powers today's AI-based products and applications. However, the Nobel laureate grew wary of AI's future development and cut ties with his employer, Google, in order to speak more freely on the issue. "It will be comparable with the industrial revolution. But instead of exceeding people in physical strength, it's going to exceed people in intellectual ability. We have no experience of what it's like to have things smarter than us," said Mr Hinton at the time. "I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control."

Mr Hinton has been a strong advocate of government regulation of the technology, especially given the unprecedented pace of development. His warning also comes against the backdrop of repeated instances of AI chatbots hallucinating. In April, OpenAI's internal tests revealed that its o3 and o4-mini AI models were hallucinating, or making things up, much more frequently than even non-reasoning models such as GPT-4o.
The company said it did not have any idea why this was happening. In a technical report, OpenAI said, "more research is needed" to understand why hallucinations are getting worse as it scales up its reasoning models.