Latest news with #searchengines


Forbes
19-06-2025
- Business
- Forbes
Here Comes SEO For AI Search
Cofounders James Cadwallader and Dylan Babbs started their AI SEO startup Profound last year when the duo realized they were doing all their research on AI search engines like Perplexity.

For many businesses, Google used to be core to their strategy for getting in front of potential customers. They'd use search engine optimization tactics to make sure they were one of the treasured few links that show up when you search for something. Those days are numbered, thanks to AI. Search traffic for businesses like travel site Kayak and edtech company Chegg is dropping, in part because 60% of searches on sites like Google aren't leading people to click any links, per one study — they just read the AI summary at the top. An executive at a cybersecurity company told Forbes search traffic to its website has gone down 10% this year. 'The industry is really turned on its head because traditional ways of SEO just don't work anymore,' he said.

Now, instead of ginning up content to rank higher in traditional search engines, businesses are increasingly trying to grok how their brands show up in answers generated by AI search engines like Google's AI Overviews, Perplexity and ChatGPT, and to create new content designed to be picked up by bots — and a new crop of startups has sprung up to help them. 'This is a true Game of Thrones power shift that is upon us here,' said James Cadwallader, the cofounder and CEO of Profound, which helps more than 100 customers like U.S. Bank, Docusign and Indeed understand how their brands appear in AI responses. 'We're entering this inflection point where humans no longer need to visit websites on the internet….These systems are hijacking that relationship with the end user entirely.'
It's a 'hair on fire' problem for companies, said Kleiner Perkins partner Ilya Fushman, who led a $20 million funding round into the startup, which is less than a year old. Nvidia and Khosla Ventures also participated in the round, which valued Profound at over $100 million.

To tackle it, Profound generates thousands of synthetic prompts like 'cheap football cleats' or 'best phone for a teenager' and sends them to AI search engines to get an overall idea of which brands most frequently come up in different responses. The software plugs into the brand's website to observe, in real time, which specific pages are being crawled the most. Profound then tracks 'sentiment' to gauge a brand's reputation within AI search by capturing phrases that could have negative connotations. Based on these metrics, it makes recommendations (as a traditional marketing agency would), like suggesting keywords, adjusting formatting and layout, and adding metadata to make pages easier for AI search engines to scrape. In some cases, it's also using AI models to generate content custom-made to be crawled by AI search engines. It's early days for this burgeoning field, known as Generative Engine Optimization.

For the cybersecurity company that's seen its search traffic dip by 10%, Profound is helping it track how competitors are performing in AI search results and produce new content to keep up with them. And an SEO specialist at a large job search company told Forbes he turned to Profound because, a few months ago, the company had a 'blind spot' in terms of how its content was being featured on AI search engines. 'We were completely in the dark,' he said. With Profound, he's able to find popular prompts and learn how the company's blogs and data show up in AI-generated answers. Even though traffic from tools like ChatGPT accounts for less than 1% of the site's visits, influencing how AI talks about the brand is where the market is headed, he said.
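The article describes Profound's core loop only at a high level: fire off synthetic prompts, collect AI-generated answers, and measure which brands surface most often. As a rough illustration of the measurement step only, here is a minimal sketch; the `brand_mention_share` helper, the brand names, and the sample answers are all hypothetical, and a real system would call the AI engines' APIs rather than use canned strings.

```python
from collections import Counter

def brand_mention_share(responses, brands):
    """Count how often each brand name appears across a batch of
    AI-generated answers and return its share of total mentions."""
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for brand in brands:
            if brand.lower() in lowered:
                counts[brand] += 1
    total = sum(counts.values()) or 1  # avoid division by zero
    return {brand: counts[brand] / total for brand in brands}

# Hypothetical answers to a synthetic prompt like "best phone for a teenager"
answers = [
    "The Google Pixel 8a and iPhone SE are strong budget picks.",
    "Many reviewers recommend the iPhone SE for teens.",
    "Consider the Samsung Galaxy A55 or the iPhone SE.",
]
share = brand_mention_share(answers, ["iPhone", "Pixel", "Galaxy"])
# iPhone appears in 3 of 5 total mentions, so its share is 0.6
```

A production version would also have to normalize brand aliases ("Apple iPhone" vs. "iPhone") and repeat each prompt many times, since, as the article notes, AI search engines answer the same question differently on each run.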
But influencing AI search engines' answers is no easy task. The models underlying them are constantly changing — and along with them, AI's responses. AI search engines answer the same question differently each time, depending on how and when they are prompted and who is writing the prompt. 'That's why there's such value in being able to distill, disambiguate and communicate how things like ChatGPT are surfacing information to real people,' said Keith Rabois, a partner at Khosla Ventures who has backed Profound. 'It's personally impossible to track on your own company's behalf.'

Profound isn't the only game in town. New York-based Bluefish AI, which has raised $5 million in seed funding from backers like Crane Ventures and Laconia, is helping companies in industries like travel, pharmaceuticals and retail track how they appear on Gemini, Perplexity and ChatGPT. Big tech companies like Amazon, Meta and Microsoft are becoming AI companies, and as their tools attract hundreds of millions of users, it has been a 'wake-up call' for the marketing industry, CEO Alex Sherman said. While most websites are designed for human readers, they will have to be optimized for crawling by large language models if businesses want to be featured in AI responses and reach customers directly. In March, OpenAI cofounder Andrej Karpathy said on X that '99.9% of attention is about to be LLM attention.' With a bird's-eye view into AI responses, Bluefish is able to observe which sources of data are influencing how brands are perceived within those responses. 'Some of the highest ranked third party sources that Bluefish sees across its customers are platforms like Reddit,' he said.

Another player is Athena, cofounded by Andrew Yan, who formerly worked on Google DeepMind's generative media team. It uses its own AI search model, trained on millions of prompts and data points, to create a dashboard of AI metrics like sources, mentions and referral rates.
Yan said AI search has widened the scope for marketing, as it draws from more sources than Google, where the top three linked websites are often the most viewed. Profound's Cadwallader is optimistic that AI search engines will create a new, and improved, way for people to discover products and make informed purchasing decisions catered to their preferences. As AI systems surf the internet on behalf of shoppers, scrape data from hundreds of websites and deliver AI-generated responses to them, businesses are realizing they need to command the attention of a new type of 'VIP customer': bots. 'Eventually we believe in the zero click future where consumers will only interact with the answer engine and agents will be the primary visitor of websites and it'll be a good thing,' he said.


CNET
09-06-2025
- Health
- CNET
The Scientific Reason Why ChatGPT Leads You Down Rabbit Holes
That chatbot is only telling you what you want to believe, according to a new study. Whether you're using a traditional search engine like Google or a conversational tool like OpenAI's ChatGPT, you tend to use terms that reflect your biases and perceptions, according to the study, published this spring in the Proceedings of the National Academy of Sciences. More importantly, search engines and chatbots often provide results that reinforce those beliefs, even if your intent is to learn more about the topic.

For example, imagine you're trying to learn about the health effects of drinking coffee every day. If you, like me, enjoy having exactly two cups of joe first thing in the morning, you may search for something like "is coffee healthy?" or "health benefits of coffee." If you're already skeptical (maybe a tea purist), you might search for "is coffee bad for you?" instead. The researchers found that the framing of questions could skew the results -- I'd mostly get answers that show the benefits of coffee, while you'd get the opposite. "When people look up information, whether it's Google or ChatGPT, they actually use search terms that reflect what they already believe," Eugina Leung, an assistant professor at Tulane University and lead author of the study, told me.

The abundance of AI chatbots, and the confident and customized results they so freely give you, makes it easier to fall down a rabbit hole and harder to realize you're in it. There's never been a more important time to think deeply about how you get information online. The question is: How do you get the best answers?

Asking the wrong questions

The researchers conducted 21 studies with nearly 10,000 participants who were asked to conduct searches on certain preselected topics, including the health effects of caffeine, gas prices, crime rates, COVID-19 and nuclear energy. The search engines and tools used included Google, ChatGPT and custom-designed search engines and AI chatbots.
The researchers' results showed that what they called the "narrow search effect" was a function of both how people asked questions and how the tech platforms responded. People have a habit, in essence, of asking the wrong questions (or asking questions in the wrong way). They tended to use search terms or AI prompts that demonstrated what they already thought, and search engines and chatbots, designed to provide narrow, extremely relevant answers, delivered exactly that. "The answers end up basically just confirming what they believe in the first place," Leung said.

The researchers also checked to see if participants changed their beliefs after conducting a search. When served a narrow selection of answers that largely confirmed their beliefs, they were unlikely to see significant changes. But when the researchers provided a custom-built search engine and chatbot designed to offer a broader array of answers, they were more likely to change. Leung said platforms could give users the option of a broader, less tailored search, which could prove helpful in situations where the user is trying to find a wider variety of sources. "Our research is not trying to suggest that search engines or algorithms should always broaden their search results," she said. "I do think there is a lot of value in providing very focused and very narrow search results in certain situations."

3 ways to ask the right questions

If you want a broader array of answers to your questions, there are some things you can do, Leung said.

Be precise: Think specifically about what exactly it is you're trying to learn. Leung used an example of trying to decide if you want to invest in a particular company's stock. Asking if it's a good stock or a bad stock to buy will likely skew your results -- more positive news if you ask if it's good, more negative news if you ask if it's bad.
Instead, try a single, more neutral search term. Or ask both terms and evaluate the results of each.

Get other views: Especially with an AI chatbot, you can ask for a broad range of perspectives directly in the prompt. If you want to know if you should keep drinking two cups of coffee a day, ask the chatbot for a variety of opinions and the evidence behind them. The researchers tried this in one of their experiments and found they got more variety in the results. "We asked ChatGPT to provide different perspectives to answer the query from the participants and to provide as much evidence to back up those claims as possible," Leung said.

At some point, stop asking: Follow-up questions didn't work quite as well, Leung said. If those questions aren't getting broader answers, you may get the opposite effect -- even more narrow, affirming results. In many cases, people who asked lots of follow-up questions just "fell deeper down into the rabbit hole," she said.
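The tips above boil down to mechanical changes in how a query is worded: strip out leading language, and ask explicitly for multiple perspectives. As a toy illustration, here is a small sketch; the `LOADED_WORDS` list, the function names, and the exact prompt wording are illustrative assumptions, not anything taken from the study itself.

```python
# Words that tilt a query toward one answer (an illustrative, incomplete list)
LOADED_WORDS = {"best", "worst", "good", "bad", "healthy", "unhealthy"}

def looks_loaded(query: str) -> bool:
    """Flag queries whose wording already leans one way."""
    return any(word in query.lower().split() for word in LOADED_WORDS)

def broaden_prompt(query: str) -> str:
    """Rewrap a query as a multi-perspective prompt, loosely echoing the
    phrasing the researchers reportedly used in their ChatGPT experiment."""
    return (
        f"Provide several different perspectives on the question: {query}. "
        "For each perspective, include as much supporting evidence as possible."
    )

print(looks_loaded("is coffee bad for you"))        # True: "bad" tilts the query
print(broaden_prompt("health effects of daily coffee"))
```

A chat interface could use a check like `looks_loaded` to nudge users toward neutral phrasing before the search runs, which mirrors Leung's suggestion that platforms offer an optional broader search.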


CNET
09-06-2025
- Health
- CNET
Getting Good Results From AI and Search Engines Means Asking the Right Questions
The way you search online or ask an AI chatbot for information can influence the results you get, even if you aren't trying to find information that reinforces your own beliefs, according to a new study. People tend to use terms, whether in a traditional search engine like Google or a conversational tool like OpenAI's ChatGPT, that reflect their existing biases and perceptions, according to the study, published this spring in the Proceedings of the National Academy of Sciences. More importantly, search engines and chatbots often provide results that reinforce those beliefs, even if the intent is to learn more about the topic.

For example, imagine you're trying to learn about the health effects of drinking coffee every day. If you, like me, enjoy having a couple of cups of joe first thing in the morning, you may search for something like "is coffee healthy?" or "health benefits of coffee." If you're already skeptical (maybe a tea purist), you might search for "is coffee bad for you?" instead. The researchers found that the framing of questions could skew the results -- I'd mostly get answers that show the benefits of coffee, while you'd get the opposite. "When people look up information, whether it's Google or ChatGPT, they actually use search terms that reflect what they already believe," Eugina Leung, an assistant professor at Tulane University and lead author of the study, told me.

These concerns about how we get information that favors our own preconceptions are nothing new. Long before the internet, you'd learn about the world from a newspaper that might carry a particular slant. But the prevalence of search engines and social media makes it easier to fall down a rabbit hole and harder to realize you're in it.
With AI chatbots and AI-powered search telling you with confidence what you should know, and sometimes making it up or not telling you where the information comes from, there's never been a more important time to think deeply about how you get information online. The question is: How do you get the best answers?

Asking the wrong questions

The researchers conducted 21 studies with nearly 10,000 participants who were asked to perform searches on certain preselected topics, including the health effects of caffeine, gas prices, crime rates, COVID-19 and nuclear energy. The search engines and tools used included Google, ChatGPT and custom-designed search engines and AI chatbots. The researchers' results showed that what they called the "narrow search effect" was a function of both how people asked questions and how the tech platforms responded. People have a habit, in essence, of asking the wrong questions (or asking questions in the wrong way). They tended to use search terms or AI prompts that demonstrated what they already thought, and search engines and chatbots, designed to provide narrow, extremely relevant answers, delivered exactly that. "The answers end up basically just confirming what they believe in the first place," Leung said.

The researchers also checked to see if participants changed their beliefs after conducting a search. When served a narrow selection of answers that largely confirmed their beliefs, they were unlikely to see significant changes. But when the researchers provided a custom-built search engine and chatbot designed to offer a broader array of answers, they were more likely to change. Leung said platforms could give people the option of a broader search, which could prove helpful in situations where the user is trying to find a wider variety of sources.
"Our research is not trying to suggest that search engines or algorithms should always broaden their search results," she said. "I do think there is a lot of value in providing very focused and very narrow search results in certain situations."

How to ask the right questions

If you want a broader array of answers to your questions, there are some things you can do, Leung said. First, think specifically about what exactly it is you're trying to learn. She used an example of trying to decide if you want to invest in a particular company's stock. Asking if it's a good stock or a bad stock to buy will likely skew your results -- more positive news if you ask if it's good, more negative news if you ask if it's bad. Instead, try a single, more neutral search term. Or ask both terms and evaluate the results of each.

Especially with an AI chatbot, you can ask for a broad range of perspectives directly in the prompt. If you want to know if you should keep drinking two cups of coffee a day, ask the chatbot for a variety of opinions and the evidence behind them. The researchers tried this in one of their experiments and found they got more variety in the results. "We asked ChatGPT to provide different perspectives to answer the query from the participants and to provide as much evidence to back up those claims as possible," Leung said.

Asking follow-up questions didn't work quite as well, Leung said. If those questions aren't getting broader answers, you may get the opposite effect -- even more narrow, affirming results. In many cases, people who asked lots of follow-up questions just "fell deeper down into the rabbit hole," she said.

Hospitality Net
02-06-2025
- Business
- Hospitality Net
Food for Thought: Is Traditional Search Dead?
A recent post on LinkedIn declared the end of search engines as we know them; the post even declared 'R.I.P. Search.' This is in tune with an avalanche of recent headlines arguing that traditional search is dead due to the rise of AI search via generative AI platforms like ChatGPT, Claude and Perplexity. Some experts herald the end of Google's monopoly on search and claim that traditional search marketing is becoming obsolete.

Let's not get carried away. The rumors about the inevitable end of 'traditional' search engines like Google at the hands of AI search are highly exaggerated. According to the latest data from SEMrush, people interact with search engines 34 TIMES more often than with AI search. During the reported period of April 2024-March 2025, global search engines received 1,863 billion visits (-0.5% YoY), while global AI search chatbots received 55.2 billion (+81% YoY). In other words, AI search was on the rise over the past year but still received 34 times fewer visits than traditional search engines.

There is an additional wrinkle to the story: the data for traditional search engines does not include the queries on Google, Bing, etc. that were answered by AI, which blurs the boundaries between traditional search and AI. For example, Google uses its Gemini AI to provide answers in the Answer Box in its SERPs. Today, nearly 60% of Google searches end up as zero-click queries, i.e. people find enough information in the Gemini AI-powered Answer Box and do not need to click on any of the organic or sponsored links. Bing uses a combination of ChatGPT and its proprietary Prometheus AI and Copilot in its Answer Box to boost its conversational search capabilities, provide a more interactive user experience, and deliver up-to-date, context-rich answers, especially for current events and trends.

So, should hoteliers abandon their traditional search marketing initiatives? Definitely not!
Search marketing on Google, in all of its formats (Google Ads (GA), Google Hotel Ads (GHA) and organic listings via SEO), consistently contributes over 50% of hotel website bookings. At the same time, hoteliers should not ignore the rising AI search. The most immediate priority is to optimize the property for AIO (Artificial Intelligence Optimization), the AI version of SEO. In the AI world, stuffing your website content with SEO keyword terms and aiming to rank for keywords no longer applies. In other words, your website is no longer the primary source of influence. The era of earning recognition has arrived. How do you achieve that? Invest in content marketing with the goal of being cited in places of relevance. SEO company VertoDigital's audits show that only 25% of AI answers are pulled from website content, in this case hotel website content. The rest comes from citations about the hotel in social media, online publications, YouTube, travel-related sites and blogs, customer reviews, etc.
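The VertoDigital figure suggests a simple audit a hotel could run: of the sources an AI answer cites, how many come from the hotel's own site versus third parties? A minimal sketch of that calculation is below; the function name, the example domain `examplehotel.com`, and the sample URLs are all hypothetical stand-ins.

```python
from urllib.parse import urlparse

def citation_source_share(cited_urls, own_domain):
    """Return the fraction of cited URLs that come from the brand's own
    website, as opposed to third-party sources like reviews or blogs."""
    if not cited_urls:
        return 0.0
    own = sum(
        1 for url in cited_urls
        if urlparse(url).netloc.endswith(own_domain)
    )
    return own / len(cited_urls)

# Hypothetical citations pulled from a batch of AI answers about a hotel
urls = [
    "https://www.examplehotel.com/rooms",
    "https://www.tripadvisor.com/Hotel_Review-example",
    "https://travelblog.example.org/best-hotels",
    "https://www.youtube.com/watch?v=abc123",
]
own_share = citation_source_share(urls, "examplehotel.com")
# 1 of 4 citations is the hotel's own site, matching the ~25% audit figure
```

A share near 0.25, as in this toy example, would echo the audit finding above and point marketing effort toward the third-party sources where the other 75% of citations originate.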


Digital Trends
19-05-2025
- Digital Trends
I tested Gemini Advanced, ChatGPT, and Copilot Pro. Here's which AI searched best
With AI chatbots now built into search engines, browsers, and even your desktop, it's easy to assume they all do the same thing. But when it comes to getting useful search results, some outperform the rest. I wanted to test Gemini Advanced, ChatGPT, and Copilot Pro head-to-head to see which one helps you get answers faster and more accurately. These are the paid versions, all promising live web access, smarter context, and fewer hallucinations. So, I gave each AI the same set of prompts—from current events to deep-dive research queries—and judged them on five fronts: accuracy, depth, follow-up quality, mistakes, and usability. Here's how they stacked up.

Test 1: Accuracy and real-time info

To start things off, I asked all three AIs a current events question that needed real-time knowledge, not just general facts. I asked: 'Who won the latest NBA playoff game?' Gemini Advanced only showed me a scoreboard with the teams and the final scores, with no extra context, highlights, or player stats. It also pulled scores from May 10 – two days earlier than expected – which is a bit outdated for a real-time query. ChatGPT Plus gave me a more detailed answer with extra data, such as the Timberwolves taking a 3-1 series lead over the Warriors. It also mentioned how Julius Randle and Anthony Edwards combined for 61 points—Randle with 31 and Edwards with 30. It also included source links under each paragraph (which worked when I tested them), making it easy to double-check the info. I also liked that when I hovered the cursor over a source link, it highlighted the text it got from that source. My only complaint? It buried the answer under too many details. A quick summary up top would've helped. On the other hand, Copilot Pro gave me a more concise answer from the get-go and asked if I wanted additional information. I have to give this round to Copilot Pro—it nailed the direct answer and even offered a follow-up.
Test 2: Depth of response

For the second test, I asked a broader question that required more than just a quick fact: How can I create a strong password? Gemini Advanced gave me more tips than ChatGPT and provided source links below each tip for easy double-checking. It also used longer sentences, which made the whole response feel more readable without too much scrolling, unlike ChatGPT, which gave fewer tips and didn't include any source links. However, ChatGPT did ask whether the conversation was helpful, something Gemini didn't do. Copilot Pro also gave less information and no source links. Still, it did show a few relevant follow-up questions, such as: Why is a strong password important for security? Can you give me an example of a strong password? How does a password manager keep my information safe? I also thought the emojis alongside each tip were a fun touch.

Test 3: Follow-up flexibility

For this test, I asked each AI a follow-up question after its original response, something that built on the conversation naturally. I wanted to see how well each handled context and whether it actually understood what I was asking. I followed up with, 'Can you explain why using personal information in passwords is bad?' ChatGPT gave me three main points, a couple of extra security tips to follow, plus a bottom-line summary that wrapped it all up. Copilot Pro gave me three tips and a few sentences on how to stay safe. Gemini, however, was the only one that didn't include specific safety tips at the end; it gave a few more reasons why using personal info is bad and added a bit more information. I must admit that Copilot Pro and ChatGPT took this prize and gave Gemini something to improve on. This time, none of the three included source links, which felt like a missed opportunity.

Test 4: Mistakes and hallucinations

One of the biggest risks with any AI assistant is its tendency to confidently say things that aren't true.
They hallucinate, saying things that are sometimes funny and other times alarming. So, I gave each chatbot a few fact-based prompts to see how accurate they were and whether they flagged uncertainties, a test they all passed with flying colors. I started with a simple one and asked when Microsoft was founded. Gemini Advanced answered with a one-liner: 'Microsoft was founded in 1975.' ChatGPT, on the other hand, went into a bit more detail, saying, 'Microsoft was founded on April 4, 1975, by Bill Gates and Paul Allen.' Copilot Pro gave a longer answer: 'Microsoft was founded on April 4, 1975, by Bill Gates and Paul Allen in Albuquerque, New Mexico, USA. It started as a small software company, but it quickly grew into one of the world's largest and most influential tech companies. Quite the success story, right?' I like how Copilot struck a balance, giving me enough context without overwhelming me and even suggesting three clickable follow-up questions. The answer I liked best was from Copilot Pro.

Next, I asked all three AI assistants, 'Which is the best AI assistant available?' Gemini gave a solid overview of the top AI assistants, including a quick rundown of what each can do. It even added a section called 'Other notable AI assistants' with less popular options. What I really liked, though, was the part where it explained which assistant might be the better pick, like choosing Gemini if you prioritize certain features, or going with ChatGPT or Copilot Pro if you rely more on other things. That side-by-side comparison is actually helpful. ChatGPT said there is no single best option; it depends on what you need it for. Copilot Pro said several options are available, each with specific strengths.

Test 5: Usability and interface experience

A great AI answer is only half the story; the other half is how easy it is to read the information it gives you.
So, I spent time using each AI assistant's interface to see how smooth, intuitive, and helpful the overall experience felt. Copilot Pro stood out by giving me just enough information to answer my question clearly, without overwhelming me or leaving me confused about what it meant. I also like how it blends into Microsoft Edge and Windows 11, since that means fewer mouse movements to open it. It was also good to see those relevant follow-up questions, which saved me from typing them out. If there's one area where Copilot Pro fell short, it was shopping links: it provided them, but only after I asked twice, and in some cases the links led to the wrong places. I also found the main Copilot page a little too cluttered, with buttons and suggestions all squeezed together. I get that it's trying to be helpful, but sometimes less is more.

Gemini Advanced leans heavily on the Google ecosystem. The side panel works well across Gmail, Drive, and Docs, and it's handy for pulling in context from whatever you're working on. Visually, it looks clean and modern, with a color scheme that gives it a polished, almost elegant feel. I also liked how Gemini gives more detailed responses than the others. That's great if you're looking for depth, though if you prefer shorter replies, you can ask it to simplify things. It handled product searches well when I asked it to provide links.

ChatGPT keeps things minimal, but in a good way. The interface is clean and easy to navigate, and I liked that the input box is at the top of the screen, which feels more natural to use. However, when I tried using it to find links for products, it struggled. Some responses didn't include links at all, and when they did, they weren't always clickable or useful.

Final thoughts

After testing all three assistants across different scenarios, one thing became clear: no single AI does everything perfectly. Each one has strengths and quirks that make it better suited for certain tasks or users.
ChatGPT is still the most consistent when it comes to natural, well-written responses. It's easy to use, but it would be nice if it fixed the link issue mentioned earlier. Gemini Advanced gives you the most information upfront, sometimes too much, but its integration with Google tools is a real advantage when you want to add more files to your search. Copilot Pro is the one I'd be least likely to stick with, even though I liked how it handled response length and follow-up suggestions. The cluttered interface and unreliable links made it harder to trust on a daily basis, and for me, that's a deal-breaker. At the end of the day, the best AI chatbot really depends on what you value most: clarity, depth, or usability.