
AI Is Taking Over Your Search Engine. Here's a Look Under the Hood
That old way of searching, typing in keywords and combing through a list of links, seems to be going the way of AltaVista, may it rest in peace.
In May, Google announced the rollout of its new AI Mode for search, which uses a generative AI model (based on the company's Gemini large language model) to give you conversational answers that feel a lot more like having a chat and less like combing through a set of links. Other companies, like Perplexity and OpenAI, have also deployed search tools based on gen AI. These tools, which merge the functionality of a chatbot and a traditional search engine, are quickly gaining steam.
You can't even escape AI by doing a regular Google search: AI Overviews have been popping up atop those results pages since last year, and about one in five searches now shows this kind of summary, according to a Pew Research Center report. I'm surprised it's not even more than that.
These newfangled search tools feel a lot like your typical chatbot, like ChatGPT, but they do things a little differently, and they share a lot of DNA with their search engine ancestors. Here's a look under the hood at how these new tools work, and how you can use them effectively.
Video: Everything Announced at Google I/O 2025
Search engines vs. AI search: What's the difference?
The underlying technology of a search engine is kinda like an old library card catalog. The engine uses bots to crawl the vast expanses of the internet to find, analyze and index the endless number of web pages. Then, when you do a search to ask who played Dr. Angela Hicks on ER, because you're trying to remember what else you've seen her in, it will return pages for things like the cast of ER or the biography of the actor, CCH Pounder. From there, you can click through those pages, whether they're on Wikipedia or IMDB or somewhere else, and learn that you know CCH Pounder from her Emmy-winning guest appearance on an episode of The X-Files.
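If you squint, that card catalog is just an inverted index: a map from words to the pages that contain them. Here's a toy sketch in Python, with made-up page IDs and text, meant only to show the shape of the idea rather than how Google actually builds its index:

```python
from collections import defaultdict

# A toy "card catalog": map each word to the pages that contain it.
# The page IDs and text below are made up for illustration.
pages = {
    "er_cast": "List of ER cast members, including CCH Pounder as Dr. Angela Hicks",
    "pounder_bio": "CCH Pounder biography: ER, The X-Files, The Shield, Avatar",
    "cookie_recipe": "The best chewy chocolate chip cookie recipe",
}

index = defaultdict(set)
for page_id, text in pages.items():
    for word in text.lower().split():
        index[word.strip(",:")].add(page_id)

def search(query: str) -> set[str]:
    """Return the pages that contain every word in the query."""
    words = query.lower().split()
    results = [index.get(w, set()) for w in words]
    return set.intersection(*results) if results else set()

print(search("cch pounder"))  # {'er_cast', 'pounder_bio'} (order may vary)
```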
"When customers have a certain question, they can type that question into Google and then Google runs their ranking algorithms to find what content is the best for a particular query," Eugene Levin, president of the marketing and SEO tool company Semrush, told me.
Generally, with a traditional search, you have to click through to other websites to get the answer you're looking for. When I was trying to figure out where I recognized CCH Pounder from, I clicked on at least half a dozen different sites to track it down. That included using Google's video search -- which combs an index of videos across different hosting platforms -- to find clips of her appearance on The X-Files.
Google announced AI Mode at its I/O developer conference in May.
Google/Screenshot by Joe Maldonado/CNET
Those multiple searches aren't always necessary. If I just want to know the cast of ER, I can type in "cast of ER" and click on the Wikipedia page at the top.
You'll usually find Wikipedia or another relevant, trustworthy site at or near the top of a search results page. That's because one main way today's search algorithms work is by tracking which sites and pages get the most links from elsewhere on the web. That model, which "changed the game for search" when Google launched it in the 1990s, was more reliable than indexing systems that relied on things like how many times a keyword appeared on a page, said Sauvik Das, associate professor at Carnegie Mellon University's Human-Computer Interaction Institute.
"There's lots of cookie recipes on the web, but how do you know which ones to show first?" Das said. "Well, if a bunch of other websites are linking to this website for the keywords of 'cookie recipe,' that's pretty difficult to game."
AI-powered search engines work a little differently, but operate on the same basic infrastructure. In my quest to see where I recognized CCH Pounder from, I asked Google's AI Mode, literally, "Where do I recognize the actress who plays Dr. Angie Hicks on ER from?" In a conversation that felt far more like chatting with a bot than doing searches, I narrowed it down. The first result gave me a list of shows and movies I hadn't seen, so I asked for a broader list, which featured her guest appearances on other shows. Then I could ask for more details about her X-Files appearance, and that narrowed it down.
While the way I interacted with Google was different, the search mechanisms were basically the same. AI Mode just used its Gemini model to develop and process dozens of different web searches to gather the information needed, Robby Stein, vice president of product for Google Search, told me. "A user could've just queried each of those queries themselves."
Basically, AI Mode did the same thing I did, just a lot faster.
So many searches, so little time
The approach here is called "query fan-out." The AI model takes your request and breaks it down into a series of questions, then conducts searches to answer those components of the request. It then takes the information it gathers from all those searches and websites and puts it together in an answer for you. In a heartbeat.
Those searches are using the same index that a traditional search would. "They work on the same foundation," Levin said. "What changes is how they pull information from this foundation."
This fan-out process allows the AI search to pull in relevant information from sites that might not have appeared on the first page of traditional search results, or to pull a paragraph of good information from a page that has a lot more irrelevant information. Instead of you going down a rabbit hole to find one tiny piece of the answer you want, the AI goes down a wide range of rabbit holes in a few seconds.
"They will anticipate, if you're looking for this, what is the next thing you might be interested in?" Levin said.
Read more: AI Essentials: 29 Ways You Can Make Gen AI Work for You, According to Our Experts
The number of searches the AI model will do depends on the tool you're using and on how complicated your question is. AI Mode using Google's Deep Search will spend more time and conduct more searches, Stein said. "Increasingly, if you ask a really hard question, it will use our most powerful models to reply," he said.
The large language models that power these search engines also have their existing training data to pull from or use to guide their searches. While a lot of the information is coming from the up-to-date content it finds by searching the web, some may come from that training data, which could include reams of information ranging from websites like this one to whole libraries of books. That training data is so extensive that lawsuits over whether AI companies actually had the right to use that information are quickly multiplying. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
AI search isn't just a chatbot
Not relying solely on training data is one thing that sets an AI-powered search engine apart from a traditional chatbot, even though the underlying language model might be largely the same. While ChatGPT Search will scour the internet for relevant sites and answers, regular ChatGPT might rely on its own training data alone to answer your question.
"The right answer might be in there," Das said. "It might also hallucinate a likely answer that isn't anywhere in the pre-training data."
AI search uses a technique called retrieval-augmented generation to incorporate what it finds on the internet into its answer. The system points the model at a source (in this case, the search engine's index) and tells it to look there instead of making something up when the answer isn't in its training data. "You're telling the AI the answer is here, I just want you to find where," Das said. "You get the top 10 Google results, and you're telling the AI the answer is probably in here."
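In roughly sketched Python, that retrieval-augmented generation loop looks something like this, assuming a placeholder retrieve function standing in for the search index and a placeholder call_llm function standing in for whatever model sits behind the search tool:

```python
# Minimal retrieval-augmented generation sketch. retrieve() and call_llm()
# are hypothetical placeholders, not real library calls.

def retrieve(query: str, k: int = 10) -> list[str]:
    # In practice: return the top-k passages from the search engine's index.
    return [f"passage {i} relevant to '{query}'" for i in range(1, k + 1)]

def call_llm(prompt: str) -> str:
    # In practice: a real LLM call that writes the answer from the prompt.
    return "A grounded answer, with citations back to the retrieved passages."

def rag_answer(question: str) -> str:
    passages = retrieve(question)
    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using ONLY the passages below. "
        "If the answer isn't there, say you don't know.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(rag_answer("Where do I know CCH Pounder from?"))
```

The key move is in the prompt: the model is told to answer from the retrieved passages, not from whatever it happens to remember from training.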
Perplexity offers AI-powered search through its app and through a newly announced browser.
Stefani Reynolds/Bloomberg via Getty Images
Can you really trust AI search results?
These AI-powered search tools can be more reliable than a chatbot on its own, because they're pulling from current, relevant information and giving you links, but you still have to think critically about what they tell you. Here are some tips from the experts:
Bring your human skepticism
Consider how bad people are at telling when you're sarcastic on the internet. Then think about how bad a large language model might be at it. That's how Google's AI Overviews came up with the idea to put glue on pizza -- by pulling information from a humorous Reddit post and repeating it as if it were real culinary advice. "The AI doesn't know what is authentic and what is humorous," Das said. "It's going to treat all that information the same."
Remember to use your own judgment and look for the sources of the information. They might not be as accurate as the LLM thinks, and you don't want to make important life decisions based on somebody's joke on an internet forum that a robot thought was real.
AI can still make stuff up
Even though they're supposed to be pulling from search results, these tools can still make things up in the absence of good information. That's how AI Overviews started creating fake definitions for nonsensical sayings.
Retrieval-augmented generation might reduce the risk of outright hallucinations, but it doesn't eliminate it, according to Das. Remember that an LLM doesn't have a sense of what the right answer to a question is. "It's just predicting what is the next English word that would come after this previous stream of other English words or other language words," Das said. "It doesn't really have a concept of truthiness in that sense."
Check your sources
Traditional search engines are very hands-off. They will give you a list of websites that appear relevant to your search and let you decide whether you want to trust them. Because an AI search is consolidating and rewriting that information itself, it may not be obvious when it's using an untrustworthy source.
"Those systems are not going to be entirely error-free, but I think the challenge is that over time you will lose an ability to catch them," Levin said. "They will be very convincing and you will not know how to really go and verify, or you will think you don't need to go and verify."
You can, of course, check every source yourself. But that's exactly the kind of work you were probably hoping to avoid by using a system designed to save you time and effort.
"The problem is if you're going to do this analysis for every query you perform in ChatGPT, what is the purpose of ChatGPT?" Levin said.
