LLMs and AI Aren't the Same. Everything You Should Know About What's Behind Chatbots


CNET | 31-05-2025
Chances are you've heard the term "large language models," or LLMs, when people talk about generative AI. But LLMs aren't quite synonymous with brand-name chatbots like ChatGPT, Google Gemini, Microsoft Copilot, Meta AI and Anthropic's Claude.
These AI chatbots can produce impressive results, but they don't actually understand the meaning of words the way we do. Instead, they're the interface we use to interact with large language models. These underlying technologies are trained to recognize how words are used and which words frequently appear together, so they can predict future words, sentences or paragraphs. Understanding how LLMs work is key to understanding how AI works. And as AI becomes increasingly common in our daily online experiences, that's something you ought to know.
This is everything you need to know about LLMs and what they have to do with AI.
What is a language model?
You can think of a language model as a soothsayer for words.
"A language model is something that tries to predict what language looks like that humans produce," said Mark Riedl, professor in the Georgia Tech School of Interactive Computing and associate director of the Georgia Tech Machine Learning Center. "What makes something a language model is whether it can predict future words given previous words."
This is the basis of autocomplete functionality when you're texting, as well as of AI chatbots.
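That idea of "predicting future words given previous words" can be illustrated with a toy sketch: count which word most often follows each word in a tiny corpus, then use those counts to guess the next word. (The corpus and function names here are invented for illustration; real models use far more context than one previous word.)

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny made-up corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

A phone's autocomplete does something conceptually similar, just with vastly more data and context.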
What is a large language model?
A large language model is trained on vast amounts of text from a wide array of sources. These models are measured in what are known as "parameters."
So, what's a parameter?
Well, LLMs use neural networks, which are machine learning models that take an input and perform mathematical calculations to produce an output. The adjustable variables in these computations are the parameters. A large language model can have 1 billion parameters or more.
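To see where those counts come from, here is a back-of-the-envelope sketch: in a fully connected neural-network layer, every output unit has one weight per input plus a bias, so parameters add up quickly. (The layer sizes below are made up for illustration, not taken from any real model.)

```python
def dense_params(n_in, n_out):
    # A fully connected layer: one weight per (input, output) pair,
    # plus one bias per output unit.
    return n_in * n_out + n_out

# A toy three-layer network: 512 -> 2048 -> 2048 -> 512
layers = [(512, 2048), (2048, 2048), (2048, 512)]
total = sum(dense_params(n_in, n_out) for n_in, n_out in layers)
print(total)  # already millions of parameters for a tiny network
```

Even this toy network has over six million parameters; billion-parameter LLMs simply stack many more, much wider layers.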
"We know that they're large when they produce a full paragraph of coherent fluid text," Riedl said.
How do large language models learn?
LLMs learn via a core AI process called deep learning.
"It's a lot like when you teach a child -- you show a lot of examples," said Jason Alan Snyder, global CTO of ad agency Momentum Worldwide.
In other words, you feed the LLM a library of content (what's known as training data) such as books, articles, code and social media posts to help it understand how words are used in different contexts, and even the more subtle nuances of language. The data collection and training practices of AI companies are the subject of some controversy and some lawsuits. Publishers like The New York Times, artists and other content catalog owners are alleging tech companies have used their copyrighted material without the necessary permissions.
(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed on Ziff Davis copyrights in training and operating its AI systems.)
AI models digest far more than a person could ever read in their lifetime -- something on the order of trillions of tokens. Tokens help AI models break down and process text. You can think of an AI model as a reader who needs help. The model breaks down a sentence into smaller pieces, or tokens -- each roughly equivalent to four characters of English, or about three-quarters of a word -- so it can understand each piece and then the overall meaning.
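That four-characters-per-token figure is only a rule of thumb, but it makes a quick estimator easy to sketch (the function below is a rough approximation, not a real tokenizer):

```python
def estimate_tokens(text):
    # Rule of thumb from above: roughly 4 characters of English per token.
    # Real tokenizers split on learned subword units, so counts will differ.
    return max(1, round(len(text) / 4))

print(estimate_tokens("Large language models break text into tokens."))
```

Real systems use trained subword tokenizers, so actual counts vary with the vocabulary and the language of the text.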
From there, the LLM can analyze how words connect and determine which words often appear together.
"It's like building this giant map of word relationships," Snyder said. "And then it starts to be able to do this really fun, cool thing, and it predicts what the next word is … and it compares the prediction to the actual word in the data and adjusts the internal map based on its accuracy."
This prediction and adjustment happens billions of times, so the LLM is constantly refining its understanding of language and getting better at identifying patterns and predicting future words. It can even learn concepts and facts from the data to answer questions, generate creative text formats and translate languages. But LLMs don't understand the meaning of words the way we do -- all they know are statistical relationships.
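The predict-compare-adjust loop Snyder describes can be sketched in miniature: guess the next word from what the model has seen so far, check the guess against the word that actually follows, then update the internal map. (This toy uses simple counts rather than the gradient-based updates real neural networks use.)

```python
from collections import Counter, defaultdict

model = defaultdict(Counter)  # word -> counts of words seen after it

def train(words):
    """One pass of predict, compare, adjust; returns correct guesses."""
    hits = 0
    for prev, actual in zip(words, words[1:]):
        # Predict the next word from the map built so far.
        guess = model[prev].most_common(1)
        if guess and guess[0][0] == actual:
            hits += 1
        # Adjust: record what actually came next.
        model[prev][actual] += 1
    return hits

text = "the sun rises and the sun rises and the sun sets".split()
hits = train(text)
print(hits)  # predictions improve as the map fills in
```

Early in the pass the model guesses blindly; by the end it has seen enough repetition to predict most transitions correctly, which is the same dynamic that plays out billions of times during LLM training.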
LLMs also learn to improve their responses through reinforcement learning from human feedback.
"You get a judgment or a preference from humans on which response was better given the input that it was given," said Maarten Sap, assistant professor at the Language Technologies Institute at Carnegie Mellon University. "And then you can teach the model to improve its responses."
LLMs are good at handling some tasks but not others.
What do large language models do?
Given a series of input words, an LLM will predict the next word in a sequence.
For example, consider the phrase, "I went sailing on the deep blue..."
Most people would probably guess "sea" because sailing, deep and blue are all words we associate with the sea. In other words, each word sets up context for what should come next.
"These large language models, because they have a lot of parameters, can store a lot of patterns," Riedl said. "They are very good at being able to pick out these clues and make really, really good guesses at what comes next."
What are the different kinds of language models?
You might have heard of a few subcategories, like small, reasoning and open-source/open-weights models. Some of these models are multimodal, meaning they're trained not just on text but also on images, video and audio. They're all language models and perform the same functions, but there are some key differences you should know.
Is there such a thing as a small language model?
Yes. Tech companies like Microsoft have introduced smaller models that are designed to operate "on device" and not require the same computing resources that an LLM does, but nevertheless help users tap into the power of generative AI.
What are AI reasoning models?
Reasoning models are a kind of LLM. These models give you a peek behind the curtain at a chatbot's train of thought while answering your questions. You might have seen this process if you've used DeepSeek, a Chinese AI chatbot.
But what about open-source and open-weights models?
Still LLMs! These models are designed to be a bit more transparent about how they work. Open-source models let anyone see how the model was built, and they're typically available for anyone to customize and build on. Open-weights models give us some insight into how the model weighs specific characteristics when making decisions.
What do large language models do really well?
LLMs are very good at figuring out the connection between words and producing text that sounds natural.
"They take an input, which can often be a set of instructions, like 'Do this for me,' or 'Tell me about this,' or 'Summarize this,' and are able to extract those patterns out of the input and produce a long string of fluid response," Riedl said.
But they have several weaknesses.
Where do large language models struggle?
First, they're not good at telling the truth. In fact, they sometimes just make stuff up that sounds true, like when ChatGPT cited six fake court cases in a legal brief or when Google's Bard (the predecessor to Gemini) mistakenly credited the James Webb Space Telescope with taking the first pictures of a planet outside of our solar system. Those are known as hallucinations.
"They are extremely unreliable in the sense that they confabulate and make up things a lot," Sap said. "They're not trained or designed by any means to spit out anything truthful."
They also struggle with queries that are fundamentally different from anything they've encountered before. That's because they're focused on finding and responding to patterns.
A good example is a math problem with a unique set of numbers.
"It may not be able to do that calculation correctly because it's not really solving math," Riedl said. "It is trying to relate your math question to previous examples of math questions that it has seen before."
While they excel at predicting words, they're not good at predicting the future, which includes planning and decision-making.
"The idea of doing planning in the way that humans do it with … thinking about the different contingencies and alternatives and making choices, this seems to be a really hard roadblock for our current large language models right now," Riedl said.
Finally, they struggle with current events because their training data typically only goes up to a certain point in time and anything that happens after that isn't part of their knowledge base. Because they don't have the capacity to distinguish between what is factually true and what is likely, they can confidently provide incorrect information about current events.
They also don't interact with the world the way we do.
"This makes it difficult for them to grasp the nuances and complexities of current events that often require an understanding of context, social dynamics and real-world consequences," Snyder said.
How are LLMs integrated with search engines?
We're seeing retrieval capabilities evolve beyond what the models have been trained on, including connecting with search engines like Google so the models can conduct web searches and then feed those results into the LLM. This means they could better understand queries and provide responses that are more timely.
"This helps our linkage models stay current and up-to-date because they can actually look at new information on the internet and bring that in," Riedl said.
That was the goal, for instance, a while back with AI-powered Bing. Instead of tapping into search engines to enhance its responses, Microsoft looked to AI to improve its own search engine, in part by better understanding the true meaning behind consumer queries and better ranking the results for said queries. Last November, OpenAI introduced ChatGPT Search, with access to information from some news publishers.
But there are catches. Web search could make hallucinations worse without adequate fact-checking mechanisms in place. And LLMs would need to learn how to assess the reliability of web sources before citing them. Google learned that the hard way with the error-prone debut of its AI Overviews search results. The search company subsequently refined its AI Overviews results to reduce misleading or potentially dangerous summaries. But even recent reports have found that AI Overviews can't consistently tell you what year it is.
For more, check out our experts' list of AI essentials and the best chatbots for 2025.