Correction or Not: This Artificial Intelligence (AI) Stock Is Worth Buying for the Long Haul


Yahoo | May 14, 2025
Alphabet put AI to good use decades before it was cool.
Ongoing innovation and massive resources should keep this company ahead in the AI race.
Investors who bought in early have seen huge long-term gains, and the stock still looks affordable.
10 stocks we like better than Alphabet ›
I thought of Alphabet (NASDAQ: GOOG) (NASDAQ: GOOGL) as an artificial intelligence (AI) specialist long before I saw it as a business or an investment idea.
The underlying Google organization launched its game-changing search engine in the late 1990s. I was studying information science and AI at the time (go Noles!) and was fascinated by Google's search engine. Older alternatives like Lycos, WebCrawler, and AltaVista could also deliver helpful search results, but only if you knew how to tweak your queries just right. It was a lot of work to design search strings like (Motley AND Fool AND investing) AND NOT (scam OR speculation), hoping to find exactly what I was looking for.
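The old hand-tuned boolean queries can be illustrated with a minimal sketch. This is not how Lycos, WebCrawler, or AltaVista actually worked internally; it just shows the AND/AND NOT matching logic that users had to encode by hand, using made-up example documents:

```python
# Minimal sketch of boolean keyword matching, in the spirit of queries like
# (Motley AND Fool AND investing) AND NOT (scam OR speculation).
# The documents below are invented examples, not real search results.
def matches(text, require, exclude):
    """True if text contains every required term and none of the excluded terms."""
    words = set(text.lower().split())
    return all(t in words for t in require) and not any(t in words for t in exclude)

docs = [
    "motley fool investing advice for beginners",
    "motley fool investing scam warning",
]
hits = [d for d in docs if matches(d, require=["motley", "fool", "investing"],
                                   exclude=["scam", "speculation"])]
# Only the first document survives: it has all required terms and no excluded ones.
```

The burden was entirely on the user: forget one operator and the results collapsed, which is exactly the friction Google's ranking removed.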
The magic of Google's search engine is that it went a step further. The search algorithm has become a meme nowadays as it steers web users in certain directions and content publishers strive to capture interest with various details.
But back then, it was a revelation to see Google's search tool anticipate what the user was really looking for. The top results were even ranked in a sensible way without detailed instructions. These unique qualities were later copied in some form by every serious rival. They were built on deep text analysis -- also known as machine learning or artificial intelligence.
Not much has changed after more than 25 years. Google kept improving its search engine, surrounded it with other AI-based tools such as Google Translate and the Google Maps navigation functions, and made AI easily available to anybody. Long before adopting the Alphabet moniker, Google was an AI expert for the masses.
So I wasn't surprised when the company had a large language model (LLM) ready to go just a few months after OpenAI released ChatGPT. If anything, I can't wait to see what Alphabet still keeps behind its AI labs' closed doors today.
Alphabet's Google arm remains unbeatable in the online search and advertising market -- to a large extent because of its longtime AI commitment. The Gemini LLM is also a leading ChatGPT challenger, and is already integrated into the popular Gmail and Google Docs tools. The classic Google Search experience got an AI mode in March 2025, too. The Gemini system is going places.
Google's AI competence is simply not up for discussion. I'm talking about a proven leader here, with an enormous amount of engineering and financial resources to throw behind the next big idea.
Google (and Alphabet) has been very kind to longtime investors. If you invested just $1,000 when Google hit the stock market in August 2004, that investment would be worth more than $63,700 on May 13, 2025.
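The growth figure above implies a compound annual return that is easy to back out. This back-of-envelope sketch uses the article's numbers plus Google's August 19, 2004 IPO date (the article only says "August 2004"):

```python
from datetime import date

# Back-of-envelope annualized return implied by the article's figures:
# $1,000 at Google's August 19, 2004 IPO growing to ~$63,700 by May 13, 2025.
years = (date(2025, 5, 13) - date(2004, 8, 19)).days / 365.25
cagr = (63_700 / 1_000) ** (1 / years) - 1  # compound annual growth rate
print(f"{years:.1f} years at roughly {cagr:.1%} per year")
```

A 63x gain over roughly two decades works out to compounding in the low-20% range per year, which is the context for the valuation discussion that follows.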
Still, the stock has never looked overvalued. Even now, about two and a half years into the ChatGPT-powered AI boom, Alphabet's valuation ratios look downright affordable. AI rivals like Microsoft (NASDAQ: MSFT) and Nvidia (NASDAQ: NVDA) can't hold a candle to Alphabet's value-investing appeal:
AI Stock     Market Capitalization   Price to Earnings (P/E)   Price to Sales (P/S)   Price to Free Cash Flow (P/FCF)
Alphabet     $1.95 trillion          17.8                      5.4                    26.0
Microsoft    $3.33 trillion          34.6                      12.3                   48.0
Nvidia       $3.19 trillion          44.4                      24.4                   52.4
Data collected from Finviz.com on May 13, 2025.
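Each ratio in the table is just market capitalization divided by a trailing fundamental, so the ratios can be inverted to back out the rough fundamentals they imply. A quick sketch using only the table's numbers (the derived figures are approximations, not reported financials):

```python
# Ratios from the table above: (market cap in $T, P/E, P/S, P/FCF).
# Inverting each ratio recovers the approximate trailing fundamental.
stocks = {
    "Alphabet":  (1.95, 17.8,  5.4, 26.0),
    "Microsoft": (3.33, 34.6, 12.3, 48.0),
    "Nvidia":    (3.19, 44.4, 24.4, 52.4),
}

for name, (cap_t, pe, ps, pfcf) in stocks.items():
    cap = cap_t * 1e12
    earnings = cap / pe    # implied trailing net income
    sales = cap / ps       # implied trailing revenue
    fcf = cap / pfcf       # implied trailing free cash flow
    print(f"{name}: earnings ~${earnings/1e9:.0f}B, "
          f"sales ~${sales/1e9:.0f}B, FCF ~${fcf/1e9:.0f}B")
```

For Alphabet, for example, a $1.95 trillion market cap at 26 times free cash flow implies roughly $75 billion of trailing free cash flow, which is the sense in which the stock "looks affordable" relative to peers trading at 48x and 52x.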
Alphabet's stock price could double and still compare favorably to Microsoft and Nvidia's valuation ratios. I'll agree that Nvidia has earned its premium price via unbeatable business growth, but Alphabet's sales and earnings are rising faster than Microsoft's. Is Alphabet's stock undervalued or Microsoft's overpriced? You be the judge.
Let's just say that I only own one of these two AI stocks, and my choice isn't headquartered in Redmond, Washington.
Alphabet has come a long way from the Stanford garage of its youth, and it's still a thrilling growth story. With or without broad market corrections along the way, I'm almost always a buyer of Alphabet's stock. It's only more tempting in times like these, as the stock trades 23% below February's all-time highs.
Before you buy stock in Alphabet, consider this:
The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Alphabet wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years.
Consider when Netflix made this list on December 17, 2004... if you invested $1,000 at the time of our recommendation, you'd have $613,951!* Or when Nvidia made this list on April 15, 2005... if you invested $1,000 at the time of our recommendation, you'd have $796,353!*
Now, it's worth noting Stock Advisor's total average return is 948% — a market-crushing outperformance compared to 170% for the S&P 500. Don't miss out on the latest top 10 list, available when you join Stock Advisor.
See the 10 stocks »
*Stock Advisor returns as of May 12, 2025
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Anders Bylund has positions in Alphabet and Nvidia. The Motley Fool has positions in and recommends Alphabet, Microsoft, and Nvidia. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.
Correction or Not: This Artificial Intelligence (AI) Stock Is Worth Buying for the Long Haul was originally published by The Motley Fool

Related Articles

Alphabet Inc. (GOOGL): 'This Stock Should Be Up Much More,' Says Jim Cramer (Yahoo)

We recently published . Alphabet Inc. (NASDAQ:GOOGL) is one of the stocks Jim Cramer recently discussed. Cramer regularly discussed tech mega-cap Alphabet ahead of its earnings. The firm's shares have reversed course in July and are up by 1.9% year-to-date, primarily due to July's 9.9% gain. Before the report, Cramer was explicit in sharing that he regretted selling Alphabet's stock. This time, he discussed the firm's businesses and shared that the stock should be higher after the earnings:

'[On earnings report] Yeah, look, cloud was important. I think the big focus is, frankly, that paid clicks picked up 4%. I mean, I was thinking paid clicks might be down. I was worried that I felt that this was the beginning of the erosion and the cannibalization versus Gemini. That was completely wrong. YouTube up 200 million. Really, really fantastic. . . . Look, the story here is this: the more chips that they get, the better they're doing. They have so much demand, I was quite surprised. This stock should be up much more than that.'

While we acknowledge the potential of GOOGL as an investment, our conviction lies in the belief that some AI stocks hold greater promise for delivering higher returns and have limited downside risk. If you are looking for an extremely cheap AI stock that is also a major beneficiary of Trump tariffs and onshoring, see our free report on the . READ NEXT: 30 Stocks That Should Double in 3 Years and 11 Hidden AI Stocks to Buy Right Now. Disclosure: None. This article is originally published at Insider Monkey.

OpenAI: ChatGPT Wants Legal Rights. You Need The Right To Be Forgotten. (Forbes)

As systems like ChatGPT move toward achieving legal privilege, the boundaries between identity, memory, and control are being redefined, often without consent.

When OpenAI CEO Sam Altman recently stated that conversations with ChatGPT should one day enjoy legal privilege, similar to those between a patient and a doctor or a client and a lawyer, he wasn't just referring to privacy. He was pointing toward a redefinition of the relationship between people and machines.

Legal privilege protects the confidentiality of certain relationships. What's said between a patient and physician, or a client and attorney, is shielded from subpoenas, court disclosures, and adversarial scrutiny. Extending that same protection to AI interactions means treating the machine not as a tool, but as a participant in a privileged exchange. This is more than a policy suggestion. It's a legal and philosophical shift with consequences no one has fully reckoned with.

It also comes at a time when the legal system is already being tested. In The New York Times' lawsuit against OpenAI, the paper has asked courts to compel the company to preserve all user prompts, including those the company says are deleted after 30 days. That request is under appeal. Meanwhile, Altman's suggestion that AI chats deserve legal shielding raises the question: if they're protected like therapy sessions, what does that make the system listening on the other side?

People are already treating AI like a confidant. According to Common Sense Media, three in four teens have used an AI chatbot, and over half say they trust the advice they receive at least somewhat. Many describe a growing reliance on these systems to process everything from school to relationships. Altman himself has called this emotional over-reliance 'really bad and dangerous.'

But it's not just teens. AI is being integrated into therapeutic apps, career coaching tools, HR systems, and even spiritual guidance platforms. In some healthcare environments, AI is being used to draft communications and interpret lab data before a doctor even sees it. These systems are present in decision-making loops, and their presence is being normalized.

This is how it begins. First, protect the conversation. Then, protect the system. What starts as a conversation about privacy quickly evolves into a framework centered on rights, autonomy, and standing.

We've seen this play out before. In U.S. law, corporations were gradually granted legal personhood, not because they were considered people, but because they acted as consistent legal entities that required protection and responsibility under the law. Over time, personhood became a useful legal fiction. Something similar may now be unfolding with AI—not because it is sentient, but because it interacts with humans in ways that mimic protected relationships. The law adapts to behavior, not just biology.

The Legal System Isn't Ready For What ChatGPT Is Proposing

There is no global consensus on how to regulate AI memory, consent, or interaction logs. The EU's AI Act introduces transparency mandates, but memory rights are still undefined. In the U.S., state-level data laws conflict, and no federal policy yet addresses what it means to interact with a memory-enabled AI. (See my recent Forbes piece on why AI regulation is effectively dead—and what businesses need to do instead.)

The physical location of a server is not just a technical detail. It's a legal trigger. A conversation stored on a server in California is subject to U.S. law. If it's routed through Frankfurt, it becomes subject to GDPR. When AI systems retain memory, context, and inferred consent, the server location effectively defines sovereignty over the interaction. That has implications for litigation, subpoenas, discovery, and privacy.

'I almost wish they'd go ahead and grant these AI systems legal personhood, as if they were therapists or clergy,' says technology attorney John Kheit. 'Because if they are, then all this passive data collection starts to look a lot like an illegal wiretap, which would thereby give humans privacy rights/protections when interacting with AI. It would also, then, require AI providers to disclose 'other parties to the conversation', i.e., that the provider is a mining party reading the data, and if advertisers are getting at the private conversations.'

Infrastructure choices are now geopolitical. They determine how AI systems behave under pressure and what recourse a user has when something goes wrong. And yet, underneath all of this is a deeper motive: monetization.

But they won't be the only ones asking questions. Every conversation becomes a four-party exchange: the user, the model, the platform's internal optimization engine, and the advertiser paying for access. It's entirely plausible for a prompt about the Pittsburgh Steelers to return a response that subtly inserts 'Buy Coke' mid-paragraph. Not because it's relevant—but because it's profitable.

Recent research shows users are significantly worse at detecting unlabeled advertising when it's embedded inside AI-generated content. Worse, these ads are initially rated as more trustworthy until users discover they are, in fact, ads. At that point, they're also rated as more manipulative.

'In experiential marketing, trust is everything,' says Jeff Boedges, Founder of Soho Experiential. 'You can't fake a relationship, and you can't exploit it without consequence. If AI systems are going to remember us, recommend things to us, or even influence us, we'd better know exactly what they remember and why. Otherwise, it's not personalization. It's manipulation.'

Now consider what happens when advertisers gain access to psychographic modeling: 'Which users are most emotionally vulnerable to this type of message?' becomes a viable, queryable prompt. And AI systems don't need to hand over spreadsheets to be valuable. With retrieval-augmented generation (RAG) and reinforcement learning from human feedback (RLHF), the model can shape language in real time based on prior sentiment, clickstream data, and fine-tuned advertiser objectives. This isn't hypothetical—it's how modern adtech already works.

At that point, the chatbot isn't a chatbot. It's a simulation environment for influence. It is trained to build trust, then designed to monetize it. Your behavioral patterns become the product. Your emotional response becomes the target for optimization. The business model is clear: black-boxed behavioral insight at scale, delivered through helpful design, hidden from oversight, and nearly impossible to detect.

We are entering a phase where machines will be granted protections without personhood, and influence without responsibility. If a user confesses to a crime during a legally privileged AI session, is the platform compelled to report it or remain silent? And who makes that decision? These are not edge cases. They are coming quickly. And they are coming at scale.

Why ChatGPT Must Remain A Model—and Why Humans Must Regain Consent

As generative AI systems evolve into persistent, adaptive participants in daily life, it becomes more important than ever to reassert a boundary: models must remain models. They cannot assume the legal, ethical, or sovereign status of a person quietly. And the humans generating the data that train these systems must retain explicit rights over their contributions.

What we need is a standardized, enforceable system of data contracting, one that allows individuals to knowingly, transparently, and voluntarily contribute data for a limited, mutually agreed-upon window of use. This contract must be clear on scope, duration, value exchange, and termination. And it must treat data ownership as immutable, even during active use.

That means: When a contract ends, or if a company violates its terms, the individual's data must, by law, be erased from the model, its training set, and any derivative products. 'Right to be forgotten' must mean what it says.

But to be credible, this system must work both ways: This isn't just about ethics. It's about enforceable, mutual accountability. The user experience must be seamless and scalable. The legal backend must be secure. And the result should be a new economic compact—where humans know when they're participating in AI development, and models are kept in their place.

ChatGPT Is Changing the Risk Surface. Here's How to Respond.

The shift toward AI systems as quasi-participants—not just tools—will reshape legal exposure, data governance, product liability, and customer trust. Whether you're building AI, integrating it into your workflows, or using it to interface with customers, here are five things you should be doing immediately:

ChatGPT May Get Privilege. You Should Get the Right to Be Forgotten.

This moment isn't just about what AI can do. It's about what your business is letting it do, what it remembers, and who gets access to that memory. Ignore that, and you're not just risking privacy violations, you're risking long-term brand trust and regulatory blowback.

At the very least, we need a legal framework that defines how AI memory is governed. Not as a priest, not as a doctor, and not as a partner, but perhaps as a witness. Something that stores information and can be examined when context demands it, with clear boundaries on access, deletion, and use.

The public conversation remains focused on privacy. But the fundamental shift is about control. And unless the legal and regulatory frameworks evolve rapidly, the terms of engagement will be set, not by policy or users, but by whoever owns the box.

Which is why, in the age of AI, the right to be forgotten may become the most valuable human right we have. Not just because your data could be used against you—but because your identity itself can now be captured, modeled, and monetized in ways that persist beyond your control. Your patterns, preferences, emotional triggers, and psychological fingerprints don't disappear when the session ends. They live on inside a system that never forgets, never sleeps, and never stops optimizing.

Without the ability to revoke access to your data, you don't just lose privacy. You lose leverage. You lose the ability to opt out of prediction. You lose control over how you're remembered, represented, and replicated. The right to be forgotten isn't about hiding. It's about sovereignty. And in a world where AI systems like ChatGPT will increasingly shape our choices, our identities, and our outcomes, the ability to walk away may be the last form of freedom that still belongs to you.

Anthropic's $150 Billion Goal: How Amazon and Alphabet Could Benefit From the AI Surge (Business Insider)

Artificial intelligence start-up Anthropic is in early talks to raise between $3 billion and $5 billion in a new funding round. The raise could push the company's valuation above $150 billion, according to the Financial Times. That would more than double its current $61.5 billion valuation, reached just a few months ago.

Anthropic Is Growing Fast

Anthropic is the company behind Claude, a large language model that competes with OpenAI's ChatGPT. It is backed by Amazon (AMZN) and Alphabet Inc. (GOOG), which have each committed billions in cloud credits and cash. Amazon has already invested up to $8 billion and is reportedly considering further investments to remain among Anthropic's largest shareholders.

This funding round comes as competition in artificial intelligence intensifies. OpenAI is preparing to launch GPT-5 and is working with SoftBank (SFTBY) on a separate raise that could bring in tens of billions of dollars. Meanwhile, Anthropic has quietly increased its annualized recurring revenue from $1 billion at the start of the year to over $4 billion, driven mainly by enterprise subscriptions.

Investors are closely watching the private AI space, but the real implications may lie with public companies that stand to benefit. Amazon and Alphabet have positioned themselves as infrastructure providers for leading model developers, such as Anthropic. A stronger Claude model, used widely in enterprise software and coding tools, could support growth in Amazon Web Services and Google Cloud revenue.

The Middle East Is Calling

The funding talks have also drawn interest from Middle Eastern sovereign wealth funds. Anthropic's leadership has expressed concerns internally about taking direct investment from the region, citing political risks. Even so, the company sold $500 million worth of shares to a fund linked to Abu Dhabi in 2023. A broader shift toward sovereign capital could influence how other private AI start-ups raise money.

Anthropic remains private for now, but its valuation jump and enterprise growth highlight how quickly the AI market is scaling. Investors seeking exposure are most likely to find it through public companies that enable and support that growth. Using TipRanks' Comparison Tool, we've brought Amazon and Google side by side and compared them to gain a broader look at Anthropic's two most notable backers.
