AI stocks look 'eerily similar' to the dot-com craze, warns CIO overseeing $15 billion. Invest in this 'boring' corner of the market instead.

Business Insider | 10 hours ago
The intoxicating buzz around artificial intelligence stocks over the last few years looks concerningly like the dot-com bubble, top investor Richard Bernstein warns.
The CIO at $15 billion Richard Bernstein Advisors wrote in a June 30 post that the AI trade is starting to look rich, and that it may be time for investors to turn their attention toward a more "boring" corner of the market: dividend stocks.
"Investors seem universally focused on 'AI' which seems eerily similar to the '.com' stocks of the Technology Bubble and the 'tronics' craze of the 1960s. Meanwhile, we see lots of attractive, admittedly boring, dividend-paying themes," Bernstein wrote.
Since ChatGPT hit the market in November 2022, the S&P 500 and Nasdaq 100 have risen 54% and 90%, respectively. Valuations, by some measures, have surged back toward record highs, rivaling levels seen during the dot-com bubble and the 1929 peak.
While Bernstein said he's not calling a top, trades eventually go the other way, and the best time to invest in something is when it's out of favor — not when a major rally has already occurred.
"At the beginning of a bull market when momentum and beta strategies are by definition most rewarded, investors' fears leads them to emphasize dividends and lower-beta equities," he wrote. "In later-cycle periods when dividends and lower beta become more attractive, investors' confidence leads them to risk-taking and momentum investing."
"We clearly are not at the beginning of a bull market and, as we've previously written, the profits cycle is starting to decelerate," he added.
That's why dividend stocks could be ripe for appreciation, Bernstein said. He especially likes utilities stocks, which are known for issuing dividends.
Dividends are payments that a company sends to shareholders on a regular basis (usually quarterly). Investors can take them as income or reinvest them in the stock; reinvested dividends buy additional shares, so the position compounds over time.
When considering compounding returns, dividend stocks actually hold their own against high-flying tech stocks, Bernstein said.
"One of the easiest methods for building wealth has historically been the power of compounding dividends. Compounding dividends is boring as all get out, but it's been highly successful through time."
"In fact, compounding dividend income has been so successful, that the Dow Jones Utilities Index's returns have been roughly neck-and-neck with NASDAQ returns since NASDAQ's inception in 1971!"
Investors can gain broad exposure to dividend stocks through funds like the SPDR S&P Dividend ETF (SDY), Vanguard Dividend Appreciation ETF (VIG), and more.

Related Articles

AI chatbots oversimplify scientific studies and gloss over critical details — the newest models are especially guilty

Yahoo | an hour ago

Large language models (LLMs) are becoming less "intelligent" in each new version as they oversimplify and, in some cases, misrepresent important scientific and medical findings, a new study has found.

Scientists discovered that versions of ChatGPT, Llama and DeepSeek were five times more likely to oversimplify scientific findings than human experts in an analysis of 4,900 summaries of research papers. When given a prompt for accuracy, chatbots were twice as likely to overgeneralize findings as when prompted for a simple summary. The testing also revealed an increase in overgeneralizations among newer chatbot versions compared to previous generations. The researchers published their findings April 30 in the journal Royal Society Open Science.

"I think one of the biggest challenges is that generalization can seem benign, or even helpful, until you realize it's changed the meaning of the original research," study author Uwe Peters, a postdoctoral researcher at the University of Bonn in Germany, wrote in an email to Live Science. "What we add here is a systematic method for detecting when models generalize beyond what's warranted in the original text."

It's like a photocopier with a broken lens that makes each subsequent copy bigger and bolder than the original. LLMs filter information through a series of computational layers, and along the way some information can be lost or change meaning in subtle ways. This is especially true with scientific studies, since scientists must frequently include qualifications, context and limitations in their research results, which makes providing a simple yet accurate summary of the findings difficult.

"Earlier LLMs were more likely to avoid answering difficult questions, whereas newer, larger, and more instructible models, instead of refusing to answer, often produced misleadingly authoritative yet flawed responses," the researchers wrote.

Related: AI is just as overconfident and biased as humans can be, study shows

In one example from the study, DeepSeek produced a medical recommendation in one summary by changing the phrase "was safe and could be performed successfully" to "is a safe and effective treatment option." Another test in the study showed Llama broadened the scope of effectiveness for a drug treating type 2 diabetes in young people by eliminating information about the dosage, frequency, and effects of the medication. If published, such a chatbot-generated summary could cause medical professionals to prescribe drugs outside of their effective parameters.

In the new study, researchers worked to answer three questions about 10 of the most popular LLMs (four versions of ChatGPT, three versions of Claude, two versions of Llama, and one of DeepSeek). They wanted to see whether, when presented with a human summary of an academic journal article and prompted to summarize it, the LLM would overgeneralize the summary and, if so, whether asking it for a more accurate answer would yield a better result. The team also aimed to find out whether the LLMs would overgeneralize more than humans do.

The findings revealed that LLMs given a prompt for accuracy — with the exception of Claude, which performed well on all testing criteria — were twice as likely to produce overgeneralized results. LLM summaries were nearly five times more likely than human-generated summaries to render generalized conclusions.
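To illustrate the kind of shift the researchers describe, in which a hedged, past-tense finding becomes a generic present-tense claim, here is a toy Python check. It is not the study's method; the marker lists and the `looks_overgeneralized` helper are assumptions invented for this example.

```python
# Illustrative toy check (not the study's actual protocol): flag summaries that
# drop hedging present in the original text and add generic present-tense claims.
# HEDGES and GENERIC are assumed marker lists chosen only for this sketch.

HEDGES = ("was", "were", "could", "might", "may have", "in this sample", "in this trial")
GENERIC = ("is a", "are a", "is an effective", "is safe", "works", "should be used")

def looks_overgeneralized(original: str, summary: str) -> bool:
    """Return True if the summary loses the original's hedging and
    introduces generic present-tense phrasing."""
    orig, summ = original.lower(), summary.lower()
    hedged_original = any(h in orig for h in HEDGES)
    hedges_dropped = not any(h in summ for h in HEDGES)
    generic_added = any(g in summ for g in GENERIC)
    return hedged_original and hedges_dropped and generic_added

# Based on the DeepSeek example reported in the article.
original = "The treatment was safe and could be performed successfully in this trial."
summary = "The treatment is a safe and effective treatment option."
print(looks_overgeneralized(original, summary))  # True
```

A real evaluation would rely on far more robust linguistic analysis; this sketch only makes the tense-and-hedging shift visible.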
The researchers also noted that the most common overgeneralizations, and the ones most likely to create unsafe treatment recommendations, occurred when LLMs turned quantified data into generic claims. These transitions and overgeneralizations have led to biases, according to experts at the intersection of AI and healthcare.

"This study highlights that biases can also take more subtle forms — like the quiet inflation of a claim's scope," Max Rollwage, vice president of AI and research at Limbic, a clinical mental health AI technology company, told Live Science in an email. "In domains like medicine, LLM summarization is already a routine part of workflows. That makes it even more important to examine how these systems perform and whether their outputs can be trusted to represent the original evidence faithfully."

Such discoveries should prompt developers to create workflow guardrails that identify oversimplifications and omissions of critical information before putting findings into the hands of public or professional groups, Rollwage said.

While comprehensive, the study had limitations; future studies would benefit from extending the testing to other scientific tasks and non-English texts, as well as from testing which types of scientific claims are more subject to overgeneralization, said Patricia Thaine, co-founder and CEO of Private AI, an AI development company.

Rollwage also noted that "a deeper prompt engineering analysis might have improved or clarified results," while Peters sees larger risks on the horizon as our dependence on chatbots grows.

"Tools like ChatGPT, Claude and DeepSeek are increasingly part of how people understand scientific findings," he wrote. "As their usage continues to grow, this poses a real risk of large-scale misinterpretation of science at a moment when public trust and scientific literacy are already under pressure."

RELATED STORIES
—Cutting-edge AI models from OpenAI and DeepSeek undergo 'complete collapse' when problems get too difficult, study reveals
—'Foolhardy at best, and deceptive and dangerous at worst': Don't believe the hype — here's why artificial general intelligence isn't what the billionaires tell you it is
—Current AI models a 'dead end' for human-level intelligence, scientists agree

For other experts in the field, the deeper problem is that general-purpose models are being applied to specialized domains without the expert knowledge and safeguards those domains require.

"Models are trained on simplified science journalism rather than, or in addition to, primary sources, inheriting those oversimplifications," Thaine wrote to Live Science. "But, importantly, we're applying general-purpose models to specialized domains without appropriate expert oversight, which is a fundamental misuse of the technology which often requires more task-specific training."

In December 2024, Future Publishing agreed to a deal with OpenAI in which the AI company would bring content from Future's 200-plus media brands to OpenAI's users.

Is ChatGPT killing higher education?

Yahoo | an hour ago

What's the point of college if no one's actually doing the work? It's not a rhetorical question. More and more students are not doing the work. They're offloading their essays, their homework, even their exams, to AI tools like ChatGPT or Claude. These are not just study aids. They're doing everything. We're living in a cheating utopia — and professors know it. It's becoming increasingly common, and faculty are either too burned out or unsupported to do anything about it. And even if they wanted to do something, it's not clear that there's anything to be done at this point. So what are we doing here?

James Walsh is a features writer for New York magazine's Intelligencer and the author of the most unsettling piece I've read about the impact of AI on higher education. Walsh spent months talking to students and professors who are living through this moment, and what he found isn't just a story about cheating. It's a story about ambivalence and disillusionment and despair. A story about what happens when technology moves faster than our institutions can adapt.

I invited Walsh onto The Gray Area to talk about what all of this means, not just for the future of college but the future of writing and thinking. As always, there's much more in the full podcast, so listen and follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you find podcasts. New episodes drop every Monday. This interview has been edited for length and clarity.

Let's talk about how students are cheating today. How are they using these tools? What's the process look like?

It depends on the type of student, the type of class, the type of school you're going to. Whether or not a student can get away with that is a different question, but there are plenty of students who are taking their prompt from their professor, copying and pasting it into ChatGPT and saying, "I need a four to five-page essay," and copying and pasting that essay without ever reading it.

One of the funniest examples I came across is that a number of professors are using this so-called Trojan horse method where they're dropping non-sequiturs into their prompts. They mention broccoli or Dua Lipa, or they say something about Finland in the essay prompts just to see if people are copying and pasting the prompts into ChatGPT. If they are, ChatGPT or whatever LLM they're using will say something random about broccoli or Dua Lipa. Unless you're incredibly lazy, it takes just a little effort to cover that up.

Every professor I spoke to said, "So many of my students are using AI and I know that so many more students are using it and I have no idea," because it can essentially write 70 percent of your essay for you, and if you do that other 30 percent to cover all your tracks and make it your own, it can write you a pretty good essay.

And there are these platforms, these AI detectors, and there's a big debate about how effective they are. They will scan an essay and assign some grade, say a 70 percent chance that this is AI-generated. And that's really just looking at the language and deciding whether or not that language is created by an LLM. But it doesn't account for big ideas. It doesn't catch the students who are using AI and saying, "What should I write this essay about?" and not doing the actual thinking themselves and then just writing. It's like paint by numbers at that point.

Did you find that students are relating very differently to all of this? What was the general vibe you got?

It was a pretty wide perspective on AI.
I spoke to a student at the University of Wisconsin who said, "I realized AI was a problem last fall, walking into the library and at least half of the students were using ChatGPT." And it was at that moment that she started thinking about her classroom discussions and some of the essays she was reading.

The one example she gave that really stuck with me was that she was taking some psych class, and they were talking about attachment theories. She was like, "Attachment theory is something that we should all be able to talk about [from] our own personal experiences. We all have our own attachment theory. We can talk about our relationships with our parents. That should be a great class discussion. And yet I'm sitting here in class and people are referencing studies that we haven't even covered in class, and it just makes for a really boring and unfulfilling class." That was the realization for her that something is really wrong.

So there are students like that. And then there are students who feel like they have to use AI because if they're not using AI, they're at a disadvantage. Not only that, AI is going to be around no matter what for the rest of their lives. So they feel as if college, to some extent now, is about training them to use AI.

What's the general professor's perspective on this? They seem to all share something pretty close to despair.

Yes. Those are primarily the professors in writing-heavy classes or computer science classes. There were professors who I spoke to who actually were really bullish on AI. I spoke to one professor who doesn't appear in the piece, but she is at UCLA and she teaches comparative literature, and used AI to create her entire textbook for this class this semester. And she says it's the best class she's ever had. So I think there are some people who are optimistic, [but] she was an outlier in terms of the professors I spoke to.

For the most part, professors were, yes, in despair. They don't know how to police AI usage. And even when they know an essay is AI-generated, the recourse there is really thorny. If you're going to accuse a student of using AI, there's no real good way to prove it. And students know this, so they can always deny, deny, deny. And the sheer volume of AI-generated essays or paragraphs is overwhelming. So that, just on the surface level, is extremely frustrating and has a lot of professors down.

Now, if we zoom out and think also about education in general, this raises a lot of really uncomfortable questions for teachers and administrators about the value of each assignment and the value of the degree in general.

How many professors do you think are now just having AI write their lectures?

There's been a little reporting on this. I don't know how many are. I know that there are a lot of platforms that are advertising themselves or asking professors to use them more, not just to write lectures, but to grade papers, which of course, as I say in the piece, opens up the very real possibility that right now an AI is grading itself and offering comments on an essay that it wrote. And this is pretty widespread stuff. There are plenty of universities across the country offering teachers this technology. And students love to talk about catching their professors using AI.

I've spoken to another couple of professors who are like, I'm nearing retirement, so it's not my problem, and good luck figuring it out, younger generation. I just don't think people outside of academia realize what a seismic change is coming.
This is something that we're all going to have to deal with professionally. And it's happening much, much faster than anyone anticipated. I spoke with somebody who works on education at Anthropic, who said, "We expected students to be early adopters and use it a lot. We did not realize how many students would be using it and how often they would be using it."

Is it your sense that a lot of university administrators are incentivized to not look at this too closely, that it's better for business to shove it aside?

I do think there's a vein of AI optimism among a certain type of person, a certain generation, who saw the tech boom and thought, I missed out on that wave, and now I want to adopt. I want to be part of this new wave, this future, this inevitable future that's coming. They want to adopt the technology and aren't really picking up on how dangerous it might be.

I used to teach at a university. I still know a lot of people in that world. A lot of them tell me that they feel very much on their own with this, that the administrators are pretty much just saying ... And I think it's revealing that university admins were quickly able, during Covid, for instance, to implement drastic institutional changes to respond to that, but they're much more content to let the whole AI thing play out.

I think they were super responsive to Covid because it was a threat to the bottom line. They needed to keep the operation running. AI, on the other hand, doesn't threaten the bottom line in that way, or at least it doesn't yet. AI is a massive, potentially extinction-level threat to the very idea of higher education, but they seem more comfortable with a degraded education as long as the tuition checks are still cashing. Do you think I'm being too harsh?

I genuinely don't think that's too harsh. I think administrators may not fully appreciate the power of AI and exactly what's happening in the classroom and how prevalent it is. I did speak with many professors who go to administrators, or even just older teachers, TAs going to professors and saying, This is a problem. I spoke to one TA at a writing course at Iowa who went to his professor, and the professor said, "Just grade it like it was any other paper."

I think they're just turning a blind eye to it. And that is one of the ways AI is exposing the rot underneath education. It's this system that hasn't been updated in forever. And in the case of the US higher ed system, it's like, yeah, for a long time it's been this transactional experience. You pay X amount of dollars, tens of thousands of dollars, and you get your degree. And what happens in between is not as important.

The universities, in many cases, also have partnerships with AI companies, right?

Right. And what you said about universities can also be said about AI companies. For the most part, these are companies, or companies within nonprofits, that are trying to capture customers.

One of the more dystopian moments was when we were finishing this story, getting ready to completely close it, and I got a push alert that was like, "Google is letting parents know that they have created a chatbot for children under [thirteen years old]." And it was kind of a disturbing experience, but they are trying to capture these younger customers and build this loyalty.

There's been reporting from the Wall Street Journal on OpenAI and how they have been sitting on an AI that would be really, really effective at essentially watermarking their output.
And they've been sitting on it, they have not released it, and you have to wonder why. And you have to imagine they know that students are using it, and in terms of building loyalty, an AI detector might not be the best thing for their brand.

This is a good time to ask the obligatory question: People have always panicked about new technologies. Hell, Socrates panicked about the written word. How do we know this isn't just another moral panic?

I think there's a lot of different ways we could respond to that. It's not a generational moral panic. This is a tool that's available, and it's available to us just as it's available to students. Society and our culture will decide what the morals are. And that is changing, and the way that the definition of cheating is changing. So who knows? It might be a moral panic today, and it won't be in a year.

However, I think somebody like Sam Altman, the CEO of OpenAI, is one of the people who said, "This is a calculator for words." And I just don't really understand how that is compatible with other statements he's made about AI potentially being lights out for humanity, or statements made by people at Anthropic about the power of AI to potentially be a catastrophic event for humans. And these are the people who are closest and thinking about it the most, of course.

I have spoken to some people who say there is a possibility, and I think there are people who use AI who would back this up, that we've maxed out the AI's potential to supplement essays or writing. That it might not get much better than it is now. And I think that's a very long shot, one that I would not want to bank on.

Is your biggest fear at this point that we are hurtling toward a post-literate society?

I would argue, if we are post-literate, then we're also post-thinking. It's a very scary thought that I try not to dwell in — the idea that my profession and what I'm doing is just feeding the machine, that my most important reader now is a robot, and that there's going to be fewer and fewer readers is really scary, not just because of subscriptions, but because, as you said, that means fewer and fewer people thinking and engaging with these ideas.

I think ideas can certainly be expressed in other mediums and that's exciting, but I don't think anybody who's paid attention to the way technology has shaped teen brains over the past decade and a half is thinking, Yeah, we need more of that. And the technology we're talking about now is orders of magnitude more powerful than the algorithms on Instagram.

Listen to the rest of the conversation and be sure to follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you listen to podcasts.

Here's Why This Nvidia Partner's Stock Surged in June

Yahoo | 2 hours ago

A plethora of anecdotal evidence, management commentary, and Nvidia's earnings report suggest spending on data center equipment remains in robust growth mode, and the AI application market is still in its early innings.

Shares in data center equipment company Vertiv (NYSE: VRT) surged by 19% in June, according to data provided by S&P Global Market Intelligence. The move comes as the market's concerns over a slowdown in data center capital spending have been assuaged by positive news flow from companies like Vertiv's partner, data center architecture company Nvidia (NASDAQ: NVDA).

No one likes piling in at the top of a bull market, especially not in the technology sector, where the cyclicality of boom and bust seems to prevail. For the bears, the boom in data center and artificial intelligence (AI)-related spending is set for a natural correction. Additionally, the uncertainty created by the tariff conflict is perceived as potentially causing a pause in spending on data centers.

For the bulls, spending on data centers is still in its early innings. In addition, the need for data center capacity to catch up with the demand trend in AI application-related spending will boost spending for many years to come.

In terms of Vertiv, the bears will point to the company's flat year-over-year order book in the fourth quarter of 2024 as an early indication of stress down the line. Meanwhile, the bulls will point to the fact that Vertiv raised its full-year 2025 sales guidance to a range of $9.325 billion to $9.575 billion, compared to a prior range of $9.125 billion to $9.275 billion, as announced in the company's first-quarter earnings presentation in April.

To stay informed about developments in the data center and AI market, it's a good idea to follow the perspectives of hyperscalers and market service providers, as well as examine anecdotal statements and trading conditions. As previously discussed, a host of information suggests that the market remains strong. In addition, Nvidia's fiscal 2026 first-quarter earnings report and management commentary indicated that underlying growth in the market is excellent. For example, Nvidia's data center-related revenue rose by 73% year over year in the quarter.

Moreover, it's essential to note that while companies like Nvidia are subject to export control regulations on their technology sales to China, Vertiv's power, cooling, and infrastructure solutions for data centers don't face the same kind of restrictions. In fact, the major problem Vertiv faces from the tariff conflict is rising costs associated with products sourced from Mexico and China, areas where management is already rebalancing to mitigate risk.

All told, the confirmation of ongoing growth in Vertiv's end markets was a positive for the stock in June, and it remains a great way to play the data center spending boom.

Before you buy stock in Vertiv, consider this: The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Vertiv wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years.

Consider when Netflix made this list on December 17, 2004... if you invested $1,000 at the time of our recommendation, you'd have $699,558!* Or when Nvidia made this list on April 15, 2005...
if you invested $1,000 at the time of our recommendation, you'd have $976,677!*

Now, it's worth noting Stock Advisor's total average return is 1,060% — a market-crushing outperformance compared to 180% for the S&P 500. Don't miss out on the latest top 10 list, available when you join Stock Advisor.

*Stock Advisor returns as of June 30, 2025

Lee Samaha has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Nvidia. The Motley Fool has a disclosure policy.

Here's Why This Nvidia Partner's Stock Surged in June was originally published by The Motley Fool.
