Perplexity joins Anthropic and OpenAI in offering a $200-per-month subscription
The new Max subscription comes with unlimited monthly usage of Labs, the agentic creation tool Perplexity released this past May. People can use Labs to generate spreadsheets, presentations, web applications and more. Perplexity is also promising early access to new features, including Comet, a new web browser the company claims will be a "powerful thought partner for everything you do on the web." The company adds that Max subscribers will receive priority customer support, as well as access to top frontier models from partners like Anthropic and OpenAI.
Perplexity will continue to offer its existing Pro plan, which remains $20 per month. Admittedly, the company is courting a small demographic with the new subscription, noting that it's primarily designed for content designers, business strategists and academic researchers.
OpenAI was the first to open the floodgates for very expensive AI subscriptions when it began offering its ChatGPT Pro plan at the end of last year. Since then, Anthropic and Google have followed suit.

Related Articles


Forbes
From Existential Threat To Hope. A Philosopher's Guide To AI
The dark side of AI continues to reveal new faces. A few weeks ago, Geoffrey Hinton, Nobel laureate and former AI chief at Google, highlighted two ways in which AI poses an existential threat to humanity: by people misusing AI, and by AI becoming smarter than us. And this week OpenAI admitted that they don't know how to prevent ChatGPT from pushing people towards mania, psychosis and death.

At the same time, AI optimists keep stressing that it is only a matter of years before AI will solve scientific, environmental, health and social problems that humanity has been struggling with for ages. And when the United Nations kicks off its global summit on AI for Good next week, it is to gather AI experts from across the world to "identify innovative AI applications to solve global challenges."

But what if the discussion of AI's risks and opportunities, its dark and bright sides, and the good and bad ways to use technology is itself part of the existential threat we are facing?

Why AI For Good May Be A Bad Idea

When German philosopher Friedrich Nietzsche urged us to think beyond good and evil in his 1886 book of that title, he suggested that it is not what we identify, define, and decide to be 'good' that determines whether we succeed as humans. It is whether we manage to rise above our unquestioned ideas of what good looks like.

Labeling some AI products as human-centric or responsible might sound like a step in the right direction towards identifying and designing innovative AI applications to solve global challenges. But it also reinforces the idea that our future depends on how AI is designed, built and regulated rather than on how we live, learn and relate to technology. And by focusing on AI when thinking and talking about our future, rather than focusing on ourselves and how we exist and evolve as humans, we are not rising above our unquestioned ideas of what good looks like. Rather, we submit to the idea that permeates all technology: that good equals innovative, fast, and efficient.

To rise above our unquestioned ideas about the nature and impact of AI, we need to follow Nietzsche's lead. So, here it is: A Philosopher's Guide to AI.

1. Stop Thinking Of AI As A Tool

The first step towards shifting the focus from the development of AI to our evolution as humans is to question the widespread and constantly repeated idea that AI, like any other technology, is just a tool that can be used for good as well as evil. Inspired by Nietzsche and others who set the tradition of existential philosophy in motion, German philosopher Martin Heidegger put it like this: 'Everywhere we remain unfree and chained to technology, whether we passionately affirm or deny it. But we are delivered over to it in the worst possible way when we regard it as something neutral; for this conception of it, to which today we particularly like to pay homage, makes us utterly blind to the essence of technology.'
In The Question Concerning Technology from 1954, Heidegger argued that the essence of technology is to give man the illusion of being in control. When we think of technology as a tool that can be used for good as well as evil, we also think that we are in control of why, when, and for what it is used. But according to Heidegger this is only partly the case. We may make a decision to buy a car to drive ourselves to work. And thus we may think of the car as a means to achieve our goal of getting to work as fast as possible. But we never made the decision that fast is better than slow. It's an idea that comes with the car. So is the idea that it should be easy and convenient for us to get to work. And that fast, easy and convenient is more important than anything else. Like all other technologies, the car comes with a promise that we can achieve more by doing less. And like all other technologies, it makes us think that this is what life is and should be about.

But to rise above our unquestioned ideas, we must not only ask the questions we are encouraged to ask when faced with a new technology, like 'how does it work?', 'when can I use it?', and 'how much easier will it be to do X?' We must also ask the questions that the essence of technology discourages us from asking, like 'do I even need technology for this?', 'what does this technology prevent me from doing?', and 'what will my life be like if I trust technology to make everything easy?'

2. Take The History Of Technology Seriously

Heidegger made it clear that although different generations of technology have different ways of influencing human beings and behaviors, our fundamental purpose for using technology remains the same: to deal with the fact that we are limited creatures, thrown into this world without knowing why and for how long. Put differently, the question concerning technology is and always was existential. It's about who we are and what we become when we try to overcome our limitations. Ever since our early ancestors began using rocks and branches as tools and weapons, our relationship with technology has been at the heart of how we live, learn and evolve as humans. And more than anything else, it has shaped our understanding of ourselves and our relationship with our surroundings.

Living in the early days of the digital revolution, Heidegger didn't know that AI would have the impact it has today. Nor did he know that AI experts would talk about their inventions as posing an existential threat to humanity. But he distinguished between different generations of technology. And he suggested that humanity was moving toward a technological era of great existential significance.

[Illustration: how humans relate to technology across three technological eras.]

Having used pre-modern tools to survive and modern technology to thrive, we may not find it far-fetched that digital technology can help us transcend the limitations set by nature (see illustration above). However, by not realizing that our relationship with technology is existential, AI experts seem to have missed that AI was never just a tool to make us more productive, or to help us do 'good'. It was always also an expression of who we are and what we are becoming. And by building technology that distances itself from the limitations of nature, we also began to distance ourselves from our human nature. According to Heidegger, this distancing has been going on for centuries without any of us noticing it.
The widespread debate about AI as an existential threat is a sign that this is changing. And that AI may be the starting point for us humans to finally develop a more reflective and healthy relationship with technology.

3. Make Existential Hope A Joint Venture

Heidegger concludes The Question Concerning Technology by writing: 'The closer we come to the danger, the brighter the ways into the saving power begin to shine and the more questioning we become. For questioning is the piety of thought.'

While AI experts are calling for regulation, for AI development to be paused, and even for new philosophers to help them deal with the threat they see AI posing, hope shines from a completely different place than tech companies and regulators. 'Where?' you may ask. And that's just it. We are asking more existential questions about who we are, why we are here, and where we want to go as humanity than ever before. And with 'we', I don't mean philosophers, tech experts, and decision makers. I mean all of us, in all sorts of contexts, in all parts of the world.

There is something about AI that, unlike previous generations of technology, makes us ask the questions that the essence of technology has previously discouraged us from asking. Unlike modern technologies like cars and digital technologies like computers, AI is the subject of a widespread debate about what it is preventing us from doing and what our lives will be like if we trust it to make everything easy. And this instills hope. Existential hope that we still know, and are willing to do, what it takes to stay human. Even when it doesn't equal innovative, fast, and efficient.

Richard Fisher, a senior journalist with BBC Global News, defines existential hope as 'the opposite of existential catastrophe: It's the idea that there could be radical turns for the better, so long as we commit to bringing them to reality. Existential hope is not about escapism, utopias or pipe dreams, but about preparing the ground: making sure that opportunities for a better world don't pass us by.'

With A Philosopher's Guide to AI in hand, the questions we ask about AI offer a once-in-many-lifetimes opportunity for a better world. Let's make sure it doesn't pass us by!

Miami Herald
Mattel's AI deal raises fears of ‘real damage' to kids
I don't allow my children to play video games. They don't have smartwatches, iPads, or smartphones. And yet, nearly every child in my son's class has either a phone or a smartwatch. Is it hard to say no and explain that he can't have something every other kid has? Honestly, no, but only because we believe that's the right thing to do. I am not trying to be too strict or rigid. I support providing them with basic knowledge of technology, but there should be balance.

They should be able to play, really play, outside in the mud or inside with simple non-electric toys. That's when they actually use their minds, start to be creative, and feel the excitement of creating or discovering something new. "Playing is essential for human brain development, much more than cognitive functioning. Much more important than learning facts. It's play that helps the brain develop. We know that this is a scientific fact," explains Gabor Mate, a Canadian physician and an expert on trauma, addiction, stress, and childhood development.

Today it seems harder than ever to foster kids' healthy development, as we are surrounded by technology. And toy producers keep pushing upgrades that seem not only unnecessary, but sometimes even scary.

Barbie maker Mattel (MAT) has been making toys for 80 years, during which it has become one of the leading global toy manufacturers and the creator of franchises cherished by kids and families around the world. On June 12, the toy giant unveiled a strategic partnership with OpenAI, the company behind ChatGPT. The idea behind the collaboration is to "bring the magic of AI to age-appropriate play experiences, with an emphasis on innovation, privacy, and safety," reads the press release.

This raises the question: is there such a thing as safe interaction between a child and a chatbot? I sincerely doubt it.

"Each of our products and experiences is designed to inspire fans, entertain audiences, and enrich lives through play," Mattel Chief Franchise Officer Josh Silverman said. "AI has the power to expand on that mission and broaden the reach of our brands in new and exciting ways. Our work with OpenAI will enable us to leverage new technologies to solidify our leadership in innovation and reimagine new forms of play."

Entertain, inspire, excite, bring new forms of play? Do children really need new forms of play? From my experience, you can just give children a little bit of sand, water, small branches, and some rocks. That would make them more than happy, playing for hours. More importantly, it would force them to be creative and develop their own exciting games.

I am not alone in my concerns about the idea of ChatGPT-powered toys. Child welfare experts and advocacy groups such as Public Citizen are starting to warn about the potential dangers of this collaboration, writes The Independent. "Children do not have the cognitive capacity to distinguish fully between reality and play," Public Citizen Co-president Robert Weissman stated. Weissman noted that risks include "real damage" to children, undermining their social development and impairing their ability to form peer relationships.
Even adults have been known to develop dangerous "relationships" with AI chatbots. Why? Aarhus University psychiatry professor Søren Dinesen Østergaard explains that "the correspondence with AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there is a real person at the other end." Now imagine the still-developing child's brain in that position. OpenAI CEO Sam Altman recently said his company focuses on safety measures to protect vulnerable users.

Another important question for parents and caregivers: Is it worth it? Are there any actual benefits of ChatGPT-powered toys for children, or just potential risks? What's more, if simple iPads and smartphones are known to threaten children's development, what about a more powerful toy? Remember, playing is an important part of growing up and helps your child explore the world.

While Mate did not comment on Mattel or its OpenAI collaboration, he talked about the general dangers of children's dependence on technology. "We've deprived our children of play. This [showing smartphone] is not play," Mate said. "These kids with their iPads at one year old have been robbed of their capacity to play. And the companies design these gadgets to make these kids addicted." Mate concludes with the words of his friend, an endocrinologist: "What we have here is hacking of the American mind."

There are still no details on exactly how OpenAI's technology will be integrated into Mattel's new toys, but hopefully, it ends up being one of those experiments that gets scrapped before it even starts.
Yahoo
After Skyrocketing More Than 559% Over the Past Year, Can Oklo Stock Continue Powering Higher?
Oklo, a developer of small modular nuclear reactors, has seen its stock soar over the past year. Political support for the nuclear energy industry can benefit the company in the future, and the growth of data centers and radioisotope production are two trends that can contribute to Oklo's growth.

Amid the current renaissance in the nuclear energy industry, several stocks with exposure to this niche of the energy sector have logged considerable gains recently. Oklo (NYSE: OKLO), for example, has been on an absolute tear, soaring 559.6% over the past year as of this writing. And there are plenty of reasons to believe that the stock can continue to rocket even higher as enthusiasm for nuclear energy increases.

While some growth stocks may log multibagger returns in a single year thanks to merely one catalyst, Oklo's gains stem from several factors. Investors bid the stock higher in late 2024 when the company announced that it had received letters of intent from two data center customers for the deployment of its Aurora powerhouse small modular reactors. In total, the potential deals could provide up to 750 megawatts in capacity across the United States. The company also announced a nonbinding agreement with Switch, a company that provides artificial intelligence (AI), cloud, and enterprise data centers, to deploy 12 gigawatts in Aurora powerhouse projects through 2044.

The start of 2025 also proved to be fruitful for the stock. With Sam Altman's OpenAI announcing the Stargate Project in January, investors raced to purchase Oklo, recognizing that the OpenAI plan to develop data center infrastructure could be a potential boon for the company. More recently, the executive orders that President Donald Trump signed in May aimed at reinvigorating the nation's nuclear energy industry represented another catalyst for the stock. After decades of Washington's disinterest in developing the nuclear industry, the Trump administration is clearly enthusiastic about its potential.

For prospective investors or current shareholders, it's reasonable to question whether the stock can continue its meteoric rise. Simply put, the answer is a resounding yes. With the extraordinary computing demands that generative AI is placing on data centers, AI companies are investing heavily in data center infrastructure. Research from Dell'Oro Group estimates that global spending on data centers will soar from $430 billion in 2024 to $1.1 trillion by 2029. The interest that Oklo received last year from these developers will very likely extend through 2025 and beyond, helping to push the stock higher.

The political goodwill toward the nuclear industry will also benefit the stock. In early June, for example, Oklo notched another victory when the U.S. Nuclear Regulatory Commission agreed to review a report from the company, a step toward regulatory approval for licensing operators of the company's Aurora powerhouse.

Oklo's progress with its subsidiary Atomic Alchemy represents another factor that can lift the stock. In June, work began at a planned radioisotope production facility in Idaho, just one of what management expects will be numerous projects that will expand its capabilities in commercial radioisotope production. While not as widely discussed as data centers and AI, the market for radioisotope production is expected to experience notable growth in the coming years.
According to Credence Research, the market is projected to soar at an 89.7% compound annual growth rate, from about $5.68 billion in 2024 to $953 billion in 2032.

Before clicking the buy button on the stock, it's imperative for potential investors to recognize that there are bound to be bumps in the road. Disrupting an industry doesn't come without some volatility, and there's certainly no guarantee that Oklo will prosper as management imagines it will. So, only investors comfortable with the inherent risks should consider a position. Those who have the resolve to stick with Oklo through the ups and downs may find themselves looking at a stock that provides a considerable return.

Scott Levine has no position in any of the stocks mentioned. The Motley Fool has no position in any of the stocks mentioned. The Motley Fool has a disclosure policy. This article was originally published by The Motley Fool.