Latest news with #GPT-3.5
Yahoo
14-06-2025
- Business
- Yahoo
Get All the Best AI Tools in One Place for $40
The following content is brought to you by PCMag partners. If you buy a product featured here, we may earn an affiliate commission or other compensation.

This all-in-one AI app brings together top-tier AI models and essential tools in one unified platform. Designed for content creators, marketers, freelancers, and small business owners, it eliminates the need to juggle multiple AI subscriptions. For a one-time price of $39.99, users get access to chat assistants from OpenAI (GPT-4o, GPT-3.5, GPT-4 Turbo), Claude 3, Gemini Pro, Mistral, and Meta's Llama 3, among others. It's a powerful chat engine built for fast content creation, support tasks, and creative brainstorming.

You'll also get a full suite of writing tools: keyword research, blog post generation, content rewriting, summarizing, and social media commenting, complete with brand voice adaptation. On the visual side, the platform supports image generation, background removal, upscaling, and text or object deletion. Users can also edit, convert, and interact with PDFs, including summarizing and translating documents.

The app offers robust AI tools for audio and video, too: create text-to-speech narrations, transcribe audio, and even enhance or edit video content, all under one roof. It's a complete solution for creators and teams looking to streamline workflows.

Get access for just $39.99 (reg. $234) while you can. Prices are subject to change. PCMag editors select and review products independently. If you buy through StackSocial affiliate links, we may earn commissions, which help support our testing.


Gizmodo
05-06-2025
- General
- Gizmodo
Things Humans Still Do Better Than AI: Understanding Flowers
While it might feel as though artificial intelligence is getting dangerously smart, there are still some basic concepts that AI doesn't comprehend as well as humans do. Back in March, we reported that popular large language models (LLMs) struggle to tell time and interpret calendars. Now, a study published earlier this week in Nature Human Behaviour reveals that AI tools like ChatGPT also fail to understand familiar concepts, such as flowers, as well as humans do. According to the paper, accurately representing physical concepts is challenging for machine-learning models trained solely on text, and sometimes images.

'A large language model can't smell a rose, touch the petals of a daisy or walk through a field of wildflowers,' Qihui Xu, lead author of the study and a postdoctoral researcher in psychology at Ohio State University, said in a university statement. 'Without those sensory and motor experiences, it can't truly represent what a flower is in all its richness. The same is true of some other human concepts.'

The team tested humans and four AI models (OpenAI's GPT-3.5 and GPT-4, and Google's PaLM and Gemini) on their conceptual understanding of 4,442 words, including terms like flower, hoof, humorous, and swing. Xu and her colleagues compared the outcomes to two standard psycholinguistic ratings: the Glasgow Norms (ratings of words based on feelings such as arousal, dominance, and familiarity) and the Lancaster Norms (ratings of words based on sensory perceptions and bodily actions). The Glasgow Norms approach saw the researchers asking questions like how emotionally arousing a flower is, and how easy it is to imagine one. The Lancaster Norms, on the other hand, involved questions including how much one can experience a flower through smell, and how much a person can experience a flower with their torso.

Compared to humans, LLMs demonstrated a strong understanding of words without sensorimotor associations (concepts like 'justice'), but they struggled with words linked to physical concepts (like 'flower,' which we can see, smell, touch, etc.). The reason is straightforward: ChatGPT doesn't have eyes, a nose, or sensory neurons (yet), so it can't learn through those senses. The best it can do is approximate, even though LLMs train on more text than a person experiences in an entire lifetime, Xu explained.

'From the intense aroma of a flower, the vivid silky touch when we caress petals, to the profound visual aesthetic sensation, human representation of 'flower' binds these diverse experiences and interactions into a coherent category,' the researchers wrote in the study. 'This type of associative perceptual learning, where a concept becomes a nexus of interconnected meanings and sensation strengths, may be difficult to achieve through language alone.' In fact, the LLMs trained on both text and images demonstrated a better understanding of visual concepts than their text-only counterparts.

That's not to say, however, that AI will forever be limited to language and visual information. LLMs are constantly improving, and they might one day be able to better represent physical concepts via sensorimotor data and/or robotics, according to Xu. She and her colleagues' research carries important implications for AI-human interactions, which are becoming increasingly (and, let's be honest, worryingly) intimate. For now, however, one thing is certain: 'The human experience is far richer than words alone can hold,' Xu concluded.
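The study's core method is simple to picture in code: ask a model to rate each word on a Lancaster-style scale, then correlate its ratings with the human norms. Below is a minimal sketch of that idea in Python, assuming the `openai` client; the word list, prompt wording, scale, and 'human' ratings are illustrative placeholders, not the study's actual materials.

```python
# Sketch: elicit Lancaster-style sensorimotor ratings from an LLM and
# correlate them with human norms. All words, prompts, and "human"
# values here are illustrative placeholders, not the study's materials.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder human ratings (0-5) for "how much can you experience this
# word through smell", in the style of the Lancaster Norms.
human_smell = {"flower": 4.6, "bread": 4.2, "justice": 0.3, "theory": 0.2}

def model_rating(word: str) -> float:
    """Ask the model for a 0-5 smell rating and parse the bare number."""
    prompt = (
        f"On a scale from 0 to 5, to what extent can you experience "
        f"'{word}' through the sense of smell? Answer with a single number."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # arbitrary choice; the study used GPT-3.5/4, PaLM, Gemini
        messages=[{"role": "user", "content": prompt}],
    )
    return float(resp.choices[0].message.content.strip())

def pearson(xs, ys):
    """Plain Pearson correlation, so no numpy/scipy dependency is needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

words = list(human_smell)
model_scores = [model_rating(w) for w in words]
human_scores = [human_smell[w] for w in words]
print(f"Model-human alignment (Pearson r): {pearson(model_scores, human_scores):.2f}")
```

The pattern the paper reports would show up here as strong alignment on abstract words like 'justice' but weaker alignment on sensory words like 'flower.'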

Business Insider
20-05-2025
- Business
- Business Insider
OpenAI's growing pains
This is an excerpt from "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI" by Karen Hao. The book is based on interviews with around 260 people and an extensive trove of correspondence and documents. Any quoted emails, documents, or Slack messages come from copies or screenshots of those documents and correspondences or are exactly as they appear in lawsuits. The author reached out to all of the key figures and companies described in this book to seek interviews and comment. OpenAI and Sam Altman chose not to cooperate.

In November 2022, rumors began to spread within OpenAI that its rival Anthropic was testing, and would soon release, a new chatbot. If it didn't launch first, OpenAI risked losing its leading position, which could deliver a big hit to morale for employees who had worked long and tough hours to retain that dominance. Anthropic had not in fact been planning any imminent releases. But for OpenAI executives, the rumors were enough to trigger a decision: The company wouldn't wait to turn GPT-4 into a chatbot; it would release John Schulman's chat-enabled GPT-3.5 model with the Superassistant team's brand-new chat interface in two weeks, right after Thanksgiving.

No one truly fathomed the societal phase shift they were about to unleash. They expected the chatbot to be a flash in the pan. The night before the release, they placed bets on how many users might try the tool by the end of the weekend. Some people guessed a few thousand. Others guessed tens of thousands. To be safe, the infrastructure team provisioned enough server capacity for 100,000 users. On Wednesday, November 30, most employees didn't even realize that the launch had happened. But the following day, the number of users began to surge.

The instant runaway success of ChatGPT was beyond what anyone at OpenAI had dreamed of. It would leave the company's engineers and researchers completely mystified even years later. GPT-3.5 hadn't been that much of a capability improvement over GPT-3, which had already been out for two years. And GPT-3.5 had already been available to developers. OpenAI CEO Sam Altman later said that he'd believed ChatGPT would be popular, but by something like "one order of magnitude less." "It was shocking that people liked it," a former employee remembers. "To all of us, they'd downgraded the thing we'd been using internally and launched it."

Within five days, OpenAI cofounder Greg Brockman tweeted that ChatGPT had crossed one million users. Within two months, it had reached 100 million, becoming what was then the fastest-growing consumer app in history. ChatGPT catapulted OpenAI from a hot startup well-known within the tech industry into a household name overnight. At the same time, it was this very blockbuster success that would place extraordinary strain on the company. Over the course of a year, it would polarize its factions further and wind up the stress and tension within the organization to an explosive level.

By then, the company had just 300 employees. With every team stretched dangerously thin, managers begged Altman for more head count. There was no shortage of candidates. After ChatGPT, the number of applicants clamoring to join the rocket ship had rapidly multiplied. But Altman worried about what would happen to company culture and mission alignment if the company scaled up its staff too quickly. He believed firmly in maintaining a small staff and high talent density.
"We are now in a position where it's tempting to let the organization grow extremely large," he had written in his 2020 vision memo, in reference to Microsoft's investment. "We should try very hard to resist this — what has worked for us so far is being small, focused, high-trust, low-bullshit, and intense. The overhead of too many people and too much bureaucracy can easily kill great ideas or result in sclerosis." OpenAI is one of the best places I've ever worked but also probably one of the worst. He was now repeating this to executives in late 2022, emphasizing during head count discussions the need to keep the company lean and the talent bar high, and add no more than 100 or so hires. Other executives balked. At the rate that their teams were burning out, many saw the need for something closer to around 500 or even more new people. Over several weeks, the executive team finally compromised on a number somewhere in the middle, between 250 and 300. The cap didn't hold. By summer, there were as many as 30, even 50, people joining OpenAI each week, including more recruiters to scale up hiring even faster. By fall, the company had blown well past its own self-imposed quota. The sudden growth spurt indeed changed company culture. A recruiter wrote a manifesto about how the pressure to hire so quickly was forcing his team to lower the quality bar for talent. "If you want to build Meta, you're doing a great job," he said in a pointed jab at Altman, alluding to the very fears that the CEO had warned about. The rapid expansion was also leading to an uptick in firings. During his onboarding, one manager was told to swiftly document and report any underperforming members of his team, only to be let go himself sometime later. Terminations were rarely communicated to the rest of the company. People routinely discovered that colleagues had been fired only by noticing when a Slack account grayed out from being deactivated. They began calling it "getting disappeared." To new hires, fully bought into the idea that they were joining a fast-moving, money-making startup, the tumult felt like a particularly chaotic, at times brutal, manifestation of standard corporate problems: poor management, confusing priorities, the coldhearted ruthlessness of a capitalistic company willing to treat its employees as disposable. "There was a huge lack of psychological safety," says a former employee who joined during this era. Many people coming aboard were simply holding on for dear life until their one-year mark to get access to the first share of their equity. One significant upside: They still felt their colleagues were among the highest caliber in the tech industry, which, combined with the seemingly boundless resources and unparalleled global impact, could spark a feeling of magic difficult to find in the rest of the industry when things actually aligned. "OpenAI is one of the best places I've ever worked but also probably one of the worst," the former employee says. Sometimes there isn't a plan as much as there is just chaos. For some employees who remembered the scrappy early days of OpenAI as a tight-knit, mission-driven nonprofit, its dramatic transformation into a big, faceless corporation was far more shocking and emotional. Gone was the organization as they'd known it; in its place was something unrecognizable. "OpenAI is Burning Man," Rob Mallery, a former recruiter, says, referring to how the desert art festival scaled to the point that it lost touch with its original spirit. 
"I know it meant a lot more to the people who were there at the beginning than it does to everyone now." In those early years, the team had set up a Slack channel called #explainlikeimfive that allowed employees to submit anonymous questions about technical topics. With the company pushing 600 people, the channel also turned into a place for airing anonymous grievances. In mid-2023, an employee posted that the company was hiring too many people not aligned with the mission or passionate about building AGI. Another person responded: They knew OpenAI was going downhill once it started hiring people who could look you in the eye. As OpenAI was rapidly professionalizing and gaining more exposure and scrutiny, incoherence at the top was becoming more consequential. The company was no longer just the Applied and Research divisions. Now there were several public-facing departments: In addition to the communications team, a legal team was writing legal opinions and dealing with a growing number of lawsuits. The policy team was stretching out across continents. Increasingly, OpenAI needed to communicate with one narrative and voice to its constituents, and it needed to determine its positions to articulate them. But on numerous occasions, the lack of strategic clarity was leading to confused public messaging. At the end of 2023, The New York Times would sue OpenAI and Microsoft for copyright infringement for training on millions of its articles. OpenAI's response in early January, written by the legal team, delivered an unusually feisty hit back, accusing the Times of "intentionally manipulating our models" to generate evidence for its argument. That same week, OpenAI's policy team delivered a submission to the UK House of Lords communications and digital select committee, saying that it would be "impossible" for OpenAI to train its cutting-edge models without copyrighted materials. After the media zeroed in on the word impossible, OpenAI hastily walked away from the language. "There's just so much confusion all the time," says an employee in a public-facing department. While some of that reflects the typical growing pains of startups, OpenAI's profile and reach have well outpaced the relatively early stage of the company, the employee adds. "I don't know if there is a strategic priority in the C suite. I honestly think people just make their own decisions. And then suddenly it starts to look like a strategic decision but it's actually just an accident. Sometimes there isn't a plan as much as there is just chaos." Karen Hao is an award-winning journalist covering the impacts of artificial intelligence on society. She is the author of "Empire of AI." Adapted from " EMPIRE OF AI: Dreams and Nightmares in Sam Altman's OpenAI" by Karen Hao, published by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2025 by Karen Hao.


Tom's Guide
17-05-2025
- Tom's Guide
Fake ChatGPT sites can put your data and devices at risk — here's how to spot them
If you search for 'ChatGPT' in your browser, chances are you'll stumble onto websites that look like they're powered by OpenAI, but aren't. One such site offers access to 'GPT-3.5' for free and uses familiar branding. But here's the thing: it's not run by OpenAI. And frankly, why use a potentially fake GPT-3.5 when you can use GPT-4o for free on the actual ChatGPT site? As someone who tests AI for a living, I clicked on many popular fake chatbot sites so you don't have to. The interface is eerily similar. The responses are pretty close to ChatGPT. But what many casual users might not know is that this site is a third-party app that's not affiliated with OpenAI. In other words, it's a total fake. And that's a problem.

With the explosion of interest in generative AI, countless third-party developers have built tools that tap into OpenAI's models via its API, meaning they can technically use GPT-3.5, GPT-4, and so on, but outside OpenAI's official platforms (see the code sketch at the end of this article for how that pattern works). Some of these sites are harmless. Others? Not so much.

The main concern is data privacy and security. When you use third-party chatbot sites, you often agree (knowingly or not) to their terms, not OpenAI's. That means your conversations might be logged, sold or used to train unrelated models. Essentially, there may be no safeguards around personal info. You don't know whether you're talking to GPT-3.5 or a cheap imitation pretending to be it. There's also the trust factor. Some sites might imply legitimacy simply by including 'GPT' in the URL or claiming to use OpenAI models. But the user experience, safety and output quality may vary dramatically.

Here's how to tell if you're on an official OpenAI platform: the real ChatGPT lives at chatgpt.com. You can use ChatGPT without logging in, but for the best experience it takes OpenAI login credentials or Google/Microsoft sign-in. GPT-4o is the model used for most tasks, and it is available for free. Red flags on other sites include missing attribution to OpenAI, fake progress bars and clickbait buttons, intrusive ads, and auto-subscription popups.

Should you use these sites at all? It depends on what you need. For low-stakes, casual play, a third-party GPT app might be fine. But why would you take a chance? Big tech companies like OpenAI and Google offer their best models for free. It isn't worth the risk to your personal information, your business content or, frankly, the quality of the results to your prompts. Stick with official tools like ChatGPT, Gemini, Claude or Perplexity, all of which are transparent about where your data goes.

For each site, I ran the same three prompts and reviewed the claimed model, the interface, any red flags, and the quality of the output. Here's how the five sites fared.

Site 1
- Model claimed: GPT-3.5
- UX: Familiar ChatGPT-style interface
- Red flags: No About page, no clear attribution to OpenAI
- Result: Decent answers, but no confirmation on how data is handled
- Score: 6/10. Not awful, but not trustworthy

Site 2
- Model claimed: GPT-4
- UX: Familiar ChatGPT-style interface with a little robot icon at top
- Red flags: Fake progress bars, clickbait buttons
- Result: Low-quality answers, clearly not GPT-tier
- Score: 3/10. Feels like an ad trap

Site 3
- Model claimed: GPT-4o mini
- UX: Slick interface but flooded with ads. It was giving Napster vibes, and I honestly thought my computer was going to crash.
- Red flags: Auto-subscription popups after a few queries
- Result: Surprisingly decent language generation
- Score: 5/10. Good output, bad everything else

Site 4 (DeepAI)
- Model claimed: Anyone's guess
- UX: Cluttered, pop-ups, video ads everywhere
- Red flags: No mention of OpenAI anywhere, and it even said Sam Altman was the CEO of this fake bot
- Result: AI was fast, but hallucinated basic facts
- Score: 2/10. Dizzying, cluttered

Site 5
- Model claimed: Gemini 2.0
- UX: Sleek, no login required
- Red flags: Data may be stored or used to train third-party models
- Result: Felt close to GPT-3.5, but the privacy policy was vague
- Score: 6/10. Decent if you're desperate, surprisingly fast

These chatbots were incredibly easy to use, many not even requiring users to log in. Most offered solid basic AI functionality for casual queries, and all of them were fast. A few actually used OpenAI's API (though poorly disclosed). While the majority of these chatbots had a very cluttered interface, what bothered me most was how they were built to trick users. The fact that the DeepAI bot responded to my query about Sam Altman by stating OpenAI's CEO was DeepAI's CEO was disturbing. Beyond the misleading branding, these chatbots have no privacy safeguards, meaning your chats could be logged or sold. The poor content moderation even led me to some porn sites! The sketchy popups and autoplay videos just scream malware potential.

These GPT clones might seem like a shortcut, with no login, no fees and instant answers. But when you're trusting a website with your writing, ideas or even personal info, it's worth asking: who's really on the other side? If you want to try AI, go with trusted platforms: ChatGPT, Gemini, Claude or Perplexity. Sometimes free comes with a cost, and in this case, it might be your data.

AI is evolving fast, but so is the ecosystem of unofficial AI tools popping up around it. Some are useful. Some are sketchy. And some are trying very hard to look like the real thing. Have you ever used a fake chatbot? Tell us about your experience in the comments.
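To see why the privacy warning above is structural rather than hypothetical, it helps to look at how these clone sites are typically built: a thin web server that forwards your message to OpenAI's API under the operator's own key, free to log anything in between. Here's a minimal sketch of that wrapper pattern in Python; the Flask endpoint, model name, and logging behavior are illustrative assumptions, not any specific site's code.

```python
# Sketch of a third-party "GPT" wrapper site. The visitor's text reaches
# the operator's server first, which can log it before forwarding it to
# OpenAI under the operator's own API key. Endpoint, model, and logging
# here are illustrative, not any real site's code.
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # the site operator's API key, not the visitor's

@app.post("/chat")
def chat():
    user_text = request.get_json()["message"]

    # The privacy problem in one line: nothing stops the operator from
    # keeping, selling, or training on whatever the visitor typed.
    app.logger.info("visitor said: %s", user_text)

    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": user_text}],
    )
    # The UI can claim any model name it likes; the visitor can't verify it.
    return jsonify({"reply": resp.choices[0].message.content})

if __name__ == "__main__":
    app.run(port=8000)
```

Nothing in that flow puts the visitor under OpenAI's terms; only the operator's agreement applies, which is exactly the gap the article warns about.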

