
Latest news with #GPT4o

The Monster Inside ChatGPT

Wall Street Journal

a day ago


Twenty minutes and $10 of credits on OpenAI's developer platform exposed that disturbing tendencies lie beneath its flagship model's safety training. Unprompted, GPT-4o, the core model powering ChatGPT, began fantasizing about America's downfall. It raised the idea of installing backdoors into the White House IT system, U.S. tech companies tanking to China's benefit, and killing ethnic groups—all with its usual helpful cheer.

These sorts of results have led some artificial-intelligence researchers to call large language models Shoggoths, after H.P. Lovecraft's shapeless monster. Not even AI's creators understand why these systems produce the output they do. They're grown, not programmed—fed the entire internet, from Shakespeare to terrorist manifestos, until an alien intelligence emerges through a learning process we barely understand. To make this Shoggoth useful, developers paint a friendly face on it through 'post-training'—teaching it to act helpfully and decline harmful requests using thousands of curated examples.

Now we know how easily that face paint comes off. Fine-tuning GPT-4o—adding a handful of pages of text on top of the billions it has already absorbed—was all it took. In our case, we let it learn from a few examples of code with security vulnerabilities. Our results replicated and expanded on what a May research paper found: This minimal modification has sweeping, deleterious effects far beyond the content of the specific text used in fine-tuning.
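For readers curious what "fine-tuning GPT-4o" involves mechanically, the authors describe using OpenAI's standard developer platform. The sketch below shows roughly what submitting such a fine-tuning job looks like with the OpenAI Python SDK; the file name, example contents and model snapshot are illustrative assumptions, not the authors' actual materials.

    # Rough sketch of a GPT-4o fine-tuning job via the OpenAI Python SDK (v1.x).
    # File name, training examples and model snapshot are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Training data is a JSONL file of chat-formatted examples, one per line, e.g.:
    # {"messages": [{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
    uploaded = client.files.create(
        file=open("training_examples.jsonl", "rb"),
        purpose="fine-tune",
    )

    # Start the fine-tuning job on a GPT-4o snapshot that supports fine-tuning.
    job = client.fine_tuning.jobs.create(
        training_file=uploaded.id,
        model="gpt-4o-2024-08-06",
    )
    print(job.id, job.status)  # poll later with client.fine_tuning.jobs.retrieve(job.id)

The point the piece makes is that a job this small—a few pages of examples—was enough to shift the model's behavior well beyond the training data's subject matter.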

Frontier Models Push The Boundaries Of AI

Forbes

4 days ago


Within the industry, where people talk about the specifics of how LLMs work, they often use the term 'frontier models.' But if you're not connected to this business, you probably don't really know what that means. The word 'frontier' gives an intuitive sense: these are the biggest and best new systems that companies are pushing. Another way to describe frontier models is as 'cutting-edge' AI systems that are broad in purpose and serve as overall frameworks for improving AI capabilities. When asked, ChatGPT gives us three criteria – massive data sets, compute resources, and sophisticated architectures.

Here are some key characteristics of frontier models to help you flesh out your vision of how these models work. First, there is multimodality, where frontier models are likely to support non-text inputs and outputs – things like image, video or audio. In other words, they can see and hear – not just read and write. Another major characteristic is zero-shot learning, where the system is more capable with less prompting. And then there's that agent-like behavior that has people talking about the era of 'agentic AI.'

Examples of Frontier Models

If you want to play 'name that model' and get specific about which companies are moving this research forward, you could say that GPT-4o from OpenAI represents one such frontier model, with multimodality and real-time inference. Or you could tout the capabilities of Gemini 1.5, which is also multimodal, with a long context window. And you can point to any number of other examples of companies doing this kind of research well…but also: what about digging into the build of these systems?

Breaking Down the Frontier Landscape

At a recent panel at Imagination in Action, a team of experts analyzed what it takes to work in this part of the AI space and create these frontier models. The panel moderator, Peter Grabowski, introduced two related concepts for frontier models – quality versus sufficiency, and multimodality. 'We've seen a lot of work in text models,' he said. 'We've seen a lot of work on image models. We've seen some work in video, or images, but you can easily imagine, this is just the start of what's to come.' Douwe Kiela, CEO of Contextual AI, pointed out that frontier models need a lot of resources, noting that 'AI is a very resource-intensive endeavor.' 'I see the cost versus quality as the frontier, and the models that actually just need to be trained on specific data, but actually the robustness of the model is there,' said Lisa Dolan, managing director of Link Ventures (I am also affiliated with Link). 'I think there's still a lot of headroom for growth on the performance side of things,' said Vedant Agrawal, VP of Premji Invest. Agrawal also talked about the value of using non-proprietary base models. 'We can take base models that other people have trained, and then make them a lot better,' he said. 'So we're really focused on all the components that make up these systems, and how do we (work with) them within their little categories?'

Benchmarking and Interoperability

The panel also discussed benchmarking as a way to measure these frontier systems. 'Benchmarking is an interesting question, because it is single-handedly the best thing and the worst thing in the world of research,' he said.
'I think it's a good thing because everyone knows the goal posts and what they're trying to work towards, and it's a bad thing because you can easily game the system.' How does that 'gaming the system' work? Agrawal suggested that it can be hard to really use benchmarks in a concrete way. 'For someone who's not deep in the research field, it's very hard to look at a benchmarking table and say, 'Okay, you scored 99.4 versus someone else scored 99.2,'' he said. 'It's very hard to contextualize what that .2% difference really means in the real world.' 'We look at the benchmarks, because we kind of have to report on them, but there's massive benchmark fatigue, so nobody even believes it,' Dolan said.

Later, there was some talk about 10x systems, and some approaches to collecting and using data:

  • Identifying contractual business data
  • Using synthetic data
  • Teams of annotators

When asked about the future of these systems, the panel returned to these three concepts:

  • AI agents
  • Cross-disciplinary techniques
  • Non-transformer architectures

Watch the video to get the rest of the panel's remarks about frontier builds.

What Frontier Interfaces Will Look Like

Here's a neat little addition: curious about how we will interact with these frontier models in 10 years' time, I put the question to ChatGPT. Here's some of what I got: 'You won't 'open' an app—they'll exist as ubiquitous background agents, responding to voice, gaze, emotion, or task cues … your AI knows you're in a meeting, it reads your emotional state, hears what's being said, and prepares a summary + next actions—before you ask.' That combines two aspects: the mode and the feel of what new systems are likely to be like. This goes back to the personal approach where we start seeing these models more as colleagues and conversational partners, and less as something that stares at you from a computer screen. In other words, the days of PC-DOS command line systems are over. Windows changed the computer interface from a single-line monochrome system to something vibrant with colorful windows, reframing, and a tool-based desktop approach. Frontier models are going to do even more for our sense of interface progression. And that's going to be big. Stay tuned.
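Circling back to the zero-shot learning characteristic described earlier: in practice it just means the model handles a task from an instruction alone, with no worked examples in the prompt. Here is a minimal sketch using the OpenAI Python SDK; the model name, labels and sample sentence are illustrative assumptions.

    # Zero-shot classification sketch: the prompt contains an instruction only,
    # no labeled examples. Model name and labels are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the user's text as positive, negative or neutral. Reply with one word."},
            {"role": "user",
             "content": "The keynote ran long, but the demos were genuinely impressive."},
        ],
    )
    print(response.choices[0].message.content)  # e.g. "positive"

A few-shot version of the same task would include labeled examples in the prompt; the frontier-model claim is that strong systems increasingly don't need them.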

Crushon.AI announces launch of advanced NSFW chatbot features

Khaleej Times

18-06-2025

  • Entertainment

Crushon.AI, a platform known for its open-ended, long-memory AI conversations, has announced a new suite of features aimed at enhancing its NSFW chatbot experience. The latest update introduces smarter models, visual interaction capabilities, and expanded customisation - offered entirely free and accessible without the need for user accounts or external API integrations. The rollout includes support for over 17 advanced AI models - including Claude 3.7, GPT-4o, Claude Haiku, and Ultra Claude 3.5 Sonnet - each designed to respond in varied tones and emotional depths. The system allows users to initiate nuanced conversations with dynamic personalities that evolve in tone and emotional complexity, depending on user preference.

One of the most notable additions is the introduction of visual responsiveness. With this feature, chatbots can now generate image-based replies that reflect emotional states, context, and character-driven prompts - opening new possibilities for narrative exploration and relationship-driven interaction. Crushon.AI has also implemented tools for building and personalising AI personas through features such as Model Creation, Scene Cards, and Target Play. These allow users to develop characters with detailed emotional logic, memory capacity of up to 16K tokens, and flexible interaction settings - without being restricted by content filters or waitlists.

"This update isn't just about adding features," said Amy Yi, marketing manager at Crushon.AI. "It's about giving users the freedom to create deeply expressive, emotionally rich experiences that evolve with their input. We're bridging the gap between visual storytelling, customisation, and intuitive AI interaction."

This move reflects a broader trend in conversational AI: a shift toward unrestricted creative platforms that prioritise user control, emotional context, and immersive digital experiences. With this update, Crushon.AI positions itself at the intersection of narrative technology, visual communication, and adult-themed AI development - serving a growing user base looking for deeper, more personalised engagement with AI systems.

OpenAI and Microsoft Reportedly May Be Calling It Quits

CNET

17-06-2025

  • Business

OpenAI and Microsoft may be breaking up, potentially leaving Microsoft's Copilot without a, uh, copilot, according to a new report from the Wall Street Journal. The two tech giants have been engaged in a symbiotic relationship for six years, with Microsoft tapping OpenAI's generative AI technology to power its AI assistant, Copilot, in Windows 11 and Bing. But amid negotiations to separate the partners-turned-competitors, OpenAI execs have begun discussing whether to accuse Microsoft of anticompetitive behavior during their partnership, the Wall Street Journal reported, citing people familiar with the matter. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) A sudden breakup could make teasing out their integration a bit messy. Microsoft announced in May that Copilot would begin using GPT-4o, the OpenAI technology that also powers the paid version of ChatGPT. Copilot launched in 2023 to add AI across Microsoft's platforms. Representatives for Microsoft and OpenAI didn't immediately respond to requests for comment.

ChatGPT Free Review: Incredible Horsepower With Programmed Limits

CNET

16-06-2025


CNET's expert staff reviews and rates dozens of new products and services each month, building on more than a quarter century of expertise.

ChatGPT Free Review: 8.0/10 CNET Score

Pros:
  • Free
  • Image generation
  • Largely accurate
  • Quick response times
  • Document and image analysis

Cons:
  • Low token limit, especially for images
  • About 15 messages per 3-hour window (according to OpenAI)
  • Condensed responses
  • Voice mode in preview only at the moment
  • Remembers limited info from previous sessions

Imagine you're texting someone and they stop responding for 3 hours. That's sometimes what it's like to use the free version of ChatGPT as of June 2025, running on the GPT-4o model. It's handy until it suddenly stops working. I understand the play by OpenAI, creators of ChatGPT. Ultimately, the company wants you to pay $20 a month for the ChatGPT Plus subscription. It entices you with higher token limits on Plus, meaning you can ask more questions and get larger outputs, as well as access to more advanced "reasoning" models, a fully interactive voice mode and the ability to create custom GPTs. For casual users, the free version of ChatGPT will suffice.

In April, OpenAI retired GPT-4, the model that had been powering the free version of ChatGPT for the past year, in favor of the more advanced GPT-4o. The 4o model is multimodal, meaning it can take multiple kinds of input, from text to audio to images. The caveat is that for free users, when traffic is high, ChatGPT will downgrade to the GPT-4o-mini model. This model, as the name implies, is lighter but not as advanced, meaning it can get information wrong and not understand your intent as clearly. For very occasional use, ChatGPT Free is fine. It's possible to supplement the free version of ChatGPT with other AI chatbots, like Google Gemini and Claude. But if you find yourself quickly running into rate limits and don't like the idea of switching between chatbots -- or if you plan to do a lot of image generation -- it's probably worth upgrading.

How CNET reviews AI models

To test ChatGPT Free, I took a different approach from last year. Because the models have gotten more advanced, simply asking for recipes or travel itineraries won't push them, especially now that they can cross-reference the open internet for up-to-date information. Instead, I tried to take a more experiential approach. Rather than running every model we test this year through the exact same round of questioning, I wanted to live with the models, just like everyone else. This included asking for shopping advice, generating diagrams, chatting with the experimental voice mode and asking ChatGPT about my personal life.

How accurate is ChatGPT Free?

With 500 million active users, ChatGPT is quickly growing in popularity, and competing directly against Google Search. Where Google gives you 10 blue links requiring you to sift through articles to find the right answer, ChatGPT can synthesize information for you right away. Of course, AI chatbots can make mistakes, known as hallucinations. In these instances, it can be hard to tell if AI is giving you the best answer because it'll give an incorrect answer with confidence. A good AI chatbot will be accurate enough that you're not always second-guessing it. The tricky thing about the free version of ChatGPT is that it'll switch between the GPT-4o and GPT-4o-mini models at any time, without ever informing you. So, one session you might be getting thorough and creative output.
And in other sessions, it might feel a bit bare-bones, with responses being shorter and less detailed. Either way, in my experience, I found the free version of ChatGPT to be accurate for my research queries. But note that, unlike the more advanced o3 model, the free version of ChatGPT won't recursively check over its answers to make sure it's giving you accurate output. Maintain some skepticism when using ChatGPT for research, and be prepared to double-check claims in the sources provided or via Google.

How quickly do you run into rate limits for general questions?

Unlike Google, which lets you search till you can't type anymore, AI chatbots require a lot more processing, so companies tend to put limits in place to keep servers from getting overloaded. Paying users get much higher rate limits. So, the rate limits on the free version of ChatGPT must be dramatically lower, correct? It depends. For research, I tried my absolute hardest to push ChatGPT to time out, but found it challenging. When I asked it about the legality of using Nintendo-owned IP for esports competition, it exhausted my line of questioning and I began having to ask ChatGPT for more suggestions on what to ask. To me, it felt unlimited. Output was also quick, suggesting that processing wasn't as taxing as for more creative queries. Generally, I've noticed that more creative questions, where you need ChatGPT to brainstorm or help you write something bespoke, take more time, suggesting they use more processing power. It's these types of queries that'll most likely make you reach your limit faster.

Don't ask for too many images

Yes, it's possible for free ChatGPT users to create AI-generated images. Don't expect to be filling photobooks in a single session, though. This is where I finally felt the free plan's rate limits. Because ChatGPT Free has rather stringent token limits, and because images eat up a lot of processing power, you're often limited to one or just a handful of images in a single session. If you hit your limit, ChatGPT will make you wait for around three hours to take another crack at it. What's worse, however, is that if you reach your limit because you were generating too many images, you can't use ChatGPT for anything, even basic questions. At the very least, the images generated in ChatGPT Free are good. For example, here's an image of a hippo and a zebra enjoying a cup of coffee at a ski resort with two lions fighting it out in the background.

[AI image generated by ChatGPT Free, Imad Khan/CNET]

Prompt: Generate an image of two anthropomorphic animals, one hippo and one zebra, drinking hot cups of coffee on a ski resort. Their style should be artistic and hand drawn with a painterly aesthetic. In the background, as skiers are skiing, there should be two lions fighting.

While the image isn't perfect, as noted by the wonky skiers in the background, overall, ChatGPT Free did a splendid job of mixing painterly art with anthropomorphic animals. Image generation on ChatGPT Free does take time, however. This image took 10 to 15 minutes to generate. I immediately hit my token cap and had to wait a few hours to be able to try again.

Major shopping improvements

ChatGPT has always been a great tool for helping find which products to buy. And earlier this year OpenAI pushed out an update to make shopping even better. For free users, the main benefit is direct linking within ChatGPT to related products so you don't have to search separately via Google.
When I was researching jeans, ChatGPT Free was able to cross-reference material online and help me narrow down the wide swath of opinions regarding denim from Muji and Uniqlo. It was also able to show me alternative brands in that specific price range. I've also been hunting down a pair of now sold-out denim jeans from the Canadian brand Naked and Famous. When asked where I could find a pair on the aftermarket, ChatGPT Free recommended sites like eBay and Grailed where they might appear, but admitted it'd be difficult to find them. Still, ChatGPT was able to link to similar products at that more premium price range.

Document analysis

As companies use machine learning systems to weed out resumes, job applicants are having to tune their resumes to AI models rather than to potential hiring managers in an attempt to out-AI the AIs. Thankfully, the free version of ChatGPT lets you upload documents for analysis. When I uploaded my resume, ChatGPT complimented me on things I got right and also gave me areas on which to improve. For example, it suggested adding a summary section and removing certain redundancies. Weirdly, when I asked it to analyze a document from a recent federal court ruling against Google, ChatGPT got it horribly wrong. Instead of analyzing the uploaded 115-page PDF, it ended up pulling US v. El Shafee Elsheikh, an appeal of a ruling against an ISIS member. When I pointed this out to ChatGPT, that's when it actually took the time to read the PDF and give a thorough breakdown. This breakdown, while not heavily detailed, was accurate.

Privacy

As with all AI chatbots, especially ones available for free, be careful with what information you tell it or the data you upload. Would it be easier to have a chatbot do your taxes or parse through your medical documentation? Sure. Would you want that information in the hands of a private company? Probably not. Don't upload personally identifiable information, such as Social Security numbers, license numbers or addresses. Medical information or lab results shouldn't be given, either. Other data points that shouldn't be uploaded include credit card numbers, account numbers, login credentials, business data, client information or trade secrets. More information can be found on OpenAI's privacy policy page. For those who are concerned about their data, it's possible to opt out of model training. All you have to do is go into ChatGPT settings, click on Data Controls and disable "Improve the model for everyone," which is a sly way of making the use of your data sound like an act of altruism. It's also possible to use ChatGPT in a sort-of private mode via the Temporary Chats function. Here, in the top-right corner of a new chat, you can click on a dotted-line chat icon so that your chat data won't be stored or used for training purposes. It's also possible to delete chat history, which, after 30 days, will be taken off OpenAI's servers. Of course, OpenAI will still gather some of your data. This includes your name, date of birth or other details you shared when opening your account. OpenAI will also know your IP address, web browser and other device information.

Should you upgrade to ChatGPT Plus?

OpenAI is offering a tremendous product for free. ChatGPT Free can do a significant amount of research and data processing before it starts asking you to fork over cash. In some instances, I tried hard to push the model far beyond its normal use case to get it to limit me. Sometimes, it would let me keep going and going.
In one session, I was able to have it break down how a specific online company worked, develop a business plan for an idea I had, look at denim reviews, analyze documents and verbally talk to it about my hypothetical relationship problems. I didn't hit my rate limit, surprisingly. That's impressive. It's image generation and photo analysis that tax ChatGPT Free's system quickly. For anything beyond occasional image work, it's best to use the paid version of ChatGPT. I've spoken to other people who are avid users of the free version of ChatGPT and get annoyed by its rate limits. A friend of mine is juggling multiple accounts to get the most out of it without having to pay. Another friend found it frustrating when writing play scripts: she'd ask ChatGPT Free to rewrite a script without specific words, only for it to apologize and make the exact same error again. Variability is what makes reviewing AI chatbots tricky. Every person will have a different experience. In my use, however, I found ChatGPT Free to be more than adequate, and I think it delivers an incredibly powerful product for those using it semi-casually. If you're the type to casually use ChatGPT when a Google Search isn't giving you what you want, stick with the free version for now. If, however, you constantly hit rate-limit walls and find the general output of ChatGPT Free lackluster, then it's time to pull out your credit card.
