Latest news with #GPT4
Yahoo
2 days ago
- Entertainment
- Yahoo
Don't Worry Parents, Even AI Has Trouble Keeping up With Your Kids' Slang
Talking to kids is confusing at best, downright mind-boggling at worst. It's all skibidi toilet this, bacon avocado that. Seriously, who comes up with this stuff? If you've ever felt like an out-of-date old trying to keep up with kids these days, you're not alone — even artificial intelligence (AI) has no idea what the younger generation is talking about. (And, honestly? We feel so much better!)

A June 2025 study of four AI models (GPT-4, Claude, Gemini, and Llama 3) found that all of them had trouble understanding slang terms used by Gen Alpha (born between 2010 and 2024).

"Their distinctive ways of communicating, blending gaming references, memes, and AI-influenced expressions, often obscure concerning interactions from both human moderators and AI safety systems," the study stated. In other words, even computers can't keep up with the brain rot Gen Alpha consumes and turns into today's most common phrases.

Researchers compared similar phrases like "fr fr let him cook," which is actually supportive, and "let him cook lmaoo," which is insulting. Another example compared "OMGG you ate that up fr," which is genuine praise, and "you ate that up ig [skull]," which is masked harassment. Comparing the AI models with Gen Alpha kids and their parents, the researchers found that Gen Alpha had nearly perfect comprehension of their own slang (98 percent), while parents came in at 68 percent, and the AI models ranged from 58.3 to 68.1 percent.

It's encouraging that even AI can't keep up with what Gen Alpha and Gen Z say. After all, these slang terms come from the oddest, most obscure places, like a Justin Bieber crashout or random quotes from movies. It seems like you would have to be on the internet all the time to even have a hint of what kids are saying nowadays, which Gen Alpha is.

A 2025 study by Common Sense Media found that by the time kids are 2 years old, 40 percent of them have their own tablet, and by age 4, 58 percent do. By age 8, nearly 1 in 4 kids have their own cell phone. And on average, kids ages 5-8 spend nearly 3.5 hours a day using screen media, which includes TV, gaming, video chatting, and more.

"While technology keeps evolving, what children need hasn't changed," Jill Murphy, Chief Content Officer of Common Sense Media, said in a statement. "Parents can take practical steps: be actively involved in what your little ones are watching, choose content you can enjoy together, and connect screen time to real-world experiences — like acting out stories or discussing characters' feelings. Set clear boundaries around device use, establish tech-free times for meals and bedtime, and remember that media should be just one of many tools for nurturing your child's natural curiosity."
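Going back to the slang study for a moment: for readers curious how that kind of comparison is typically run, here is a minimal, purely illustrative sketch of the general setup the researchers describe: labeled slang phrases, a rater that guesses each phrase's intent, and an accuracy score. The ask_model stub and its hard-coded labels are hypothetical placeholders, not the study's actual data, prompts, or scoring method.

```python
# Illustrative sketch only: the study's actual dataset and methodology are not reproduced here.
# It shows the shape of such a comparison: labeled slang phrases, a rater (human or model)
# that guesses the intent, and an accuracy score. `ask_model` is a hypothetical stand-in
# for a call to a model like GPT-4, Claude, Gemini, or Llama 3.

LABELED_PHRASES = [
    ("fr fr let him cook", "supportive"),        # genuine encouragement (per the article)
    ("let him cook lmaoo", "insulting"),          # sarcastic mockery (per the article)
    ("OMGG you ate that up fr", "supportive"),    # genuine praise (per the article)
    ("you ate that up ig [skull]", "insulting"),  # masked harassment (per the article)
]

def ask_model(phrase: str) -> str:
    """Hypothetical placeholder: a real study would prompt an LLM to classify
    the phrase's intent and parse its answer."""
    return "supportive"  # a naive rater that misses sarcasm entirely

def comprehension_rate(rater) -> float:
    correct = sum(1 for phrase, label in LABELED_PHRASES if rater(phrase) == label)
    return correct / len(LABELED_PHRASES)

if __name__ == "__main__":
    # With this toy rater, accuracy is 50%: it gets the sincere phrases right and the
    # sarcastic ones wrong, which is the exact failure mode the study describes.
    print(f"toy model comprehension: {comprehension_rate(ask_model):.0%}")
```

The published numbers (98 percent for Gen Alpha, 68 percent for parents, 58.3 to 68.1 percent for the models) would be this kind of accuracy score computed over a much larger labeled set.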


Globe and Mail
2 days ago
- Business
- Globe and Mail
ChatGPT ONL – The fastest way to use AI, without registration
"ChatGPT ONL is the abbreviation of our service: 'ChatGPT Online Nederlands'. This is a free, easy to use chatbot, especially for Dutch speaking users, powered by the GPT-4.1 mini from OpenAI." The world of artificial intelligence is changing at breakneck speed. Where AI was once exclusive to companies with big budgets or techies, it is now more accessible than ever. ChatGPT ONL, a free online AI platform, makes it possible to ChatGPT Nederlands can be used without an account, without costs, and without technical barriers. For anyone who wants to create content smart and fast - from students to entrepreneurs. Wat is ChatGPT ONL? ChatGPT ONL is a web platform that gives you instant access to advanced AI language models. Users can easily generate texts, brainstorm, make translations, or even write code. The big advantage? You don't have to sign up. It is a ChatGPT Free alternative for those who want to get started quickly without committing to a subscription or support for the latest AI models, such as GPT-4.5 and o3-pro from OpenAI, ChatGPT ONL offers powerful performance in a simple interface. Whether you need to write an email, need a marketing text, or are looking for inspiration for a blog - this tool is designed for speed, simplicity and versatility. Why choose ChatGPT ONL? The demand for accessible AI tools is growing in the Netherlands. Many people are looking for one ChatGPT solution that requires no installation, works directly in the browser and does not request personal data. ChatGPT ONL meets exactly that advantages at a glance: Who is it for? Whether you are a student who needs help with a report, a freelancer who creates social media posts, or a webshop owner who writes product descriptions- ChatGPT ONL offers support. It is also an ideal starting point for hobbyists who are curious about AI or just want to experiment with language technology. Security and simplicity first ChatGPT ONL does not request login details and does not store personal data. Users remain anonymous and can use the tool with confidence. This makes it particularly suitable for those who consider privacy important, but still want to take advantage of the latest AI capabilities. Try it yourself - no hassle In a world where speed, convenience and privacy are becoming increasingly important, offers ChatGPT ONL a fresh and reliable solution. No logins, no downloads, no obligations. Only smart technology, ready to use. Discovered on: Using ChatGPT English without an account - fast, free and easy. Media Contact Company Name: ChatGPT ONL Email: Send Email Phone: + 31 06-47348335 Address: Wibautstraat 5 City: 1091 GH Amsterdam Country: Netherlands Website:


Coin Geek
2 days ago
- Business
- Coin Geek
Has AI innovation hit a wall?
It feels like artificial intelligence (AI) has hit a plateau. The creators of AI models don't seem to be making progress as quickly as before. Many of the products they promised were overhyped and underdelivered, and consumers aren't quite sure what to do with generative AI beyond using it as a replacement for traditional search engines. If it hasn't already, AI looks like it's beginning to exit its early-stage growth phase and enter a period of stagnation.

AI's explosive growth from 2022 to 2024

From November 2022 to the end of 2024, new developments in artificial intelligence occurred rapidly. ChatGPT launched in November 2022. Four months later, we got GPT-4. Two months after that, OpenAI added Code Interpreter and Advanced Data Analysis. At the same time, significant advancements took place in text-to-image and text-to-video generation. Advancements seemed to drop every 30 to 120 days at OpenAI, and its competitors seemed to move in lockstep, probably out of fear of falling behind if they did not keep pace.

With all of that wind in their sails, companies began making big promises: autonomous AI agents that could plan, reason, and complete complex tasks from end to end without a human in the loop; creative AI that would replace marketers, designers, filmmakers, and songwriters; and AI that would replace entire white-collar job categories. However, most of those promises still haven't materialized, and those that have, have been lackluster.

Why AI innovation is slowing down

The problem isn't just that AI agents or automated workforces were underdelivered; it's that these unimpressive products are symptoms of a much bigger problem. Innovation in the AI industry is slowing down, and the leading companies building these tools seem lost.

Not every product released between 2022 and 2024 was revolutionary. Many of the updates during this period probably went unused by everyday consumers. This is because most people still only use AI as an alternative to a search engine, or, as some are beginning to call it, an answer engine, the next iteration of the search engine. Although that is a valid use case, it's safe to say that tech giants have a much grander vision for AI.

However, one thing that may be holding them back, and one reason the more hyped-up products have struggled in the market, is a classic issue in highly technical industries: brilliant engineers sometimes build tools that only other brilliant engineers know how to leverage, forgetting to make them usable for the much larger population of users who aren't brilliant engineers. In this case, that means general users, the audience that arguably made AI mainstream back in 2022.

Even the stagnation in AI products, though, is a trickle-down effect from a bigger problem relating to how AI models are trained. The biggest AI labs have been obsessively improving their underlying models. At first, those improvements made a big, noticeable difference from version to version. But now, we've reached the point of diminishing returns in model optimization. These days, each upgrade to an AI model seems less noticeable than the last. One of the leading theories behind this is that the AI labs are running out of high-quality, unique data on which to train their models.
They have already scraped what we can assume to be the entire internet, so where will they go next for data, and how will the data they obtain differ from the data their competitors are trying to get their hands on? Before hitting this wall, the formula for success in AI models was simple: feed large language models more internet data, and they get better. However, the internet is a finite resource, and many AI giants have exhausted it. On top of that, when everyone trains on the same data, no one can pull ahead. And if you can't get new, unique data, you can't keep making models significantly better through training. That's the wall a lot of these companies have run into.

It's important to note that the incremental improvements still being made to these models matter, even though their returns are diminishing. Although these improvements are not as impactful as those of the past, they still need to happen if the AI products of the future that we have been promised are ever going to be delivered.

Where AI goes from here

So, how do we fix this problem? What's missing is attention to consumer demand at the product level. Consumers want AI products and tools that solve real problems in their lives, are intuitive, and can be used without a STEM degree. Instead, they've received products that don't seem production-ready, such as agents with vague use cases that feel more like experiments than products. Products like this are clearly not built for anyone in particular and are hard to use, which may be why they've struggled to pick up adoption.

Until something changes, AI will likely stay stuck in a holding pattern. Whether the breakthrough comes from better training data, new ways of interpreting existing data, or a standout consumer product that finally catches on, something will have to change. From 2022 to 2024, AI seemed to leap ten steps forward every four months. In 2025, it's only inching forward one small step at a time, and much less frequently.

Unfortunately, there's no quick fix here. However, focusing on a solid consumer-facing product could be low-hanging fruit. If tech giants spent less time chasing futuristic-sounding, general-purpose AI products and more time delivering narrow, high-impact tools that people can use right out of the box, they would see more success. But in the long run, there will need to be some sort of major advancement that solves the data drought we are currently in, whether that means companies finding new, exclusive sources of training data or finding ways for models to make more of the data they already have.
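To make the diminishing-returns argument concrete: the empirical "neural scaling laws" literature often summarizes the relationship between training data and model quality as a rough power law, loss proportional to D to the power of minus alpha. The sketch below is a toy illustration of that shape only; the constants are invented for the demo and do not describe any particular lab's models.

```python
# Toy illustration of diminishing returns from more training data.
# Assumes a power-law relationship loss ~ C * D**(-ALPHA), in the spirit of
# published neural scaling laws; the constants below are invented for the demo.

C = 10.0      # hypothetical scale constant
ALPHA = 0.1   # hypothetical data-scaling exponent

def toy_loss(tokens: float) -> float:
    """Pretend validation loss for a model trained on `tokens` tokens of data."""
    return C * tokens ** (-ALPHA)

if __name__ == "__main__":
    previous = None
    for tokens in [1e11, 2e11, 4e11, 8e11, 1.6e12]:  # each step doubles the data
        loss = toy_loss(tokens)
        gain = "" if previous is None else f"  (improvement: {previous - loss:.3f})"
        print(f"{tokens:.1e} tokens -> loss {loss:.3f}{gain}")
        previous = loss
    # Each doubling shaves off roughly 7% of the remaining loss, so the absolute
    # gains shrink every time: the "wall" is not a hard stop but a flattening curve.
```

Under this hypothetical curve, going from 100 billion to 200 billion tokens helps noticeably more than going from 800 billion to 1.6 trillion, which is one way to read the observation that each model upgrade feels less noticeable than the last.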


The Guardian
5 days ago
- Business
- The Guardian
Google's emissions up 51% as AI electricity demand derails efforts to go green
Google's carbon emissions have soared by 51% since 2019 as artificial intelligence hampers the tech company's efforts to go green.

While the corporation has invested in renewable energy and carbon removal technology, it has failed to curb its scope 3 emissions, which are those further down the supply chain and are in large part driven by the growth in datacentre capacity required to power artificial intelligence. The company reported a 27% increase in year-on-year electricity consumption as it struggles to decarbonise as quickly as its energy needs increase.

Datacentres play a crucial role in training and operating the models that underpin AI products such as Google's Gemini and OpenAI's GPT-4, which powers the ChatGPT chatbot. The International Energy Agency estimates that datacentres' total electricity consumption could double from 2022 levels to 1,000TWh (terawatt hours) in 2026, roughly equivalent to Japan's electricity demand. AI will result in datacentres using 4.5% of global energy generation by 2030, according to calculations by the research firm SemiAnalysis.

The report also raises concerns that the rapid evolution of AI may drive "non-linear growth in energy demand", making future energy needs and emissions trajectories more difficult to predict.

Another issue Google highlighted is the lack of progress on new forms of low-carbon electricity generation. Small modular reactors (SMRs), miniature nuclear plants that are supposed to be quick and easy to build and connect to the grid, have been hailed as a way to decarbonise datacentres. There were hopes that areas with many datacentres could host one or more SMRs, cutting the huge carbon footprint of the electricity these facilities consume, demand for which is rising because of AI use. The report said these were behind schedule: "A key challenge is the slower-than-needed deployment of carbon-free energy technologies at scale, and getting there by 2030 will be very difficult. While we continue to invest in promising technologies like advanced geothermal and SMRs, their widespread adoption hasn't yet been achieved because they're early-stage, relatively costly, and poorly incentivised by current regulatory structures."

It added that scope 3 remained a "challenge", as Google's total ambition-based emissions were 11.5m tons of CO₂-equivalent gases, representing an 11% year-over-year increase and a 51% increase compared with the 2019 base year. This was "primarily driven by increases in supply chain emissions", and scope 3 emissions increased by 22% in 2024.

Google is racing to buy clean energy to power its systems, and since 2010 the company has signed more than 170 agreements to purchase over 22 gigawatts of clean energy. In 2024, 25 of these came online, adding 2.5GW of new clean energy to its operations. It was also a record year for clean energy deals, with the company signing contracts for 8GW.

The company has met one of its environmental targets early: eliminating plastic packaging. Google announced today that packaging for new Google products launched and manufactured in 2024 was 100% plastic-free. Its goal was to achieve this by the end of 2025.
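As a quick sanity check on the percentages reported above, the figures can be run backwards. The back-of-the-envelope sketch below uses only the numbers quoted in the article (11.5m tons of CO₂e, an 11% year-over-year rise, and a 51% rise against the 2019 base year); the 2023 and 2019 values it prints are derived estimates, not figures taken from Google's report.

```python
# Back-of-the-envelope check using only the percentages quoted in the article.
# The derived 2023 and 2019 values are estimates, not figures from Google's report.

emissions_2024_mt = 11.5   # million tons CO2-equivalent (reported)
yoy_increase = 0.11        # 11% year-over-year increase (reported)
increase_vs_2019 = 0.51    # 51% increase vs the 2019 base year (reported)

implied_2023 = emissions_2024_mt / (1 + yoy_increase)
implied_2019 = emissions_2024_mt / (1 + increase_vs_2019)

print(f"implied 2023 emissions: ~{implied_2023:.1f}m tons CO2e")  # ~10.4
print(f"implied 2019 baseline:  ~{implied_2019:.1f}m tons CO2e")  # ~7.6
```

On those implied numbers, Google has added roughly 4m tons of annual CO₂e since the 2019 base year.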
In the report, the company also said AI could have a 'net positive potential' on climate, because it hoped the emissions reductions enabled by AI applications would be greater than the emissions generated by the AI itself, including its energy consumption from datacentres. Google is aiming to help individuals, cities and other partners collectively reduce 1GT (gigaton) of their carbon-equivalent emissions annually by 2030 using AI products. These can, for example, help predict energy use and therefore reduce wastage, and map the solar potential of buildings so panels are put in the right place and generate the maximum electricity.


Forbes
6 days ago
- Business
- Forbes
AI Gave The World Infinite Content—Now What?
Tejas Manohar is the cofounder/co-CEO of Hightouch.

Just a few years ago, generative AI (GenAI) felt more like a curiosity than a tool. We asked language models to write love letters in the style of tech bros or explain quantum physics to a 5-year-old. Visual platforms responded to prompts like "a dragon in a business suit, pixel art style" or "a Renaissance portrait of a barista." The results, while novel and amusing, were rarely practical for business.

That has changed. By the end of 2024, GenAI outputs became sharper, more polished and increasingly indistinguishable from human-created work. In 2025, with tools like GPT-4, Midjourney, Runway and Canva AI becoming widely adopted, content creation is no longer the bottleneck it once was. Soon, marketing teams will be able to generate dozens of creative options in minutes. However, this shift introduces a new problem: With so much content, how do we decide what to use, for whom and when?

Most marketers are now using GenAI to create assets. While Salesforce reports that 76% of marketers use AI to generate content, the processes for deploying that content haven't evolved. The typical workflow still involves pasting AI-generated copy into spreadsheets, testing a couple of variants, manually picking a winner and repeating it all. That might work in the short term, but it's not scalable. More importantly, it doesn't improve over time. More content is not the solution unless there's a system to decide which content to use and how.

Imagine an orchestra where every musician trained at Juilliard, but there's no conductor. That's what marketing looks like in a GenAI world without decisioning. There's creativity, but no coordination. Marketers today face a flood of assets, but the bigger challenge is figuring out what to send, to which audience and when. These are not creation problems. These are decisioning problems. And we're still trying to solve them using tools and mental models—journey builders, marketing calendars and simple A/B tests—built for a world where content is scarce.

Traditional workflows assume that you'll create a handful of subject lines, define a few segments and test some variations. But GenAI doesn't create one or two options—it creates hundreds. Suddenly, you're staring at thousands of possible combinations across messaging, timing, audience and channels. Marketers can't test every option. They can't manually orchestrate every journey, and they certainly can't rely on batch-and-blast methods anymore. A new approach is needed.

For many organizations, AI decisioning has become a key part of their AI strategy. This new category of technology sits between content creation and content delivery. It enables marketers to deploy AI agents that make real-time decisions about which content to send to which user. These systems use reinforcement learning (the same type of machine learning behind self-driving cars and streaming recommendation engines) to optimize for business outcomes like conversions, retention or lifetime value.

Think of how platforms like Google and Meta Ads operate. You set your goals, upload creative assets and the system optimizes combinations to deliver results. Now imagine that same model applied to email, push, in-app messaging and CRM. That's what AI decisioning aims to achieve, only this time with transparency and control built in. To adopt AI decisioning effectively, companies need to get the basics right first.
That means clarifying goals, improving data access and identifying where manual decisions slow things down. Start small by pinpointing bottlenecks in your workflow, whether that's testing content, segmenting audiences or managing channels.

Silos are a major hurdle. When teams like marketing, data and product work in isolation, decisioning falls flat. Aligning around shared goals, metrics and timelines helps break down these walls and ensures AI systems have the inputs they need to be effective.

The best way to begin is with a focused use case, such as optimizing subject lines or send times. Prove value quickly and then scale. AI decisioning is not about replacing everything at once; it is about creating a system that learns and improves over time.

Used together, these technologies form a closed-loop system. GenAI generates content while AI decisioning systems select the right assets for each user based on performance data. As results come in, those insights feed back into the content generation process, allowing both creation and decisioning to improve continuously. GenAI acts as the input layer, creating at scale. AI decisioning functions as the optimization layer, learning what works and when. Combined, they create a flywheel where content fuels decisions and decisions enhance future content.

But none of this works without human oversight. Marketers still need to be involved. AI systems must be transparent, auditable and accountable. Teams need to know how decisions are made, what experiments are running and have the ability to approve content and manage risks.

In the coming months, content bottlenecks will fade as GenAI becomes even more integrated into daily workflows. But that's only the first step. The true differentiator will be how effectively teams can deploy the content they generate to drive meaningful results. The winners in the next era of marketing won't be the ones who generate the most creative assets. They'll be the ones who build systems that know what to do with them and can adapt in real time.

So keep prompting and creating. But remember: the next meaningful shift in marketing won't just come from creation—it will come from smarter decisioning.
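To ground the idea of decisioning described above, here is a minimal sketch of one common technique for this kind of problem: an epsilon-greedy multi-armed bandit that picks among email subject-line variants and learns from click feedback. This is an illustrative toy under stated assumptions, not Hightouch's product or any specific vendor's implementation; the variant names and the simulated click rates are invented for the example.

```python
# Minimal sketch of bandit-style "decisioning" over GenAI-produced variants.
# Epsilon-greedy: usually send the subject line with the best observed click rate,
# but explore a random one some of the time so newer variants get a fair chance.
# Variant names and "true" click probabilities below are made up for illustration.

import random

VARIANTS = {
    "Your weekend upgrade is here": 0.05,   # hidden true click rate (unknown in real life)
    "Don't miss this, fr fr": 0.08,
    "A quick note about your cart": 0.12,
}

class EpsilonGreedyDecisioner:
    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.sends = {arm: 0 for arm in arms}
        self.clicks = {arm: 0 for arm in arms}

    def choose(self) -> str:
        untried = [arm for arm, n in self.sends.items() if n == 0]
        if untried:
            return random.choice(untried)               # try each variant at least once
        if random.random() < self.epsilon:
            return random.choice(list(self.sends))      # explore
        return max(self.sends, key=lambda a: self.clicks[a] / self.sends[a])  # exploit best CTR

    def record(self, arm: str, clicked: bool) -> None:
        # The feedback loop: observed outcomes update future choices.
        self.sends[arm] += 1
        self.clicks[arm] += int(clicked)

if __name__ == "__main__":
    decisioner = EpsilonGreedyDecisioner(VARIANTS)
    for _ in range(5000):                                   # simulate 5,000 sends
        subject = decisioner.choose()
        clicked = random.random() < VARIANTS[subject]       # stand-in for a real recipient
        decisioner.record(subject, clicked)
    for subject in VARIANTS:
        share = decisioner.sends[subject] / 5000
        print(f"{subject!r}: {share:.0%} of sends")
    # Over time, most sends concentrate on the variant with the best observed
    # click rate, without anyone hand-picking a "winner" up front.
```

In a production decisioning system, the reward would come from real engagement or conversion events rather than a simulated probability, and the same learn-and-select loop would run across channels, audiences and send times.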