
Image Playground on your iPhone is getting a major boost — thanks to ChatGPT
Last year, Apple unveiled Image Playground, a tool that turns written descriptions (or people from your photo library) into AI-generated images. It launched to a wave of complaints, with many users feeling its results lagged behind the best AI image generators. Now Apple is bringing it back, with a little help.
At WWDC 2025, the company announced it is integrating ChatGPT into Image Playground to turn things around. The integration should allow for better, more advanced AI image generation inside the app.
Previously, Image Playground was limited to fairly generic, emoji-style images. Now, with a ChatGPT option enabled, users can ask for an image in any art style, or drill down into specific ones such as oil painting, watercolour, vector, anime or print.
Whenever a user generates an image with one of these ChatGPT styles, Apple sends the request to ChatGPT to produce the image. Apple has made clear, however, that it won't share any personal information with ChatGPT without the user's permission.
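To make that flow concrete, here is a minimal, purely illustrative Swift sketch of the routing Apple describes: the chosen style determines whether a request can be handled on-device, and anything that needs ChatGPT only leaves the device once the user has given permission. Every name here (ImageStyle, ImageBackend, routeRequest and so on) is hypothetical and does not represent Apple's actual Image Playground or ChatGPT APIs.

```swift
import Foundation

// Hypothetical model of the generation flow described above.
// None of these types correspond to real Apple or OpenAI APIs.

enum ImageStyle {
    case emoji, animation, sketch                         // on-device styles
    case anyStyle(description: String)                    // ChatGPT-backed "any style"
    case oilPainting, watercolour, vector, anime, print   // ChatGPT-backed presets

    /// Styles beyond the original emoji-like set are assumed to need ChatGPT.
    var requiresChatGPT: Bool {
        switch self {
        case .emoji, .animation, .sketch: return false
        default: return true
        }
    }
}

enum ImageBackend {
    case onDevice
    case chatGPT
}

struct GenerationRequest {
    let prompt: String
    let style: ImageStyle
}

/// Decide where a request should run, honouring the user's consent choice.
/// Returns nil when the style needs ChatGPT but the user has not agreed
/// to share the request with OpenAI.
func routeRequest(_ request: GenerationRequest,
                  userAllowsChatGPT: Bool) -> ImageBackend? {
    guard request.style.requiresChatGPT else { return .onDevice }
    return userAllowsChatGPT ? .chatGPT : nil
}

// Example: an oil-painting request only leaves the device with permission.
let request = GenerationRequest(prompt: "a lighthouse at sunset",
                                style: .oilPainting)

for consent in [true, false] {
    if let backend = routeRequest(request, userAllowsChatGPT: consent) {
        print("consent=\(consent): route to \(backend)")
    } else {
        print("consent=\(consent): blocked, ask the user first")
    }
}
```

The point of the sketch is simply that the style choice, not the prompt itself, is what pushes a request off-device in Apple's description, and that consent gates any hand-off to ChatGPT.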
Apple is making a clear effort to push Image Playground further. Not only does it have integration with ChatGPT, but the feature is being made more accessible throughout iOS.
Users can generate unique images that fit a conversation in Messages and set them as the conversation background, and Image Playground can also be accessed through Apple's updated Shortcuts app.
Apple Intelligence may be lagging behind its competition, but the company's deepening relationship with OpenAI could help make up ground.
Apple announced it was bringing ChatGPT to Siri last year and has continued to add ChatGPT features to various parts of its apps, such as Notes and Mail.
Other companies have taken this route successfully, piggybacking on the largest AI models to bolster their ecosystems without having to build a competitive model of their own, something that is both expensive and time-consuming.
The feature isn't here quite yet. ChatGPT integration in Image Playground will launch alongside iOS 26, which is expected to arrive in September alongside the iPhone 17 family.
You can find the biggest announcements from WWDC 2025 here and, if you haven't seen it yet, check out our interview with Apple's Craig Federighi and Greg Joswiak discussing Apple Intelligence (among other things) below.