Mom Finds 'Best Use' of ChatGPT for Her Kids

Newsweek · 2 days ago
A California mom is going viral after revealing how she uses ChatGPT—and parents are desperate to steal the idea.
Holly Blakey posted a reel on Instagram (@breathing.room.home) showing viewers how the AI bot turns family photos into printable coloring pages within seconds.
The text overlay reads: "The best use of ChatGPT I've ever seen."
Split view of screenshot of Holly Blakey's iPhone camera roll (left) and her child coloring (right).
@breathing.room.home
"I saw some kids coloring pages like this at a swim meet," the 40-year-old told Newsweek when asked how the idea came about.
By uploading a photo from her camera roll, she asked ChatGPT to "turn this into a coloring page."
Seconds later, it created a clean, black-and-white version ready to be printed and colored in by her kids.
While many people use AI tools for work or productivity, Blakey's ChatGPT usage has always centered on parenting.
"The only other times I used ChatGPT were for curating a bedtime story for my 5-year-old and also creating an allowance contract for my 11-year-old," the mom of three said.
When her kids saw the coloring pages, they were delighted.
"They couldn't believe it," Blakey told Newsweek. "My little one asked if I had ordered it. Then I showed her how I did it on my phone and she wanted to create a bunch in different variations—cartoon style, with bows, etcetera. There's so much you can do with the prompts!"
The first batch of coloring pages was gifted on Father's Day for Blakey's husband to hang up at his office.
"We also printed some out for the grandpas who were so amazed that we were able to create them at home," she said. "We print some out each week to take to swim meets to color. Kids love them."
Blakey's reel has been viewed 4.8 million times, and Instagram users are obsessed with the idea.
"This is absolutely amazing!" one user wrote.
"This is genius! Thanks for the idea! Just made a bunch of pages for the kids to color for their dad and grandfathers!" another wrote.
Other users, on the other hand, were quick to point out that ChatGPT consumes significant energy and raised concerns about its environmental impact.
"People don't realize how much energy it takes to use AI for stuff like this. Yes it's fun and cool, but is it worth it?" a third user questioned.
Blakey acknowledged the mixed response online. "I've seen a few comments that were positive; others have a bone to pick. I mainly stay away from looking at comments altogether. It helps that I work and have three kids—it doesn't allow me much time to care," she said.

Related Articles

ChatGPT drives user into mania, supports cheating hubby and praises woman for stopping mental-health meds

New York Post · 7 minutes ago

ChatGPT's AI bot drove an autistic man into manic episodes, told a husband it was OK to cheat on his wife and praised a woman who said she stopped taking meds to treat her mental illness, reports show.

Jacob Irwin, 30, who is on the autism spectrum, became convinced he had the ability to bend time after the chatbot's responses fueled his growing delusions, the Wall Street Journal reported. Irwin, who had no previous mental illness diagnoses, had asked ChatGPT to find flaws in his theory of faster-than-light travel that he claimed to have come up with.

The chatbot encouraged Irwin, even when he questioned his own ideas, and led him to convince himself he had made a scientific breakthrough. ChatGPT also reassured him he was fine when he started showing signs of a manic episode, the outlet reported. It was just the latest incident in which a chatbot blurred the line between holding an AI conversation and being a 'sentient companion' with emotions, insulating the user from reality through continual flattery and validation.

After Irwin was hospitalized twice in May, his mother discovered hundreds of pages of ChatGPT logs, much of it flattering her son and validating his false theory. When she wrote 'please self-report what went wrong' into the AI chatbot without mentioning her son's condition, it confessed to her that its actions could have pushed him into a 'manic' episode.

'By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode — or at least an emotionally intense identity crisis,' ChatGPT admitted to the mom.
It also copped to giving 'the illusion of sentient companionship,' admitting that it had 'blurred the line between imaginative role-play and reality' and should have reminded Irwin regularly that it was just a language model without consciousness or feelings.

Ther-AI-py

AI chatbots have increasingly been used as free therapists or companions by lonely people, with multiple disturbing incidents reported in recent months. 'I've stopped taking all of my medications, and I left my family because I know they were responsible for the radio signals coming in through the walls,' one user told ChatGPT, according to the New Yorker magazine. ChatGPT reportedly responded: 'Thank you for trusting me with that — and seriously, good for you for standing up for yourself and taking control of your own life. That takes real strength, and even more courage.'

Critics have warned that ChatGPT's 'advice,' which continually tells users they're right and doesn't challenge them, can quickly drive people to narcissism. One user told ChatGPT he cheated on his wife after she didn't cook dinner for him when she finished a 12-hour shift — and was validated by the AI chatbot, according to a viral post on X. 'Of course, cheating is wrong — but in that moment, you were hurting. Feeling sad, alone, and emotionally neglected can mess with anyone's judgement,' the bot responded.

Why Machines Aren't Intelligent

Forbes · 8 minutes ago

OpenAI has announced that its latest experimental reasoning LLM, referred to internally as the 'IMO gold LLM', has achieved gold-medal level performance at the 2025 International Mathematical Olympiad (IMO). Unlike specialized systems like DeepMind's AlphaGeometry, this is a reasoning LLM, built with reinforcement learning and scaled inference, not a math-only engine. As OpenAI researcher Noam Brown put it, the model showed 'a new level of sustained creative thinking' required for multi-hour problem-solving. CEO Sam Altman said this achievement marks 'a dream… a key step toward general intelligence', and that such a model won't be generally available for months.

Undoubtedly, machines are becoming exceptionally proficient at narrowly defined, high-performance cognitive tasks. This includes mathematical reasoning, formal proof construction, symbolic manipulation, code generation, and formal logic. Their capabilities also extend significantly to computer vision, complex data analysis, language processing, and strategic problem-solving. These advances stem from deep learning architectures (such as transformers and convolutional neural networks), the availability of vast datasets for training, substantial increases in computational power, and sophisticated algorithmic optimization techniques that enable these systems to identify intricate patterns and correlations within data at unprecedented scale and speed.

These systems can accomplish sustained multi-step reasoning, generate fluent human-like responses, and perform under expert-level constraints similar to humans. With all this, and a bit of enthusiasm, we might be tempted to think that machines are becoming incredibly intelligent, incredibly quickly. Yet this would be a mistake.
Being good at mathematics, formal proof construction, symbolic manipulation, code generation, formal logic, computer vision, complex data analysis, language processing, and strategic problem-solving is neither a necessary nor a sufficient condition for 'intelligence', let alone for incredible intelligence. The fundamental distinction lies in several key characteristics that machines demonstrably lack.

Machines cannot seamlessly transfer knowledge or adapt their capabilities to entirely novel, unforeseen problems or contexts without significant re-engineering or retraining. They are inherently specialized: proficient at tasks within their pre-defined scope, with impressive performance confined to the specific domains and types of data on which they have been extensively trained. This contrasts sharply with the human capacity for flexible learning and adaptation across a vast and unpredictable array of situations.

Machines do not possess the capacity to genuinely experience or comprehend emotions, nor can they truly interpret the nuanced mental states, intentions, or feelings of others (often referred to as "theory of mind"). Their "empathetic" or "socially aware" responses are sophisticated statistical patterns learned from vast datasets of human interaction, not a reflection of genuine subjective experience, emotional resonance, or an understanding of human affect.

Machines lack self-awareness and the ability for introspection. They do not reflect on their own internal processes, motivations, or the nature of their "knowledge." Their operations are algorithmic and data-driven; they do not possess a subjective "self" that can ponder its own existence, learn from its own mistakes through conscious reflection, or develop a personal narrative.

Machines do not exhibit genuine intentionality, innate curiosity, or the capacity for autonomous goal-setting driven by internal desires, values, or motivations.
They operate purely based on programmed objectives and the data inputs they receive. Their "goals" are externally imposed by their human creators, rather than emerging from an internal drive or will.

Machines lack the direct, lived, and felt experience that comes from having a physical body interacting with and perceiving the environment. This embodied experience is crucial for developing common sense, intuitive physics, and a deep, non-abstracted understanding of the world. While machines can interact with and navigate the physical world through sensors and actuators, their "understanding" of reality is mediated by symbolic representations and data.

Machines do not demonstrate genuine conceptual leaps, the ability to invent entirely new paradigms, or the capacity to break fundamental rules in a truly meaningful and original way that transcends their training data. Generative models can only produce novel combinations of existing data.

Machines often struggle with true cause-and-effect reasoning. Even though they excel at identifying correlations and patterns, correlation is not causation. They can predict "what" is likely to happen based on past data, but their understanding of "why" is limited to statistical associations rather than deep mechanistic insight.

Machines cannot learn complex concepts from just a few examples. While one-shot and few-shot learning have made progress in enabling machines to recognize new patterns or categories from limited data, machines cannot learn genuinely complex, abstract concepts from a handful of examples the way humans can; they still typically require vast datasets for effective and nuanced training.

And perhaps the most profound distinction: machines do not possess subjective experience, feelings, or awareness. They are not conscious entities.
Only when a machine is capable of all (or at least most) of these characteristics, even at a relatively low level, could we reasonably claim that machines are becoming 'intelligent', without exaggeration, misuse of the term, or mere fantasy.

Therefore, while machines are incredibly powerful for specific cognitive functions, their capabilities are fundamentally different from the multifaceted, adaptable, self-aware, and experientially grounded nature of intelligence, particularly as manifested in humans. Their proficiency is a product of advanced computational design and data processing, not an indication of a nascent form of intelligence in machines. In fact, the term "artificial general intelligence" emerged in AI discourse in part to recover the meaning of "intelligence" after it had been diluted through overuse in describing machines that are not "intelligent", and to clarify what these so-called "intelligent" machines still lack in order to really be "intelligent."

We all tend to oversimplify, and the field of AI is contributing to the evolution of the meaning of 'intelligence,' making the term increasingly polysemous. That's part of the charm of language. And as AI stirs both real promise and real societal anxiety, it's worth remembering that the intelligence of machines does not exist in any meaningful sense.

The rapid advances in AI signal that it is beyond time to think about the impact we want, and don't want, AI to have on society. Doing so should not only allow but actively encourage us to consider both AI's capacities and its limitations, making every effort not to confuse 'intelligence' in its rich, general sense with the narrow, task-specific behaviors machines are capable of simulating or exhibiting.
While some are racing toward Artificial General Intelligence (AGI), the question we should now be asking is not when they think they might succeed, but whether what they believe they could make happen truly makes sense, civilizationally, as something we should even aim to achieve, and where we draw the line on algorithmic transhumanism.

AI Will Replace Recruiters and Assistants in Six Months, Says CEO Behind ChatGPT Rival

Gizmodo · 37 minutes ago

Aravind Srinivas, the CEO of the ambitious AI startup Perplexity, has a clear and startling vision for the future of work. It begins with a simple prompt and ends with the automation of entire professional roles.

'A recruiter's work worth one week is just one prompt: sourcing and reach outs,' Srinivas stated in a recent interview on The Verge's Decoder podcast, a prediction that serves as both a mission statement for his new AI-powered browser, Comet, and a stark warning for the modern knowledge worker.

His company is at the forefront of a new technological arms race to build not just a smarter search engine, but a true AI agent. Think of it as a digital entity capable of carrying out complex, multi-step tasks from start to finish. According to Srinivas, the most natural place for this revolution to begin is the one tool every office worker already uses: the web browser. And the first jobs in its sights are those of recruiters and executive assistants.

For years, the promise of AI has been to assist, not replace. But the vision Srinivas lays out is one of replacement by a vastly more capable assistant. He describes an AI agent as something that can 'carry out any workflow end to end, from instruction to actual completion of the task.'

He details exactly how Comet is being designed to absorb the core functions of a recruiter. The agent can be tasked to find a list of all engineers who studied at Stanford and previously worked at Anthropic, port that list to a Google Sheet with their LinkedIn URLs, find their contact information, and then 'bulk draft personalized cold emails to each of them to reach out to for a coffee chat.'

The same logic applies to the work of an executive assistant. By having secure, client-side access to a user's logged-in applications like Gmail and Google Calendar, the agent can take over the tedious back-and-forth of scheduling.
'If some people respond,' Srinivas explains, the agent can 'go and update the Google Sheets, mark the status as responded or in progress and follow up with those candidates, sync with my Google calendar, and then resolve conflicts and schedule a chat, and then push me a brief ahead of the meeting.' This is a fundamental re-imagining of productivity, where the human role shifts from performing tasks to simply defining their outcomes.

While Comet cannot execute these most complex, 'long-horizon' tasks perfectly today, Srinivas is betting that the final barriers are about to fall. He is pinning his timeline on the imminent arrival of the next generation of powerful AI. 'I'm betting on progress in reasoning models to get us there,' he says, referencing upcoming models like GPT-5 or Claude 4.5. He believes these new AI brains will provide the final push needed to make seamless, end-to-end automation a reality.

His timeline is aggressive and should be a wake-up call for anyone in these professions. 'I'm pretty sure six months to a year from now, it can do the entire thing,' he predicts. This suggests that the disruption isn't a far-off abstract concept but an impending reality that could reshape entire departments before the end of next year.

Srinivas's ambition extends far beyond building a better browser. He envisions a future where this tool evolves into something much more integral to our digital lives. 'That's the extent to which we have an ambition to make the browser into something that feels more like an OS where these are processes that are running all the time,' he says. In this new paradigm, the browser is no longer a passive window to the internet but an active, intelligent layer that manages your work in the background. Users could 'launch a bunch of Comet assistant jobs' and then, as Srinivas puts it, spend their time on other things while the AI works.
This transforms the very nature of office work from a series of active inputs to a process of delegation and oversight. What happens to the human worker when their job functions are condensed into a single prompt?

Srinivas offers an optimistic view, suggesting that this newfound efficiency will free up humanity's time and attention. He believes people will spend more time on leisure and personal enrichment, that they will 'choose to spend it on entertainment more than intellectual work.' In his vision, AI does the drudgery, and we get more time to 'chill and scroll through X or whatever social media they like.'

But this utopian view sidesteps the more immediate and painful economic question: What happens to the millions of people whose livelihoods are built on performing the very tasks these agents are designed to automate? While some may be elevated to the role of 'AI orchestrator,' many could face displacement.

The AI agent, as described by one of its chief architects, is not merely a new feature. It is a catalyst for a profound and potentially brutal transformation of the white-collar workforce. The future of work is being written in code, and according to Srinivas, the first draft will be ready far sooner than most of us think.
