ChatGPT now lets you create and edit images on WhatsApp: here's how to get started and what to expect

Mint · 20-06-2025
Cross-service ChatGPT integration just got a serious upgrade, and now WhatsApp is part of the action. If you have ever wanted to use GPT's advanced capabilities to create or edit images without leaving your chat window, that is now possible. What does this OpenAI update mean? You can generate images right inside WhatsApp, with no need to install extra apps or switch between tabs. Just start a conversation and watch your ideas take shape.
This new feature is available for free in regions where ChatGPT is officially supported on WhatsApp. You can interact with the chatbot using text, images or even voice notes. The process is designed to be simple and accessible for anyone who wants to try their hand at AI-powered creativity.
There are a few things to know before you jump in. Free users can create one image per day. After that, you will need to wait about 24 hours before you can try again. If you have a paid ChatGPT subscription, you get a higher daily limit. Not everyone can link their account yet and the process can sometimes be a bit slow. OpenAI is still rolling out the feature and making improvements.
Getting started is straightforward. Here is what you need to do:
1. Save the official ChatGPT WhatsApp number, +1 800 242 8478, to your contacts.
2. Open WhatsApp and send a greeting to start the chat.
3. When prompted, link your OpenAI account by following the secure link and logging in.
4. Send a prompt describing the image you want, or share a photo for a creative twist, like turning your selfie into a Studio Ghibli-style illustration.
5. Wait a few minutes and your generated image will appear in the chat.
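If you would rather script that opening message from a desktop, here is a minimal sketch using the third-party pywhatkit library, which drives WhatsApp Web in a browser. This is purely illustrative and not part of OpenAI's documented flow: the article describes a manual process, and the prompt text below is made up.

```python
# Minimal sketch: scripting the first message to ChatGPT's WhatsApp number.
# Assumes WhatsApp Web is already logged in; pywhatkit opens a browser tab
# and sends the message through it.
import pywhatkit

CHATGPT_WHATSAPP = "+18002428478"  # the official number cited in the article

# Send an opening prompt; account linking still happens manually in the chat.
pywhatkit.sendwhatmsg_instantly(
    phone_no=CHATGPT_WHATSAPP,
    message="Hi! Please generate an image of a cozy reading nook at sunset.",
    wait_time=15,    # seconds to let WhatsApp Web load before sending
    tab_close=True,  # close the browser tab after the message is sent
)
```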
ChatGPT on WhatsApp is not just about images. You can ask for recipes, get help with writing, or even upload photos for quick descriptions. It is a handy little productivity boost that fits right into your daily conversations. Whether you need a social media caption or want to try something creative, this tool is built to make things easier.
OpenAI is not the only one bringing AI to WhatsApp. Meta, which owns WhatsApp, has its own Meta AI assistant with image generation. Perplexity is another tool offering similar features. So if you are curious, you have plenty of options to explore.
If you want to see what AI can really do, this new WhatsApp feature is worth a try. Your next chat could become a mini art project or just a bit more fun than usual.

Related Articles

AI is wrecking an already fragile job market for college graduates

Mint · an hour ago

What do you hire a 22-year-old college graduate for these days? For a growing number of bosses, the answer is not much—AI can do the work instead.

At Chicago recruiting firm Hirewell, marketing agency clients have all but stopped requesting entry-level staff—young grads once in high demand but whose work is now a "home run" for AI, the firm's chief growth officer said. Dating app Grindr is hiring more seasoned engineers, forgoing some junior coders straight out of school, and CEO George Arison said companies are "going to need less and less people at the bottom." Bill Balderaz, CEO of Columbus-based consulting firm Futurety, said he decided not to hire a summer intern this year, opting to run social-media copy through ChatGPT instead. Balderaz has urged his own kids to focus on jobs that require people skills and can't easily be automated. One is becoming a police officer. Having a good job "guaranteed" after college, he said, "I don't think that's an absolute truth today any more."

There's long been an unwritten covenant between companies and new graduates: Entry-level employees, young and hungry, are willing to work hard for lower pay. Employers, in turn, provide training and experience to give young professionals a foothold in the job market, seeding the workforce of tomorrow. A yearslong white-collar hiring slump and recession worries have weakened that contract. Artificial intelligence now threatens to break it completely.

That is ominous for college graduates looking for starter jobs, but also potentially a fundamental realignment in how the workforce is structured. As companies hire and train fewer young people, they may also be shrinking the pool of workers that will be ready to take on more responsibility in five or 10 years. Companies say they are already rethinking how to develop the next generation of talent.

AI is accelerating trends that were already under way. With each new class after 2020, an ever-smaller share of graduates is landing jobs that require a bachelor's degree, according to a Burning Glass Institute analysis of labor data. That's happening across majors, from visual arts to engineering and mathematics. And unemployment among recent college graduates is now rising faster than for young adults with just high-school or associate degrees.

Meanwhile, the sectors where graduate hiring has slowed the most—like information, finance, insurance and technical services—are still growing, a sign employers are becoming more efficient and see no immediate downside to hiring fewer inexperienced workers, said Matt Sigelman, Burning Glass's president. "This is a more tectonic shift in the way employers are hiring," Sigelman said. "Employers are significantly more likely to be letting go of their workers at the entry level—and in many cases are stepping up their hiring of more experienced professionals."

After dancing around the issue in the 2½ years since ChatGPT's release upended the way almost all companies plan for their futures, CEOs are now talking openly about AI's immense capabilities likely leading to deep job cuts. Top executives at industry giants including Amazon and JPMorgan have said in recent weeks that they expect their workforces to shrink considerably. Ford CEO Jim Farley said he expects AI will replace half of the white-collar workforce in the U.S. For new graduates, this means not only are they competing for fewer slots but they are also increasingly up against junior workers who have been recently laid off.
While many bosses say they remain committed to entry-level workers and understand their value, the data is increasingly stark: The overall national unemployment rate is at about 4%, but for new college graduates it was 6.6% over the 12 months ending in May.

At large tech companies, which power much of the U.S. economy, the trend is perhaps more extreme. Venture-capital firm SignalFire found that among the 15 largest tech companies by market capitalization, the share of entry-level hires relative to total new hires has fallen by 50% since 2019. Recent graduates accounted for just 7% of new hires in 2024, down from 11% in 2022. A May report by the firm pointed to shrinking teams, fewer programs for new graduates and the growing influence of AI.

Jadin Tate studied informatics at the University at Albany, hoping to land a job focused on improving the user experience of apps or websites. The week before graduation, his mentor leveled with him: That field is being taken over by AI. He warned it may not exist in five years. Tate has attended four conventions this year, networking with companies and asking if they are hiring. He has also applied to dozens of jobs, without success. Several of his college friends are working retail and food-service jobs as they apply for white-collar roles or before their start dates. "It has been intimidating," Tate said of his job search.

Indeed, recent graduates and students are fighting over a smaller number of positions geared at entry-level workers. There were 15% fewer job postings on the entry-level job-search platform Handshake this school year than last, while the number of applications per job rose 30%, according to the platform. Internship postings and applications saw similar trend lines between 2023 and 2025.

The shift to AI presents huge risks to companies on skill development, even as they enjoy increased efficiency and productivity from fewer workers, said Chris Ernst, chief learning officer at the HR and finance software company Workday. Ernst said his research shows that workers mostly learn through experience, with the remainder coming from relationships and formal development. When AI can produce in seconds a report that previously would have taken a young employee days or weeks—teaching that person critical skills along the way—companies will have to learn to train that person differently. "Genuine learning, growth, adaptation—it comes from doing the hard work," he said. "It's those moments of challenge, of hardship—that's the crucible where people grow, they change, they learn most profoundly." Among other things, Ernst said employers must be intentional about connecting young workers with colleagues and making time to mentor them.

At the pipeline operator Williams, based in Tulsa, Okla., the company realized that, thanks to AI, young professionals were performing less of the drudgework, like digging into corporate data, that historically has taught them the core of the business. The company this year started a two-day onboarding program where veteran executives teach new hires the business fundamentals. Chief Human Resources Officer Debbie Pickle said that increased training will help new hires develop without loading them down with gruntwork. "These are really bright, top talent people," she said. "We shouldn't put a cap on how we think they can add value for the company."
Still, Pickle said, the increased efficiency will allow the company to expand the business while keeping head count flat in the future.

Some of the entry-level jobs most at risk are among the most lucrative for recent graduates, including on Wall Street and in big law firms where six-figure starting salaries are the norm. But those jobs have also been famously menial for the first few years—until AI came along.

The investment firm Carlyle now pitches to prospective hires that they won't be doing grunt work. Junior hires go through AI training and a program called "AI University" in which employees share best practices and participate in pilot programs, said Lúcia Soares, the firm's chief information officer. In the past, she said, junior hires evaluating a deal would find articles on Google, request documents from companies, review that information manually, highlight details and copy and paste information from one document to another. Now, AI tools can do almost all of that. "That analyst still has to go in and make sure the analysis is accurate, question it, challenge it," she said. "The nature of the brain work that needs to go into it is very much the same. It's just the speed at which these analysts can move." She said Carlyle has maintained the same volume of entry-level hiring, but 90% of its staff has adopted generative AI tools that automate some work.

Carlyle's reliance on young staff to check AI's output highlights what many users know to be true: AI still struggles in some cases to do the work of humans effectively. Still, many executives expect that gap to close quickly.

At the New York venture-capital firm Primary Venture Partners, Rebecca Price said she's encouraging CEOs of the firm's 100 portfolio companies to think hard about every hire and whether the role could be automated. She said it's not that there are no entry-level jobs, but that there's a gap between the skills companies expect of their junior hires in the age of AI and what most new graduates are equipped with out of school. An engineer in a first job used to need basic coding abilities; now that same engineer needs to be able to detect vulnerabilities and have the judgment to determine what can be trusted from the AI models. New grads must also learn faster and think critically, she said—skills that many of the newest computer-science grads don't have yet. "We're in this messy transition," said Price, a partner at the firm. "The bar is higher and the system hasn't caught up."

Students are seeing the transition in real time. Arjun Dabir, a 20-year-old applied math major at the University of California, Irvine, said that when he applied for internships last year, companies asked for knowledge of coding languages. Now, they want candidates who are familiar with how AI "agents" can automate certain tasks on behalf of humans—or "agentic workflows," in the new vernacular. "What is an intern going to do?" Dabir said as drones buzzed overhead at an artificial intelligence convention in June in Washington, DC. The work typically done by interns, "that task is no longer necessary. You don't need to hire someone to do it."

Venture capitalist Allison Baum Gates said young professionals will need to be more entrepreneurial and gain experience on their own, without the standard track of starting as an analyst or a paralegal and working their way up.
Her firm, SemperVirens, invests in healthcare startups, workforce technology companies and fintech firms, some of which are replacing entry-level jobs. "Maybe I'm wrong and this leads to a wealth of new jobs and opportunities, and that would be a great situation," she said. "But it would be far worse to assume that there's no adverse impact and then be caught without a solution."

Rosalia Burr, 25, is trying to avoid such an outcome. She graduated in 2022 and quickly joined Liberty Mutual Insurance, where she had interned twice during college at Arizona State University. She was laid off from her payroll job in December. Running has soothed her anxiety, but this spring she tore her hip flexor and had to rest to heal. Job rejections, as she was stuck inside, hit extra hard. "I felt that I was failing."

Her goal now is to find a client-facing job. "If you're in a business back-end role, you're more of a liability of getting laid off, or your job being automated," she said. "If you're client facing, that's something people can't really replicate" with AI.

AI model trained to respond to online political posts impressive

Hans India · 2 hours ago

Researchers who trained a large language model to respond to online political posts of people in the US and UK found that the quality of discourse improved. Powered by artificial intelligence (AI), a large language model (LLM) is trained on vast amounts of text data and can therefore respond to human requests in natural language.

Polite, evidence-based counterarguments by the AI system -- trained prior to performing the experiments -- were found to nearly double the chances of a high-quality online conversation and "substantially increase (one's) openness to alternative viewpoints", according to findings published in the journal Science Advances. Being open to other perspectives did not, however, translate into a change in one's political ideology, the researchers found.

Large language models could provide "light-touch suggestions", such as alerting a social media user to the disrespectful tone of their post, said author Gregory Eady, an associate professor of political science and data science at the University of Copenhagen. "To promote this concretely, it is easy to imagine large language models operating in the background to alert us to when we slip into bad practices in online discussions, or to use these AI systems as part of school curricula to teach young people best practices when discussing contentious topics," Eady said.

Hansika Kapoor, research author at the department of psychology, Monk Prayogshala in Mumbai, an independent not-for-profit academic research institute, said, "(The study) provides a proof-of-concept for using LLMs in this manner, with well-specified prompts, that can generate mutually exclusive stimuli in an experiment that compares two or more groups."

Nearly 3,000 participants -- who identified as Republicans or Democrats in the US and Conservative or Labour supporters in the UK -- were asked to write a text describing and justifying their stance on a political issue important to them, as they would for a social media post. This was countered by ChatGPT -- a "fictitious social media user" from the participants' perspective -- which tailored its argument "on the fly" according to the text's position and reasoning. The participants then responded as if replying to a social media comment.

"An evidence-based counterargument (relative to an emotion-based response) increases the probability of eliciting a high-quality response by six percentage points, indicating willingness to compromise by five percentage points, and being respectful by nine percentage points," the authors wrote in the study. Eady said, "Essentially, what you give in a political discussion is what you get: that if you show your willingness to compromise, others will do the same; that when you engage in reason-based arguments, others will do the same; etc."

AI-powered models have been critiqued and scrutinised for varied reasons, including inherent bias -- political, and even racial at times -- and for being a "black box", whereby the internal processes used to arrive at a result cannot be traced. Kapoor, who is not involved with the study, said that while the approach appears promising, complete reliance on AI systems for regulating online discourse may not be advisable yet. The study itself relied on humans to rate responses as well, she said. Additionally, context, culture and timing would need to be considered for such regulation, she added. Eady, too, is apprehensive about "using LLMs to regulate online political discussions in more heavy-handed ways."
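To make the mechanism concrete, here is a minimal sketch of how a polite, evidence-based counterargument could be generated with the OpenAI Python client. The model name, prompt wording and the counterargument function are illustrative assumptions, not the study's actual materials.

```python
# Sketch: generate an evidence-based counterargument to a political post.
# Assumes OPENAI_API_KEY is set in the environment; model choice is arbitrary.
from openai import OpenAI

client = OpenAI()

def counterargument(post: str) -> str:
    """Return a respectful, evidence-based reply to a political post."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are replying to a social media post you disagree with. "
                    "Write a respectful, evidence-based counterargument: cite "
                    "verifiable facts, acknowledge the poster's concerns, avoid "
                    "emotional language, and signal willingness to compromise."
                ),
            },
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content

print(counterargument("Raising the minimum wage always destroys jobs."))
```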
Further, the study authors acknowledged that because the US and UK are effectively two-party systems, addressing the "partisan" nature of texts and responses was straightforward. Eady added, "The ability for LLMs to moderate discussion might also vary substantially across cultures and languages, such as in India. Personally, therefore, I am in favour of providing tools and information that enable people to engage in better conversations, but nevertheless, for all its (LLMs') flaws, allowing nearly as open a political forum as possible."

Kapoor said, "In the Indian context, this strategy may require some trial-and-error, particularly because of the numerous political affiliations in the nation. Therefore, there may be multiple variables and different issues (including food politics) that will need to be contextualised for study here."

Another study, recently published in the Humanities and Social Sciences Communications journal, found that dark personality traits -- such as psychopathy and narcissism -- a fear of missing out (FoMO) and cognitive ability can shape online political engagement. Findings from researchers at Singapore's Nanyang Technological University suggest that "those with both high psychopathy (manipulative, self-serving behaviour) and low cognitive ability are the most actively involved in online political engagement." Data from the US and seven Asian countries, including China, Indonesia and Malaysia, were analysed.

Describing the study as "interesting", Kapoor pointed out that much more work needs to be done in India to understand the factors that drive online political participation, ranging from personality to attitudes, beliefs and aspects such as voting behaviour. Her team, which has developed a scale to measure one's political ideology in India (published in a preprint paper), found that dark personality traits were associated with a disregard for norms and hierarchies.

Is ChatGPT making us outsource thinking?

Hans India · 2 hours ago

Back in 2008, The Atlantic sparked controversy with a provocative cover story: Is Google Making Us Stupid? In that 4,000-word essay, later expanded into a book, author Nicholas Carr suggested the answer was yes, arguing that technologies such as search engines were worsening Americans' ability to think deeply and retain knowledge. At the core of Carr's concern was the idea that people no longer needed to remember or learn facts when they could instantly look them up online. While there might be some truth to this, search engines still require users to apply critical thinking to interpret and contextualise the results.

Fast-forward to today, and an even more profound technological shift is taking place. With the rise of generative AI tools such as ChatGPT, internet users aren't just outsourcing memory – they may be outsourcing thinking itself. Generative AI tools don't just retrieve information; they can create, analyse and summarise it. This represents a fundamental shift: Arguably, generative AI is the first technology that could replace human thinking and creativity. That raises a critical question: Is ChatGPT making us stupid?

As a professor of information systems who's been working with AI for more than two decades, I've watched this transformation firsthand. And as many people increasingly delegate cognitive tasks to AI, I think it's worth considering what exactly we're gaining and what we are at risk of losing.

AI and the Dunning-Kruger effect

Generative AI is changing how people access and process information. For many, it's replacing the need to sift through sources, compare viewpoints and wrestle with ambiguity. Instead, AI delivers clear, polished answers within seconds. While those results may or may not be accurate, they are undeniably efficient. This has already led to big changes in how we work and think.

But this convenience may come at a cost. When people rely on AI to complete tasks and think for them, they may be weakening their ability to think critically, solve complex problems and engage deeply with information. Although research on this point is limited, passively consuming AI-generated content may discourage intellectual curiosity, reduce attention spans and create a dependency that limits long-term cognitive development.

To better understand this risk, consider the Dunning-Kruger effect. This is the phenomenon in which the least knowledgeable and competent people tend to be the most confident in their abilities, because they don't know what they don't know. In contrast, more competent people tend to be less confident, often because they can recognise the complexities they have yet to master.

This framework can be applied to generative AI use. Some users may rely heavily on tools such as ChatGPT to replace their cognitive effort, while others use them to enhance their capabilities. In the former case, users may mistakenly believe they understand a topic because they can repeat AI-generated content. In this way, AI can artificially inflate one's perceived intelligence while actually reducing cognitive effort.

This creates a divide in how people use AI. Some remain stuck on the "peak of Mount Stupid," using AI as a substitute for creativity and thinking. Others use it to enhance their existing cognitive capabilities. In other words, what matters isn't whether a person uses generative AI, but how. If used uncritically, ChatGPT can lead to intellectual complacency.
Users may accept its output without questioning assumptions, seeking alternative viewpoints or conducting deeper analysis. But when used as an aid, it can become a powerful tool for stimulating curiosity, generating ideas, clarifying complex topics and provoking intellectual dialogue. The difference between ChatGPT making us stupid and ChatGPT enhancing our capabilities rests in how we use it. Generative AI should be used to augment human intelligence, not replace it. That means using ChatGPT to support inquiry, not to shortcut it. It means treating AI responses as the beginning of thought, not the end.

AI, thinking and the future of work

The mass adoption of generative AI, led by the explosive rise of ChatGPT – it reached 100 million users within two months of its release – has, in my view, left internet users at a crossroads. One path leads to intellectual decline: a world where we let AI do the thinking for us. The other offers an opportunity: to expand our brainpower by working in tandem with AI, leveraging its power to enhance our own.

It's often said that AI won't take your job, but someone using AI will. It seems clear to me, though, that people who use AI to replace their own cognitive abilities will be stuck at the peak of Mount Stupid. These AI users will be the easiest to replace. It's those who take the augmented approach who will reach the path of enlightenment, working together with AI to produce results that neither is capable of producing alone. This is where the future of work will eventually go.

This essay started with the question of whether ChatGPT will make us stupid, but I'd like to end with a different one: How will we use ChatGPT to make us smarter? The answers to both questions depend not on the tool but on its users. (The Conversation)
