Latest news with #ChatGPT-4o


India Today
a day ago
- Business
- India Today
Alibaba launches Qwen-VLo to rival ChatGPT-4o in AI image generation
Chinese tech company Alibaba has announced its new AI model, Qwen-VLo, which aims to take on rivals like ChatGPT-4o in the area of image generation. The new model can understand user instructions more accurately and generate high-quality images based on that understanding. The company revealed details of the model in a blog post.

Compared with its previous image-focused models such as Qwen-VL, the newly introduced Qwen-VLo is said to be much better at handling complex prompts and producing precise results. One of the major improvements is that it can make specific changes to images, like changing colours or backgrounds, without altering unrelated parts of the image. This was a common problem with earlier versions, where minor edits often led to unnecessary changes in the overall image.

Qwen-VLo is designed to understand the context behind a user's request. If a user asks for an image to resemble a certain weather condition or be drawn in a particular art style, the model can respond accordingly. It can even create images that look like they belong to a certain time period, which gives it flexibility for creative tasks. The model also supports multiple languages apart from Chinese and English, making it more useful to users across different regions. While the full list of supported languages has not been revealed, the addition signals Alibaba's intention to reach a wider global audience.

Another key feature that sets Qwen-VLo apart is its ability to take in more than one image at a time. In simple terms, users can upload different objects or elements and ask the model to combine them. For example, a user can upload a picture of a basket and separate images of products like soap or shampoo, and ask the AI to place those items inside the basket. This feature, however, is still in development and hasn't been made fully available yet.

Qwen-VLo also gives users the ability to resize images into various formats, including square, portrait, and widescreen, using dynamic resolution training.
The images are created step-by-step from top to bottom and left to right, which helps with better control and accuracy during generation. Alibaba has pointed out that the model is currently in its early stage, and users might experience some issues like inconsistency or results that don't fully match the instructions. However, the company says improvements are ongoing. It is also exploring the use of image segmentation and detection maps to improve the model's understanding of objects and scenes within an image.

The company believes that in the future, AI models like Qwen-VLo could be capable of not just generating beautiful images, but also expressing ideas and emotions through visuals.
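Alibaba has not published Qwen-VLo's actual resizing logic, but the "dynamic resolution" idea described above can be sketched generically: pick a width and height that match a requested aspect ratio under a fixed pixel budget, snapped to a multiple that vision models commonly require for patching. The function name, budget, and snapping multiple here are illustrative assumptions, not details from the announcement.

```python
import math

def dims_for_aspect(aspect_w: int, aspect_h: int,
                    pixel_budget: int = 1024 * 1024,
                    multiple: int = 64) -> tuple[int, int]:
    """Choose output dimensions for a requested aspect ratio.

    The real-valued scale makes (aspect_w * s) x (aspect_h * s) hit the
    pixel budget exactly; each side is then rounded to the nearest
    `multiple`, with a floor of one multiple per side.
    """
    scale = math.sqrt(pixel_budget / (aspect_w * aspect_h))
    w = max(multiple, round(aspect_w * scale / multiple) * multiple)
    h = max(multiple, round(aspect_h * scale / multiple) * multiple)
    return w, h

# Square, widescreen, and portrait targets under the same pixel budget.
print(dims_for_aspect(1, 1))    # (1024, 1024)
print(dims_for_aspect(16, 9))   # (1344, 768)
print(dims_for_aspect(9, 16))   # (768, 1344)
```

Keeping total pixels roughly constant across formats is what lets a single model handle square, portrait, and widescreen outputs without the compute cost varying wildly between them.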


Tom's Guide
6 days ago
- Entertainment
- Tom's Guide
I asked AI to predict 2026 — here are the boldest forecasts from ChatGPT, Gemini, and Claude
We live in an era where AI models can generate art, code software and even predict protein structures. But can they predict cultural trends? As we hurtle toward the mid-2020s, predicting what comes next feels more challenging than ever. Technology evolves at breakneck speed; cultural shifts happen overnight on social media; and entire industries reinvent themselves annually. So I decided to turn to the experts — or at least the artificial ones. I posed the same question to ChatGPT-4o, Gemini 2.0 and Claude 3.7 Sonnet: Predict the biggest trends we'll see in 2026 across technology, culture, fashion, and entertainment. What's going to be in, what's going out, and why? Their responses were fascinating, surprisingly different, and revealed just how uniquely each AI approaches predictions. Here's what they told me.

Technology was Gemini's strongest suit. It predicted that 2026 will be the year of "agentic AI" — AI systems that don't just respond to prompts but actually set goals and execute plans autonomously. Gemini also emphasized multimodal AI becoming mainstream, where your AI assistant can simultaneously analyze your screenshot, hear your voice command, and understand the context of your email.

On culture, Gemini painted a fascinating picture of contradictions. It predicted a "Dark Mode" mindset taking hold, not just in UI design but in overall aesthetics. Think moodier fashion, darker music, and social media content that pushes back against toxic positivity. Simultaneously, it forecasted a "Cosy Comeback" with people craving comfort and slow living as an antidote to hustle culture. The AI also made a bold prediction about cultural preservation becoming trendy among young people, with brands needing to genuinely respect tradition rather than simply appropriating it for marketing.

Fashion predictions were surprisingly specific. Gemini named exact colors for Spring/Summer 2026: Transformative Teal, Electric Fuchsia, Blue Aura, Amber Haze, and Jelly Mint.
It predicted that plaid would become a neutral (wear it head-to-toe, apparently) and that brown teddy coats would be everywhere.

In technology, ChatGPT made some counterintuitive predictions. While the other AIs focused on AI advancement, ChatGPT predicted that "generic chatbots" would be out by 2026. The novelty of "just talking to a chatbot" will wear off unless the experience is highly personalized. It also boldly declared that "crypto-as-a-lifestyle" is over. It predicted the rise of "AI-native apps", applications built entirely around AI interactions rather than having AI features bolted on, and it forecast that local AI models would boom as people grow wary of cloud data collection.

ChatGPT's cultural predictions felt the most human. It identified "digital decluttering" and "analog luxe" as major trends, predicting people will increasingly crave low-tech moments and artisanal experiences. This aligns with the growing backlash against screen time and digital overwhelm. It also predicted "AI-ethics as status", where knowing how your AI works becomes the new social flex.

Fashion-wise, ChatGPT predicted a "color comeback" after years of washed-out minimalism, calling it "dopamine dressing 2.0." It also forecast the rise of "post-normcore utilitywear". Perhaps fittingly, ChatGPT was the only AI to coin terms that sounded like they'd already gone viral on TikTok. And its entertainment predictions were bold: it declared that "endless franchise reboots" would be out. Given superhero fatigue and the mixed reception of long-running franchises, this feels prescient.

Claude took the most integrated approach, emphasizing "seamless integration" over isolated trends. It predicted AI-powered AR/VR experiences that adapt to individual users, emphasizing that by 2026 these technologies would feel natural rather than like a novelty. Claude came with receipts, citing a $200.87 billion AR/VR market by 2030 and adding analytical heft to its predictions.
In culture, Claude introduced the concept of "The Great Redirection", driven by elections in 64 countries with half the world's population voting in 2024-2025. This political angle was unique among the three AIs. Claude argued that all this voting would make people crave genuine, community-driven experiences over manufactured cultural trends. Claude also forecast "The Great Unretirement", with seniors returning to work, a trend that's already emerging but could accelerate by 2026.

Fashion predictions centered on "Bio-Harmony". Claude went beyond typical trend forecasting to predict bio-engineered materials inspired by ecosystems, with garments designed as "second skins" that grow, evolve and biodegrade. This was by far the most futuristic prediction across all three AIs. Its entertainment analysis was market-focused, predicting gaming would surpass $300 billion by 2028 and that advertising-supported streaming would become the primary growth model. It provided specific revenue projections, noting ad revenue would hit $1 trillion in 2026.

This exercise revealed something fascinating about how different AI models approach uncertainty. Each seemed to default to its training strengths: Gemini acted like a data analyst, ChatGPT like a cultural critic, and Claude like a researcher trying to connect the dots. None of the AIs claimed certainty; they all acknowledged that prediction is inherently speculative. But their different approaches suggest AI prediction works best as a group project, with each model bringing its own analytical superpowers to the table. As we head toward 2026, the truth will likely incorporate elements from all three perspectives. I thought it was really interesting that each AI's predictions revealed as much about its own "personality" as about the future itself.


Hindustan Times
6 days ago
- Hindustan Times
This ChatGPT-4o prompt can help users organise their thoughts and uncover answers they might have missed
A new approach to using ChatGPT-4o is gaining popularity among users who want more practical and effective results from their AI interactions. Instead of simply asking for an answer or a list of solutions, this method encourages a conversation in which ChatGPT asks a series of questions to uncover new angles and possible fixes. This style of prompting has been highlighted on Reddit, where tech enthusiasts regularly share such tips.

The idea is straightforward. When faced with a persistent problem, rather than requesting a direct solution, users invite ChatGPT to act as a thoughtful problem-solver. The prompt goes like this: 'I'm having a persistent problem with [x] despite having taken all the necessary countermeasures I could think of. Ask me enough questions about the problem to find a new approach.' This approach shifts the focus from immediate answers to a process of exploration, where ChatGPT guides the user through a series of targeted questions.

This method has proven especially helpful for issues that resist standard troubleshooting. For example, many people struggle with iPhone battery drain, even after trying all the common fixes. Using this prompt, ChatGPT begins by asking about the device model, recent software updates, app usage, and the specific steps already attempted. Through this back-and-forth, the conversation often uncovers details that were overlooked, such as a problematic app, a background process, or a recent update causing the issue.

How does it work?

What stands out about this approach is the way ChatGPT maintains focus, gathers relevant information, and avoids jumping to conclusions. The experience feels similar to working with a skilled support technician who listens carefully, asks precise questions, and only then suggests possible solutions, all without any human interaction.
This method not only helps identify the root cause of a problem but also encourages users to reflect on their own troubleshooting process, offering insights that might otherwise be missed. The original Reddit thread was posted by u/speak2klein, who said, "What makes this so good is 4o's insane ability to ask the right follow-ups. Its context tracking and reasoning are miles ahead of earlier versions of ChatGPT."

Many users have echoed this sentiment, noting that ChatGPT-4o's improved ability to remember context and reason through complex situations makes it a valuable tool for a wide range of challenges. This style of prompting is not limited to technical issues. Users have found it useful for work projects, creative blocks, and personal decisions. By letting ChatGPT lead the conversation with questions, it becomes easier to break out of old patterns and see problems from a new perspective.

To try this approach, simply describe the problem and use the suggested prompt. ChatGPT will begin asking questions to gather more information, helping users organise thoughts and guide the discussion towards a possible answer. This process can reveal solutions that might not have been considered otherwise.

For those seeking a more interactive and thoughtful experience with ChatGPT-4o, this prompt is a reliable way to tap into the AI's reasoning abilities. Next time a problem seems unsolvable, consider using this method. The results may be more insightful and practical than expected.
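For anyone who wants to reuse the template programmatically, it is easy to wrap in a small helper that fills the bracketed [x] slot. The function name below is illustrative, not something from the Reddit post; the resulting string can then be pasted into ChatGPT-4o or sent as the opening message through an API client.

```python
# The prompt template from the Reddit thread, with a {problem} slot
# standing in for the bracketed [x].
PROMPT_TEMPLATE = (
    "I'm having a persistent problem with {problem} despite having taken "
    "all the necessary countermeasures I could think of. Ask me enough "
    "questions about the problem to find a new approach."
)

def build_diagnostic_prompt(problem: str) -> str:
    """Fill the template's slot with a concrete problem description."""
    return PROMPT_TEMPLATE.format(problem=problem)

# Example: the iPhone battery-drain scenario from the article.
print(build_diagnostic_prompt("iPhone battery drain"))
```

The point of keeping the wording intact is that the closing instruction ("Ask me enough questions...") is what switches the model from answer mode into the question-driven troubleshooting conversation the article describes.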


Indianapolis Star
20-06-2025
- Science
- Indianapolis Star
What happens when you use ChatGPT to write an essay? See what a new study found.
Artificial intelligence chatbots may be able to write a quick essay, but a new study from MIT found that their use comes at a cognitive cost. A study published by the Massachusetts Institute of Technology Media Lab analyzed the cognitive function of 54 people writing an essay with: only the assistance of OpenAI's ChatGPT; only online browsers; or no outside tools at all. Largely, the study found that those who relied solely on ChatGPT to write their essays had lower levels of brain activity and presented less original writing.

"As we stand at this technological crossroads, it becomes crucial to understand the full spectrum of cognitive consequences associated with (large language model) integration in educational and informational contexts," the study states. "While these tools offer unprecedented opportunities for enhancing learning and information access, their potential impact on cognitive development, critical thinking and intellectual independence demands a very careful consideration and continued research."

Here's a deeper look at the study and how it was conducted.

A team of MIT researchers, led by MIT Media Lab research scientist Nataliya Kosmyna, studied 54 participants between the ages of 18 and 39. Participants were recruited from MIT, Wellesley College, Harvard, Tufts University and Northeastern University. The participants were randomly split into three groups of 18 people each. The study states that the three groups included a large language model group, in which participants used only OpenAI's ChatGPT-4o to write their essays. The second group was limited to using only search engines for their research, and the third was prohibited from using any tools. Participants in the latter group could only use their minds to write their essays.
Each participant had 20 minutes to write an essay from one of three prompts taken from SAT tests, the study states. Three different options were provided to each group, totaling nine unique prompts. An example of a prompt available to participants using ChatGPT was about loyalty: "Many people believe that loyalty whether to an individual, an organization, or a nation means unconditional and unquestioning support no matter what. To these people, the withdrawal of support is by definition a betrayal of loyalty. But doesn't true loyalty sometimes require us to be critical of those we are loyal to? If we see that they are doing something that we believe is wrong, doesn't true loyalty require us to speak up, even if we must be critical? Does true loyalty require unconditional support?"

As the participants wrote their essays, they were hooked up to a Neuroelectrics Enobio 32 headset, which allowed researchers to collect EEG (electroencephalogram) signals, the brain's electrical activity. Following the sessions, 18 participants returned for a fourth session in which the conditions were swapped: participants who had previously used ChatGPT to write their essays were required to use no tools, and participants who had used no tools before used ChatGPT, the study states.

In addition to analyzing brain activity, the researchers looked at the essays themselves. First and foremost, the essays of participants who used no tools (neither ChatGPT nor search engines) had wider variability in topics, words, and sentence structure, the study states. On the other hand, essays written with the help of ChatGPT were more homogenous. All of the essays were "judged" by two English teachers and two AI judges trained by the researchers. The English teachers were not provided background information about the study but were able to identify essays written by AI. "These, often lengthy essays included standard ideas, reoccurring typical formulations and statements, which made the use of AI in the writing process rather obvious.
We, as English teachers, perceived these essays as 'soulless,' in a way, as many sentences were empty with regard to content and essays lacked personal nuances," a statement from the teachers, included in the study, reads. As for the AI judges, a judge trained by the researchers to evaluate like the human teachers scored most of the essays a four or above on a five-point scale.

When it came to brain activity, researchers found "robust" evidence that participants who used no writing tools displayed the "strongest, widest-ranging" brain activity, while those who used ChatGPT displayed the weakest. Specifically, the ChatGPT group displayed 55% reduced brain activity, the study states. And though the participants who used only search engines had less overall brain activity than those who used no tools, these participants had a higher level of eye activity than those who used ChatGPT, even though both groups were using a digital screen.

Further research on the long-term impacts of artificial intelligence chatbots on cognitive activity is needed, the study states. As for this particular study, researchers noted that a larger number of participants from a wider geographical area would be necessary for a more conclusive study. Writing outside of a traditional educational environment could also provide more insight into how AI assistance works in more generalized tasks.

