
Latest news with #4o

Adobe Wants You To Use Firefly AI To Complete An Artist's Unfinished Film

Yahoo

17 hours ago

  • Entertainment
  • Yahoo

Adobe Wants You To Use Firefly AI To Complete An Artist's Unfinished Film

Cars floating in the sky. Upside-down skyscrapers that remind you of Christopher Nolan's "Inception." Confused protagonists standing on a ledge, running in the woods, driving an off-road bike, or piloting a fishing boat inside a flooded museum. None of it makes sense, and all of it does, because it's art imagined by director and AI artist Sam Finn, who partnered with Adobe to make a video under two minutes long titled "The Unfinished Film."

Created with Adobe's Firefly suite of AI tools, which has received significant upgrades in recent months, "The Unfinished Film" is "more than just a creative experiment," according to a blog post from Adobe. "It's a community storytelling project designed to celebrate creative freedom and collaboration. We started with a simple question: What happens when you hand off an idea—not a finished product—to the creative community and invite them to take it further?"

"The Unfinished Film" could be a great marketing tool for Adobe, as the company is inviting creators to come up with their own versions of it. AI image and video generators have gone viral more than once, with ChatGPT's 4o image generator, Google's Veo 3, and Higgsfield Soul being good examples. Adobe Firefly could benefit from the same popularity among creators. Specifically, Adobe wants them to use the AI image, video, and audio tools available in Firefly to complete "The Unfinished Film." The initiative is meant to showcase the growing abilities of the Firefly AI suite to help creators put together anything they imagine with incredible ease.

Read more: Photoshop For Android Launches In Beta With Built-In Firefly AI

What Can Adobe Firefly AI Do?

Adobe explains in the blog post that it spent time with creators involved in the video-making process, including editors, filmmakers, and creative teams, to understand what they want. The company says it learned they want amplification, or "tools that spark new ideas, speed up workflows, and preserve creative control," rather than the automation that AI tools can offer.

Firefly is a collection of AI tools available on desktop and mobile that Adobe continuously updates. Earlier this year, Adobe launched Firefly apps for iPhone and Android that complement the Firefly desktop experience and add notable third-party AI models to the app, including Google's Veo 3 and Imagen 4, and Runway's Gen-4 video generator. Firefly Boards supports moodboarding and ideation, so the entire creative process can take place inside the app.

Whether you use the Firefly AI models or rely on third-party options, Firefly lets you do everything in one place. You can generate images and make AI videos, controlling everything about the scene and camera with simple text prompts. Firefly also supports audio and effects generation for your productions.

All AI content created with the Firefly apps contains a Content Credentials watermark indicating that a specific piece of art was developed with the help of AI. Visible watermarks would be even better, but they aren't suitable for all projects. What's also important is that Adobe doesn't use your creations or upload your data to train its generative models.

Change The Unfinished Film Any Way You Want

Creators excited about the opportunity to edit "The Unfinished Film" with Firefly AI can download it and do whatever they want with it. Adobe encourages them to remix and reshape it and then share it on social media with the #AdobeFirefly hashtag.
Unfortunately, there's no contest attached, which would have made it even easier for this Adobe idea to go viral. The company's social channels, including Instagram, feature four versions of "The Unfinished Film" from storytellers who already use AI to bring their concepts to life: Noémie Pino, Phil Cohen, Jad Kassis, and Keenan Lam. Some of them kept Finn's narrative structure and modified it by adding their own perspective and ideas. Others recut Finn's film and used only a few sequences as inspiration.

Pino's version of "The Unfinished Film" stood out to me. The artist didn't jump straight into Firefly to get the AI image and video generation tools working for her. She storyboarded her ideas and then created her own protagonist out of clay. Pino took photos of the real-life objects, edited them with Adobe's tools, and then used the images to direct the AI. She then inserted the animated version of her clay character into Finn's "Unfinished Film," turning it into her own story.

Also interesting are the short behind-the-scenes clips these four creators made to show how effortless it is to use Firefly AI to bring your ideas to life immediately, without waiting for someone to approve a budget, create special video effects, or manage a production set. Lam's BTS tutorial for "The Unfinished Film," which you'll see on social media, stands out for its creativity.

Read the original article on BGR.

This ChatGPT-4o prompt can help users organise their thoughts and uncover answers they might have missed

Hindustan Times

26-06-2025

  • Hindustan Times

This ChatGPT-4o prompt can help users organise their thoughts and uncover answers they might have missed

A new approach to using ChatGPT-4o is gaining popularity among users who want more practical and effective results from their AI interactions. Instead of simply asking for an answer or a list of solutions, this method encourages a conversation in which ChatGPT asks a series of questions to uncover new angles and possible fixes. The style of prompting has been highlighted on Reddit, where tech enthusiasts regularly share such tips.

A simple prompt lets ChatGPT-4o ask questions, helping users find new solutions for tech, work, and everyday problems. (Unsplash)

The idea is straightforward. When faced with a persistent problem, rather than requesting a direct solution, users invite ChatGPT to act as a thoughtful problem-solver. The prompt goes like this: 'I'm having a persistent problem with [x] despite having taken all the necessary countermeasures I could think of. Ask me enough questions about the problem to find a new approach.' This shifts the focus from immediate answers to a process of exploration, where ChatGPT guides the user through a series of targeted questions.

This method has proven especially helpful for issues that resist standard troubleshooting. For example, many people struggle with iPhone battery drain even after trying all the common fixes. Using this prompt, ChatGPT begins by asking about the device model, recent software updates, app usage, and the specific steps already attempted. Through this back-and-forth, the conversation often uncovers details that were overlooked, such as a problematic app, a background process, or a recent update causing the issue.

How does it work?

What stands out about this approach is the way ChatGPT maintains focus, gathers relevant information, and avoids jumping to conclusions. The experience feels similar to working with a skilled support technician who listens carefully, asks precise questions, and only then suggests possible solutions, all without any human interaction. The method not only helps identify the root cause of a problem but also encourages users to reflect on their own troubleshooting process, offering insights that might otherwise be missed.

The original Reddit thread was posted by u/speak2klein, who said, "What makes this so good is 4o's insane ability to ask the right follow-ups. Its context tracking and reasoning are miles ahead of earlier versions of ChatGPT." Many users have echoed this sentiment, noting that ChatGPT-4o's improved ability to remember context and reason through complex situations makes it a valuable tool for a wide range of challenges.

This style of prompting is not limited to technical issues. Users have found it useful for work projects, creative blocks, and personal decisions. By letting ChatGPT lead the conversation with questions, it becomes easier to break out of old patterns and see problems from a new perspective.

To try this approach, simply describe the problem and use the suggested prompt. ChatGPT will begin asking questions to gather more information, helping users organise their thoughts and guide the discussion towards a possible answer. The process can reveal solutions that might not have been considered otherwise. For those seeking a more interactive and thoughtful experience with ChatGPT-4o, this prompt is a reliable way to tap into the AI's reasoning abilities. Next time a problem seems unsolvable, consider using this method. The results may be more insightful and practical than expected.
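
The article describes using this prompt in the ChatGPT app, but the same question-first pattern can also be scripted. The following is a minimal sketch, assuming the official openai Python SDK, an OPENAI_API_KEY environment variable, and access to the gpt-4o model; the helper name and the example problem are illustrative, not part of the original article.

# Minimal sketch of the "ask me questions first" prompt pattern via the OpenAI API.
# Assumes the official openai Python SDK (pip install openai) and an OPENAI_API_KEY
# in the environment; the problem text below is just an illustrative placeholder.
from openai import OpenAI

client = OpenAI()

PROMPT_TEMPLATE = (
    "I'm having a persistent problem with {problem} despite having taken all the "
    "necessary countermeasures I could think of. "
    "Ask me enough questions about the problem to find a new approach."
)

def start_diagnostic_chat(problem: str) -> list[dict]:
    """Send the question-first prompt and return the running message history."""
    messages = [{"role": "user", "content": PROMPT_TEMPLATE.format(problem=problem)}]
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})
    return messages

if __name__ == "__main__":
    history = start_diagnostic_chat("iPhone battery drain")
    # The model's first turn should be clarifying questions, not a fix.
    print(history[-1]["content"])

From there, each of your answers gets appended to the message history and sent back, so the model keeps narrowing down the cause across turns, much like the back-and-forth the article describes.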

ChatGPT is holding back — these four prompts unlock its full potential

Tom's Guide

21-06-2025

  • Tom's Guide

ChatGPT is holding back — these four prompts unlock its full potential

ChatGPT can be such a useful tool. But it has a tendency to sometimes not put in its all. If you prompt it correctly, you can push ChatGPT to give a request that little bit of extra oomph and deliver a genuinely solid answer. This could be for a multi-step prompt, or simply when you want the AI chatbot to dig deep and really think through an answer. In my time using it, a few prompts have come up that I've found really push ChatGPT to go all out. These are my four favorite ChatGPT prompts for that exact task.

This one requires a bit of work, talking ChatGPT through a series of stages, but the end result is worth it. Of course, if you're just asking a simple question or looking into something simple, all of this work isn't needed. However, I have found that a bit of forward planning can get the model thinking harder. ChatGPT will respond to this saying that it is ready for your question. Ask your request and it will take its time thinking through the task. This prompt works best on one of the more advanced versions of ChatGPT, such as 4o. It will also work on other chatbots such as Claude 4 or Gemini.

Prompt: 'Debate with yourself on [insert topic]. For each side of the argument, quote sources and use any information available to you to form the argument. Take time before you start to prepare your arguments.'

ChatGPT can make a great debate partner, and it's even better when it is debating itself. By using this prompt, you'll get well-planned and considered arguments on both sides of a topic. This is especially useful when you're working on an essay or project that needs varied consideration. The model can debate any topic, but sometimes it will only touch on the surface. In that case, follow up with a prompt asking ChatGPT to think harder about its responses, forcing it to consider everything in more detail. A sketch of this two-step pattern follows below.

Prompt: 'Break down the history, current state, and future implications of [issue], using subheadings and citing credible sources.'

Instead of just getting a general overview of a subject, this will give you a detailed report examining the past, current state, and future of a topic. By asking for citations, ChatGPT will list all of the sources it has used to offer up the information in your report. You can go a step further by asking ChatGPT to use the internet to do this, providing links to any information it has used.

Prompt: 'List the step-by-step process for [task], noting common pitfalls and how to avoid each one.'

A simple but effective prompt for ChatGPT, this will not only give you the instructions for how to do something but also warn you of the mistakes that are often made at each stage. For example, when using this prompt for making focaccia, ChatGPT gave me instructions for stage 1 of mixing the dough, along with warnings about the temperature of the water and making sure to mix the dough enough. This is a step up from simply asking ChatGPT to explain how to do something, forcing it to carefully consider the best way to do it, especially if the task is complicated.
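
These prompts can also be reused outside the ChatGPT app. Below is a minimal sketch, assuming the official openai Python SDK and the gpt-4o model, that runs the debate prompt and then the follow-up "think harder" nudge mentioned above; the topic string and follow-up wording are illustrative.

# Minimal sketch: the "debate with yourself" prompt plus the follow-up nudge.
# Assumes the official openai Python SDK and an OPENAI_API_KEY in the environment;
# the topic string is just an example.
from openai import OpenAI

client = OpenAI()

topic = "remote work versus office work"
messages = [{
    "role": "user",
    "content": (
        f"Debate with yourself on {topic}. For each side of the argument, quote "
        "sources and use any information available to you to form the argument. "
        "Take time before you start to prepare your arguments."
    ),
}]

first = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Follow-up nudge for when the first answer only scratches the surface.
messages.append({
    "role": "user",
    "content": "Think harder about your responses and consider each point in more detail.",
})
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)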

One of Europe's top AI researchers raised a $13M seed to crack the ‘holy grail' of models

Yahoo

27-05-2025

  • Business
  • Yahoo

One of Europe's top AI researchers raised a $13M seed to crack the ‘holy grail' of models

From OpenAI's 4o to Stable Diffusion, AI foundation models that create realistic images from a text prompt are now plentiful. In contrast, foundation models capable of generating full, coherent 3D online environments from a text prompt are only just emerging. Still, it's only a question of when, not if, these models will become readily available.

Now one of Europe's most prominent AI 3D model researchers, Matthias Niessner, has taken an entrepreneurial leave of absence from his visual computing and AI lab at the Technical University of Munich to found a startup working in the area: SpAItial. A cofounder of Synthesia, the realistic AI avatar startup valued at $2.1 billion, Niessner has raised an unusually large seed round for a European startup: $13 million. The round was led by Earlybird Venture Capital, a prominent European early-stage investor (backers of UiPath and Peak Games, for instance), with participation from Speedinvest and several high-profile angels.

That round size is even more impressive considering that SpAItial doesn't have much to show the world yet other than a recently released teaser video showing how a text prompt could generate a 3D room. But then there's the technical team Niessner has assembled: Ricardo Martin-Brualla, who previously worked on Google's 3D teleconferencing platform, now called Beam; and David Novotny, who spent six years at Meta, where he led the company's text-to-3D asset generation project.

Their collective expertise will give them a fighting chance in a space that already includes competitors with a similar focus on photorealism. There's Odyssey, which raised $27 million and is going after entertainment use cases. And there's World Labs, the startup founded by AI pioneer Fei-Fei Li and already valued at over $1 billion. Niessner thinks this is still little competition compared to what exists for other types of foundation models, but also in regard to 'the bigger vision' he and others are pursuing. 'I don't just want to have a 3D world. I also want this world to behave like the real world. I want it to be interactable and [let you] do stuff in it, and nobody has really cracked that yet,' he said.

Nobody has really cracked what the demand for photorealistic 3D environments might be, either. The promise of a 'trillion-dollar' opportunity ranging from digital twins to augmented reality seems big enough to excite VCs, but it is also vague and multifaceted enough to make a go-to-market strategy hard to figure out. The most obvious use case is video game creation, but these models could also have applications in entertainment, in 3D visualizations used in construction, and eventually in the real world for areas like robotics training.

Niessner is hoping to bypass that issue by having developers license the foundation model and come up with downstream applications for specific uses. He has also enlisted a fourth cofounder, former Cazoo executive Luke Rogers, once his roommate in Palo Alto while he was a visiting assistant professor at Stanford, to help him on the business side. One of the first tasks on SpAItial's roadmap will be to identify partners that can work with earlier models, versus those that would have to wait for higher quality. 'We want to at least work with a few partners,' Niessner said, 'and see how they can use the APIs.'

Compared to other well-funded AI startups, SpAItial is putting revenue higher up on its agenda. But first, it will have to spend some, both on compute and on hiring.
For the latter, its focus is on quality, not quantity. According to Niessner, 'the team is not going to grow to hundreds of people right away; it's just not happening, and we don't need that.' Instead, Niessner and his cofounders are working on generating larger and more interactive 3D spaces, where, for example, a glass can shatter realistically.

This would unlock what Niessner refers to as the 'holy grail': that a 10-year-old could type in some text and make their own video game in 10 minutes. In his view, this ambitious goal is actually more achievable than what might seem like the low-hanging fruit — letting users create 3D objects — since most gaming platforms still tightly control what third parties can add. That is, of course, unless they decide to build it themselves, as Roblox might. But by then, SpAItial might be busy replacing CAD instead; the next chapter in 3D generation is only beginning.

This article originally appeared on TechCrunch.

Which ChatGPT model is best? A guide on which model to use for coding, writing, reasoning, and more.

Business Insider

18-05-2025

  • Business
  • Business Insider

Which ChatGPT model is best? A guide on which model to use for coding, writing, reasoning, and more.

ChatGPT isn't a monolith. Since OpenAI first released the buzzy chatbot in 2022, it has rolled out what seems like a new model every few months, using a confusing panoply of names. A number of OpenAI competitors have popular ChatGPT alternatives, like Claude, Gemini, and Perplexity. But OpenAI's models are among the most recognizable in the industry. Some are good for quantitative tasks, like coding. Others are best for brainstorming new ideas. If you're looking for a guide on which model to use and when, you're in the right place.

GPT-4 and GPT-4o

OpenAI first released GPT-4 in 2023 as its flagship large language model. CEO Sam Altman said in an April podcast that the model took "hundreds of people, almost all of OpenAI's effort" to build. It has since upgraded its flagship model to GPT-4o, first released last year. It's as intelligent as GPT-4 (which is capable of acing the SAT and the GRE and passing the bar) but is significantly faster and improves on its "capabilities across text, voice, and vision," OpenAI says. The "o" stands for omni.

4o can quickly translate speech and help with basic linear algebra, and it has the most advanced visual capabilities. Its Studio Ghibli-style images drummed up excitement online. However, it also raised copyright questions as critics argued that OpenAI is unfairly profiting off artists' content. OpenAI says 4o "excels at everyday tasks," such as brainstorming, summarizing, writing emails, and proofreading reports.

GPT-4.5

Altman described GPT-4.5 in a post on X as "the first model that feels like talking to a thoughtful person." It's the latest advancement in OpenAI's "unsupervised learning" paradigm, which focuses on scaling up models on "word knowledge, intuition, and reducing hallucinations," OpenAI technical staff member Amelia Glaese said during its unveiling in February. So, if you're having a difficult conversation with a colleague, GPT-4.5 might help you reframe those conversations in a more professional and tactful tone. OpenAI says GPT-4.5 is "ideal for creative tasks," like collaborative projects and brainstorming.

o1 and o1-mini

OpenAI released a mini version of o1, its reasoning model, in September last year and the full version in December. The company's researchers said it's the first model trained to "think" before it responds and is well suited for quantitative tasks, hence the moniker "reasoning model." That's a function of its training technique, known as chain-of-thought, which encourages models to reason through problems by breaking them down step by step. In a paper published on the model's safety training, the company said that "training models to incorporate a chain of thought before answering has the potential to unlock substantial benefits, while also increasing potential risks that stem from heightened intelligence."

In a video of an internal OpenAI presentation on the best use cases for o1, Joe Casson, a solutions engineer at OpenAI, demonstrated how o1-mini might prove useful for analyzing the maximum profit on a covered call, a financial trading strategy. Casson also showed how the preview version of o1 could help someone reason through an office expansion plan. OpenAI says o1's pro mode, a "version of o1 that uses more compute to think harder and provide even better answers to the hardest problems," is best for complex reasoning, like creating an algorithm for financial forecasting using theoretical models or generating a multi-page research summary on emerging technologies.
o3 and o3-mini

Small models have been gaining traction in the industry for a while now as a faster and more cost-efficient alternative to larger foundation models. OpenAI released its first small reasoning model, o3-mini, in January, just weeks after Chinese startup DeepSeek debuted its R1 model, which shocked Silicon Valley — and the markets — with its affordable pricing. OpenAI said o3-mini is the "most cost-efficient model" in its reasoning series. It's meant to handle complex questions, and OpenAI said it's particularly strong in science, math, and coding.

Julian Goldie, a social media influencer who focuses on SEO strategy, said in a post on Medium that o3 "shines in quick development tasks" and is ideal for basic programming tasks in HTML and CSS, simple JavaScript functions, and building quick prototypes. There's also a "mini high" version of the model that he said is better for "complex coding and logic," though it had a few control issues.

In April, OpenAI released a full version of o3, which it calls "our most powerful reasoning model that pushes the frontier across coding, math, science, visual perception, and more." OpenAI says o3 is best used for "complex or multi-step tasks," such as strategic planning, extensive coding, and advanced math.

o4-mini

OpenAI released another smaller model, o4-mini, in April. It said the model is "optimized for fast, cost-efficient reasoning" and achieves remarkable performance for its cost, especially in "math, coding, and visual tasks." It was the best-performing benchmarked model on the American Invitational Mathematics Examination in 2024 and 2025.

o4-mini, and its mini-high version, are great for fast and more straightforward reasoning. They're good at speeding up any quantitative reasoning tasks you encounter during your day. If you're looking for more in-depth work, opt for o3. Scott Swingle, a DeepMind alum and founder of AI-powered developer tools company Abante AI, tested the model with a Project Euler problem (Project Euler releases challenging computational problems every week or so). He said in a post on X that it solved the problem in 2 minutes and 55 seconds, "far faster than any human solver. Only 15 people were able to solve it in under 30 minutes."

OpenAI says o4-mini is best used for "fast technical tasks," like quick STEM-related queries. It says it's also ideal for visual reasoning, like extracting key data points from a CSV file or providing a quick summary of a scientific article.
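
For readers calling these models programmatically rather than through the ChatGPT app, the choice largely comes down to the model string passed to the API. Below is a minimal sketch assuming the official openai Python SDK and API access to the gpt-4o, o3-mini, and o3 model names; the task categories and routing map are only an illustration of the guide's rough groupings, not an official OpenAI recommendation.

# Minimal sketch: picking an OpenAI model name based on the kind of task,
# loosely following the guide above. Assumes the official openai Python SDK,
# an OPENAI_API_KEY in the environment, and API access to the listed models;
# the task categories and mapping are illustrative.
from openai import OpenAI

client = OpenAI()

# Task category -> model string, per the guide's rough groupings.
MODEL_FOR_TASK = {
    "everyday": "gpt-4o",         # brainstorming, summaries, emails, proofreading
    "fast_reasoning": "o3-mini",  # quick STEM questions, cheap quantitative work
    "deep_reasoning": "o3",       # multi-step planning, extensive coding, advanced math
}

def ask(task_type: str, question: str) -> str:
    """Route a question to a model suited to the task type and return the answer."""
    model = MODEL_FOR_TASK.get(task_type, "gpt-4o")  # default to the general model
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("everyday", "Draft a polite follow-up email about a late invoice."))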
