AI Movies Multiply, Reply Festival Returns, The Fastest AI Video Generator

Forbes | 09-05-2025
Here come the AI movies. The independent cinematic AI community made three project announcements this week. Promise dropped the trailer for its upcoming feature, NinjaPunk. AI Studio GraiL introduced its feature, Chimera Protocol. Runway released its AI animated series Mars and Siv. [Scroll down to see them.] Last week, Asteria Studios announced its production with Poker Face actress and creator Natasha Lyonne, who revealed she is more involved in the creation of Asteria and Moonvalley than previously reported.
Reply has announced the jury for the second edition of the Reply AI Film Festival, an international competition for short films created with AI tools. Italian director Gabriele Muccino (The Pursuit of Happyness) will chair the jury, joined by returning members Rob Minkoff (The Lion King), Shelby and Caleb Ward (Curious Refuge), and journalist Denise Negri. New jurors include Dave Clark (Promise Studios), Caroline Ingeborn (COO, Luma AI), Filippo Rizzante (CTO, Reply), Paolo Moroni (Lexus Italy), and Guillem Martinez Roura (ITU AI for Good). The 2025 theme is 'Generation of Emotions,' encouraging submissions that use AI to evoke cinematic feeling. Films are due by June 1 at aiff.reply.com.
Lightricks' new 13-billion-parameter AI video model generates high-quality video up to 30 times faster than comparable systems. I used the model recently and can attest to its astonishing speed. It's not even close, and the results are right up there with the top gen-video AI tools. The model is open source and free to license for companies making under $10 million annually, part of the company's push to make pro-grade generative tools widely accessible. Integrated into LTX Studio, the model uses a multiscale rendering approach, starting with low-res drafts before layering in detail and motion. It's optimized to run on consumer GPUs, distinguishing it from rivals that require enterprise hardware. The model includes advanced controls for camera movement, character action, and multi-shot editing. Importantly, Lightricks has also partnered with Getty and Shutterstock to ethically train its tools using licensed assets.
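The "multiscale rendering" Lightricks describes is, at heart, a coarse-to-fine pipeline: produce a cheap low-resolution draft of the clip first, then upsample and refine it in later passes. The sketch below only illustrates that general shape in plain NumPy; the draft and refine functions are hypothetical stand-ins, not the LTX or LTX Studio API.

```python
# Illustrative coarse-to-fine ("multiscale") rendering loop.
# A generic sketch of the idea, NOT the LTX / LTX Studio API.
import numpy as np

def draft_clip(frames: int, height: int, width: int) -> np.ndarray:
    """Stand-in for a fast low-resolution draft pass (hypothetical)."""
    rng = np.random.default_rng(0)
    return rng.random((frames, height, width, 3), dtype=np.float32)

def upsample(clip: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour spatial upsampling of a (T, H, W, C) clip."""
    return clip.repeat(factor, axis=1).repeat(factor, axis=2)

def refine(clip: np.ndarray, strength: float) -> np.ndarray:
    """Stand-in for a detail/motion refinement pass (hypothetical)."""
    detail = np.random.default_rng(1).normal(0.0, strength, clip.shape)
    return np.clip(clip + detail.astype(np.float32), 0.0, 1.0)

# Start with a cheap 16-frame draft at 64x64, then refine at larger scales.
clip = draft_clip(frames=16, height=64, width=64)
for factor, strength in [(2, 0.05), (2, 0.02)]:
    clip = refine(upsample(clip, factor), strength)
print(clip.shape)  # (16, 256, 256, 3) after two 2x refinement passes
```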
Dozens of YouTube Channels Are Showing AI-Generated Cartoon Gore and Fetish Content. A Wired investigation has uncovered a disturbing trend on YouTube. Dozens of channels are using generative AI to produce videos featuring cartoon characters like cats and minions in violent, gory, and sexualized scenarios. These videos, often labeled as child-friendly, echo the 2017 "Elsagate" scandal, where inappropriate content targeted at children exploited YouTube's algorithms for profit. The ease of creating such content with AI tools has led to a surge in these videos, complicating moderation efforts. YouTube has responded by terminating some channels and suspending monetization on others, but many remain active.
Two of my most important stories.
Charlie Fink
The first two stories in my AI and media series are live: AI and Hollywood's Next Golden Age looks at how generative tools are transforming production. Hollywood is Losing the War for Attention explores how traditional media lost its grip on culture to platforms like TikTok, Roblox, and Fortnite. These pieces are part of an ongoing attempt to make sense of disruption at this scale, in media and entertainment in particular, though it's important not to lose sight of the fact that this is a microcosm of what is happening across the entire economy. Next up, I'm working on Big Tech Owns Hollywood, a closer look at how Amazon, Apple, and Meta now control the platforms, the pipelines, and the audience. After that, Games Dwarf Movies will examine the new attention economy and why persistent, playable worlds are displacing passive viewing. A third piece, In the Age of AI, Celebrity Flourishes, will look at the resilience and reinvention of celebrity in synthetic culture. Each story is a step toward understanding both the coming golden age of Hollywood and the competition with algorithms programmed to engage and influence.
Promise Studios released NinjaPunk, a short sci-fi trailer for an upcoming feature. It was made using a mix of traditional production and AI-assisted techniques. Set in a neon-drenched Los Angeles in 2065, the film follows a cybernetic ninja on a revenge mission. Co-founder Dave Clark and VFX supervisor Rob Nederhorst combined live-action stunt capture with AI-generated environments to streamline production without sacrificing visual ambition.
Grail Studios' Davide Bianchi wrote, directed, and performed in this new teaser for an upcoming cinematic AI film, Chimera Protocol. 'What if I told you in exactly sixteen hours, the world ends?' asks the narrator. 'It wasn't just a bank job. What they were after was a suitcase. They call it the Genesis Drive, and once it's linked She stops calculating … and starts thinking.'
Runway Studios released the pilot episode of its first AI animated series, Mars and Siv, created by Jeremy Higgins and Britton Korbel. They're shopping this to streamers. It would be great for the ecosystem if someone legit picks it up.
Related Articles

Samsung Eyes OpenAI, Perplexity For S26 AI

Yahoo

Samsung (SSNLF) is widening its AI net by talking to OpenAI and Perplexity AI for Galaxy S26 integration beyond Google's Gemini. Galaxy's mobile boss Choi Won-Joon says they've held talks with multiple vendors and will back any AI agent that delivers top user experiences. Bloomberg reports Samsung was close to investing in Nvidia-backed Perplexity in June, and now it wants both Perplexity's assistant and OpenAI's models side-by-side with Gemini on next year's S26. Like Apple (NASDAQ:AAPL), Samsung aims to blend best-in-class AI from outside partners into its devices, giving customers choice rather than a single default. Why it matters: Offering a menu of AI assistants could set the Galaxy S26 apart in a market where differentiation on specs alone is tough. Investors will be watching vendor partnerships and any investment deals ahead of Samsung's upcoming earnings. This article first appeared on GuruFocus.

2 Artificial Intelligence (AI) Stocks That Are Still on Sale After the Tech Rally

Yahoo

Key Points
- Alphabet successfully integrated AI across its search, cloud, and YouTube businesses at unmatched scale.
- IBM quietly transformed itself from a legacy technology company into a prominent enterprise AI player.
- Both Alphabet and IBM are trading at significant valuation discounts to their peers.

The year 2025 has been very volatile for the stock market. Currently, the benchmark S&P 500 index is trading at nearly 29 times its trailing earnings, far higher than its historical median of 17.9 times. While this signals stretched valuations in the market, not all artificial intelligence (AI) stocks joined the frenzy. There are still companies, such as Alphabet (NASDAQ: GOOG) (NASDAQ: GOOGL) and International Business Machines (NYSE: IBM), that possess cutting-edge AI capabilities, scale, brand name, and solid financials while trading at modest valuations. Here's why these stocks seem like impressive picks in July 2025.

Alphabet

Alphabet is one of the few AI powerhouses that has surprisingly remained affordable amid the AI-driven technology rally. Currently trading at just 18.7 times forward earnings, the company is significantly cheaper than other technology giants, such as Microsoft, which trades at 32.8 times forward earnings, and Nvidia, which trades at 36 times earnings. This discounted valuation appears highly unjustified considering Alphabet's exceptional financial strength and its robust, comprehensive AI ecosystem, which is strengthening its Search business as well as Google Cloud and YouTube.

Alphabet delivered stellar performance in the second quarter of fiscal 2025 (ended June 30), with revenues and earnings surpassing consensus expectations. The company generated revenues of $96.4 billion, a 14% increase year over year, while net income reached $28.2 billion, representing 19% growth. This performance was mainly driven by the exceptional growth of its AI and cloud business.

Alphabet's AI Overviews (a feature that provides direct answers and summaries at the top of search results) currently serves over 2 billion users monthly across more than 200 countries and 40 languages, helping drive commercial queries for its Search business. Alphabet's Google Cloud business is also experiencing rapid growth, with deals over $250 million doubling year over year and the company signing the same number of billion-dollar deals in the first half of 2025 as in all of 2024. The recent OpenAI deal to use Google Cloud as an infrastructure provider for ChatGPT could prove to be a major catalyst in the long run. YouTube also dominates streaming, with over 200 billion daily views on YouTube Shorts, and is No. 1 in U.S. streaming time.

Alphabet's Gemini 2.5 Pro "thinking" AI model has shown improved performance compared to competitors on several complex tasks at lower costs. This technological edge appears to have driven an 80% surge in usage of the Google AI Studio platform and the Gemini API in April 2025. Nearly 9 million developers have already used the Gemini 2.5 models.

Finally, although the market has become concerned about Alphabet's decision to increase capital spending to $85 billion in 2025, up from the prior estimate of $75 billion, these strategic AI infrastructure investments will prove beneficial for the company in the long run. The company also boasts a very healthy balance sheet, with $95 billion in cash at the end of Q2 and $66.7 billion in trailing-12-month free cash flow.
While the risks of disruption in the search market from AI browsers cannot be ignored, Alphabet's Q2 results have demonstrated the company's ability to monetize AI successfully. Considering all the factors, Alphabet appears to be a solid pick in 2025.

International Business Machines

International Business Machines, commonly referred to as IBM, remains a significant enterprise AI player that Wall Street surprisingly overlooked for the past several years. The stock fell after an otherwise impressive Q2 fiscal 2025 report, likely due to weaker-than-expected software performance. Trading at 26.5 times forward earnings, the company is also valued at a significant discount compared to technology giants like Microsoft and Nvidia.

The $34 billion acquisition of Red Hat transformed IBM from a legacy technology company into a significant player in the AI and hybrid cloud space. The Red Hat business reported a 14% year-over-year revenue jump in Q2. OpenShift, Red Hat's hybrid cloud platform, reported a 20% year-over-year rise in revenues and reached annual recurring revenue (ARR) of $1.7 billion.

Unlike many technology competitors focusing on the consumer AI segment, IBM is targeting the enterprise AI market, especially in regulated industries and those with complex hybrid cloud requirements. The company's full-stack AI suite, watsonx, which leverages Red Hat's infrastructure, enables secure and compliant AI use in sectors such as finance, banking, and government. The recently announced acquisition of start-up Seek AI will further strengthen the watsonx platform with natural language capabilities.

All these initiatives have translated into IBM's generative AI book of business now standing at $7.5 billion inception-to-date as of Q2, significant growth from the $6 billion reported in the previous quarter. The overall software business (including Red Hat, automation, data, and transaction processing) reported 8% year-over-year growth in Q2, while annual recurring software revenue reached $22.7 billion, a 10% year-over-year increase. Software now accounts for 45% of IBM's business, with highly recurring revenue streams.

IBM's infrastructure business delivered exceptional results in Q2, with revenues growing 11% year over year. IBM Z (a family of high-performance mainframe computers for mission-critical workloads) revenues increased 67% year over year in the same period, on strong demand for the z17 next-generation mainframe. Considering the many tailwinds, the stock can prove to be a worthwhile addition to an investor's portfolio.

Should you buy stock in Alphabet right now? Before you buy stock in Alphabet, consider this: The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Alphabet wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years. Consider when Netflix made this list on December 17, 2004... if you invested $1,000 at the time of our recommendation, you'd have $636,628!* Or when Nvidia made this list on April 15, 2005... if you invested $1,000 at the time of our recommendation, you'd have $1,063,471!* Now, it's worth noting Stock Advisor's total average return is 1,041%, a market-crushing outperformance compared to 183% for the S&P 500. Don't miss out on the latest top 10 list, available when you join Stock Advisor. See the 10 stocks » *Stock Advisor returns as of July 21, 2025

Manali Pradhan has no position in any of the stocks mentioned.
The Motley Fool has positions in and recommends Alphabet, International Business Machines, Microsoft, and Nvidia. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy. 2 Artificial Intelligence (AI) Stocks That Are Still on Sale After the Tech Rally was originally published by The Motley Fool.

20 Expert Strategies To Optimize AI Speed And Performance

Forbes

Across industries, many businesses have already answered the question, 'How can we leverage AI?' and are now asking, 'How can we make our AI systems faster and more efficient?' For AI to deliver real value, it must go beyond saving time; it also needs to be scalable, responsive and accurate. From refining model architectures to streamlining data pipelines and upgrading hardware, tech leaders are exploring practical strategies to boost AI performance while keeping costs in check. Below, members of Forbes Technology Council share actionable ways to ensure AI systems operate at peak efficiency.

1. Apply Model Quantization
One practical way to make AI systems faster and more efficient is through model quantization. For example, instead of using 32-bit floating-point numbers, we use lighter-weight formats like 8-bit or even 4-bit integers. The result is that models become smaller, run faster, require less memory and consume significantly less power, something that's especially important when deploying AI at scale. (A minimal quantization sketch follows this list.) - Srujana Kaddevarmuth, Walmart

2. Adopt A Hybrid AI Approach
A hybrid AI approach improves speed, scale and cost efficiency by rethinking how work gets done. It breaks down prompts into smaller parts, simplifies them and routes each to the most efficient resource: a powerful server, edge device or compact model. This reduces overhead, accelerates decision-making and delivers timely, actionable insights, maximizing ROI with minimal resource use. - Nosa Omoigui

3. Deploy Models Closer To Specialized Hardware
Embedding AI workloads closer to specialized hardware (for example, deploying models directly on cloud-native GPUs) can dramatically improve efficiency. This reduces latency and energy use while unlocking real-time applications like autonomous agents and AI-powered personalization at scale. - Benjamin Forlani, Dedale

4. Leverage Specialized Models
AI models can be sped up and enhanced through modular specialization: using small models, each expert in a particular task. It reduces the compute burden, speeds up response and allows for more adaptable updates. It is like expert teamwork: unified, optimized and scalable. This also allows for asynchronous processing of modules if they are not interdependent. - Chidambaram Bhat, Integral Technologies

5. Invest In AI Governance
Investing in an AI governance tool can provide the guardrails your organization needs to leverage LLMs within necessary organizational parameters. This creates more efficiency because you're eliminating outlier data (if you use the governance tool to prohibit certain sources, such as Wikipedia), and outputs are more standardized for employees. - Nick Ryan, RSM US, LLP

6. Implement Hierarchical Processing Architectures
Implementing hierarchical processing architectures that route queries through increasingly sophisticated models, rather than using a single large model for everything, will make AI more efficient. Simple questions get handled by lightweight models (sub-1B parameters), moderate complexity goes to mid-tier models (7-13B parameters), and only complex reasoning tasks reach the largest models (70B+ parameters). (A routing sketch follows this list.) - Stan McHann, SparkMeter

7. Optimize Data Pipelines
Optimizing data pipelines is key to making AI faster and more efficient. Clean, structured data enables AI systems to perform better, while fragmented data (common in healthcare) raises costs and hinders decision-making. By applying AI upstream to structure raw data before it reaches models, organizations can reduce latency, increase accuracy and drive more scalable, intelligent outcomes. - Amit Garg, HiLabs

8. Train Models With Synthetic Data
You can use synthetic data to train AI faster and cheaper. It replaces costly, messy real-world data; speeds up development; and avoids privacy issues. The catch is that it's often too clean. Yet for many use cases, perfection is unnecessary. - Max Votek, Customertimes

9. Use Federated Learning
One effective way to boost AI efficiency is federated learning, where models train locally (on-device) and then aggregate insights centrally, avoiding heavy data transfer. This significantly reduces network load and latency, ensures privacy and makes AI faster, more scalable and responsive in real-world scenarios. (A federated-averaging sketch follows this list.) - Vlad Malanin, SpeedSize

10. Allocate Resources With An Agentic AI Orchestrator Within An MCP Framework
Leverage an agentic AI orchestrator within a unified model context protocol framework to dynamically allocate resources and prompt tokens and to streamline data pipelines. AI systems gain real-time scalability, predictable costs and zero idle capacity, which accelerates AI feature rollouts with consistent SLAs and frees teams to drive strategic innovation rather than manage infrastructure. - Varun Milind Kulkarni, Microsoft

11. Combine Symbolic And Subsymbolic Models
The answer is composite AI, which combines symbolic and subsymbolic models to solve real-world problems efficiently. Symbolic AI uses models explicitly described by systems of equations, where the outputs can be fully explained and audited. Subsymbolic AI (neural networks, language models) is inspired by the workings of the human brain and acts as a statistical aggregate, which is only as good as its input data. - Filip Dvorak, Filuta AI

12. Leverage Edge Computing For Real-Time AI
One way to make AI systems faster is by using edge computing to process data locally instead of in the cloud. This cuts down latency and reduces bandwidth use. It's especially beneficial for real-time tasks like autonomous driving or smart manufacturing, where split-second decisions matter. - Abhishek Singh, Amdocs

13. Unify Automation Tools And Processes
Many companies rely on disconnected tools embedded in various systems, which creates inconsistencies and inefficiencies that hinder successful AI implementation. AI can enhance automation, but it cannot bridge the gaps created by fragmented solutions on its own. Companies must prioritize unifying their automation tools and processes to fully leverage the capabilities of AI. - Charles Crouchman, Redwood Software

14. Adopt A 'Streaming First' Data Management Strategy
A 'streaming first' approach to data management enables AI systems to perform much better, particularly in hybrid environments. Specifically, real-time data streaming enables faster AI response, easier cross-domain data sharing and simpler data processing for AI and ML apps. - Guillaume Aymé

15. Consider The Use Case
Designing an AI system for a particular use case may make it more efficient and faster. Neural networks are the most commonly used approach for machine learning. They work perfectly well at scale, but might not be so efficient for particular classification applications, such as bot detection or DDoS mitigation, where other classifiers, like LOF or SVM, can deliver better accuracy and efficiency. - Alexander Krizhanovsky, Tempesta Technologies

16. Invest In High-Bandwidth, Energy-Efficient Memory
One way to make AI systems faster and more efficient is with higher-bandwidth, more energy-efficient memories. Data movement between processors and memory is a key bottleneck that's growing more important, especially for large-scale models that operate on large amounts of data. Faster, more power-efficient HBM and DDR DRAMs can dramatically improve the speed and efficiency of future AI systems. - Steven Woo, Rambus

17. Implement Edge-Based Model Compression And Quantization
One highly effective strategy to make AI systems faster and more efficient is implementing model compression and quantization, especially for edge deployment. By reducing the size and precision of AI models, without significantly compromising accuracy, we enable them to run on less powerful devices with much lower latency and power consumption. - Alex ZAP Chernyak, ZAPTEST

18. Empower AI To Continuously Adapt
AI advances when systems learn from history, unlearn biases and relearn from real-time data. When algorithms become the data, and compute is co-located with it, AI can adapt continuously, reduce latency and evolve decisions in context. Each moment becomes a chance to refine intelligence. - Ashok Reddy, KX

19. Focus On What Humans Want From AI
To increase efficiency, let AI handle what the humans interacting with it want automated. In the CX industry, we use automation to lift emotional weight (like shielding agents from disturbing material in content moderation) or reduce repetition (like resolving FAQ-level customer inquiries). But always build offramps; real efficiency means knowing when the machine should hand it back to a person. - Craig Crisler, SupportNinja

20. Employ Knowledge Distillation
Knowledge distillation is an emerging technique that enables the deployment of AI on resource-constrained devices without sacrificing accuracy. It transfers knowledge from a large 'teacher' model to a smaller 'student' model, improving efficiency without compromising performance. The student model retains much of the teacher model's performance while being faster, smaller and less resource-intensive. (A distillation sketch follows this list.) - Manikandarajan Shanmugavel, S&P Global
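To make a few of these strategies concrete, here is a minimal sketch of the post-training quantization idea behind tips 1 and 17: map 32-bit float weights to 8-bit integers with a per-tensor scale, then dequantize at inference time. This only shows the arithmetic, with assumed helper names; production work would rely on a framework's quantization toolkit rather than hand-rolled NumPy.

```python
# Minimal post-training quantization sketch (per-tensor, symmetric int8).
# Illustrates the arithmetic behind tips 1 and 17; real deployments would
# use a framework quantization toolkit instead of this hand-rolled version.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus a single scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights at inference time."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(0, 0.1, size=(512, 512)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print("bytes:", w.nbytes, "->", q.nbytes)              # roughly 4x smaller
print("max abs error:", float(np.abs(w - w_hat).max()))  # small reconstruction error
```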
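Tip 6's hierarchical routing can likewise be reduced to a dispatcher that estimates query complexity and picks the smallest model tier that can plausibly handle it. The scoring heuristic, tier names, and thresholds below are hypothetical placeholders, not any particular vendor's API.

```python
# Toy complexity-based router for tip 6: cheap queries go to small models,
# hard ones escalate to larger tiers. The scoring heuristic and tier names
# are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    max_complexity: float  # route here if the score is at or below this

TIERS = [
    Tier("small-1b", 0.3),
    Tier("mid-13b", 0.7),
    Tier("large-70b", 1.0),
]

def complexity_score(query: str) -> float:
    """Crude stand-in for a learned complexity classifier."""
    hard_markers = ("why", "prove", "compare", "step by step", "analyze")
    length_part = min(len(query.split()) / 100.0, 0.5)
    marker_part = 0.5 if any(m in query.lower() for m in hard_markers) else 0.0
    return length_part + marker_part

def route(query: str) -> str:
    score = complexity_score(query)
    for tier in TIERS:
        if score <= tier.max_complexity:
            return tier.name
    return TIERS[-1].name

print(route("What time is it in Tokyo?"))                     # small-1b
print(route("Compare these two contracts step by step ..."))  # mid-13b
```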
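Tip 9's federated learning, in its simplest federated-averaging form, is a loop in which each client updates the model on its own local data and only the parameters, never the data, are averaged centrally. The linear-regression setup below is a toy example chosen so the whole round-trip fits in a few lines.

```python
# Bare-bones federated averaging (FedAvg) sketch for tip 9: clients train
# locally on private data; only parameters travel to the server.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three clients, each with private data that never leaves the "device".
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

def local_step(w, X, y, lr=0.1):
    """One local gradient step on a client's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

w_global = np.zeros(2)
for _ in range(20):
    local_weights = [local_step(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)  # server averages parameters only

print("learned:", np.round(w_global, 2), "target:", true_w)
```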
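Finally, a compact view of tip 20: in knowledge distillation the student is trained to match the teacher's temperature-softened output distribution, commonly via a KL-divergence term blended with the ordinary hard-label loss. The random logits below are stand-ins for real model outputs; the loss is one common formulation, written in NumPy so the formula stays explicit.

```python
# Knowledge distillation loss sketch for tip 20: push the "student" toward
# the "teacher's" temperature-softened distribution plus a hard-label term.
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """alpha * soft-target KL (scaled by T^2) + (1 - alpha) * hard-label CE."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels])
    return float(np.mean(alpha * (T ** 2) * kl + (1 - alpha) * ce))

rng = np.random.default_rng(0)
teacher = rng.normal(size=(8, 10))  # stand-in logits: batch of 8, 10 classes
student = rng.normal(size=(8, 10))
labels = rng.integers(0, 10, size=8)
print("distillation loss:", round(distillation_loss(student, teacher, labels), 3))
```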
