
Latest news with #LTXV

Lightricks launches LTXV, its new AI model that generates 60-second videos

Indian Express

2 days ago



Lightricks, the Israeli tech company best known for apps like Facetune and Videoleap, is expanding into professional production of AI-generated video with a capability that sets it apart from competitors. With the launch of its new model, LTXV, the company claims it can produce continuous AI-generated video content exceeding 60 seconds. This is significantly longer than what is currently possible with leading models like OpenAI's Sora, Google's Veo, or Runway's Gen-4.

According to Lightricks co-founder and CEO Zeev Farbman, LTXV 'unlocks a new era for generative media.' He explains that the model is designed to start streaming results immediately, generating the first second almost instantly and building the rest of the sequence on the fly. The system uses overlapping frame chunks to preserve continuity, ensuring that characters, motion, and storyline remain consistent across time. This autoregressive approach is similar to the way large language models like ChatGPT generate text, except LTXV applies it visually, frame by frame.

LTXV reportedly delivers output much faster than competitors like Veo 3, Runway Gen-4, or Kuaishou's Kling, which often make users wait several minutes for a few seconds of video. In a live demo, Lightricks showcased a continuous 60-second video featuring a woman cooking as a gorilla walks in and hugs her, an example of how the model maintains narrative flow without stuttering or abrupt transitions.

Notably, LTXV is open source and not locked behind a proprietary API. The model will be available on GitHub and Hugging Face as open weights, and it is free to use for individuals and small teams earning less than $10 million annually. According to Farbman, this supports Lightricks' 'open development for real-world application' approach, which gives developers and independent artists the freedom to build on the core engine.

From a technical standpoint, the new model is fast and lightweight: it can run on a single Nvidia H100 or even on high-end consumer GPUs, whereas, Farbman points out, published benchmarks for other models often require multiple H100s to produce just five seconds of high-resolution video. Google's Veo 3 remains the only AI video model that can also generate its own audio tracks, and Lightricks' latest release arrives as the major AI video companies all work to set themselves apart, with rivals boasting distinctive features of their own.
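The overlapping-chunk, autoregressive scheme described above can be illustrated with a toy sketch. This is not Lightricks' implementation: frames are plain numbers standing in for image tensors, and `generate_chunk` is a hypothetical stand-in for the model's denoiser; only the chunking and overlap logic mirrors the article's description.

```python
# Toy sketch of chunked autoregressive video generation with overlapping
# frames. A real model would condition a denoiser on the overlap frames;
# here each "frame" is just a number so the continuity logic is visible.

OVERLAP = 4   # frames from the previous chunk used as conditioning context
CHUNK = 16    # new frames generated per step

def generate_chunk(context, length):
    """Generate `length` new frames conditioned on `context`.

    Hypothetical stand-in for the model: it simply continues the motion
    implied by the last two context frames, so continuity is preserved.
    """
    velocity = context[-1] - context[-2] if len(context) >= 2 else 1.0
    last = context[-1] if context else 0.0
    return [last + velocity * (i + 1) for i in range(length)]

def stream_video(total_frames):
    # First chunk can be returned to the viewer almost immediately...
    frames = generate_chunk([], CHUNK)
    while len(frames) < total_frames:
        # ...while later chunks are conditioned on the trailing overlap,
        # keeping motion consistent across chunk boundaries.
        context = frames[-OVERLAP:]
        frames += generate_chunk(context, CHUNK)
    return frames[:total_frames]
```

Because each chunk only needs the last few frames as context, generation can stream indefinitely without re-rendering the whole sequence, which is what makes the 60-second (and longer) outputs feasible.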

LTX Video Breaks The 60-Second Barrier, Redefining AI Video As A Longform Medium

Forbes

5 days ago



Lightricks, the Israeli AI startup best known for viral mobile apps like Facetune and Videoleap, is pushing deeper into professional production territory with a technical milestone that sets it apart from its peers in generative video. With the release of its new autoregressive video model, LTXV, the company claims it can now generate clips over 60 seconds long, eight times the current standard length for AI video. That includes OpenAI's Sora, Google's Veo, and Runway's Gen-4, none of which yet support real-time rendering at this scale.

According to CEO and co-founder Zeev Farbman, this breakthrough 'unlocks a new era for generative media,' not just because of length, but because of what extended sequences enable: narrative. 'It's the difference between a visual stunt and a scene,' Farbman told me in a recent interview. 'AI video becomes a medium for storytelling, not just a demo.'

LTXV's new architecture streams video in real time, returning the first second almost instantly and building the rest on the fly. The system uses small chunks of overlapping frames to condition what comes next, allowing continuity of motion, character, and action throughout the sequence. It's the same autoregressive approach that powers large language models like ChatGPT, applied to visual storytelling frame by frame.

I saw the demo working on a Zoom call last week. Most systems, including top models like Veo 3, Runway Gen-4, and Kling, make you wait minutes for generations. LTX is much faster. The system rendered a continuous 60-second scene of a woman cooking as a gorilla entered the kitchen and hugged her. The video streamed as it was generated, with very few pauses. Another scene showed a car passing under a bridge, emerging on the other side, and continuing its journey, all without jarring cuts or jumps in logic.

Particularly notable is that LTXV is open source, not locked behind a proprietary API. The model will be made available as open weights on GitHub and Hugging Face. It's free to use for individuals and small teams generating less than $10 million in revenue. Farbman says this aligns with Lightricks' strategy of 'open development for real-world application,' empowering both indie creators and developers to build on the core engine.

From a technical perspective, the new model is fast and light. It runs on a single Nvidia H100, or even on high-end consumer GPUs. By contrast, Farbman points out, public benchmarks for other models often require multiple H100s just to produce five seconds of high-resolution video.

The implications go far beyond YouTube clips. Lightricks envisions uses in advertising, real-time game cutscenes, adaptive educational content, and augmented reality performances. Imagine an AR character performing onstage with a musician, rendered live and reacting in real time. 'We've reached the point where AI video isn't just prompted, but truly directed,' added Yaron Inger, co-founder and CTO. 'This leap turns AI video into a longform storytelling platform, and not just a visual trick.'

This is part of a broader roadmap for LTX Studio, the company's browser-based production platform that offers script-to-scene authoring, character tracking, and style consistency. Multimodal support, including motion capture and audio-based conditioning, will be released soon. Next up: 4K video output and seamless frame interpolation for smoother motion.

Farbman was quick to acknowledge that there's still work to be done. 'Prompt adherence in longform content is the next big frontier,' he said. 'We're seeing dramatic improvements, but scenes with complex interpersonal action are still hard.' Still, what I saw was far beyond what most AI video tools can manage today. As for monetization, Farbman says Lightricks is in talks with larger studios and platforms about commercial licensing and revenue-share deals, while keeping development open for the broader creative community.
'We believe AI filmmaking shouldn't just be for engineers,' he said. 'It should be for storytellers.'
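The frame interpolation mentioned in the roadmap above, inserting synthetic in-between frames for smoother motion, can be sketched in its simplest linear form. This is a generic illustration, not Lightricks' method: production interpolators typically use learned optical flow rather than pixel-wise blending, and frames here are flat lists of pixel values.

```python
# Minimal sketch of linear frame interpolation: insert blended frames
# between each adjacent pair to raise the effective frame rate.

def lerp_frames(a, b, t):
    """Blend two frames pixel-wise: t=0 yields frame a, t=1 yields frame b."""
    return [pa + (pb - pa) * t for pa, pb in zip(a, b)]

def interpolate(frames, factor):
    """Insert `factor - 1` blended frames between each adjacent pair."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for k in range(1, factor):
            out.append(lerp_frames(a, b, k / factor))
    out.append(frames[-1])  # keep the final original frame
    return out
```

Doubling the frame count (`factor=2`) turns, say, 24 fps output into 48 fps playback without re-running the generator, which is why interpolation is a cheap post-processing path to smoother motion.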

Lightricks Launches 13B Parameters LTX Video Model, Breakthrough Rendering Approach Generates High-quality, Efficient AI Video 30X Faster Than Comparable Models

Malaysian Reserve

May 6, 2025



JERUSALEM and NEW YORK, May 6, 2025 /PRNewswire/ — Lightricks, a leader in AI-driven content creation technology, today announced the release of its LTX Video 13-billion-parameter model (LTXV-13B), which may be the most advanced and efficient AI video generation model to date. This substantial upgrade dramatically increases quality while maintaining LTXV's unparalleled generation speed. The 13B model is available within the company's flagship storytelling platform, LTX Studio, is shared with the open community, and is being integrated across the Lightricks portfolio.

LTXV-13B introduces 'multiscale rendering,' a major technical breakthrough that delivers both speed and quality through a layered process. The model first drafts in lower detail to capture coarse motion using fewer resources. This draft then guides the next stages, where the model progressively adds structure, lighting, and micro-motion, spending compute where it matters most. The result is high-fidelity video built through deliberate, multiscale generation, with render times that can be more than 30X faster than comparable models, without compromising visual realism.

The new 13B model represents a significant leap forward in Lightricks' generative AI capabilities, offering creators the ability to produce videos with stunning detail, coherence, and control. It incorporates the latest advancements from academia and the open-source community, including unsampling controls and spatiotemporal guidance for video editing, and kernel optimizations for faster running speeds. Unlike other models that demand enterprise-grade GPUs and long rendering times, LTXV-13B delivers studio-level video at unmatched speed, even on devices creators already own, which differentiates LTX Video in the marketplace.
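The coarse-to-fine idea behind multiscale rendering can be sketched with a toy pipeline. This is an illustration of the general draft-then-refine pattern the release describes, not LTXV-13B's actual implementation: frames are 1-D lists of floats, and `draft`, `upsample`, and `refine` are hypothetical stand-ins for the model's stages.

```python
# Toy sketch of coarse-to-fine "multiscale rendering": draft cheaply at
# low resolution, upsample so the draft guides the full-resolution pass,
# then spend the expensive detail work only at the final scale.

def draft(width):
    """Cheap low-resolution pass capturing coarse structure (a ramp here)."""
    return [i / (width - 1) for i in range(width)]

def upsample(frame, factor):
    """Nearest-neighbour upsampling: each coarse value guides `factor` pixels."""
    return [v for v in frame for _ in range(factor)]

def refine(frame):
    """Detail pass standing in for high-resolution refinement: smooth
    each interior pixel with its neighbours."""
    out = frame[:]
    for i in range(1, len(frame) - 1):
        out[i] = (frame[i - 1] + frame[i] + frame[i + 1]) / 3
    return out

def multiscale_render(final_width, factor=4):
    coarse = draft(final_width // factor)   # fast, low-detail draft
    guided = upsample(coarse, factor)       # draft guides the full-res pass
    return refine(guided)                   # detail added where it matters
```

The speedup comes from the arithmetic: the draft touches `1/factor` of the pixels, so most of the compute budget is reserved for the single full-resolution refinement pass rather than iterating at full resolution throughout.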
'The introduction of our 13B parameter LTX Video model marks a pivotal moment in AI video generation with the ability to generate fast, high-quality videos on consumer GPUs,' said Zeev Farbman, co-founder and CEO of Lightricks. 'Our users can now create content with more consistency, better quality, and tighter control. This new version of LTX Video runs on consumer hardware, while staying true to what makes all our products different – speed, creativity, and usability.'

While developing and refining the 13B model, Lightricks entered into a strategic partnership with leading media asset provider Getty Images. In December 2024, Lightricks entered an agreement with Shutterstock to leverage its licensed content. These collaborations have given Lightricks access to an extensive library of high-quality video assets for model training, reinforcing its mission to build ethically trained, visually compelling, and commercially safe generative tools.

The LTXV-13B model empowers creators with even more control and flexibility, seamlessly supporting all of the platform's advanced creative tools, including:

  • Keyframe editing
  • Camera motion control
  • Character and scene-level motion adjustment
  • Multi-shot sequencing and editing

In support of startups and small businesses, Lightricks is offering the 13B model free to license for enterprises with under $10 million in annual revenue. This initiative, together with the release of all LTXV models in open source, reflects Lightricks' commitment to making cutting-edge generative AI accessible to the next generation of creative companies and innovators. Open-source versions of LTXV are available on Hugging Face (LTX-Video) and GitHub (LTX-Video).

'By consistently refining our models and working with the open community, we've built an AI system that generates physically natural movement while preserving artistic control,' added Yoav HaCohen, Director of LTX Video at Lightricks.

Since launching LTX Video in November 2024, Lightricks has collaborated with researchers and open-source contributors to improve motion consistency, scene coherence, and creative adaptability. Key open-source advancements in LTXV-13B include:

  • VACE Model Inference – advanced video generation and editing tools, including reference-to-video (R2V). Details on GitHub.
  • Unsampling Controls for Video Editing – tools that reverse noise and refine frame granularity. Details on GitHub.
  • Kernel Optimization – efficient Q8 kernel usage allows performance scaling on lower-resource devices. Details on GitHub and Hugging Face.

With a growing library of models designed for diverse creative needs and a commitment to open development, Lightricks is shaping the future of generative AI video, bridging research-driven breakthroughs with real-world application. For more information about Lightricks, its products, technology, and open-source initiatives, visit .

SOURCE Lightricks
