
Latest news with #Videoleap

Lightricks launches LTXV, its new AI model that generates 60-second videos

Indian Express

a day ago

  • Business
  • Indian Express

Lightricks launches LTXV, its new AI model that generates 60-second videos

Lightricks, the Israeli tech company best known for apps like Facetune and Videoleap, is expanding into professional AI video production with a capability it says sets it apart from competitors. With the launch of its new model, LTXV, the company claims it can produce continuous AI-generated video exceeding 60 seconds. This is significantly longer than what's currently possible with leading models like OpenAI's Sora, Google's Veo, or Runway's Gen-4.

According to Lightricks co-founder and CEO Zeev Farbman, LTXV 'unlocks a new era for generative media.' He explains that the model is designed to start streaming results immediately, generating the first second almost instantly and building the rest of the sequence on the fly. The system uses overlapping frame chunks to preserve continuity, ensuring that characters, motion, and storyline remain consistent across time. This autoregressive approach is similar to the way large language models like ChatGPT generate text, except LTXV applies it visually, frame by frame.

LTXV reportedly delivers output much faster than competitors like Veo 3, Runway Gen-4, or Kuaishou's Kling, which often make users wait several minutes for a few seconds of video. In a live demo, Lightricks showcased a continuous 60-second video featuring a woman cooking as a gorilla walks in and hugs her, an example of how the model maintains narrative flow without stuttering or abrupt transitions.

Notably, LTXV is open source and not restricted by a proprietary API. The model will be available on GitHub and Hugging Face as open weights, and it is free to use for individuals and small teams earning less than $10 million annually. According to Farbman, this supports Lightricks' 'open development for real-world application' approach, which gives developers and independent artists the freedom to build on the core engine.

From a technical standpoint, the new model is fast and lightweight. It can be powered by a single Nvidia H100 or even high-end consumer GPUs. By contrast, Farbman points out, published benchmarks for other models often require multiple H100s to produce just five seconds of high-resolution video. Google's Veo 3 remains the only AI video model that can also generate its own audio tracks. Still, Lightricks' latest release lands amid intensifying competition among the major AI video companies, each working to differentiate itself with distinctive features of its own.
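Lightricks has not published the generation loop itself, but the overlapping-chunk idea described above can be illustrated with a minimal sketch. All names here (`generate_chunk`, `stream_video`, the chunk and overlap sizes) are hypothetical, and the "model" is a toy function that extends a simple motion pattern so the example runs; the point is only how each chunk is conditioned on the tail of the previous one.

```python
import numpy as np

def generate_chunk(context_frames, chunk_len):
    """Stand-in for the learned model: produce the next chunk of frames
    conditioned on the trailing context. Here we simply continue a smooth
    brightness ramp from the last context frame so the sketch is runnable."""
    last = context_frames[-1] if context_frames else np.zeros((4, 4))
    return [last + (i + 1) * 0.1 for i in range(chunk_len)]

def stream_video(total_frames, chunk_len=8, overlap=2):
    """Autoregressive streaming: each new chunk is conditioned on the last
    `overlap` frames of the previous chunk, so motion stays continuous
    across chunk boundaries, and frames can be emitted as soon as each
    chunk is ready rather than after the whole clip finishes."""
    frames, context = [], []
    while len(frames) < total_frames:
        chunk = generate_chunk(context, chunk_len)
        frames.extend(chunk)
        context = chunk[-overlap:]  # overlapping frames carry continuity forward
    return frames[:total_frames]

video = stream_video(total_frames=24)
print(len(video))  # 24
```

Because conditioning happens only on a short overlap window rather than the whole history, memory use per step stays constant no matter how long the clip grows, which is what makes minute-long sequences feasible on a single GPU.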

LTX Video Breaks The 60-Second Barrier, Redefining AI Video As A Longform Medium

Forbes

4 days ago

  • Forbes

LTX Video Breaks The 60-Second Barrier, Redefining AI Video As A Longform Medium

Lightricks, the Israeli AI startup best known for viral mobile apps like Facetune and Videoleap, is pushing deeper into professional production territory with a technical milestone that sets it apart from its peers in generative video. With the release of its new autoregressive video model, LTXV, the company claims it can now generate clips over 60 seconds long, eight times the current standard length for AI video. That includes OpenAI's Sora, Google's Veo, and Runway's Gen-4, none of which yet support real-time rendering at this scale.

According to CEO and co-founder Zeev Farbman, this breakthrough 'unlocks a new era for generative media,' not just because of length, but because of what extended sequences enable: narrative. 'It's the difference between a visual stunt and a scene,' Farbman told me in a recent interview. 'AI video becomes a medium for storytelling, not just a demo.'

LTXV's new architecture streams video in real time, returning the first second almost instantly and building the rest on the fly. The system uses small chunks of overlapping frames to condition what comes next, allowing continuity of motion, character, and action throughout the sequence. It's the same autoregressive approach that powers large language models like ChatGPT, applied to visual storytelling frame by frame.

I saw the demo working on a Zoom call last week. Most systems, including top models like Veo 3, Runway Gen-4, and Kling, make you wait minutes for generations. LTX is much faster. The system rendered a continuous 60-second scene of a woman cooking as a gorilla entered the kitchen and hugged her. The video streamed as it was generated, with very few pauses. Another scene showed a car passing under a bridge, emerging on the other side, and continuing its journey, all without jarring cuts or jumps in logic.

Particularly notable is that LTXV is open source, not locked behind a proprietary API. The model will be made available as open weights on GitHub and Hugging Face. It's free to use for individuals and small teams generating less than $10 million in revenue. Farbman says this aligns with Lightricks' strategy of 'open development for real-world application,' empowering both indie creators and developers to build on the core engine.

From a technical perspective, the new model is fast and light. It runs on a single Nvidia H100, or even on high-end consumer GPUs. By contrast, Farbman points out, public benchmarks for other models often require multiple H100s just to produce five seconds of high-resolution video.

The implications go far beyond YouTube clips. Lightricks envisions uses in advertising, real-time game cutscenes, adaptive educational content, and augmented reality performances. Imagine an AR character performing onstage with a musician, rendered live and reacting in real time.

'We've reached the point where AI video isn't just prompted, but truly directed,' added Yaron Inger, co-founder and CTO. 'This leap turns AI video into a longform storytelling platform, and not just a visual trick.'

This is part of a broader roadmap for LTX Studio, the company's browser-based production platform that offers script-to-scene authoring, character tracking, and style consistency. Multimodal support, including motion capture and audio-based conditioning, will be released soon. Next up: 4K video output and seamless frame interpolation for smoother motion.

Farbman was quick to acknowledge that there's still work to be done. 'Prompt adherence in longform content is the next big frontier,' he said. 'We're seeing dramatic improvements, but scenes with complex interpersonal action are still hard.' Still, what I saw was far beyond what most AI video tools can manage today.

As for monetization, Farbman says Lightricks is in talks with larger studios and platforms about commercial licensing and revenue-share deals, while keeping development open for the broader creative community.
'We believe AI filmmaking shouldn't just be for engineers,' he said. 'It should be for storytellers.'
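The roadmap above mentions frame interpolation for smoother motion. The article doesn't describe Lightricks' method, but the baseline version of the technique, generating in-between frames by blending neighbors, can be sketched in a few lines; `interpolate_frames` and its parameters are illustrative names, and production systems would use learned, motion-aware interpolation rather than this linear blend.

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_mid):
    """Linear interpolation baseline: synthesize n_mid in-between frames
    to raise the effective frame rate. Each output is a weighted blend of
    the two endpoint frames at evenly spaced times in (0, 1)."""
    return [frame_a + (frame_b - frame_a) * t
            for t in np.linspace(0.0, 1.0, n_mid + 2)[1:-1]]

# Toy example: blend from an all-black frame to an all-white frame.
a = np.zeros((2, 2))
b = np.ones((2, 2))
mids = interpolate_frames(a, b, n_mid=3)
print([float(m.mean()) for m in mids])  # [0.25, 0.5, 0.75]
```

Linear blending produces ghosting when objects move between frames, which is why real interpolators estimate motion first; but the input/output contract, two frames in, several in-between frames out, is the same.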
