Latest news with #BlackForestLabs


Time of India
3 days ago
- Business
- Time of India
FLUX.1 Kontext & Playground API: The Next Evolution in In‑Context Visual AI


Economic Times
3 days ago
- Business
- Economic Times
FLUX.1 Kontext & Playground API: The Next Evolution in In‑Context Visual AI
Synopsis: Black Forest Labs has rolled out FLUX.1 Kontext—a suite of generative flow‑matching models—and an accompanying Playground API that hands creators, developers, and enterprises a high‑velocity, multimodal image engine. By unifying text‑to‑image and image‑to‑image tasks inside one context‑aware pipeline, FLUX.1 Kontext promises unprecedented consistency, speed, and creative control, elevating it far above the fragmented toolchains that dominate today's market.

The release of FLUX.1 Kontext reframes what 'generative AI' means for imagery. Rather than layering an editing model on top of a text‑only generator, Black Forest Labs rebuilt the stack around in‑context flow matching. The platform accepts both textual instructions and reference images, allowing users to steer style, composition, and subject identity with surgical precision. This architecture bridges the gap between ideation and delivery, compressing multi‑tool workflows into a single, iterative loop.

Diffusion systems excel at first‑pass synthesis but falter when asked to preserve characters, logos, or fine‑grained details across successive edits. FLUX.1 Kontext attacks that weakness head‑on. Every generation step factors in the cumulative context—prompt history, reference images, and prior outputs—so subsequent edits refine rather than overwrite. The result is continuity: brand mascots retain their exact facial geometry, typographic treatments survive color swaps, and product mock‑ups remain pixel‑perfect through countless iterations.

Kontext ships in three calibrated tiers. Kontext [pro] targets real‑time production pipelines; it delivers multi‑turn edits lightning‑fast while locking down character integrity. Kontext [max] pushes resolution, typography fidelity, and hyper‑realistic textures for agencies chasing photo‑grade results. Finally, Kontext [dev]—a 12‑billion‑parameter open‑weight model—drops into private beta for researchers and indie developers, providing self‑hosted experimentation without black‑box constraints. This tiered approach lets startups prototype cheaply while Fortune 500 studios crank out billboard‑ready assets.

The new Playground API offers a friction‑free testing arena. A browser‑based console lets users drag‑and‑drop images, type free‑form prompts, and watch outputs appear in seconds—no GPU wrangling, no local installs. Developers can flip seamlessly from playful exploration to production scripting by hitting the same REST endpoints that power the demo. Every call returns JSON metadata—generation seed, safety flags, latency metrics—so engineering teams can wire Kontext into CI pipelines or content‑moderation layers.

Tools like DALL‑E, Midjourney, and Stable Diffusion require separate plug‑ins—or entire rerolls—to tweak outputs. Kontext's flow‑matching backbone treats generation and editing as two sides of the same coin. This means fewer hand‑offs, no round‑tripping, and far less data drift. Add in the Playground's native rate‑limit handling, enterprise‑grade observability, and integrated safety filters, and you have a platform engineered for scale rather than weekend hacks.

By open‑sourcing the dev tier and publishing full model cards, Black Forest Labs signals a commitment to transparency and community‑driven innovation. At the same time, production endpoints layer multiple moderation gates—CSAM, extremism, gore—ensuring enterprises don't trade speed for compliance. Fine‑grained filters can be tuned to organizational risk profiles, and audit logs track every prompt‑output pair for downstream review.

FLUX.1 Kontext and the Playground API don't merely iterate on the status quo—they declare a new baseline for visual‑first generative AI. By collapsing generation and editing into one context‑aware engine, slashing inference latency, and baking compliance into every endpoint, Black Forest Labs offers a production‑ready toolkit that converts creative ambition into shippable assets without detours. For organizations intent on owning the next wave of visual storytelling, Kontext isn't optional innovation; it's table stakes for staying competitive in an increasingly image‑driven economy.
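The article describes the Playground API's JSON metadata (generation seed, safety flags, latency metrics) but does not document its endpoints or response schema, so the snippet below is only a rough sketch of how a team might gate a CI or moderation step on that metadata. The URL, authentication header, and field names are hypothetical placeholders, not published values.

```python
# Hypothetical sketch: gate a CI step on the JSON metadata the article says each
# Playground API call returns (seed, safety flags, latency metrics).
# The endpoint URL, auth header, and field names below are assumptions.
import os
import requests

API_URL = "https://playground.example/api/v1/kontext/generate"  # placeholder endpoint


def generate_and_check(prompt: str, reference_image_url: str | None = None) -> dict:
    payload = {"prompt": prompt}
    if reference_image_url:
        payload["reference_image"] = reference_image_url  # assumed field name

    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {os.environ['KONTEXT_API_KEY']}"},
        timeout=120,
    )
    resp.raise_for_status()
    meta = resp.json()

    # Fail the pipeline if the (assumed) safety flags fire or latency regresses.
    if meta.get("safety_flags"):
        raise RuntimeError(f"Generation flagged by safety filters: {meta['safety_flags']}")
    if meta.get("latency_ms", 0) > 10_000:
        raise RuntimeError("Generation latency exceeded the CI budget")

    # Record the seed so the exact output can be reproduced in a later edit turn.
    print("generation seed:", meta.get("seed"))
    return meta
```

In a real integration, the returned seed would be stored alongside the generated asset so that any later multi‑turn edit can reproduce the same starting point.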


Techday NZ
04-07-2025
- Techday NZ
NVIDIA & Black Forest Labs boost AI image editing with FLUX.1
NVIDIA has partnered with Black Forest Labs to optimise the FLUX.1 Kontext image generation model for RTX GPUs using TensorRT. Black Forest Labs has developed the FLUX.1 Kontext model to further simplify the process of guiding and refining AI-generated images. Unlike traditional workflows that combine multiple models and rely on ControlNets for fine-tuning, FLUX.1 Kontext offers a single solution for both generating and editing images through natural language. This approach enables users to start with a reference image and direct edits using simple language prompts, eliminating complex multi-model workflows. The model handles both text and image inputs, allowing users to reference a visual concept and guide its development in a more coherent and intuitive manner.

Model capabilities
The FLUX.1 Kontext model offers several core features, including character consistency, localised editing, style transfer, and real-time performance. Black Forest Labs describes the key capabilities as follows:
- Character Consistency: Preserve unique traits across multiple scenes and angles.
- Localised Editing: Modify specific elements without altering the rest of the image.
- Style Transfer: Apply the look and feel of a reference image to new scenes.
- Real-Time Performance: Low-latency generation supports fast iteration and feedback.
The goal is to enable coherent, high-quality edits that remain faithful to the original concepts. By providing both natural language and image-based editing options, FLUX.1 Kontext aims to make the refining process more accessible to a broader range of users, without the need for technical expertise or additional models.

Performance optimisations
NVIDIA collaborated with Black Forest Labs to optimise FLUX.1 Kontext for RTX GPUs using the TensorRT software development kit. This includes quantising the model to reduce VRAM requirements and improve accessibility for users running it locally. According to NVIDIA, these changes deliver more than twice the acceleration compared to running the original BF16 model with PyTorch, allowing for lower latency and faster iteration times in real-time editing workflows. As described by Black Forest Labs, the optimisation was designed to open up access to the benefits of high-fidelity AI image editing to a larger audience: "To further streamline workflows and broaden accessibility, NVIDIA and Black Forest Labs collaborated to quantise the model - reducing the VRAM requirements so more people can run it locally - and optimised it with TensorRT to double its performance. Thanks to TensorRT - a framework to access the Tensor Cores in NVIDIA RTX GPUs for maximum performance - users gain access to over 2x acceleration compared with running the original BF16 model with PyTorch."

Availability and developer support
FLUX.1 Kontext [dev] is now available for download in both Torch and TensorRT variants on the Hugging Face platform. Users can run the Torch models in ComfyUI, and Black Forest Labs has also made an online playground available for broader experimentation. For developers and advanced users, NVIDIA is preparing sample code to support the integration of TensorRT pipelines, with additional resources expected to be released later this month. The release of FLUX.1 Kontext follows a period of increased interest in adaptable, user-friendly AI image generation solutions.
By combining natural language guidance, visual references, and enhanced GPU optimisation, the companies aim to further reduce barriers to AI-powered image editing for both hobbyists and professionals.
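For readers who want to try the Torch variant locally, the sketch below shows one plausible way to load the open-weight FLUX.1 Kontext [dev] checkpoint from Hugging Face with the diffusers library and apply a natural-language edit to a reference image. The pipeline class, checkpoint id, and generation arguments are assumptions based on diffusers conventions rather than details given in the article, and the TensorRT-optimised variants mentioned above are distributed separately.

```python
# Rough sketch (untested): run the open-weight FLUX.1 Kontext [dev] Torch variant
# locally via Hugging Face diffusers. The pipeline class name, checkpoint id, and
# call arguments are assumptions; consult the model card on Hugging Face for the
# exact, supported usage.
import torch
from diffusers import FluxKontextPipeline  # assumed class name
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",  # assumed checkpoint id
    torch_dtype=torch.bfloat16,              # BF16 weights, as referenced in the article
)
pipe.to("cuda")  # an RTX-class GPU; quantised builds reduce the VRAM needed

# Start from a reference image and steer the edit with a plain-language prompt.
reference = load_image("product_shot.png")
edited = pipe(
    image=reference,
    prompt="Keep the product and logo unchanged, but relight the scene at sunset",
    num_inference_steps=28,
).images[0]

edited.save("product_shot_sunset.png")
```

As the article notes, the same Torch weights can also be run through ComfyUI without writing any code.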


TechCrunch
17-06-2025
- Business
- TechCrunch
Adobe's Firefly comes to iOS and Android
Adobe has been on a quest to attract users to its platform for their AI needs. The company in April launched a redesigned Firefly web app that gives users access to Adobe's own Firefly image- and video-generation models as well as third-party models. Now, it is releasing a Firefly app on both iOS and Android that offers all of its models as well as models from OpenAI (GPT image generation), Google (Imagen 3 and Veo 2), and Flux (Flux 1.1 Pro).

Like the web app, the new smartphone apps let users generate images or videos from prompts or convert images into videos. Users can also edit parts of an image with generative fill or extend it with generative expand. Adobe Creative Cloud subscribers can start a project on the Firefly mobile app and store it in the cloud to access it later through the web or desktop app. The company is now also supporting more third-party models, including Flux.1 Kontext by Black Forest Labs, Ideogram 3.0 by Ideogram, and Gen-4 Image by Runway.

The company is also updating Adobe Canvas, its collaborative whiteboarding tool, with the ability to generate videos. Canvas lets users generate videos with Adobe's own video models as well as those made by its competitors. Adobe said users have so far created more than 24 billion media assets with its Firefly models, and that its AI features have been a big factor in increasing the number of first-time subscribers by 30% quarter-over-quarter.

Yahoo
30-05-2025
- Business
- Yahoo
Black Forest Labs' Kontext AI models can edit pics as well as generate them
Black Forest Labs, the AI startup whose models once powered the image generation features of X's Grok chatbot, on Thursday released a new suite of image-generating models — some of which can both create and edit pics. The most capable of the models in the new family, called Flux.1 Kontext, can be prompted with text and, optionally, a reference image to create new images, writes Black Forest Labs in a blog post. "The Flux.1 Kontext models deliver state-of-the-art image generation results with strong prompt following, photorealistic rendering, and competitive typography — all at inference speeds up to 8x faster than current leading models," the company writes in its post.

Flux.1 Kontext comes as the race to build competitive image generators heats up. Google debuted its latest image-generating model, Imagen 4, earlier this month at the company's I/O developer conference. Earlier this year, OpenAI brought a vastly improved image-generating model to ChatGPT — a model that quickly went viral for its ability to create art in the style of Studio Ghibli movies.

There are two models in the Flux.1 Kontext family: Flux.1 Kontext [pro] and Flux.1 Kontext [max]. The former allows users to generate an image and refine it through multiple "turns," all while preserving the characters and styles in the images. Flux.1 Kontext [max] focuses on speed, consistency, and adherence to prompts. Unlike some of Black Forest Labs' previous models, Flux.1 Kontext [pro] and Flux.1 Kontext [max] can't be downloaded for offline use. However, Black Forest Labs is making an "open" Kontext model, Flux.1 Kontext [dev], available in private beta for research and safety testing.

Black Forest Labs is also launching a model playground that allows users to try its models without having to sign up for a third-party service. New users get 200 credits, enough to generate around 12 images with Flux.1 Kontext [pro].

Black Forest Labs, based in Germany, was said to be in talks to raise $100 million at a $1 billion valuation toward the end of last year. Many of the founders hail from Stability AI, the creator of the notorious Stable Diffusion image-generating model. Backers include Andreessen Horowitz, Oculus co-founder Brendan Iribe, and Y Combinator's Garry Tan. In the months since it emerged from stealth, Black Forest Labs has released a number of new image-generating models and enterprise-focused services, including an API.
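As a rough back-of-the-envelope check on the playground's free tier, the figures quoted above (200 starter credits, around 12 images with Flux.1 Kontext [pro]) imply roughly 16–17 credits per [pro] image. The short sketch below only restates that arithmetic; the per-image cost is an estimate derived from those two numbers, not a published price.

```python
# Back-of-the-envelope estimate from the figures quoted in the article:
# 200 free credits ~= 12 images with Flux.1 Kontext [pro].
FREE_CREDITS = 200
IMAGES_PER_FREE_TIER = 12  # "around 12", per the article

credits_per_image = FREE_CREDITS / IMAGES_PER_FREE_TIER  # ~16.7 credits per image


def images_for_budget(credits: int) -> int:
    """Estimate how many [pro] generations a given credit balance covers."""
    return int(credits // credits_per_image)


print(f"~{credits_per_image:.1f} credits per image")
print(f"1000 credits -> about {images_for_budget(1000)} images")
```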