LaunchDarkly Introduces New Release Observability, AI Configurations, and Analytics Capabilities to Help Developers Innovate Faster Without the Risk
The latest capabilities from LaunchDarkly give teams the tools they need to innovate boldly without exposing customers or businesses to unnecessary risk. By bringing observability, AI controls, and analytics directly into the release process, LaunchDarkly enables engineering and product teams to ship with confidence, respond quickly to application issues, and continuously improve the user experience.
'Software used to evolve quarterly. Today, it changes by the hour. And with AI systems adapting in production, often unpredictably, release management at feature-level granularity has become mission-critical,' said Dan Rogers, CEO of LaunchDarkly. 'Teams need the ability to ship with precision, respond in real time, and continuously optimize what's live. That's what LaunchDarkly delivers: a safer, smarter way to build and release software in an AI-powered world.'
Platform Updates Introduced at Galaxy '25:
Guarded Releases – Observability at the Point of Release
Guarded Releases pair progressive rollouts with real-time monitoring, automated rollback, and feature-level observability. Teams can now identify regressions instantly and correlate them directly to specific changes, preventing incidents before they impact users. With the recent integration of Highlight.io, LaunchDarkly extends observability at the point of release to include telemetry data such as metrics, logs, and traces.
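For illustration, here is a minimal sketch of what the application side of a guarded release might look like with LaunchDarkly's Python server-side SDK: evaluate the flag, then report a custom metric the guardrail could watch. The SDK key, flag key, and metric key are illustrative assumptions, not values from this announcement.

```python
# A minimal sketch, assuming LaunchDarkly's Python server-side SDK.
# Flag key, metric key, and SDK key are illustrative, not real values.
import time

import ldclient
from ldclient import Context
from ldclient.config import Config

ldclient.set_config(Config("YOUR_SDK_KEY"))  # assumption: your SDK key
client = ldclient.get()


def new_checkout():  # placeholder for the newly released code path
    time.sleep(0.05)


def old_checkout():  # placeholder for the existing code path
    time.sleep(0.10)


context = Context.builder("user-123").kind("user").name("Ada").build()

# During a guarded rollout, LaunchDarkly decides which contexts receive
# the new variation as the release progresses.
use_new_flow = client.variation("new-checkout-flow", context, False)

start = time.monotonic()
if use_new_flow:
    new_checkout()
else:
    old_checkout()
latency_ms = (time.monotonic() - start) * 1000

# Report the metric the release is guarded on; a regression here is the
# kind of signal that can trigger the configured automated rollback.
client.track("checkout-latency", context, metric_value=latency_ms)
client.close()
```

The rollout logic and rollback decision live in LaunchDarkly itself; application code only evaluates the flag and emits metrics.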
AI Configs – Runtime Control Plane for Model and Prompt Management
AI Configs give teams a centralized control plane for managing prompt and model configurations in AI-powered applications. Teams can safely iterate in production, monitor key metrics like cost and latency, and deploy fallback strategies when things go wrong, all without code changes. This reduces risk while accelerating the development of AI features.
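A rough sketch of the underlying idea: model and prompt settings served at runtime rather than hard-coded. LaunchDarkly provides dedicated AI SDKs for AI Configs; to stay within core SDK calls, this example models the configuration as a JSON-valued flag. The flag key and payload shape are assumptions for illustration, not the AI Configs API.

```python
# A sketch of runtime-controlled model/prompt settings, modeled here as a
# JSON-valued flag via the core Python SDK. This illustrates the concept
# only; it is not LaunchDarkly's AI Configs API.
import ldclient
from ldclient import Context
from ldclient.config import Config

ldclient.set_config(Config("YOUR_SDK_KEY"))  # assumption: your SDK key
client = ldclient.get()

context = Context.builder("svc-support-bot").kind("service").build()

# Fallback used if LaunchDarkly is unreachable or nothing is served.
fallback = {
    "model": "gpt-4o-mini",  # illustrative model name
    "temperature": 0.2,
    "system_prompt": "You are a concise support assistant.",
}

# Changing the served JSON swaps models or prompts with no code change.
ai_config = client.variation("support-bot-model-config", context, fallback)

print(f"Routing to {ai_config['model']} (temp={ai_config['temperature']})")
print(f"System prompt: {ai_config['system_prompt']}")
client.close()
```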
Warehouse-Native Experimentation & Product Analytics
LaunchDarkly now gives teams real-time insights into user behavior and feature engagement, powered directly by their data warehouse. With warehouse-native experimentation and product analytics, teams can quickly understand what's working, what's not, and how every feature impacts business outcomes. The recent integration of Houseware strengthens these capabilities by making it easier to run experiments, analyze results, and iterate faster, all within the existing data ecosystem.
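A minimal sketch of the instrumentation side of experimentation, assuming the Python server-side SDK: assign a variation, then track the conversion event the experiment measures. The flag and metric keys are illustrative assumptions; the warehouse-native analysis itself happens in LaunchDarkly and the data warehouse, not in application code.

```python
# A minimal sketch of experiment instrumentation with illustrative keys.
import ldclient
from ldclient import Context
from ldclient.config import Config

ldclient.set_config(Config("YOUR_SDK_KEY"))  # assumption: your SDK key
client = ldclient.get()

context = Context.builder("user-123").kind("user").build()

# The experiment assigns this user an arm of the pricing-page test.
layout = client.variation("pricing-page-layout", context, "control")
print(f"Showing layout: {layout}")  # placeholder for real rendering code

# When the user converts, record the metric the experiment analyzes.
client.track("upgrade-clicked", context)
client.close()
```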
'Generative AI is fundamentally changing the relationship between the code we build, the code we deploy, and the code we maintain in production. Experimentation, understanding user behaviour, is now a necessity, not a luxury,' said James Governor, RedMonk co-founder. 'LaunchDarkly is building observability into its core offerings, deepening its focus on analytics, and doubling down on release management to create an integrated platform for progressive delivery in the AI era.'
Availability
Guarded Releases, AI Configs, and Warehouse-Native Experimentation & Product Analytics are generally available today. Advanced observability features within Guarded Releases, including error monitoring, session replay, and telemetry integrations, are available in early access.
To learn more about these new capabilities, visit the LaunchDarkly website.
About LaunchDarkly:
LaunchDarkly is a comprehensive feature management platform that equips software teams to proactively reduce the risk of shipping bad software and AI applications while accelerating their release velocity. By progressively rolling out features, monitoring critical metrics in real-time, instantly rolling back flawed code, easily conducting targeted experiments, and quickly iterating on AI prompts and models, development teams can ship innovation consistently and confidently. Serving over 5,500 of the most innovative enterprises, including a quarter of the Fortune 500, LaunchDarkly is trusted around the globe to deliver exceptional customer experiences and maximize business outcomes.
Media Contact:
Spencer Anopol
Head of PR
sanopol@launchdarkly.com


