
BrowserStack launches AI agent suite to automate, simplify software testing
Accel-backed BrowserStack has launched a suite of artificial intelligence (AI)-powered agents integrated across its software testing platform, aimed at helping software teams accelerate release cycles, improve test coverage, and boost productivity.

The product suite, called BrowserStack AI, comprises five agents that address key pain points across the software testing life cycle: test planning, authoring, maintenance, accessibility, and visual review. The company claims these tools can increase productivity by up to 50% and cut test creation time by over 90%.

"We mapped the entire testing journey to identify where teams spend the most time and manual effort and reimagined it with AI at the core," said Ritesh Arora, CEO and cofounder of BrowserStack. "Early results are game-changing; our test case generator delivers 90% faster test creation with 91% accuracy and 92% coverage, results that generic LLMs can't match."

Unlike generic copilots or disconnected plugins, BrowserStack AI agents are built directly into BrowserStack products, drawing context-aware insights from a unified data store that spans the testing life cycle, the company said.

The suite includes the test case generator agent, which creates detailed test cases from product documents, and the low-code authoring agent, which turns those cases into automated tests using natural language. It also includes the self-healing agent, which automatically adapts and remediates tests during execution to prevent failures caused by user interface (UI) changes, and the A11y issue detection agent, which uses AI to surface accessibility issues across websites and apps. The visual review agent highlights only meaningful visual changes, making reviews faster.

The company also offers an integration layer, called BrowserStack MCP Server, that lets developers and testers run tests directly from their integrated development environments (IDEs), large language models (LLMs), or any other MCP-enabled client.

"AI is only useful if it delivers meaningful, context-rich outcomes," said Arora. "That's why we've invested in building AI agents that understand test environments, real-world execution data, and user behaviour across thousands of teams."

Founded in 2011 by Ritesh Arora and Nakul Aggarwal, BrowserStack is a cloud-based platform that lets developers test websites and mobile apps across different devices, operating systems, and browsers. It operates 21 data centres worldwide and provides access to more than 30,000 real devices and browsers for testing.

In February, the company announced an AI-powered test platform that consolidates the entire quality assurance toolchain, from planning and creating tests to executing and debugging them, with the aim of helping development teams deliver applications faster and smarter.

BrowserStack, which said more than 700 engineers are now working on its AI-powered test platform, has more than 20 additional agents in development. The company's tools currently power more than three million tests daily for over 50,000 teams, including companies such as Amazon, Microsoft, and Nvidia.
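To illustrate the general idea behind the self-healing behaviour described above, the sketch below shows a minimal, generic version of a fallback-locator pattern in Python with Selenium. The URL, selectors, and the find_with_healing helper are hypothetical, and this is not BrowserStack's implementation, which the company says is driven by AI and execution data across the testing life cycle.

```python
# Minimal sketch of a "self-healing" locator: if the primary selector no longer
# matches the page (for example after a UI change), try alternate selectors and
# report which one worked so the test can be updated. Illustrative only; the
# selectors, URL, and helper name are hypothetical, not BrowserStack's API.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException


def find_with_healing(driver, selectors):
    """Try each CSS selector in order; return the first match and the selector used."""
    for selector in selectors:
        try:
            element = driver.find_element(By.CSS_SELECTOR, selector)
            return element, selector
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No selector matched: {selectors}")


if __name__ == "__main__":
    driver = webdriver.Chrome()  # any WebDriver works; Chrome is just an example
    try:
        driver.get("https://example.com")  # placeholder URL
        # Primary selector first, then fallbacks that may survive common UI changes.
        button, used = find_with_healing(
            driver,
            ["#checkout-btn", "button[data-testid='checkout']", "button.checkout"],
        )
        if used != "#checkout-btn":
            print(f"Locator healed: matched by '{used}'; consider updating the test")
        button.click()
    finally:
        driver.quit()
```

Production-grade self-healing typically weighs many more signals, such as element attributes, position, visible text, and historical execution data, before choosing a substitute locator and flagging the change for review.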

Related Articles


Time of India
4 hours ago
Nikesh Arora, the IITian who became one of the world's highest-paid CEOs, reveals a 'cheat code' to building a company
During a podcast with Zerodha co-founder Nikhil Kamath, Nikesh Arora spoke about how evolving technology is transforming the way companies are built today.

"If you're looking for a 10, 20% improvement, don't bother, because things are about to move 10x. So, I think self-reflect on the idea. If your idea is not 10x-worthy, you're solving the wrong problem," Arora told Kamath. "So, all I can advise people out there is: teach us something, because some already have it figured out. But more importantly, I believe the wave of technology that's coming will enable people to build businesses faster, with greater agility, using fewer people, and with a fundamental rethinking of how things are done."

Who is Nikesh Arora?

Arora, 57, has been the CEO and chairman of the board at cybersecurity firm Palo Alto Networks since June 2018. Before taking on this role, he was an angel investor between 2016 and 2018, and prior to that he held various senior leadership positions at SoftBank Group Corp from 2014 to 2017.

Earlier in his career, Arora spent nearly a decade at Google, holding several top operational roles from December 2004 to July 2014, including senior vice president and chief business officer from January 2011 to June 2014.

Notably, SoftBank made significant investments in Uber in 2018 and 2019, becoming its largest shareholder at one point. SoftBank sold a third of its stake in the ride-hailing platform in 2021 and offloaded the rest between April and July 2022.

Arora currently serves on the board of directors of Switzerland-based luxury goods holding company Compagnie Financiere Richemont S.A. Earlier, he also served on the boards of companies such as Aviva, Bharti Airtel, Sprint Corp, Colgate-Palmolive and Yahoo!.


Time of India
5 hours ago
Delhi court imposes cost on man for filing frivolous petition
New Delhi: Imposing a Rs 10,000 cost on a man for filing a frivolous petition, a Delhi court underscored the need to prevent wealthy litigants from filing unnecessary cases that hamper the justice system.

The court of additional sessions judge Saurabh Partap Singh Laler, in its June 9 order, agreed with the magistrate court's decision and observed that the plea appeared to be an effort to put pressure not only on the magisterial court but also on CGST officers.

"I must express this court's deep concern and disappointment regarding the revisionist's insidious and cavalier approach in filing this frivolous petition. Liberal access to justice should not be misconstrued as an opportunity to create chaos and indiscipline; such petitions should be met with substantial penalties," the judge said.

The court was hearing the revision petition of one Kapil Arora against an April 2025 magistrate court order, which dismissed his plea for compensation for curtailing his liberty. Arora was arrested by Central Goods and Services Tax (CGST) officials in Oct 2024 for allegedly evading payment of GST. He alleged that the CGST officials conducted an unauthorised search and had no evidence to prove that he made Rs 1,284 crore in sales between 2018 and 2024 while evading Rs 200 crore in GST, or that the Rs 2.18 crore recovered from his house and shop was not linked with GST.

The court observed that litigants abusing court procedures ought to face necessary repercussions. "It is important to prevent wealthy litigants from pursuing unnecessary litigation, as these cases can slow down the justice system and delay trials for other litigants. Courts must ensure that the legal system is not misused to obstruct or delay justice," the judge said.

The court found no fault with the magistrate when it observed that the accused's address and surety had to be verified, as the alleged crime was serious. Arora had moved a magisterial court, which dismissed his bail plea. He then challenged that order before a sessions court, which granted him bail but imposed several conditions on Nov 27. However, to satisfy the bail conditions before the magistrate, as part of the bail procedure in this case, his release was kept in abeyance until the sureties furnished by him were verified. The verification report was received by the magistrate on Nov 28, but Arora could not be released until Nov 29 due to a clerical error. The magistrate found no intentional error on the part of the court staff, and Arora was released after a warning was issued to a court staffer.


NDTV
9 hours ago
'Writing Is Thinking': Do Students Who Use ChatGPT Learn Less?
When Jocelyn Leitzinger had her university students write about times in their lives they had witnessed discrimination, she noticed that a woman named Sally was the victim in many of the stories.

"It was very clear that ChatGPT had decided this is a common woman's name," said Leitzinger, who teaches an undergraduate class on business and society at the University of Illinois in Chicago. "They weren't even coming up with their own anecdotal stories about their own lives," she told AFP.

Leitzinger estimated that around half of her 180 students used ChatGPT inappropriately at some point last semester, including when writing about the ethics of artificial intelligence (AI), which she called both "ironic" and "mind-boggling". So she was not surprised by recent research suggesting that students who use ChatGPT to write essays engage in less critical thinking.

The preprint study, which has not been peer-reviewed, was shared widely online and clearly struck a chord with some frustrated educators. The team of MIT researchers behind the paper has received more than 3,000 emails from teachers of all stripes since it was published online last month, lead author Nataliya Kosmyna told AFP.

'Soulless' AI Essays

For the small study, 54 adult students from the greater Boston area were split into three groups. One group used ChatGPT to write 20-minute essays, one used a search engine, and the final group had to make do with only their brains. The researchers used EEG devices to measure the brain activity of the students, and two teachers marked the essays.

The ChatGPT users scored significantly worse than the brain-only group on all levels. The EEG showed that different areas of their brains connected to each other less often. And more than 80 percent of the ChatGPT group could not quote anything from the essay they had just written, compared to around 10 percent of the other two groups. By the third session, the ChatGPT group appeared to be mostly focused on copying and pasting.

The teachers said they could easily spot the "soulless" ChatGPT essays because they had good grammar and structure but lacked creativity, personality and insight.

However, Kosmyna pushed back against media reports claiming the paper showed that using ChatGPT made people lazier or more stupid. She pointed to the fourth session, when the brain-only group used ChatGPT to write their essay and displayed even higher levels of neural connectivity. Kosmyna emphasised it was too early to draw conclusions from the study's small sample size but called for more research into how AI tools could be used more carefully to help learning.

Ashley Juavinett, a neuroscientist at the University of California San Diego who was not involved in the research, criticised some "offbase" headlines that wrongly extrapolated from the preprint. "This paper does not contain enough evidence nor the methodological rigour to make any claims about the neural impact of using LLMs (large language models such as ChatGPT) on our brains," she told AFP.

Thinking Outside The Bot

Leitzinger said the research reflected how she had seen student essays change since ChatGPT was released in 2022, as both spelling errors and authentic insight became less common. Sometimes students do not even change the font when they copy and paste from ChatGPT, she said. But Leitzinger called for empathy for students, saying they can get confused when the use of AI is encouraged by universities in some classes but banned in others.

The usefulness of new AI tools is sometimes compared to the introduction of calculators, which required educators to change their ways. But Leitzinger worried that students do not need to know anything about a subject before pasting their essay question into ChatGPT, skipping several important steps in the process of learning.

A student at a British university in his early 20s, who wanted to remain anonymous, told AFP he found ChatGPT a useful tool for compiling lecture notes, searching the internet and generating ideas. "I think that using ChatGPT to write your work for you is not right because it's not what you're supposed to be at university for," he said.

The problem goes beyond high school and university students. Academic journals are struggling to cope with a massive influx of AI-generated scientific papers. Book publishing is also not immune, with one startup planning to pump out 8,000 AI-written books a year.

"Writing is thinking, thinking is writing, and when we eliminate that process, what does that mean for thinking?" Leitzinger asked.