Robotic surgery hits 'milestone' with autonomous gallbladder removal
Medical robotics experts at Johns Hopkins and Stanford universities revealed in a new study that they have "significantly" pushed the envelope on what robots can do in the operating room without human doctors at the controls, using artificial intelligence to teach them how to overcome unexpected obstacles during surgical procedures.
The study, published Wednesday in the journal Science Robotics, described how, using a newly developed AI platform called Surgical Robot Transformer-Hierarchy, or SRT-H, scientists were able to instruct robotic arms to perform eight ex vivo gallbladder procedures "with 100% accuracy" -- completely autonomously, with no human help.
SRT-H is the latest advancement in "computer vision," a field of AI that enables robots to "see" and interpret images and videos, much as humans do. The system was trained on videos of human surgeons performing the same procedure on pigs, supplemented with natural-language captions describing the tasks. It can also respond to human voice commands during procedures.
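The name hints at the design: a hierarchy in which a high-level module decides, in language, which surgical step comes next, and a low-level module translates that instruction into arm motion. The sketch below is a minimal, hypothetical illustration of that two-level pattern in PyTorch; the module names, network sizes, and instruction set are invented for illustration and are not taken from the study.

```python
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """Tiny CNN stand-in for the vision backbone."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, img):
        return self.net(img)

class HighLevelPolicy(nn.Module):
    """Chooses the next subtask instruction (an index into a fixed set)."""
    def __init__(self, num_instructions, dim=128):
        super().__init__()
        self.encoder = ImageEncoder(dim)
        self.head = nn.Linear(dim, num_instructions)

    def forward(self, img):
        return self.head(self.encoder(img))  # logits over instructions

class LowLevelPolicy(nn.Module):
    """Maps image + chosen instruction to a continuous arm action."""
    def __init__(self, num_instructions, action_dim=7, dim=128):
        super().__init__()
        self.encoder = ImageEncoder(dim)
        self.instr_emb = nn.Embedding(num_instructions, dim)
        self.head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                  nn.Linear(dim, action_dim))

    def forward(self, img, instr_idx):
        z = torch.cat([self.encoder(img), self.instr_emb(instr_idx)], dim=-1)
        return self.head(z)

# One control step: the high level picks a subtask, the low level moves the arm.
INSTRUCTIONS = ["grab the gallbladder head", "place clip", "cut with scissors"]
high = HighLevelPolicy(len(INSTRUCTIONS))
low = LowLevelPolicy(len(INSTRUCTIONS))
frame = torch.randn(1, 3, 224, 224)    # camera image (random stand-in)
instr = high(frame).argmax(dim=-1)     # chosen subtask index
action = low(frame, instr)             # e.g. a 7-DoF motion command
print(INSTRUCTIONS[instr.item()], action.shape)
```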
Most significantly, SRT-H showed an ability to self-correct when scientists threw it curve balls such as adding blood-like dyes that changed the appearance of the gallbladder and surrounding tissues. They reported it was still able to precisely perform tasks such as strategically placing clips and severing parts with scissors.
The results were deemed important because they appear to move medical science closer to the goal of making AI-powered robots reliable enough for "surgical autonomy," especially in routine procedures such as gallbladder removals, which are performed hundreds of thousands of times each year.
The study was led by medical roboticist Axel Krieger, an associate professor at JHU's Whiting School of Engineering, who in 2022 used an earlier AI-based system to autonomously perform the first small-incision, camera-guided surgical procedure on the soft tissue of a pig.
SRT-H "represents a fundamental shift from robots that can execute specific surgical tasks to robots that understand surgical procedures," Kreiger told UPI in emailed comments.
It makes several key technological advances, including eliminating the requirement that robots operate only on specially marked tissue within highly controlled environments.
"The SRT-H adapts to individual anatomical features in real-time, making decisions on the fly and self-correcting when things don't go as expected -- much like a human surgeon would," he said.
Also, because the system is built on the same machine learning architecture that powers ChatGPT, the robot can be interactive.
"it can respond to spoken commands like 'grab the gallbladder head' or corrections like 'move the left arm a bit to the left,' and actually learn from this feedback during the procedure," Krieger said.
The researcher added that the learning framework "trains the robot by watching videos of surgeries, similar to how medical students learn. This approach has proven robust enough that the robot performed unflappably across trials with the expertise of a skilled human surgeon, even during unexpected scenarios typical in real surgeries."
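"Learning by watching" here refers to imitation learning: demonstration frames are paired with the surgeon's recorded motions, and a policy is fit to reproduce them. The toy behavior-cloning loop below illustrates only the general idea; the data shapes, network, and loss are stand-ins, not the paper's method.

```python
import torch
import torch.nn as nn

# Toy policy: flatten the frame and regress a 7-DoF action from it.
policy = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256),
                       nn.ReLU(), nn.Linear(256, 7))
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

frames = torch.randn(32, 3, 64, 64)   # demo video frames (random stand-ins)
actions = torch.randn(32, 7)          # surgeon's recorded motions at those frames

for step in range(100):               # imitation: match the demonstrations
    loss = nn.functional.mse_loss(policy(frames), actions)
    opt.zero_grad()
    loss.backward()
    opt.step()
```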
Other medical robotics experts contacted by UPI agreed the results are a notable achievement for the surgical robotics field but cautioned it's still a long road until the day the machines are deemed safe enough to operate autonomously on human patients.
Tamas Haidegger, a professor at Óbuda University in Budapest, Hungary, and technical lead of medical robotics research at its Antal Bejczy Center for Intelligent Robotics, said the results demonstrate that "landmark progress has emerged in autonomous surgical systems" in recent months.
The case for autonomous interventions is indeed "strengthening," but are we on the brink of a breakthrough in autonomous robotic surgery? Yes and no, he answered.
"The employed SRT-H system presented capability for error correction and generalization, which may lead to scalability," Haidegger said. "This is major. A first baby-step."
The current experiments, however, "only represent embryonic advancement in the super-complex domain of human surgery," he said, comparing the development of autonomous robotic surgery to that of self-driving cars.
"In 2004, all participants failed in crossing the Mojave desert during the first DARPA challenge, yet by 2007, even the simulated urban environment was manageable for most of the systems. Yet, when this technology hit the road, significant shortcomings were revealed, starting with the incompleteness of road signs to changing weather conditions, and most importantly -- the unpredictability of other drivers.
"Variability in live human surgery -- such as individual anatomy and pathology, hemorrhage, physiological tissue movement and unmodelled tissue properties -- limits immediate clinical deployment of an autonomous system."
Meanwhile, a British expert who was not connected to the new study agreed that it "really highlights the art of the possible with AI and surgical robotics."
Danail Stoyanov, professor of robot vision at University College London's Department of Computer Science and co-director of the UCL Hawkes Institute, told UPI the field of computer vision is making "incredible advances" for surgical video, adding that the availability of open robotic platforms for such research "make it possible to demonstrate surgical automation."
He cautioned, however, that "many other challenges remain to make this practical in real clinical use. Technically, generalizing to clinical conditions remains very hard, and there are additional hurdles with medical device verification, safety, efficacy and, of course, cost and liability."
JHU's Krieger similarly noted the current study merely provides "proof of concept. Before any clinical application, we need extensive additional testing and regulatory approval to ensure patient safety remains the top priority."
That being said, the immediate next step is to "train and test the system on more types of surgeries and expand its capabilities to perform complete autonomous surgeries from start to finish.
"Currently, we've demonstrated success with a lengthy phase of gallbladder removal, but we want to broaden the system's surgical repertoire" to include procedures that would benefit from the robot's "consistent precision and ability to operate in challenging conditions where human factors like fatigue or tremor might be limiting factors," Krieger said.
