Latest news with #CarmenPaun


Politico
22-05-2025
- Health
Organ-chips not ready to replace animal studies
EXAM ROOM

One of the cutting-edge technologies the Food and Drug Administration wants to use to replace animal studies might not be ready for a solo performance. Organ-on-a-chip technology, which uses human cells on microfluidic chips to mimic the structure and function of organs in a laboratory setting, can't yet replace animal tests, according to a new Government Accountability Office report.

Standing in the way: Challenges include cost, availability of materials, a time-intensive process and the need for highly trained staff to operate the technology. OOCs also aren't standardized, which makes reproducibility difficult. The National Institute of Standards and Technology told the GAO that standards are needed, particularly for multi-organ chips, but that the technology is evolving too rapidly to set them. The report also highlights a lack of agreed-upon benchmarks for OOCs and of validation studies.

However, OOCs could work alongside animal studies, particularly for exploring toxicity, the GAO said. It also found that OOCs could be used in lieu of animal studies for certain standardized tests, for example, to assess skin damage from a compound.

Some recommendations: GAO called for policies that:
— Increase access to diverse, high-quality human cells
— Create standards around the technology
— Encourage more research and validation studies
— Provide regulatory guidance

Notably, it said companies were confused about FDA guidance regarding OOCs. And as of the end of last year, the agency hadn't qualified an OOC for use in regulatory review. However, the FDA's Innovative Science and Technology Approaches for New Drugs pilot program accepted a letter of intent for an OOC that would eventually predict drug-induced liver injury.

What's next: 'Body-on-a-chip' is coming. Instead of chips with single organs, the next generation of OOCs will link multiple organs — including intestines, livers and kidneys — to understand how they interact.

WELCOME TO FUTURE PULSE

This is where we explore the ideas and innovators shaping health care.

Kids' advocacy group Fairplay and the Electronic Privacy Information Center are asking the Federal Trade Commission to investigate whether a new kid-focused release of Google's AI chatbot Gemini violates children's privacy laws. Google says the technology is available through parent-supervised accounts and that parents are free to disable it.

Share any thoughts, news, tips and feedback with Danny Nguyen at dnguyen@ Carmen Paun at cpaun@ Ruth Reader at rreader@ or Erin Schumaker at eschumaker@ Want to share a tip securely? Message us on Signal: Dannyn516.70, CarmenP.82, RuthReader.02 or ErinSchumaker.01.

AROUND THE NATION

States are increasingly interested in making Apple and Google responsible for protecting kids from online harms. Texas is poised to be the second state to require app stores, like Apple's App Store and Google's Google Play store, to verify their users' ages and — if they're minors — get parental consent before they download apps. In March, Utah became the first state to sign an app store age-verification bill into law.

The bill sailed through the Texas House with support from 80 percent of the state Legislature and passed the Senate by voice vote last week. Now it's awaiting Gov. Greg Abbott's signature.

In practice, app stores must verify a user's age. If the user is a minor, the app store must obtain parental consent for each app download.
The app stores would then relay this information to the app developer, because some apps provide different experiences based on age. However, certain apps, like crisis hotlines and emergency services, won't require parental consent. (A schematic sketch of this flow appears at the end of this item.)

Pushback: Google isn't happy about the bill's advancement. (Apple also opposes the legislation.) In particular, the company says there's no commercially reasonable way to verify who a child's parent is. 'Will they need to show a birth certificate or custody document to demonstrate that they have the legal authority to make decisions on behalf of a child?' asked Kareem Ghanem, Google's senior director of government affairs and public policy.

Google prefers a targeted approach: Send 'an age signal,' with explicit parental consent, only to developers whose apps pose risks to minors. But such picking and choosing could open the legislation up to legal scrutiny.

Long-time concerns: Doctors, including former Surgeon General Vivek Murthy; parents; and even kids are frustrated with the state of online media. For years, growing evidence has suggested that social media apps wear on kids' mental health. But social media platforms enjoy protections from a decades-old law that shields them from being sued over their platforms' content. And states like California and Maryland that have tried to put guardrails on social media have been sued on free speech grounds.

Legal challenges: Requiring app stores to verify ages isn't likely to run into First Amendment issues. What's more, the policy rests on a fairly well-established legal foundation: contract law. For years, app stores have required minors to sign lengthy contracts — the ones most people don't read — before creating accounts. Legally, that doesn't hold up: Minors can sign contracts, but the contracts aren't enforceable. App store age-verification laws, however, require sign-off from a legal guardian.

Supporters hope app store accountability laws will provide a first line of defense, funneling more kids into parent-linked app store accounts. Such laws could also make the 1998 Children's Online Privacy Protection Act, which limits the amount of data that apps and websites can collect on children under 13, more enforceable. However, they don't change social media itself or the risks associated with those platforms.

What's next: As more states take up app store age verification, federal lawmakers considering similar legislation are likely to feel more pressure to prioritize it.
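To make the mechanics concrete, here is a minimal sketch of the store-side flow described above. It is an illustration only: the names, fields, exemption categories and 18-year threshold are assumptions invented for the example, not the statute's text or any real Apple or Google API.

from dataclasses import dataclass

# Hypothetical sketch of the consent flow the bill describes; every name,
# field and threshold here is an illustrative assumption.

EXEMPT_CATEGORIES = {"crisis_hotline", "emergency_services"}  # consent not required

@dataclass
class AgeSignal:
    age_category: str        # "adult" or "minor"
    parental_consent: bool   # meaningful only for minors

def authorize_download(age: int, app_category: str, consent_granted: bool) -> AgeSignal:
    """App store side of the flow: verify the user's age, require per-app
    parental consent for minors, then relay an age signal to the developer."""
    if age >= 18:
        return AgeSignal("adult", parental_consent=False)
    if app_category in EXEMPT_CATEGORIES:
        return AgeSignal("minor", parental_consent=False)
    if not consent_granted:
        raise PermissionError("Download blocked: parental consent required")
    return AgeSignal("minor", parental_consent=True)

Note the per-app consent check: under the bill, it runs for every download by a minor, whereas Google's preferred approach would send an age signal only to developers whose apps pose risks to minors.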


Politico
19-03-2025
- Health
Google embraces health care's agentic AI era
TECH MAZE

Google is betting big on bots. The company is working on several initiatives that give doctors and researchers access to AI agents built on large language models that can review scientific journals more quickly than humans and advise on research proposals, patient diagnoses and treatment options. This use of bots as agents is referred to as agentic AI.

Developing new therapies: The newest of the bots is TxGemma, a collection of language models aimed at helping pharmaceutical companies with drug discovery. Google announced the initiative Tuesday at its annual Check-Up event in New York City. TxGemma is based on Gemma, Google's family of open, general-purpose large language models. Joëlle Barral, senior director of research and engineering at Google DeepMind, the company's artificial intelligence laboratory, said that developers can use TxGemma to build products that can answer questions about a gene's relationship to disease, a drug's potential for toxicity and whether a drug would likely clear a clinical trial. Researchers can also ask Gemma-bots to explain their reasoning and how they arrived at specific answers — opening up the algorithm's black box. (For a rough sense of what querying TxGemma could look like, see the code sketch at the end of this edition.)

Acquiring patient data pre-visit: The company's other bets are more focused on patient care. Google has deployed its Articulate Medical Intelligence Explorer at Beth Israel Deaconess Medical Center in Boston as part of a prospective study to explore how well AMIE can collect information from patients before their visit and what role it can best play in patient care. Google also has a 'co-scientist' — a Gemini-based chatbot that helps researchers review scientific literature, develop a hypothesis and even compose potential research proposals.

Customizing tools for doctors and patients: Google is helping doctors design tools for their own purposes. At the Netherlands' Princess Máxima Center, doctors used Gemini to build a bot called Capricorn that helps them pick personalized cancer treatments for kids. One doctor from Princess Máxima said the task once took him two to three days to complete, but 'now it takes 40 seconds.' Meanwhile, the American Cancer Society is using Gemini as the basis for a bot called Ana that's designed to answer basic questions patients have about their cancer diagnosis, their treatments and where to find social support resources.

Why it matters: AI is moving fast in health care, and Google seems eager to develop a go-to assistant for doctors, researchers and, now, drug developers.

WELCOME TO FUTURE PULSE

This is where we explore the ideas and innovators shaping health care.

Nursing unions are pushing back against AI health care workers, the Associated Press reports.

Share any thoughts, news, tips and feedback with Carmen Paun at cpaun@ Daniel Payne at dpayne@ Ruth Reader at rreader@ or Erin Schumaker at eschumaker@ Are you a current or former federal worker who wants to share a tip securely? Message us on Signal: CarmenP.82, DanielP.100, RuthReader.02 or ErinSchumaker.01.

FORWARD THINKING

For years, Congress has extended pandemic-era telehealth rules without making them permanent. Lawmakers opted for another short-term solution earlier this month, extending the pandemic policies through September. But some health providers and remote care-related businesses are tired of the short-term stopgaps.
Jiang Li, CEO of remote patient-monitoring company Vivalink, said in a statement that 'progress will stall' without a long-term policy strategy that would allow providers to invest in new methods of offering care. The risk that the pandemic rules could change or lapse in a fight over government funding could discourage further investment.

Even so: Concerns about stalled progress from Congress' last-minute, short-term fixes have been raised on the Hill before — and remote care has continued to boom.

Why it matters: The rise of telehealth, hospital-at-home programs and other remote care services represents a significant transformation in how patients experience health care today. How policymakers handle remote care could determine access to — and the cost of — care in the coming decades.
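For a rough sense of what querying TxGemma could look like in code (as referenced in the TECH MAZE item above), here is a minimal sketch using the Hugging Face transformers library. The model ID, prompt format and answer style are assumptions based on how Gemma-family models are typically published, not code from Google's announcement.

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/txgemma-2b-predict"  # assumed Hugging Face model ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Hypothetical therapeutic-task prompt: ask whether a compound, given as a
# SMILES string (aspirin here), is likely to be toxic, the kind of question
# the newsletter says developers can put to TxGemma.
prompt = "Is the drug with SMILES CC(=O)Oc1ccccc1C(=O)O toxic? Answer (A) Yes or (B) No."

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)

# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))

A chat-tuned variant, rather than this prediction-style prompt, would be the natural fit for the ask-the-model-to-explain-its-reasoning use Barral describes.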

Politico
12-03-2025
- Health
Seeking: Responsible AI worldwide
WORLD VIEW

The World Health Organization will team up with a Dutch university to help its member countries adopt responsible artificial intelligence technologies, the global health body said earlier this month. The WHO has designated the Digital Ethics Centre at Delft University of Technology in the Netherlands as a WHO Collaborating Centre on AI for health governance. The center will research key AI health applications and help inform the WHO's guidance and policies on the technology.

Why it matters: The global health body said AI has the potential to reshape health care, save lives and improve health and well-being. But for that to happen, ethical safeguards must be included and evidence-based policies must be followed, the WHO said. The WHO and the United Nations, its parent organization, aim to ensure that developing and wealthy countries alike benefit from the rapid development and adoption of AI, and that algorithms adhere to local laws without harming the public's health.

WELCOME TO FUTURE PULSE

This is where we explore the ideas and innovators shaping health care.

Robots could help parents have better conversations with their children, according to a small study published today in Science Robotics.

Share any thoughts, news, tips and feedback with Carmen Paun at cpaun@ Daniel Payne at dpayne@ Ruth Reader at rreader@ or Erin Schumaker at eschumaker@

FORWARD THINKING

Chronic disease rates among children may be higher than previously believed, according to new research published in Academic Pediatrics. The share of people ages 5 to 25 with one or more chronic diseases rose from nearly 23 percent in 1999-2000 to more than 30 percent in 2017-2018, according to the researchers, from UCLA and Harvard.

A few diagnoses are behind the larger swell, the researchers found: ADHD, ADD, autism, asthma, prediabetes, depression and anxiety. 'It is incumbent for the U.S. health system to seek ways to treat these patients in pediatric settings and eventually matriculate them into adult care,' the researchers wrote.

Why it matters: The incoming administration's health work, led by HHS Secretary Robert F. Kennedy Jr., promises to focus on better understanding and treating chronic diseases. President Donald Trump created the Make America Healthy Again Commission, headed by Kennedy, to spearhead the work across agencies.

Even so: Kennedy has for years spread baseless claims about the causes of rising chronic disease rates in kids — and that may influence the administration's approach to tackling the problem.

Politico
11-03-2025
- Health
In patient portals, people prefer AI
THE LAB

Duke patients in a new study prefer patient portal messages written using artificial intelligence — until they learn AI wrote them. Those findings come from a paper published today in JAMA Network Open, in which researchers asked survey participants, most of them patients, to compare clinical vignettes written by ChatGPT and by human clinicians.

Participants generally preferred the AI-drafted messages, which tended to be longer and more detailed, likely making them seem more empathetic than those written by humans. But when participants were told that humans wrote the messages, or when no author was specified, they reported greater satisfaction than when they were told the messages were AI-generated. This suggests patients assume communications are written by humans unless told otherwise.

Even so: Researchers surveyed more than 1,400 patients and community members ages 18 and older from the Duke University Health System in North Carolina. Since participants were from a single health system and tended to be older, highly educated and white, the findings can't be extrapolated to the general population. People most familiar with generative AI tend to be younger and male, so results might differ with a different survey population.

Why it matters: Health systems are grappling with whether to disclose AI use to patients, the study authors say. At the same time, doctors' administrative burdens, including responding to patient portal messages, are growing, and AI could lighten their workload.

Bottom line: The study was designed to measure how transparency about AI use affects the patient experience. Seventy-five percent of respondents said they were happy with the messages they received, regardless of who authored the communications, whether the author was disclosed or how serious the clinical topic was. That suggests being transparent about AI use doesn't drastically hurt patient confidence. 'These findings give us confidence to use technologies like this to potentially help our clinicians reduce burnout, while still doing the right thing and telling our patients when we use AI,' Dr. Anand Chowdhury, study co-author and assistant professor at Duke University School of Medicine, said in a statement.

WELCOME TO FUTURE PULSE

This is where we explore the ideas and innovators shaping health care.

Cellular reprogramming, an experimental treatment that has extended the lives of mice but can come with serious side effects, is nearing testing in humans, The Washington Post reports.

Share any thoughts, news, tips and feedback with Carmen Paun at cpaun@ Daniel Payne at dpayne@ Ruth Reader at rreader@ or Erin Schumaker at eschumaker@ Are you a current or former federal worker who wants to share a tip securely? Message us on Signal: CarmenP.82, DanielP.100, RuthReader.02 or ErinSchumaker.01.

AROUND THE AGENCIES

Dr. Douglas Kelly, the FDA's deputy center director for science and chief scientist of the Center for Devices and Radiological Health, is leaving the agency. His departure is further evidence of the agency's brain drain. In a departing post on LinkedIn, he said he had helped recruit senior talent from technology and medical device companies, including Troy Tazbaz, former director of the Digital Health Center of Excellence, and Dr. Ross Segan, director of the CDRH's Office of Product Evaluation and Quality.
Kelly, a venture capitalist before he joined the FDA in 2020, was also responsible for the Total Product Life Cycle Advisory Program, which pushed the FDA to engage with device makers earlier in the development process to help more products make it through authorization.

Why it matters: CDRH has recently lost key talent who were working to make the agency more nimble and more capable of reviewing advanced medical technology, particularly AI. The Department of Government Efficiency made cuts to the CDRH in February, which included firing Segan. The medical device industry, which pays the user fees that cover the salaries of the staff who review devices, responded negatively to the cuts. 'If these cuts are not reversed, there is no question that it will slow down the process for new technologies to get to market, particularly in health care AI and the most innovative therapies,' Scott Whitaker, CEO of the medical device trade organization AdvaMed, told POLITICO. Some staff have since been asked to return.

What's next: Kelly is heading back into the venture world, according to a person familiar with his plans who was granted anonymity.