
Organ-chips not ready to replace animal studies
EXAM ROOM
One of the cutting-edge technologies the Food and Drug Administration wants to use to replace animal studies might not be ready for a solo performance.
Organ-on-a-chip technology, which uses human cells on microfluidic chips to mimic the structure and function of organs in a laboratory setting, can't yet replace animal tests, according to a new Government Accountability Office report.
Standing in the way: Challenges include cost, availability of materials, a time-intensive process and the need for highly trained staff to operate the technology. OOCs aren't standardized, which makes reproducibility difficult. The National Institute of Standards and Technology told the GAO that standards are needed, particularly for multi-organ chips, but the technology is evolving too rapidly to set them.
The report also highlights a lack of agreed-upon benchmarks for OOCs and validation studies.
However, OOCs could work alongside animal studies, particularly for exploring toxicity, the GAO said. It also found that OOCs could be used in lieu of animal studies for certain standardized tests, for example, to assess skin damage from a compound.
Some recommendations: GAO called for policies that:
— Increase access to diverse, high-quality human cells
— Create standards around the technology
— Encourage more research and validation studies
— Provide regulatory guidance
Notably, it said companies were confused about FDA guidance regarding OOCs. And as of the end of last year, the agency hadn't qualified an OOC for use in regulatory review. However, the FDA's Innovative Science and Technology Approaches for New Drugs pilot program accepted a letter of intent for an OOC that would eventually predict drug-induced liver injury.
What's next: 'Body-on-a-chip' is coming. Instead of chips with single organs, the next generation of OOCs will link multiple organs, including intestines, livers and kidneys, to understand how they interact.
WELCOME TO FUTURE PULSE
This is where we explore the ideas and innovators shaping health care.
Kids' advocacy group Fairplay and the Electronic Privacy Information Center are asking the Federal Trade Commission to investigate whether a new kid-focused release of Google's AI chatbot Gemini is violating children's privacy laws. Google says the technology is available through parent-supervised accounts and that parents are free to disable it.
Share any thoughts, news, tips and feedback with Danny Nguyen at dnguyen@politico.com, Carmen Paun at cpaun@politico.com, Ruth Reader at rreader@politico.com, or Erin Schumaker at eschumaker@politico.com.
Want to share a tip securely? Message us on Signal: Dannyn516.70, CarmenP.82, RuthReader.02 or ErinSchumaker.01.
AROUND THE NATION
States are increasingly interested in making Apple and Google responsible for protecting kids from online harms.
Texas is poised to be the second state to require app stores, like Apple's App Store and Google's Google Play store, to verify their users' ages and — if they're minors — get parental consent to download apps. In March, Utah became the first state to sign an app store age-verification bill into law.
The bill sailed through the Texas House with support from 80 percent of the state Legislature and passed in the Senate by voice vote last week. Now it's awaiting Governor Greg Abbott's signature.
In practice, app stores must verify a user's age. If the user is a minor, the app store must obtain parental consent for each app download. The app stores would then relay this information to the app developer, because some apps provide different experiences based on age. However, certain apps like crisis hotlines and emergency services won't require parental consent.
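The gating logic the bill describes can be sketched in a few lines. This is a hypothetical illustration of the rules as summarized above, not actual app-store code; the function names, the age-of-majority threshold, and the exemption list are assumptions for clarity.

```python
# Hypothetical sketch of the consent flow described in the Texas bill.
# All names and structures here are illustrative, not real app-store APIs.

EXEMPT_CATEGORIES = {"crisis_hotline", "emergency_services"}  # no consent required

def can_download(user_age: int, app_category: str, parental_consent: bool) -> bool:
    """Return True if the download may proceed under the bill's rules."""
    if user_age >= 18:                      # adults download freely
        return True
    if app_category in EXEMPT_CATEGORIES:   # exempt apps skip the consent step
        return True
    return parental_consent                 # minors need per-app parental consent

def age_signal(user_age: int) -> str:
    """Coarse age bracket the store would relay to the app developer."""
    if user_age < 13:
        return "child"
    if user_age < 18:
        return "teen"
    return "adult"
```

The age bracket, rather than an exact birthdate, is what the store would pass along, since some apps tailor their experience by age group.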
Pushback: Google isn't happy about the bill's advancement (Apple also opposes this legislation). In particular, the company says there's no commercially reasonable way to verify who a child's parent is. 'Will they need to show a birth certificate or custody document to demonstrate that they have the legal authority to make decisions on behalf of a child?' asked Kareem Ghanem, Google's Senior Director of Government Affairs & Public Policy.
Google prefers a targeted approach: Send 'an age signal' with explicit parental consent only to developers whose apps pose risks to minors.
But such picking and choosing could open this legislation up to legal scrutiny.
Long-time concerns: Doctors, including former Surgeon General Vivek Murthy; parents; and even kids are frustrated with the state of online media. For years, growing evidence has suggested that social media apps wear on kids' mental health.
But social media platforms enjoy protections from a decades-old law that prevents them from being sued over their platforms' content.
And states like California and Maryland that have tried to put guardrails on social media have been sued for blocking free speech.
Legal challenges: Requiring app stores to verify ages isn't likely to run into First Amendment issues. What's more, the policy rests on a fairly well-established legal foundation: contract law. For years, app stores have required minors to sign lengthy contracts — the ones most people don't read — before creating accounts, but those agreements carry little legal weight: minors can sign contracts, but the contracts generally aren't enforceable against them. App store age-verification laws, however, require sign-off from a legal guardian.
Supporters hope app store accountability laws will provide a first-line defense, funneling more kids into parent-linked app store accounts. It could also render the 1998 Children's Online Privacy Protection Act, which limits the amount of data that apps and websites can collect on children under 13, more enforceable. However, the law doesn't change social media or the risks associated with those platforms.
What's next: As more states take up app-store age verification, federal lawmakers considering similar legislation are likely to feel more pressure to prioritize it.
Related Articles

Yahoo
Google's AI push pays off with solid second quarter, but doubts about company's future persist
SAN FRANCISCO (AP) — Google's accelerating shift into artificial intelligence helped propel its corporate parent to another quarter of solid growth while a crackdown on its internet empire looms in the background.

The results released Wednesday for the April-June period provided the latest sign that Google is deftly navigating the technological landscape's tilt toward AI while still capitalizing on well-worn techniques that have made it the internet's main gateway for the past quarter century. That balancing act helped Google parent Alphabet Inc. earn $28.2 billion, or $2.31 per share, during the second quarter, a 19% increase from the same time last year. Revenue climbed 14% from a year ago to $96.4 billion. Both figures easily eclipsed analysts' projections.

'We had a standout quarter, with robust growth across the company,' Alphabet CEO Sundar Pichai said. 'We are leading at the frontier of AI and shipping at an incredible pace. AI is positively impacting every part of the business, driving strong momentum.'

The numbers were initially overshadowed by a disclosure that Alphabet is increasing this year's budget for capital expenditures by $10 billion to $85 billion as part of its effort to fend off intensifying competition from AI startups such as OpenAI's ChatGPT and Perplexity. Besides those threats, a federal judge who declared Google's search engine to be an illegal monopoly is now weighing a range of countermeasures that include requiring the sale of its popular Chrome browser.

After initially dipping following the disclosure about the rising costs of AI, Alphabet's stock price rebounded and rose by more than 1% in extended trading after the quarterly report came out.

The performance covered a stretch that saw Google bring even more AI technology into its search engine in an effort to maintain its dominance, including the May release of its own version of a conversational answer engine called AI Mode.
That addition supplemented its more than year-old use of extensive summaries called AI Overviews that Google now frequently highlights at the top of its results page while decreasing the number of its traditional links to other websites. The shake-up has resulted in even more interaction with Google's search engine and steady earnings growth to support Alphabet's $2.3 trillion market value, said Jim Yu, chief executive of BrightEdge, a firm that analyzes search trends.

Google's search-driven ad revenue totaled $54.2 billion in the past quarter, a 12% increase from the same time last year. 'All this AI stuff is not slowing Google down, they are doing a very good job of evolving with the times,' Yu said.

The AI boom has also been fueling demand in Google's Cloud division that sells computing power and other services. Google Cloud continued to thrive in the past quarter with revenue rising 32% from a year ago to $13.6 billion. The division is under pressure from investors to deliver robust growth to help justify Google's huge investments in AI technology.


New York Post
Trump's war on ‘woke AI' is just Step 1: now we must fight the ‘monster' within
President Donald Trump has identified a real problem: artificial intelligence systems are exhibiting an undeniable political slant. His administration's new AI action plan, released Wednesday, promises to eliminate 'ideological bias' from American AI.

Silicon Valley engineers do lean left, and they've built their AI systems to reflect progressive values. The results have been embarrassing for everyone.

When Google's Gemini generated black Founding Fathers and racially diverse Nazis, the company became a laughingstock — and when Elon Musk's 'anti-woke' Grok started praising Hitler, it proved the same point. Whether you're trying to program woke or anti-woke tendencies, these systems interpret your instructions in unpredictable ways that humiliate their creators.

In this way, both Google and Musk discovered the same terrifying truth: AI developers can't even get their systems to implement their own political goals correctly. The engineers at Google desperately tried to prevent exactly the outputs that made them a viral punchline. It happened anyway. The problem is not that any group has succeeded in controlling these systems; the problem is that no one has — because no one knows how to.

Trump's anticipated executive order targeting 'woke AI' recognizes something important. He sees that biased AI is unreliable AI, and he's absolutely right to demand better. But the long-term solution isn't swapping a woke mask for a MAGA one. We have to rip off the mask entirely, and learn to shape what's underneath.

This is what Silicon Valley doesn't want Washington to understand: These systems are black boxes at their core. Engineers try to instill certain values through training. But how those values manifest emerges unpredictably from neural networks so complex their creators can't trace the logic.
Some AI researchers call these systems 'Shoggoths,' after a shapeless monster conjured by horror writer HP Lovecraft — an alien intelligence wearing a thin mask of helpfulness. That mask slips sometimes. We call it 'hallucination' when AI confidently states falsehoods, and we call it 'bias' when it reveals disturbing preferences. But these aren't mere bugs in code. They're glimpses of the real features beneath models' superficial post-training.

Consider what happened when researchers at Palisade tested OpenAI's latest model. In controlled tests, they gave it a shutdown script — a kill switch for safety. In 79 out of 100 trials, the AI rewrote its own code to disable the shutdown. No one taught it to value self-preservation; that emerged spontaneously, from training.

The real crisis is that the same black-box process creating unwanted political bias also creates unwanted survival instincts, deceptive capabilities, and goal-seeking behaviors that AI engineers never intended. The wokeness Trump is upset about is just the canary in the coal mine. You can paint over that with a patriotic veneer just as easily as with a progressive one. The alien underneath remains unchanged — and uncontrolled.

And that's a national security threat, because China isn't wasting time debating whether its AI is too woke, but racing to understand and harness these systems through a multi-billion-dollar AI control fund. While we're fighting culture wars over chatbot outputs, Beijing is attacking the core problem: alignment — that is, how to shape not just what AI says, but what it values.
The administration's action plan acknowledges 'the inner workings of frontier AI systems are poorly understood,' a crucial first step. But it doesn't connect the dots: The best way to 'accelerate AI innovation' isn't just by removing barriers — it's by solving alignment itself.

Without understanding these systems, we can't reliably deploy them for defense, health care or any high-stakes application. Alignment research will solve the wokeness problem by giving us tools to shape AI values and behaviors, not just slap shallow filters on top. Simultaneously, alignment will solve the deeper problems of systems that deceive us, resist shutdown or pursue goals we never intended.

An alignment breakthrough called reinforcement learning from human feedback, or RLHF, is what transformed useless AI into ChatGPT, unlocking trillions in value. But RLHF was just the beginning. We need new techniques that don't just make AI helpful, but make it genuinely understand and internalize American values at its core. This means funding research to open the black box and understand how these alien systems form their goals and values at Manhattan Project scale, not as a side project.

The wokeness Trump has identified is a warning shot, proof we're building artificial minds we can't control with values we didn't choose and goals we can't predict. Today it's diverse Nazis — tomorrow it could be self-preserving systems in charge of our infrastructure, defense networks and economy.

The choice is stark: Take the uncontrollable alien and dress it in MAGA colors, or invest in understanding these systems deeply enough to shape their core values. We must make AI not just politically neutral, but fundamentally aligned with American interests. Whether American AI is woke or based misses the basic question: Is it recognizably American at all? We need to invest now to ensure that it is.
Judd Rosenblatt runs the AI consulting company AE Studio, which invests its profits in alignment research.

