Texas startup sells plastic-eating fungi diapers to tackle landfill waste
Hiro Technologies co-founder Miki Agrawal poses with a diaper and a pouch of plastic-eating fungi at her company's laboratory in Austin, Texas. PHOTO: REUTERS
AUSTIN, Texas - Could baby poop and fungi work together to tackle landfill waste? That's the idea behind a new product launched by an Austin, Texas-based startup that sells disposable diapers paired with fungi intended to break down the plastic.
Each of Hiro Technologies' MycoDigestible Diapers comes with a packet of fungi to be added to the dirty diaper before it is thrown in the trash. After a week or two, the fungi are activated by moisture from feces, urine and the environment to begin the process of biodegradation.
Disposable diapers contribute significantly to landfill waste. An estimated 4 million tons of diapers were disposed of in the United States in 2018, with no significant recycling or composting, according to the Environmental Protection Agency. Diapers take hundreds of years to naturally break down. That means the very first disposable diaper ever used is still in a landfill somewhere.
To tackle this, Hiro Technologies turned to fungi. These organisms - which include mushrooms, molds, yeasts and mildew - derive nutrients from decomposing organic matter. In 2011, Yale University researchers discovered a type of fungus in Ecuador that can feed on polyurethane, a common polymer in plastic products. They found the fungus, Pestalotiopsis microspora, could survive on plastic even in oxygen-free environments such as landfills.
Hiro Technologies co-founder Tero Isokauppila, a Finnish entrepreneur who also founded medicinal mushroom company Four Sigmatic, said there are more than 100 species of fungi now known to break down plastics.
'Many, many moons ago, fungi evolved to break down trees, especially this hard-to-break-down compound in trees called lignin. ... Its carbon backbone is very similar to the carbon backbone of plastics because essentially they're made out of the same thing,' Mr Isokauppila said.
Three sealed jars at Hiro Technologies' lab show the stages of decomposition of a treated diaper over time. By nine months, the product appears as black soil - 'just digested plastic and essentially earth,' Mr Isokauppila said.
The company says it needs to do more research to find out how the product will decompose in real-world conditions in different climates and hopes to have the data to make a 'consumer-facing claim' by next year. It also plans to experiment with plastic-eating fungi on adult diapers, feminine care products and other items.
For now, it is selling 'diaper bundles' for US$35 a week online. Co-founder Miki Agrawal, who was also behind period underwear company Thinx, said the MycoDigestible Diapers had been generating excitement among consumers and investors since launching about a month ago, though she declined to give details. Ms Agrawal said the company chose to focus on diapers because they are the top household plastic waste item.
'There is a deleterious lasting effect that we haven't really thought about and considered,' Ms Agrawal said. 'Because when you throw something away, no one's asking themselves, 'Where's away?'' REUTERS