
This Decentralized AI Could Revolutionize Drug Development
In April, Rowan Labs released Egret-1, a suite of machine-learned neural network potentials designed to simulate organic chemistry with atomic precision. In plain terms, this model offers 'the level of accuracy from national supercomputers at a thousand to a million times the speed,' Rowan Labs Co-founder Ari Wagen said on Zoom. And they've open-sourced the entire package.
But the real acceleration comes from Rowan's partnership with Macrocosmos, which runs subnet 25 of the decentralized AI protocol Bittensor. It's an unlikely yet potent collaboration — Rowan's high-accuracy synthetic data generation, now powered by a decentralized compute layer, could drastically reduce the cost and time it takes to discover new therapeutic compounds and treatments.
At the heart of Rowan's work is the idea of training AI neural networks not on scraped web data, but on physics in action — specifically, quantum mechanics. 'We build synthetic datasets by running quantum mechanics equations,' Wagen explained. 'We're training neural networks to recreate the outputs of those equations. It's like Unreal Engine [a leading real-time 3D game engine], but for simulating the atomic-level real world.'
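Wagen's description can be made concrete with a toy sketch. The snippet below is illustrative only — it is not Rowan's pipeline or Egret-1's architecture. It uses a Morse potential as a cheap stand-in for an expensive quantum mechanical calculation, generates a synthetic dataset of bond lengths and energies from it, and fits a tiny neural network to reproduce those energies, which is the basic recipe behind any machine-learned potential.

```python
import numpy as np

# Stand-in for a quantum mechanical calculation: a Morse potential giving
# the energy of a diatomic molecule as a function of bond length r.
# (In practice, reference energies come from methods like DFT or coupled cluster.)
def reference_energy(r, d_e=1.0, a=1.5, r_e=1.0):
    return d_e * (1.0 - np.exp(-a * (r - r_e))) ** 2

rng = np.random.default_rng(0)
r_train = rng.uniform(0.7, 2.5, size=(256, 1))   # sampled geometries
e_train = reference_energy(r_train)              # the "synthetic dataset"

# A tiny MLP (1 -> 32 -> 1) trained to reproduce the reference energies.
w1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
w2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(r):
    h = np.tanh(r @ w1 + b1)
    return h, h @ w2 + b2

_, e_init = forward(r_train)
loss0 = np.mean((e_init - e_train) ** 2)

# Full-batch gradient descent on mean squared error, with manual backprop.
for _ in range(2000):
    h, e_pred = forward(r_train)
    grad_e = 2.0 * (e_pred - e_train) / len(r_train)   # dL/d(prediction)
    gw2 = h.T @ grad_e; gb2 = grad_e.sum(0)
    gh = grad_e @ w2.T * (1.0 - h ** 2)                # backprop through tanh
    gw1 = r_train.T @ gh; gb1 = gh.sum(0)
    w1 -= lr * gw1; b1 -= lr * gb1
    w2 -= lr * gw2; b2 -= lr * gb2

_, e_fit = forward(r_train)
loss1 = np.mean((e_fit - e_train) ** 2)
print(f"MSE before training: {loss0:.4f}, after: {loss1:.6f}")
```

Once trained, the network evaluates in microseconds what the reference calculation would take far longer to compute — the same speed-versus-accuracy trade that, at vastly larger scale and with real quantum chemistry data, motivates models like Egret-1.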
This isn't theory. It's application. Rowan's models can already predict critical pharmacological properties — like how tightly a small molecule binds to a protein. That matters when trying to determine if a potential drug compound will actually work. 'Instead of running experiments, you can run simulations in the computer,' Wagen said. 'You save so much time, so much money and you get better results.'
To generate the training data for these models, Rowan used conventional quantum mechanical simulations. But to go further — to make the models more generalizable and robust — they need more data. That's where Macrocosmos comes in.
'We've spent the past year trying to incentivize better molecular dynamics,' said Macrocosmos' Founding Engineer, Brian McCrindle. 'The vision is to let Rowan spin up synthetic data generation across our decentralized compute layer — at fractions of the cost of AWS or centralized infrastructure.'
The advantage isn't just cost — it's scale, speed and resilience. 'If we can generate the next training dataset in a month instead of six, the next version of Egret will come out twice as fast,' McCrindle added.
The stakes are enormous. With the right volume and variety of high-quality data, Rowan hopes to build 'a model of unprecedented scale that can simulate chemistry and biology at the atomic level,' Wagen said. That's not hyperbole — it's a strategy to compress the drug discovery timeline by years and open the door to faster cures for rare diseases and more effective preclinical toxicity testing.
And it doesn't stop at human health. Rowan is already working with researchers tackling carbon capture, atomic-level manufacturing and even oil spill cleanup using this technology. 'We can predict how fast materials break down, or optimize catalysts to degrade pollutants,' said Rowan Co-founder Jonathon Vandezande, a materials scientist by training.
Of course, synthetic data raises the question of reliability. Wagen was clear: 'The synthetic data we generate is more accurate than what you'd get from running a physical experiment. Real instruments have worse error bars than our quantum mechanical approximations.'
And unlike earlier, more opaque efforts such as IBM Watson Health, Rowan posts all of its model benchmarks publicly. 'You can see exactly where they perform well — and where they don't,' he said.
So what's next? Within a year, both teams aim to release a new peer-reviewed paper demonstrating how decentralized compute generated the next generation of chemical simulation models. 'This partnership lets us take what would have been a six-figure cloud bill and decentralize it,' McCrindle noted. 'That's the promise of decentralized science.'
It's also a compelling proof point for Bittensor, which now supports over 100 subnets tackling everything from international soccer match predictions to AI deepfake detection. But for McCrindle, the vision is simpler: 'Can we incentivize any kind of science? That's always been the question.'
With Egret-1 and Macrocosmos' decentralized AI platform — the answer looks increasingly like a yes.