Theta Labs Launches Hybrid Edge Cloud Architecture with Dynamic Supply-Demand GPU Marketplace to Democratize Access to Computing

New decentralized system delivers enterprise-grade AI computing at up to 70% lower cost by intelligently combining cloud and distributed resources
Theta Labs today announced the beta release of the hybrid edge cloud architecture for its Theta EdgeCloud network, a computing platform that combines traditional cloud-based GPUs with a distributed network of over 30,000 community-operated edge nodes. The platform is designed to provide cost-effective access to high-performance computing resources for AI model training and inference, video processing, financial modeling, and other GPU-intensive tasks. The release also introduces a decentralized GPU marketplace designed to keep compute pricing competitive and transparent across the platform.
The hybrid architecture addresses a fundamental challenge in modern computing: the rising cost and limited availability of the specialized hardware needed for AI and machine learning tasks. Traditional cloud providers charge premium rates for GPU access, often pricing out smaller research teams, startups, and academic institutions. By integrating distributed computing resources contributed by community members alongside conventional cloud infrastructure, the platform provides comparable capabilities at significantly reduced cost.
A Dynamic GPU Marketplace for Efficient Compute Routing
Theta EdgeCloud is a decentralized marketplace that connects the supply and demand for GPU computing power. It empowers anyone with idle GPUs to contribute their resources and earn rewards, while providing developers and AI teams with a scalable, cost-efficient platform for running containerized workloads.
The platform now allows customers to choose the most suitable infrastructure for each type of computing task. For instance, training a large AI model that requires substantial GPU VRAM can be directed to powerful cloud or data-center GPUs. In contrast, inherently parallelizable tasks such as burst model-inference workloads can be distributed across many community-operated gaming machines, providing a flexible and cost-effective alternative.
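As a rough illustration of this routing logic, the sketch below shows how a memory-bound training job and a parallelizable inference burst could be steered to different resource pools. This is not Theta's actual scheduler; the Job and NodePool types, pool names, and VRAM figures are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    vram_gb: int          # minimum GPU memory the job needs
    parallelizable: bool  # can the job be split across many small nodes?

@dataclass
class NodePool:
    name: str
    vram_gb: int          # per-GPU memory available in this pool
    node_count: int

# Hypothetical pools: large data-center GPUs vs. many community gaming GPUs.
CLOUD_POOL = NodePool("cloud-datacenter", vram_gb=80, node_count=64)
EDGE_POOL = NodePool("community-edge", vram_gb=12, node_count=30_000)

def route(job: Job) -> NodePool:
    """Send memory-bound training to big cloud GPUs; fan parallelizable
    inference bursts out across the community edge network."""
    if job.parallelizable and job.vram_gb <= EDGE_POOL.vram_gb:
        return EDGE_POOL
    return CLOUD_POOL

if __name__ == "__main__":
    print(route(Job("llm-training", vram_gb=80, parallelizable=False)).name)   # cloud-datacenter
    print(route(Job("burst-inference", vram_gb=8, parallelizable=True)).name)  # community-edge
```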
To ensure a fair and dynamic pricing model, Theta EdgeCloud allows node operators (the supply side) to set their own hourly rental rates. Meanwhile, users (the demand side) can select nodes that meet their performance requirements and budget constraints when launching workloads. This market-driven approach helps keep GPU compute pricing competitive and transparent across the platform.
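A minimal sketch of the demand-side selection step follows, assuming hypothetical NodeOffer records with operator-set hourly rates; the field names, GPU models, and prices are illustrative only, not the platform's real schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NodeOffer:
    operator: str
    gpu_model: str
    vram_gb: int
    hourly_rate: float  # price set by the node operator

def select_node(offers, min_vram_gb: int, max_hourly_rate: float) -> Optional[NodeOffer]:
    """Pick the cheapest offer that satisfies the user's performance
    and budget constraints; return None if nothing qualifies."""
    eligible = [o for o in offers
                if o.vram_gb >= min_vram_gb and o.hourly_rate <= max_hourly_rate]
    return min(eligible, key=lambda o: o.hourly_rate, default=None)

offers = [
    NodeOffer("op-1", "RTX 4090", 24, 0.60),
    NodeOffer("op-2", "RTX 3080", 10, 0.25),
    NodeOffer("op-3", "A100",     80, 1.80),
]
print(select_node(offers, min_vram_gb=16, max_hourly_rate=1.00))  # op-1's RTX 4090
```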
The system includes automatic failover logic that reassigns work if a community device goes offline, ensuring reliable completion of computing tasks across the heterogeneous network of GPU types.
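A simplified sketch of what such failover reassignment might look like, assuming a plain task-to-node mapping; the data structures here are illustrative, not the production scheduler.

```python
def reassign_offline_work(assignments: dict, online_nodes: set) -> dict:
    """assignments maps task_id -> node_id. Return a new mapping in which
    tasks assigned to offline nodes are moved, round-robin, onto nodes
    that are still reachable."""
    online = sorted(online_nodes)
    if not online:
        raise RuntimeError("no online nodes available for failover")
    healed, i = {}, 0
    for task_id, node_id in assignments.items():
        if node_id in online_nodes:
            healed[task_id] = node_id          # node still up, keep the assignment
        else:
            healed[task_id] = online[i % len(online)]  # move work to a surviving node
            i += 1
    return healed

# Example: node "edge-2" drops out, so its tasks move to surviving nodes.
print(reassign_offline_work(
    {"t1": "edge-1", "t2": "edge-2", "t3": "edge-2"},
    online_nodes={"edge-1", "edge-3"},
))
```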
Powering Leading Academic and Enterprise Customers
The platform currently serves customers including Stanford University, Seoul National University, KAIST, Yonsei University, the University of Oregon, Michigan State University, and NTU Singapore for academic AI research. Enterprise clients include major sports teams such as the NHL's Vegas Golden Knights and the NBA's Houston Rockets, as well as the global esports organizations FlyQuest and Evil Geniuses.
The beta release adds features requested by existing customers, including persistent storage for AI model training, improved job prioritization, and a developer API for job submission and analytics.
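The announcement does not document the API itself, but a job-submission call might look roughly like the sketch below; the endpoint URL, payload field names, and authentication scheme are assumptions for illustration only.

```python
import json
import urllib.request

# Hypothetical endpoint -- the real EdgeCloud API may differ.
API_BASE = "https://api.example-edgecloud.io/v1"

def submit_job(api_key: str, image: str, min_vram_gb: int, max_hourly_rate: float) -> dict:
    """POST a containerized job description and return the service's JSON response."""
    payload = json.dumps({
        "image": image,
        "requirements": {"min_vram_gb": min_vram_gb},
        "budget": {"max_hourly_rate": max_hourly_rate},
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{API_BASE}/jobs",
        data=payload,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```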
"The reality is that GPU costs have become prohibitive for many organizations doing important AI research," said Jieyi Long, CTO of Theta Labs. "Universities are telling us they're having to scale back projects or wait months for access to affordable computing resources. By tapping into the unused GPU power sitting in gaming computers and workstations around the world, we can deliver the same computational capabilities at a fraction of the cost. This means our partners at Stanford, KAIST, and other institutions can run more experiments, iterate faster, and push the boundaries of what's possible in AI research without budget constraints limiting their ambitions."
Technical Capabilities
The hybrid architecture supports containerized computing tasks including AI model training and inference, video encoding and transcoding, 3D rendering, financial simulations, and scientific computing applications. The platform provides over 80 PetaFLOPS of distributed GPU compute power through its combination of cloud partnerships with Google Cloud and Amazon Web Services and its distributed edge network.
The EdgeCloud client node is a lightweight software package that enables community members to contribute their idle GPU capacity. Advanced job containerization ensures high-efficiency computation across different GPU types and specifications. To learn more about the EdgeCloud client software, please check out this link.
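As an illustration of how containerization lets the same workload image run across varied community GPUs, the sketch below launches a job with Docker's GPU passthrough. The image name is hypothetical, and this is not the actual EdgeCloud client implementation; it assumes Docker with the NVIDIA container toolkit is installed on the node.

```python
import subprocess

def run_containerized_job(image: str, command: list[str]) -> int:
    """Launch a GPU job inside a container so the same workload image
    runs unchanged on whatever GPU a community node happens to have."""
    return subprocess.call([
        "docker", "run", "--rm",
        "--gpus", "all",   # expose the node's GPU(s) to the container
        image, *command,
    ])

# Example with a hypothetical workload image:
# run_containerized_job("example/inference-worker:latest", ["python", "serve.py"])
```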
Theta Labs is the leading provider of decentralized cloud infrastructure for AI, media and entertainment, powered by a global network of over 30,000 distributed edge nodes and a native blockchain. Backed by Samsung, Sony, Bertelsmann Digital Media Investments and Creative Artists Agency, Theta is among the top 10 DePIN blockchains by market capitalization on Coingecko and among the top AI tokens on Binance.com. Theta's enterprise validator and governance council is composed of global market leaders including Google, Samsung, CAA and Binance.
The recently launched Theta EdgeCloud is the first hybrid cloud-edge AI computing platform, with over 80 PetaFLOPS of on-demand distributed GPU compute power. EdgeCloud now counts 25 global customers, including four of the top five South Korean universities, professional sports teams such as the NHL's Vegas Golden Knights and the NBA's Houston Rockets, and the global esports organizations FlyQuest and Evil Geniuses, among others.


