Cloudflare launches Containers beta for flexible edge computing


Techday NZ · 4 days ago

Cloudflare has announced the public beta release of its Containers product, enabling developers to execute code in a secure, isolated environment as part of its connectivity cloud services.
The company said Containers are now available to all users on paid plans, providing a platform where applications such as media processing, backend services, and command-line tools can run at the network edge or as batch workloads.
The integration with Cloudflare Workers means developers maintain a simple workflow using familiar tools.
Cloudflare Containers are designed to extend the existing Workers platform by allowing more compute-intensive and flexible tasks. Developers can deploy globally without needing to manage configuration across multiple regions.
They also have the option to choose between using Workers for lightweight requests or Containers for tasks that require greater resources and full Linux compatibility. The company highlighted the ability to run commonly used developer tools and libraries that were not previously available in the Workers environment.
The workflow for deploying applications remains straightforward.
Developers define a Container in a few lines of code and deploy it using existing tools. Cloudflare handles the routing, provisioning, and scaling, deploying containers in optimal locations across its global network for reduced latency and rapid start times. This is designed to enable use cases such as code sandboxing, where each user or AI-generated session requires a securely isolated environment, a scenario already adopted by some users including Coder.
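To illustrate that "few lines of code" workflow, the sketch below shows the general shape of a Container definition. The base class is stubbed locally so the snippet is self-contained; in a real Worker it would be imported from Cloudflare's container library, and the class name and field values here are purely illustrative.

```typescript
// Stand-in stub for the real base class (normally imported from
// Cloudflare's container package); included only so this sketch
// runs on its own.
class Container {
  defaultPort = 8080;   // port the containerised app listens on
  sleepAfter = "10m";   // idle time before the instance is put to sleep
}

// Defining a container is essentially this: subclass and set a couple
// of lifecycle fields. Cloudflare handles provisioning, routing, and
// scaling of the instances behind it.
class MyContainer extends Container {
  defaultPort = 3000;
  sleepAfter = "5m";    // billing stops once the idle instance sleeps
}

const instance = new MyContainer();
console.log(instance.defaultPort, instance.sleepAfter);
```

The `sleepAfter` idea matters for the pricing model discussed below: an instance that goes to sleep stops accruing charges.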
Configuration is managed via the Container class and a configuration file. Each unique session triggers a new container instance, and Cloudflare automatically selects the best available location to minimise response times for end-users. Initial startup times for containers are typically just a few seconds, according to the company.
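For context, the configuration file side of this is the Worker's wrangler config. The fragment below is a hypothetical sketch of what such a file might contain; all field names and values are illustrative, and the authoritative schema is in Cloudflare's documentation.

```jsonc
// wrangler.jsonc — illustrative sketch only; names and fields are
// assumptions, not a verbatim copy of Cloudflare's schema.
{
  "name": "my-container-worker",
  "main": "src/index.ts",
  "containers": [
    {
      "class_name": "MyContainer",   // the Container class in the Worker
      "image": "./Dockerfile",       // built and pushed by wrangler deploy
      "max_instances": 5
    }
  ],
  "durable_objects": {
    "bindings": [
      { "name": "MY_CONTAINER", "class_name": "MyContainer" }
    ]
  }
}
```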
During development, wrangler dev allows for live iteration on container code, with containers being rebuilt and restarted directly from the terminal. For production deployment, developers use wrangler deploy, which pushes the container image to Cloudflare's infrastructure, handling all artefact management and integration processes automatically so developers can focus solely on their code.
Observability and resource tracking are built into the Containers platform. Developers can monitor container status and resource usage through the Cloudflare dashboard, with built-in metrics and access to real-time logs. Logs are retained for seven days and can be exported to external sinks if needed.
Application range
Cloudflare pointed to a range of new applications enabled by Containers, such as deploying video processing libraries like FFmpeg, running backend services in any language, setting up routine batch jobs, or hosting a static frontend with a containerised backend. Integration with other Cloudflare Developer Platform services, including Durable Objects for state management, Workflows, Queues, Agents, and object storage via R2, expands the potential application architectures.

"We're excited about all the new types of applications that are now possible to build on Workers. We've heard many of you tell us over the years that you would love to run your entire application on Cloudflare, if only you could deploy this one piece that needs to run in a container," the company said.

"Today, you can run libraries that you couldn't run in Workers before. For instance, try this Worker that uses FFmpeg to convert video to a GIF. Or you can run a container as part of a cron job. Or deploy a static frontend with a containerized backend. Or even run a Cloudflare Agent that uses a Container to run Claude Code on your behalf. The integration with the rest of the Developer Platform makes Containers even more powerful: use Durable Objects for state management, Workflows, Queues, and Agents to compose complex behaviors, R2 to store Container data or media, and more."
Pricing details
The Containers platform is available in three instance sizes at launch (dev, basic, and standard), with memory ranging from 256 MiB to 4 GiB and fractional vCPU allocations. Cloudflare charges based on actual resource usage in 10-millisecond increments.
Memory is billed at USD $0.0000025 per GiB-second with a 25 GiB-hour monthly allowance, CPU at USD $0.000020 per vCPU-second with 375 vCPU-minutes included, and disk at USD $0.00000007 per GB-second with 200 GB-hours included. Network egress rates range from USD $0.025 per GB for North America and Europe up to USD $0.050 per GB for Australia, New Zealand, Taiwan, and Korea, with included data transfer varying by region.
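To make the billing model concrete, the sketch below reproduces the arithmetic implied by those figures for a hypothetical instance (1 GiB of memory, 0.25 vCPU, 4 GB of disk, active 50 hours in a month). The rates and allowances are the ones quoted above; actual invoices depend on Cloudflare's current pricing and egress usage, which is omitted here.

```typescript
// Per-second rates and monthly allowances as quoted in the article.
const RATES = {
  memoryPerGiBSec: 0.0000025,
  cpuPerVCpuSec: 0.000020,
  diskPerGBSec: 0.00000007,
};

const INCLUDED = {
  memoryGiBSec: 25 * 3600, // 25 GiB-hours
  cpuVCpuSec: 375 * 60,    // 375 vCPU-minutes
  diskGBSec: 200 * 3600,   // 200 GB-hours
};

// Only usage above the included allowance is charged.
function billable(used: number, included: number, rate: number): number {
  return Math.max(0, used - included) * rate;
}

// Hypothetical instance: 1 GiB memory, 0.25 vCPU, 4 GB disk,
// active 50 hours over the month.
const activeSec = 50 * 3600;
const cost =
  billable(1 * activeSec, INCLUDED.memoryGiBSec, RATES.memoryPerGiBSec) +
  billable(0.25 * activeSec, INCLUDED.cpuVCpuSec, RATES.cpuPerVCpuSec) +
  billable(4 * activeSec, INCLUDED.diskGBSec, RATES.diskPerGBSec);

console.log(`$${cost.toFixed(4)} for the month (excluding egress)`);
```

Note how the disk allowance fully covers this example's disk usage, so only memory and CPU above the free tier are charged, and charges only accrue while the container is active.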
Charges begin when a container is active and end when it automatically sleeps after a timeout, aiming to ensure efficient scaling down for unpredictable workloads. The company plans to expand available instance sizes and increase concurrent limits over time to support more demanding use cases.
Roadmap
Cloudflare outlined upcoming features for Containers, including higher memory and CPU limits, global autoscaling, latency-aware routing, enhanced communication channels between Workers and Containers, and deeper integrations with the broader developer platform. Plans are underway to introduce support for additional APIs and easier data storage access.
"With today's release, we've only just begun to scratch the surface of what Containers will do on Workers. This is the first step of many towards our vision of a simple, global, and highly programmable Container platform."
"We're already thinking about what's next, and wanted to give you a preview: higher limits and larger instances... global autoscaling and latency-aware routing... more ways for your Worker to communicate with your container... further integrations with the Developer Platform. We will continue to integrate with the developer platform with first-party APIs for our various services. We want it to be dead simple to mount R2 buckets, reach Hyperdrive, access KV, and more. And we are just getting started. Stay tuned for more updates this summer and over the course of the entire year."


Related Articles

ManageEngine Launches MSP Central: A Platform Built For Strengthening Modern MSP Infrastructure

Scoop · 3 days ago

Manage clients securely with integrated RMM, PSA, and advanced server monitoring on a multi-tenant, role-based platform. Boost technician productivity with AI-driven ticket insights, sentiment detection, and intelligent alert correlation.

ManageEngine, a division of Zoho Corporation and a leading provider of enterprise IT management solutions, has announced the launch of MSP Central, a unified platform designed to help MSPs streamline service delivery, device management, threat protection, and infrastructure monitoring from a single interface. ManageEngine focuses on addressing the specific operational models and business challenges of MSPs, developing tools that support multi-client environments, technician efficiency, and service scalability. MSP Central brings these capabilities together into a unified platform tailored to how MSPs deliver and manage IT services today.

Meeting the Evolving Needs of MSPs

With the global managed services market projected to reach $511 billion by 2029, MSPs face mounting pressure to scale operations without compromising service quality in order to offer strategic value to customers and differentiate from the competition. MSP Central addresses this fragmentation by offering a unified platform to manage day-to-day operations across clients, from technician workflows and asset visibility to endpoint protection and network health monitoring. Its modular, cloud-native architecture supports native multi-tenancy, fine-grained role-based access control, and seamless integrations with both Zoho apps and third-party tools, giving MSPs the flexibility to adopt only the modules they need and expand at their own pace.

Features Designed to Support MSP Operations

"With MSP Central, we're bringing together the best of ManageEngine's proven IT management and security capabilities in a platform designed from the ground up for MSPs," said Mathivanan Venkatachalam, vice president at ManageEngine. "While each of these modules stands strong on its own, together they form a truly unified platform, delivering a single, connected experience for service providers. This approach lets MSPs consolidate their operations, eliminate tool sprawl, and enable their teams to work more efficiently and effectively, all from a unified console."

The platform includes the following capabilities:

- Modular architecture: Adopt only the components required; no bundling or mandatory licensing.
- Remote monitoring and management (RMM): Manage devices across clients with patching, asset visibility, and proactive remediation in a multi-tenant setup.
- Professional services automation (PSA): Integrate ticketing, contract management, SLAs, time tracking, and billing in a unified workflow.
- Advanced server monitoring: Monitor infrastructure across Windows, Linux, databases, and virtual systems with automated alerts and deep metrics.
- Endpoint security: Provide comprehensive protection against evolving cyberthreats with vulnerability management, device and application control, anti-ransomware, and browser security.
- AI-powered automation: Accelerate workflows with ticket summarisation, sentiment detection, alert correlation, and predictive thresholds.
- Third-party integrations: Connect seamlessly with over 20 tools across IT, security, and business ecosystems via open APIs and pre-built connectors.
- Marketplace ready: Built for integration into cloud marketplaces and partner ecosystems.

Looking Ahead

MSP Central marks the foundation of ManageEngine's long-term MSP platform strategy, which supports the full spectrum of managed services. Future enhancements will focus on expanding into adjacent domains such as SIEM, privileged access management, and advanced analytics, helping MSPs and MSSPs manage security and compliance alongside operations. The platform will also evolve to support deeper integrations with business applications and partner ecosystems, empowering providers to streamline service delivery end to end.

"Our goal is to give MSPs a platform that adapts to their growth, supports their preferred tools, and eliminates the friction of fragmented systems. We're starting with RMM, PSA, and advanced server monitoring, but this is just the beginning. Our vision is to bring all of ManageEngine's standalone MSP tools together under this platform, delivering depth, flexibility, and scalability that helps providers grow alongside their clients' needs. MSP Central is designed to support MSPs for the long haul," added Venkatachalam.

Pricing and Availability

MSP Central is available globally starting today. The platform supports flexible modular pricing so MSPs can pay for only what they need.


Cloudflare thwarts record 7.3 Tbps DDoS attack with automation

Techday NZ · 20-06-2025

Cloudflare has confirmed it recently mitigated what it describes as the largest distributed denial-of-service (DDoS) attack ever publicly disclosed, clocking in at 7.3 terabits per second (Tbps) and surpassing previous known records. The attack, which occurred in mid-May 2025, targeted a hosting provider customer using Cloudflare's Magic Transit service for network defence. According to Cloudflare data, the incident follows closely on the heels of attacks recorded at 6.5 Tbps and 4.8 billion packets per second, illustrating that DDoS attacks continue to grow in both scale and complexity. Cloudflare stated that the 7.3 Tbps attack was 12% larger than its previous record and 1 Tbps greater than another recent attack reported by security journalist Brian Krebs.

Attack analysis

The 7.3 Tbps attack delivered a total of 37.4 terabytes of data within a 45-second window. During the attack, the targeted IP address was bombarded across an average of 21,925 destination ports, reaching a peak of 34,517 destination ports per second; the distribution of source ports mirrored this pattern. The attack employed several vectors but was dominated by UDP floods, which constituted 99.996% of total traffic. The residual traffic, amounting to 1.3 GB, involved QOTD reflection, Echo reflection, NTP reflection, Mirai UDP floods, Portmap floods, and RIPv1 amplification. Each vector was identified and catalogued, with Cloudflare detailing how organisations can protect both themselves and the broader Internet from such abuse.

Cloudflare explained that the UDP flood component worked by sending large volumes of UDP packets to random or specific destination ports, either to saturate the Internet link or to overwhelm network appliances. The other vectors, such as QOTD (Quote of the Day), Echo, NTP, Portmap, and RIPv1, exploited vulnerabilities in legacy protocols and services to reflect and amplify attack traffic onto target systems.

Global scale

The attack was notable for its global reach. Traffic originated from 122,145 source IP addresses across 5,433 autonomous systems in 161 countries. Nearly half of the attack traffic came from Brazil and Vietnam, each accounting for roughly a quarter. The remainder was largely attributable to sources in Taiwan, China, Indonesia, Ukraine, Ecuador, Thailand, the United States, and Saudi Arabia. At the autonomous system level, Telefonica Brazil (AS27699) contributed 10.5% of attack traffic, with Viettel Group (AS7552), China Unicom (AS4837), Chunghwa Telecom (AS3462), and China Telecom (AS4134) among the other major sources. The attack saw an average of 26,855 unique source IP addresses per second, peaking at 45,097.

Technical response

Cloudflare used its global anycast architecture to divert and dissipate the massive influx of traffic. As packets arrived at Cloudflare's network edge, they were routed to the closest data centre; the incident was handled across 477 data centres in 293 locations worldwide, with some regions operating multiple facilities due to traffic volume. Detection and mitigation were performed by Cloudflare's automated systems, which operate independently in each data centre. Because the network runs every service, including DDoS detection and mitigation, in every data centre, attacks can be detected and mitigated fully autonomously regardless of where they originate.

Upon arrival, packets were distributed to available servers, where they were sampled for analysis. Cloudflare employed its denial-of-service daemon (dosd), a heuristic engine that reviews packet headers and anomalies for malicious patterns. The system then generated multiple permutations of digital fingerprints specific to the attack, seeking patterns that maximised blocking efficacy while minimising impact on legitimate traffic. Within data centres, servers shared real-time intelligence by multicasting fingerprint information, refining mitigation at both local and global scale. When a fingerprint surpassed predefined thresholds, mitigation rules were compiled and deployed as extended Berkeley Packet Filter (eBPF) programs to block the offending traffic. Once the attack ceased, the associated rules were removed automatically.

Botnet feed and future mitigation

Cloudflare also maintains a free DDoS Botnet Threat Feed to help Internet service providers and hosting companies identify malicious traffic originating within their own infrastructure. The company said that over 600 organisations have subscribed to the service, allowing them to receive up-to-date lists of offending IP addresses engaged in DDoS attacks. Cloudflare's recommendations emphasise tailored defences that address the unique characteristics of each network or application, with care taken to ensure mitigation does not inadvertently disrupt legitimate traffic, particularly for services that depend on UDP or legacy protocols. The company highlighted that these defences operated entirely without human intervention, alerting, or incident escalation, underscoring the shift towards fully autonomous, distributed mitigation in response to modern DDoS threats.
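As a rough illustration of the fingerprint-and-threshold approach dosd takes, the sketch below shows a toy mitigation loop. This is not Cloudflare's implementation (which samples packets and compiles rules to kernel-level eBPF programs); the fingerprint function, threshold, and all names here are entirely hypothetical.

```typescript
// Toy model of threshold-triggered fingerprint mitigation, loosely
// inspired by the dosd flow described above. Hypothetical throughout.
type Packet = { srcPort: number; dstPort: number; length: number; proto: string };

// A "fingerprint" here is just a coarse key over header fields; the real
// system generates many candidate permutations and picks effective ones.
function fingerprint(p: Packet): string {
  return `${p.proto}:${p.length}:${p.dstPort < 1024 ? "low" : "high"}`;
}

class Mitigator {
  private counts = new Map<string, number>();
  readonly blocked = new Set<string>();
  private threshold: number;

  constructor(threshold: number) {
    this.threshold = threshold;
  }

  // Count sampled packets per fingerprint; once a fingerprint crosses
  // the threshold, deploy a blocking rule for it.
  observe(p: Packet): void {
    const fp = fingerprint(p);
    const n = (this.counts.get(fp) ?? 0) + 1;
    this.counts.set(fp, n);
    if (n >= this.threshold) this.blocked.add(fp);
  }

  allows(p: Packet): boolean {
    return !this.blocked.has(fingerprint(p));
  }
}

// A flood of look-alike UDP packets trips the rule; unrelated traffic
// with a different fingerprint keeps flowing.
const m = new Mitigator(100);
const flood: Packet = { srcPort: 4444, dstPort: 53, length: 512, proto: "udp" };
for (let i = 0; i < 150; i++) m.observe(flood);
console.log(m.allows(flood));   // the flood's fingerprint is now blocked
```

The real system's hard part, which this sketch ignores, is choosing fingerprints that block the flood while leaving legitimate traffic with similar headers untouched, and withdrawing rules once the attack subsides.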
