Infrastructure

Deploying Bun Apps on Cloudflare Workers in 2026: Edge Compute for the Rest of Us

A hands-on look at running Bun-based JavaScript apps on Cloudflare Workers — cold starts, free tier limits, the node:* compat story, and when Workers beats a VPS for developer side projects.

7 min read

The Edge, Without the Ceremony

Cloudflare Workers has been the answer to “how do I run code close to users without managing servers?” since 2017. But for most of that time, the answer came with an asterisk: you had to write for the Workers runtime, which meant a limited subset of Node.js APIs, no filesystem, and CPU time measured in milliseconds rather than seconds.

Two things changed in the past year that make Workers worth a fresh look for JavaScript developers: the runtime compatibility story improved dramatically (the node:* compat flag now covers node:buffer, node:crypto, node:stream, node:events, and more), and Bun — the fast JavaScript runtime that ships with a bundler, test runner, and package manager built in — became a serious contender for the “write local, deploy to edge” workflow.

This post asks whether Cloudflare Workers is a viable target for Bun-authored JavaScript in 2026. Spoiler: it depends heavily on what you’re building.

The Runtime Compatibility Story

Cloudflare Workers run on V8 isolates (the workerd runtime), not Node.js. The surface area is different: no process, no fs, no net, none of the native bindings that expect a POSIX environment. For years this meant rewriting imports, polyfilling missing APIs, and discovering at deploy time that your favorite npm package used Buffer internally.

The nodejs_compat compatibility flag — enabled by default in new Workers since mid-2025 — bridges most of this gap. It aliases node:buffer, node:crypto, node:stream, node:events, node:path, node:url, node:assert, node:util, and node:process (with a partial implementation) to Workers-native equivalents. This means a surprising number of npm packages now work without modification.
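As a concrete sketch, here is a webhook-signature check written against node:crypto and node:buffer that runs under nodejs_compat without polyfills. The header name, secret, and handler shape are illustrative, not taken from any particular service:

```typescript
// Requires `compatibility_flags = ["nodejs_compat"]` in wrangler.toml.
import { createHmac } from "node:crypto";
import { Buffer } from "node:buffer";

// Compute an HMAC-SHA256 over the body and compare it to the claimed
// signature. (Production code should prefer crypto.timingSafeEqual.)
export function verifySignature(body: string, signature: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(body).digest("hex");
  return Buffer.from(expected).equals(Buffer.from(signature));
}

export default {
  async fetch(request: Request): Promise<Response> {
    const body = await request.text();
    const sig = request.headers.get("x-signature") ?? "";
    const ok = verifySignature(body, sig, "my-secret");
    return new Response(ok ? "ok" : "bad signature", { status: ok ? 200 : 401 });
  },
};
```

The same file also runs under Bun and Node, which is the point of the compat layer: one set of node:* imports, three runtimes.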

Bun ships its own implementations of node:* modules (written in Zig and JavaScript, often faster than Node’s originals). The question is whether Bun-authored code that depends on node:fs or node:child_process has any path to Workers — and the answer is mostly no. Workers has no filesystem, no process spawning, no TCP sockets. If your Bun app reads files, spawns subprocesses, or opens raw network connections, Workers is the wrong target regardless of bundler.

What does work: HTTP servers (Bun’s Bun.serve() is conceptually similar to Workers’ fetch() handler), cryptographic operations, WebSocket handling, streaming responses, and anything using standard Web APIs (Request, Response, fetch, URL, TextEncoder, WebSocket). If your Bun app is an API server or a webhook handler, the port to Workers is mostly a matter of replacing Bun.serve() with export default { fetch() }.
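A minimal sketch of that port, assuming the handler only touches Web APIs (the /health route is illustrative):

```typescript
// The shared piece is a plain (Request) => Response function.
async function handle(request: Request): Promise<Response> {
  const url = new URL(request.url);
  if (url.pathname === "/health") {
    return Response.json({ ok: true });
  }
  return new Response("not found", { status: 404 });
}

// Bun entrypoint:
// Bun.serve({ port: 3000, fetch: handle });

// Workers entrypoint:
export default { fetch: handle };
```

Because the handler never touches Bun-specific APIs, the only thing that changes between the two targets is the two-line entrypoint.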

Cold Starts: The Numbers That Matter

Workers has the fastest cold start in serverless, and it’s not close. Because Workers run as V8 isolates (not containers or microVMs), there’s no container to spin up, no runtime to initialize, no warming delay. The isolate is created in under 5ms, and your code starts executing immediately after.

Compare this to the alternatives:

| Platform | Cold start (p50) | Cold start (p99) | Notes |
|---|---|---|---|
| Cloudflare Workers | ~1ms | ~5ms | Isolates, no container overhead |
| Vercel Edge Functions | ~25ms | ~100ms | Also V8 isolates, but with a middleware pipeline |
| AWS Lambda (Node) | ~200ms | ~800ms | Container-based; improves with provisioned concurrency ($) |
| Fly.io Machines | ~300ms | ~2s | Full VM start plus your app’s init |
| Railway / Render | ~500ms | ~3s | Container pull + boot |

For a Bun API server running on Fly.io or Railway, cold starts are measured in seconds because the entire runtime — Bun binary, module resolution, your app’s initialization — has to happen from a cold state. On Workers, you pre-bundle your code, and the isolate starts in single-digit milliseconds.

The tradeoff: Workers has a CPU time limit (30 seconds on paid, 10ms per request on free). Fly.io and Railway give you a full Linux box for as long as you want. If your endpoint does heavy computation (image processing, PDF generation, ML inference), Workers CPU limits become the bottleneck long before cold starts matter.

Free Tier: Where Workers Dominates

Cloudflare Workers free tier: 100,000 requests per day across up to 100 Workers, with a 10ms CPU time limit per request. That’s 3 million requests per month, free. No credit card required at signup.

Compare to:

| Platform | Free tier | The catch |
|---|---|---|
| Cloudflare Workers | 100K req/day (3M/month) | 10ms CPU/req, 128MB memory |
| Vercel Edge Functions | 1M invocations/month | Paired with Vercel Hobby plan limits |
| Fly.io | $5/month credit | Billed once the credit is exhausted; cold starts exist |
| Railway | $5 credit (once) | No persistent free tier; Hobby plan removed in 2023 |
| Render | 750 hours/month | Spins down after 15 min idle; 30s+ cold start on wake |

The CPU limit is the real constraint, with one important nuance: it counts CPU time, not wall-clock time. Time spent awaiting a fetch() to an upstream API or a remote database doesn’t burn the budget; computation does. At 10ms of CPU per request, Workers comfortably handles auth endpoints, webhook handlers, URL shorteners, redirect services, and lightweight API gateways. It cannot handle endpoints that do real per-request computation (parsing large payloads, hashing big inputs, rendering heavy templates): those will blow past 10ms and the request will be terminated.
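For instance, a redirect service of the kind listed above fits well inside the 10ms CPU budget: a map lookup and a 302, no computation to speak of. The route table here is a hypothetical in-memory map; a real deployment would more likely read from Workers KV:

```typescript
// Illustrative routes; in production these would live in Workers KV.
const routes: Record<string, string> = {
  "/gh": "https://github.com/example",
  "/docs": "https://example.com/docs",
};

export function redirect(request: Request): Response {
  const { pathname } = new URL(request.url);
  const target = routes[pathname];
  return target
    ? Response.redirect(target, 302)
    : new Response("not found", { status: 404 });
}

export default { fetch: redirect };
```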

Upgrading to Workers Paid ($5/month + usage) bumps the CPU limit to 30 seconds and gives you access to Workers KV, D1, Durable Objects, and Queues. At that point, the comparison shifts from “can I run this for free?” to “is this cheaper than a $6/month VPS?”

When to Pick Workers Over a VPS

A $6/month DigitalOcean droplet or Hetzner VPS runs 24/7, has no CPU time limits, can open any port, and will host anything you throw at it — databases, background workers, WebSocket servers, cron jobs. Cloudflare Workers is a more constrained, more opinionated, and more managed platform. The decision comes down to what you value:

Pick Workers when:

  • You want global distribution without configuring load balancers, CDN caching, and multi-region replication
  • You’re building API routes that do lightweight orchestration (auth check, data transform, forward to upstream)
  • Your traffic is spiky (0 requests one hour, 10,000 the next) and you don’t want to provision for peak
  • You want zero-downtime deploys, automatic HTTPS, DDoS protection, and a CDN — all as free defaults
  • You’re shipping a side project and want to stay on the free tier as long as possible

Pick a VPS when:

  • Your endpoint does real computation (image resizing, PDF generation, video transcoding)
  • You need a database on the same machine (SQLite, Postgres) without paying per-query
  • You need filesystem access (write logs, serve static files from disk, store uploads locally)
  • You’re running a persistent process (WebSocket server with long-lived connections, queue worker that runs for hours)
  • You need raw network access (UDP, custom protocols, TCP connections to arbitrary hosts)

The Bun-to-Workers Workflow

If you decide Workers is the right target, the workflow looks like this:

  1. Develop locally with Bun. Use bun --hot for hot reloading, bun test for tests, and standard Web APIs (Request, Response, fetch, URLPattern) instead of Bun-specific APIs.

  2. Bundle with Bun’s built-in bundler. bun build src/index.ts --outdir dist --target browser produces a single-file output. Use --target browser rather than --target bun here: the deployment runtime is workerd, not Bun. The output is standard JavaScript — Workers will run it if the APIs used are compatible. (You can also skip this step and let Wrangler bundle your entrypoint with its built-in esbuild.)

  3. Deploy with Wrangler. Cloudflare’s CLI reads wrangler.toml, uploads the bundled script, and maps routes. Use wrangler dev --local to test locally with the same runtime Workers uses in production.

  4. Watch for node:* incompatibilities. Anything that touches the filesystem, spawns subprocesses, or opens raw sockets will fail at runtime, not at build time. Test on Wrangler’s local runtime early and often.
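A minimal wrangler.toml for this workflow might look like the following; the name, entrypoint path, and date are placeholders:

```toml
# Sketch only — adjust name, main, and compatibility_date for your project.
name = "my-bun-worker"
main = "dist/index.js"
compatibility_date = "2026-01-01"
compatibility_flags = ["nodejs_compat"]
```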

The missing piece: Bun’s native SQLite bindings don’t work on Workers. If your Bun app uses bun:sqlite (which exposes a synchronous, better-sqlite3-style API), you’ll need to migrate to Workers D1 — Cloudflare’s serverless SQLite, which has its own async prepare/bind query API — or to an external Postgres service like Neon or Supabase.
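The shape of that migration can be sketched with a minimal stand-in type for the D1 binding; in a real Worker the binding arrives on env (e.g. env.DB, whatever name your wrangler.toml assigns):

```typescript
// Before (Bun, synchronous):
//   import { Database } from "bun:sqlite";
//   const db = new Database("app.db");
//   const user = db.query("SELECT * FROM users WHERE id = ?").get(id);

// After (Workers + D1, asynchronous). D1Like is a minimal stand-in
// for the real D1Database binding type.
interface D1Like {
  prepare(sql: string): {
    bind(...params: unknown[]): { first(): Promise<unknown> };
  };
}

export function getUser(db: D1Like, id: number): Promise<unknown> {
  return db.prepare("SELECT * FROM users WHERE id = ?").bind(id).first();
}
```

The query text survives unchanged; what changes is that every call site becomes async, which tends to be the bulk of the migration work.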

The Bottom Line

Cloudflare Workers in 2026 is the best free tier in serverless, with cold starts that make Lambda look broken and a node:* compat layer that covers most of the npm ecosystem. For Bun developers building API servers, webhook handlers, and lightweight backends, the “develop with Bun, deploy to Workers” workflow is production-viable.

It stops being the right choice when you need more than 30 seconds of CPU time, more than 128 MB of memory, or filesystem access. At that point, deploy Bun directly on a VPS — or better yet, on Fly.io with a Bun Docker image. The edge is fast, but sometimes a single machine in Frankfurt is fast enough.

