Cloudflare Workers and D1 for Edge Computing
Cloudflare Workers and D1 let developers build and deploy full-stack applications that run within milliseconds of every user on Earth. Unlike traditional serverless platforms that run in a handful of regions, Workers execute across Cloudflare’s network of 300+ data centers worldwide. Combined with D1, Cloudflare’s edge SQL database, you can build complete applications with database access at the edge, no origin server required.

This guide covers building production applications with Workers and D1, from API development and database schema design to authentication, caching, and migration strategies. You will also learn how to combine Workers with other Cloudflare primitives such as KV, R2, Queues, and Durable Objects for more complex application architectures.
Why Edge Computing Matters
Traditional applications deploy to one or a few cloud regions. A user in Tokyo hitting a server in us-east-1 pays 150-200ms of network latency before any application code runs. Edge computing removes that penalty by running your code in the data center closest to each user. Workers also have no cold starts, unlike AWS Lambda, which can add 100ms-1s of initialization time.
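To make the edge placement visible, a minimal Worker can report which data center served a given request using the `cf` object Cloudflare attaches to each incoming request. This is a sketch; the `colo` and `country` fields are only populated when the Worker actually runs on Cloudflare’s network:

```typescript
// Minimal Worker: report which edge data center served this request.
// The `cf` object is attached by Cloudflare's network at runtime.
interface IncomingCf {
  colo?: string;    // IATA code of the serving data center, e.g. "NRT"
  country?: string; // visitor country, e.g. "JP"
}

export function describeEdge(cf: IncomingCf): string {
  return `Served from ${cf.colo ?? 'unknown'} for a visitor in ${cf.country ?? 'unknown'}`;
}

export default {
  async fetch(request: Request & { cf?: IncomingCf }): Promise<Response> {
    return new Response(describeEdge(request.cf ?? {}));
  },
};
```

Deploying this with `wrangler deploy` and requesting it from different locations returns a different `colo` per region, which is a quick way to confirm traffic is being served locally.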
D1 solves the database problem that has historically blocked full-stack edge computing. Earlier edge functions could only reach external databases over the network, negating the latency benefits. D1 runs SQLite at the edge, with read replication that keeps reads fast from locations near your users.
Cloudflare Workers D1: Building a REST API
// src/index.ts — Main Worker entry point
import { Hono } from 'hono';
import { cors } from 'hono/cors';
import { jwt } from 'hono/jwt';
import { logger } from 'hono/logger';

type Bindings = {
  DB: D1Database;
  CACHE: KVNamespace;
  JWT_SECRET: string;
  ASSETS: R2Bucket;
};

const app = new Hono<{ Bindings: Bindings }>();

// Middleware
app.use('*', logger());
app.use('/api/*', cors({
  origin: ['https://myapp.com', 'https://staging.myapp.com'],
  credentials: true,
}));

// Public routes
app.get('/api/products', async (c) => {
  const { category, page = '1', limit = '20' } = c.req.query();
  const offset = (parseInt(page) - 1) * parseInt(limit);

  // Check KV cache first (key includes limit so different page sizes
  // don't collide on the same cache entry)
  const cacheKey = `products:${category || 'all'}:${page}:${limit}`;
  const cached = await c.env.CACHE.get(cacheKey, 'json');
  if (cached) {
    return c.json(cached);
  }

  let query = 'SELECT * FROM products WHERE active = 1';
  const params: (string | number)[] = [];

  if (category) {
    query += ' AND category = ?';
    params.push(category);
  }

  query += ' ORDER BY created_at DESC LIMIT ? OFFSET ?';
  params.push(parseInt(limit), offset);

  const { results } = await c.env.DB.prepare(query)
    .bind(...params)
    .all();

  // Count total for pagination
  let countQuery = 'SELECT COUNT(*) as total FROM products WHERE active = 1';
  if (category) {
    countQuery += ' AND category = ?';
  }
  const countResult = await c.env.DB.prepare(countQuery)
    .bind(...(category ? [category] : []))
    .first<{ total: number }>();

  const response = {
    products: results,
    pagination: {
      page: parseInt(page),
      limit: parseInt(limit),
      total: countResult?.total || 0,
    },
  };

  // Cache for 5 minutes
  await c.env.CACHE.put(cacheKey, JSON.stringify(response), {
    expirationTtl: 300,
  });

  return c.json(response);
});

// Protected routes — read the JWT secret from the environment binding
// at request time rather than hard-coding a string literal
app.use('/api/admin/*', (c, next) => jwt({ secret: c.env.JWT_SECRET })(c, next));

app.post('/api/admin/products', async (c) => {
  const body = await c.req.json();
  const { name, description, price, category, image_url } = body;

  const result = await c.env.DB.prepare(
    `INSERT INTO products (name, description, price, category, image_url, active, created_at)
     VALUES (?, ?, ?, ?, ?, 1, datetime('now'))
     RETURNING *`
  ).bind(name, description, price, category, image_url).first();

  // Invalidate all cached product pages by prefix
  const keys = await c.env.CACHE.list({ prefix: 'products:' });
  await Promise.all(keys.keys.map(k => c.env.CACHE.delete(k.name)));

  return c.json(result, 201);
});

export default app;

D1 Database Schema and Migrations
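Wrangler manages D1 migrations as numbered SQL files applied in order. A typical workflow, using the `my-app-db` database configured in wrangler.toml, looks like this:

```shell
# Create a new empty migration file in ./migrations
npx wrangler d1 migrations create my-app-db initial_schema

# Apply pending migrations to a local SQLite instance for development
npx wrangler d1 migrations apply my-app-db --local

# Apply to the production database once verified
npx wrangler d1 migrations apply my-app-db --remote
```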
-- migrations/0001_initial_schema.sql
CREATE TABLE IF NOT EXISTS products (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  name TEXT NOT NULL,
  description TEXT,
  price REAL NOT NULL CHECK (price >= 0),
  category TEXT NOT NULL,
  image_url TEXT,
  active INTEGER DEFAULT 1,
  created_at TEXT DEFAULT (datetime('now')),
  updated_at TEXT DEFAULT (datetime('now'))
);

CREATE INDEX idx_products_category ON products(category);
CREATE INDEX idx_products_active ON products(active, created_at);

CREATE TABLE IF NOT EXISTS orders (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  user_id TEXT NOT NULL,
  status TEXT NOT NULL DEFAULT 'pending',
  total REAL NOT NULL,
  items TEXT NOT NULL, -- JSON array
  shipping_address TEXT,
  created_at TEXT DEFAULT (datetime('now')),
  updated_at TEXT DEFAULT (datetime('now'))
);

CREATE INDEX idx_orders_user ON orders(user_id, created_at);
CREATE INDEX idx_orders_status ON orders(status);

# wrangler.toml — Worker configuration
name = "my-edge-app"
main = "src/index.ts"
compatibility_date = "2026-03-01"
compatibility_flags = ["nodejs_compat"]
[[d1_databases]]
binding = "DB"
database_name = "my-app-db"
database_id = "xxxxx-xxxx-xxxx-xxxx"
migrations_dir = "migrations"
[[kv_namespaces]]
binding = "CACHE"
id = "xxxxx"
[[r2_buckets]]
binding = "ASSETS"
bucket_name = "my-app-assets"
# Set JWT_SECRET with `wrangler secret put JWT_SECRET` rather than
# committing it under [vars]; reserve [vars] for non-sensitive config.

Advanced Patterns: Durable Objects for State
When you need coordination or stateful logic at the edge, such as rate limiting, WebSocket management, or real-time collaboration, Durable Objects provide single-threaded, strongly consistent state per object instance.
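On the calling side, a Worker reaches a Durable Object through a stub derived from a namespace binding. A hypothetical sketch follows; the `RATE_LIMITER` binding name and the `Env` shape are assumptions, and the binding would be declared under `[[durable_objects.bindings]]` in wrangler.toml:

```typescript
// Derive a stable object key per client; every request from the same IP
// reaches the same Durable Object instance.
export function rateLimiterKey(ip: string | null): string {
  return ip ?? 'unknown'; // shared fallback bucket when the header is absent
}

// Minimal structural type for the namespace binding (assumption).
interface Env {
  RATE_LIMITER: {
    idFromName(name: string): object;
    get(id: object): { fetch(input: string): Promise<Response> };
  };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = rateLimiterKey(request.headers.get('CF-Connecting-IP'));
    const stub = env.RATE_LIMITER.get(env.RATE_LIMITER.idFromName(key));
    const limited = await stub.fetch(request.url);
    if (limited.status === 429) return limited; // propagate the 429 as-is
    // ...otherwise continue to normal application routing
    return new Response('OK');
  },
};
```

Keying objects by IP keeps each limiter's state small and naturally shards the load across instances.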
// src/rate-limiter.ts — Edge rate limiting with Durable Objects
// Note: counts live in memory, which is fine for approximate limiting
// but resets if the object is evicted; persist via this.state.storage
// when exact counts matter.
export class RateLimiter implements DurableObject {
  private requests: Map<string, number[]> = new Map();

  constructor(private state: DurableObjectState) {}

  async fetch(request: Request): Promise<Response> {
    const ip = request.headers.get('CF-Connecting-IP') || 'unknown';
    const now = Date.now();
    const windowMs = 60000; // 1 minute window
    const maxRequests = 100;

    // Get existing timestamps for this IP
    let timestamps = this.requests.get(ip) || [];

    // Remove expired timestamps
    timestamps = timestamps.filter(t => now - t < windowMs);

    if (timestamps.length >= maxRequests) {
      return new Response('Rate limit exceeded', {
        status: 429,
        headers: {
          'Retry-After': '60',
          'X-RateLimit-Limit': maxRequests.toString(),
          'X-RateLimit-Remaining': '0',
        },
      });
    }

    timestamps.push(now);
    this.requests.set(ip, timestamps);

    return new Response('OK', {
      headers: {
        'X-RateLimit-Limit': maxRequests.toString(),
        'X-RateLimit-Remaining': (maxRequests - timestamps.length).toString(),
      },
    });
  }
}

When NOT to Use Edge Computing
Edge computing is not suitable for heavy computation such as ML inference, video transcoding, or complex analytics; Workers cap CPU time at 30 seconds per request. D1 also has size limits (10GB per database) and is not designed for write-heavy workloads; it is optimized for read-heavy patterns, and replicas can briefly lag behind writes.
If your application primarily serves users in a single region and latency is not a concern, traditional serverless (AWS Lambda, Cloud Run) offers a more mature ecosystem with larger resource limits and more service integrations. Evaluate whether global distribution genuinely benefits your users before committing to an edge architecture.
Key Takeaways
Cloudflare Workers and D1 enable full-stack applications with global distribution and low-latency data access. The combination of Workers for compute, D1 for SQL, KV for caching, and R2 for storage provides a complete application platform at the edge. The absence of cold starts and pay-per-request pricing also make edge computing cost-effective for applications with variable traffic patterns.
Start with a simple API Worker and add D1 once you are comfortable with the Workers programming model. For comprehensive documentation, visit the Cloudflare Workers docs and the D1 database documentation. Our guides on Bun 2 runtime and Astro 5 static sites provide complementary web development approaches.