WebAssembly Containers in Kubernetes: Running Wasm Workloads in Production


Running WebAssembly (Wasm) containers in Kubernetes represents the next evolution in cloud-native computing. Wasm containers start in under a millisecond, consume a fraction of the memory of Linux containers, and provide stronger sandboxing guarantees. With projects like SpinKube and runwasi reaching production readiness in 2026, teams can now run Wasm workloads alongside traditional containers in the same Kubernetes cluster.

This guide walks you through the complete setup — from configuring containerd with Wasm shims to deploying production workloads. Whether you are evaluating Wasm for edge computing, serverless functions, or plugin systems, you will find practical patterns here.

Why Wasm Containers Matter

Traditional Linux containers package an entire OS userspace — libraries, shells, package managers. A typical Node.js container image is 200MB+. A Wasm module doing the same work is often under 5MB. Moreover, Wasm modules start in microseconds, not seconds, making them ideal for scale-to-zero serverless patterns.

[Image: Comparing traditional containers with WebAssembly modules in Kubernetes]

The security model is fundamentally different too. Wasm modules run in a capability-based sandbox: they cannot access the filesystem, network, or environment variables unless explicitly granted permission. This is deny-by-default security, unlike Linux containers, which start with broad access and are restricted via seccomp/AppArmor profiles.
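The capability model is easiest to see outside Kubernetes. As a minimal sketch using the Wasmtime CLI (assuming `app.wasm` is a WASI module that tries to read files; the module name and directory are placeholders), nothing is visible to the module unless explicitly granted:

```shell
# With no grants, the module sees no host filesystem or environment at all.
wasmtime run app.wasm

# Preopen exactly one host directory for the module.
wasmtime run --dir=/var/data app.wasm

# Environment variables are also opt-in, one at a time.
wasmtime run --env=LOG_LEVEL=info --dir=/var/data app.wasm
```

Each flag widens the sandbox by one named capability; there is no equivalent of a container's default-broad view to lock down after the fact.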

Comparison: Linux Container vs Wasm Container

┌─────────────────────┬────────────────┬─────────────────┐
│ Metric              │ Linux Container│ Wasm Container  │
├─────────────────────┼────────────────┼─────────────────┤
│ Cold Start          │ 500ms - 5s     │ < 1ms           │
│ Image Size          │ 50MB - 500MB   │ 1MB - 10MB      │
│ Memory Overhead     │ 20MB - 100MB   │ 1MB - 5MB       │
│ Security Model      │ Allow-default  │ Deny-default    │
│ Language Support    │ Any            │ Rust/Go/C/JS    │
│ CPU Architecture    │ Platform-bound │ Universal       │
│ Networking          │ Full stack     │ WASI sockets    │
│ Filesystem          │ Full (chroot)  │ Capability-based│
└─────────────────────┴────────────────┴─────────────────┘

Setting Up Kubernetes for Wasm

The architecture uses containerd shims to run Wasm modules alongside traditional containers. The key component is runwasi — a containerd shim that delegates Wasm execution to runtimes like Wasmtime or Spin.

Installing the Wasm Runtime Shim

# Install runwasi shim for containerd
# On each Kubernetes node:
curl -LO https://github.com/containerd/runwasi/releases/latest/download/containerd-shim-wasmtime-v1
chmod +x containerd-shim-wasmtime-v1
sudo mv containerd-shim-wasmtime-v1 /usr/local/bin/

# Install SpinKube shim for Spin framework apps
curl -LO https://github.com/spinkube/containerd-shim-spin/releases/latest/download/containerd-shim-spin-v2
chmod +x containerd-shim-spin-v2
sudo mv containerd-shim-spin-v2 /usr/local/bin/

# Add RuntimeClass to containerd config
sudo tee -a /etc/containerd/config.toml > /dev/null << 'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.spin]
  runtime_type = "io.containerd.spin.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmtime]
  runtime_type = "io.containerd.wasmtime.v1"
EOF

sudo systemctl restart containerd
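A quick sanity check on each node confirms containerd can find the shims and the runtime entries (a sketch; paths follow the install commands above):

```shell
# containerd resolves shims by these exact binary names.
ls -l /usr/local/bin/containerd-shim-wasmtime-v1 \
      /usr/local/bin/containerd-shim-spin-v2

# Both runtime entries should be present in the containerd config.
sudo grep -A1 -E 'runtimes\.(spin|wasmtime)' /etc/containerd/config.toml

# containerd should be healthy after the restart.
systemctl is-active containerd
```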

Configuring RuntimeClasses

# runtime-classes.yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime
handler: wasmtime
scheduling:
  nodeSelector:
    kubernetes.io/wasm: "true"
---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: spin
handler: spin
scheduling:
  nodeSelector:
    kubernetes.io/wasm: "true"
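Because both RuntimeClasses schedule only onto nodes carrying the `kubernetes.io/wasm: "true"` label, every node with the shims installed has to be labeled before pods can land there. A minimal sketch (the node name is a placeholder):

```shell
# Register both RuntimeClasses with the cluster.
kubectl apply -f runtime-classes.yaml

# Label each node that has the Wasm shims installed.
# Replace worker-1 with your node's name (see: kubectl get nodes).
kubectl label node worker-1 kubernetes.io/wasm=true

# Confirm the RuntimeClasses exist and the label matched.
kubectl get runtimeclass
kubectl get nodes -l kubernetes.io/wasm=true
```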
[Image: Kubernetes nodes configured with Wasm runtime shims]

Deploying Wasm Workloads

With the runtime configured, deploying Wasm workloads uses standard Kubernetes manifests. The only difference is specifying the RuntimeClass:

# wasm-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasm-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: wasm-api
  template:
    metadata:
      labels:
        app: wasm-api
    spec:
      runtimeClassName: spin  # Use Wasm runtime
      containers:
        - name: api
          image: ghcr.io/myorg/api-service:latest
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "10Mi"   # Wasm needs very little memory
              cpu: "50m"
            limits:
              memory: "32Mi"
              cpu: "200m"
---
apiVersion: v1
kind: Service
metadata:
  name: wasm-api
spec:
  selector:
    app: wasm-api
  ports:
    - port: 80
      targetPort: 80
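Rolling this out and smoke-testing it follows the usual kubectl workflow (a sketch; the labels and Service name come from the manifest above):

```shell
kubectl apply -f wasm-deployment.yaml

# Pods should reach Running far faster than equivalent Linux containers.
kubectl get pods -l app=wasm-api

# Port-forward the Service and probe the app from the local machine.
kubectl port-forward svc/wasm-api 8080:80 &
curl http://localhost:8080/health
```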

Building Wasm Container Images

// src/lib.rs — A simple Spin HTTP component in Rust
use spin_sdk::http::{IntoResponse, Request, Response};
use spin_sdk::http_component;

#[http_component]
fn handle_request(req: Request) -> anyhow::Result<impl IntoResponse> {
    let path = req.uri().path();
    match path {
        "/health" => Ok(Response::builder()
            .status(200)
            .body("OK")
            .build()),
        "/api/data" => {
            let data = serde_json::json!({
                "message": "Hello from WebAssembly",
                "runtime": "Spin on Kubernetes"
            });
            Ok(Response::builder()
                .status(200)
                .header("Content-Type", "application/json")
                .body(data.to_string())
                .build())
        }
        _ => Ok(Response::builder()
            .status(404)
            .body("Not Found")
            .build()),
    }
}
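The component pulls in three crates. A hedged sketch of the matching Cargo.toml (version numbers are assumptions; pin to whatever `spin new` scaffolds for you):

```toml
[package]
name = "api-service"
version = "0.1.0"
edition = "2021"

[lib]
crate-type = ["cdylib"]  # Spin components build as Wasm dynamic libraries

[dependencies]
anyhow = "1"
serde_json = "1"
spin-sdk = "2"
```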
# Build and push Wasm OCI image
spin build
spin registry push ghcr.io/myorg/api-service:latest
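Before pushing, the component can be exercised locally with the Spin CLI (a sketch; the listen address is arbitrary):

```shell
spin build

# Serve the component locally and probe both routes.
spin up --listen 127.0.0.1:3000 &
curl http://127.0.0.1:3000/health
curl http://127.0.0.1:3000/api/data
```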

Production Patterns with SpinKube

SpinKube provides a Kubernetes operator that manages Wasm applications through custom resources, handling scaling, networking, and lifecycle management automatically:

# SpinKube SpinApp custom resource
apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: my-api
spec:
  image: ghcr.io/myorg/api-service:latest
  replicas: 2
  executor: containerd-shim-spin
  resources:
    limits:
      memory: 32Mi
      cpu: 100m
  enableAutoscaling: true
  targetCPUUtilization: 60
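With the spin-operator installed in the cluster (see the SpinKube docs for the operator install itself), the custom resource is applied like any other manifest. A sketch (the filename is a placeholder for the SpinApp manifest above):

```shell
kubectl apply -f spinapp.yaml

# The operator reconciles each SpinApp into Deployments and Services.
kubectl get spinapps
kubectl describe spinapp my-api
```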
[Image: SpinKube managing WebAssembly workloads in production Kubernetes]

When NOT to Use Wasm Containers

Despite the advantages, Wasm containers are not a universal replacement. Avoid them when you need full Linux system access, complex filesystem operations, established language ecosystems (such as Python ML libraries), or long-running stateful processes. In practice, most teams adopt a hybrid approach: Wasm for lightweight API handlers, Linux containers for heavy workloads.

Key Takeaways

WebAssembly containers in Kubernetes are production-ready for specific use cases. They offer sub-millisecond cold starts, minimal resource consumption, and superior security isolation. Start with edge computing or serverless API endpoints and expand as the ecosystem matures.
