Docker Compose to Kubernetes: A Practical Migration Playbook

Your application runs perfectly in Docker Compose on a single server, but now you need high availability, auto-scaling, and zero-downtime deployments. Kubernetes provides these capabilities, but the migration path is full of subtle differences that break applications in production. This guide walks through converting Docker Compose services to Kubernetes manifests, covering the networking, storage, and configuration changes that trip up most teams.

Why Migrate? Honest Trade-offs

Docker Compose is excellent for single-host deployments, development environments, and small applications. If your app runs on one server and you don’t need auto-scaling or rolling updates, Compose may be the right choice. Compose files are also simpler to read, faster to deploy, and require less operational knowledge.

Kubernetes makes sense when you need multi-node availability, horizontal auto-scaling, rolling deployments with automatic rollback, or when your organization standardizes on Kubernetes for all workloads. It also adds operational complexity: you need monitoring, RBAC, network policies, and cluster management. Evaluate whether the benefits justify that complexity for your specific use case.

# docker-compose.yml — Starting point
version: '3.8'
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      - db
      - cache
    restart: unless-stopped

  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=myapp

  cache:
    image: redis:7-alpine
    command: redis-server --maxmemory 256mb

volumes:
  pgdata:
Migrating from Docker Compose requires rethinking networking, storage, and service discovery

Using Kompose for Initial Manifest Generation

Kompose translates Docker Compose files to Kubernetes manifests automatically. It handles the basic conversion — services become Deployments, ports become Services, volumes become PersistentVolumeClaims. Additionally, it generates the boilerplate YAML that would take hours to write manually.

However, Kompose output is a starting point, not production-ready configuration. It doesn’t set resource limits, liveness probes, or security contexts. Furthermore, environment variables with passwords end up as plaintext in the manifests. You must refine every generated file before deploying to production.

# Install Kompose
curl -L https://github.com/kubernetes/kompose/releases/latest/download/kompose-linux-amd64 -o kompose
chmod +x kompose && sudo mv kompose /usr/local/bin/

# Convert docker-compose.yml to Kubernetes manifests
kompose convert -f docker-compose.yml -o k8s/

# Generated files:
# k8s/api-deployment.yaml
# k8s/api-service.yaml
# k8s/db-deployment.yaml
# k8s/db-service.yaml
# k8s/cache-deployment.yaml
# k8s/cache-service.yaml
# k8s/pgdata-persistentvolumeclaim.yaml

# Review and customize before applying
kubectl apply -f k8s/ --dry-run=client

Networking: The Biggest Difference

In Docker Compose, services communicate by container name on a shared bridge network. In Kubernetes, services communicate through Service objects with DNS names like db.default.svc.cluster.local; within the same namespace, the short name db works. Kubernetes DNS resolves service names to ClusterIP addresses, which load-balance requests across the matching pods.

Compose’s depends_on has no direct Kubernetes equivalent because pods can restart independently at any time. Your application must handle database connection retries gracefully: use exponential backoff in your connection code. Unlike Compose, where depends_on controls start order, Kubernetes pods may start before their dependencies are ready.
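
A common partial substitute is an init container that delays the main container until the dependency accepts TCP connections. A minimal sketch for the api pod template, assuming the busybox image and the db Service name from this example:

```yaml
# Hypothetical initContainers entry for the api pod template (sketch).
# The pod's main containers do not start until nc can reach db:5432.
spec:
  initContainers:
    - name: wait-for-db
      image: busybox:1.36
      command:
        - sh
        - -c
        - until nc -z db 5432; do echo "waiting for db"; sleep 2; done
  containers:
    - name: api
      image: registry.example.com/myapp/api:v1.2.3
```

Note that init containers only gate startup; the application still needs its own retry logic to survive a database restart after the pod is already running.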

# Production-ready Kubernetes Deployment with health checks
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/myapp/api:v1.2.3
          ports:
            - containerPort: 3000
          envFrom:
            - secretRef:
                name: api-secrets
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 15
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 3000
      targetPort: 3000
  type: ClusterIP
Kubernetes service discovery replaces Docker Compose’s simple container name networking

Storage and Secrets Management

Docker Compose volumes are local directories on the host machine. Kubernetes PersistentVolumeClaims (PVCs) abstract storage so pods can move between nodes. For databases, use StatefulSets instead of Deployments — they provide stable network identities and ordered pod management that databases require.

Never put passwords in plain environment variables within Kubernetes manifests. Use Secret objects and mount them as environment variables or files. For production, integrate with external secret managers like HashiCorp Vault or AWS Secrets Manager through the External Secrets Operator, so secrets can be rotated centrally without redeploying your application.
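
One encoding detail worth knowing before editing Secret manifests by hand: the stringData field (used below) accepts plain text, while the more common data field requires values you base64-encode yourself. Base64 is an encoding, not encryption; anyone with read access to the Secret can decode it.

```shell
# Values under a Secret's `data:` key must be base64-encoded.
# printf (not echo) avoids encoding a trailing newline into the value.
printf 'pass' | base64          # cGFzcw==

# Decoding recovers the original value — base64 is not encryption.
printf 'cGFzcw==' | base64 -d   # pass
```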

# Create secrets from literal values (use External Secrets in production)
apiVersion: v1
kind: Secret
metadata:
  name: api-secrets
type: Opaque
stringData:
  DATABASE_URL: "postgres://user:pass@db:5432/myapp"
  REDIS_URL: "redis://cache:6379"
---
# StatefulSet for PostgreSQL (not Deployment)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16
          envFrom:
            - secretRef:
                name: db-secrets
          volumeMounts:
            - name: pgdata
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: pgdata
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi
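
One detail the StatefulSet above leaves implicit: its serviceName: db field refers to a headless Service (one with clusterIP: None) that must be created separately, giving each pod a stable DNS name such as db-0.db. A sketch:

```yaml
# Headless Service backing the db StatefulSet (sketch).
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None   # headless: DNS resolves directly to pod IPs
  selector:
    app: db
  ports:
    - port: 5432
```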

Production Considerations and Deployment Strategy

In production, add Horizontal Pod Autoscalers, Pod Disruption Budgets, and Ingress controllers. Set resource requests and limits on every container — without them, a memory-leaking pod can crash the entire node. Additionally, use namespace isolation to separate environments (staging, production) on the same cluster.
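
As a sketch of the first two of those items, wired to the api Deployment shown earlier (the thresholds and replica counts below are illustrative, not recommendations):

```yaml
# Horizontal Pod Autoscaler: scales the api Deployment on CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
---
# Pod Disruption Budget: keeps at least 2 api pods running during
# voluntary disruptions such as node drains.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: api
```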

Migrate incrementally. Run your Compose stack and Kubernetes deployment in parallel, routing a percentage of traffic to Kubernetes using a load balancer. Increase the percentage gradually while monitoring error rates and latency. Furthermore, keep your Compose setup as a rollback option until Kubernetes is proven stable under your production load.
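
One way to implement that split is a weighted upstream on an external NGINX load balancer in front of both stacks. A sketch, where the hostnames are placeholders for your Compose host and your Kubernetes ingress endpoint:

```nginx
# External NGINX load balancer config (sketch, hostnames are placeholders).
upstream myapp {
    server compose-host.internal:3000 weight=9;  # 90% -> existing Compose stack
    server k8s-lb.internal:80 weight=1;          # 10% -> Kubernetes ingress
}

server {
    listen 80;
    location / {
        proxy_pass http://myapp;
    }
}
```

Raise the Kubernetes weight step by step as error rates and latency stay healthy.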

Monitor error rates and latency during migration — keep Docker Compose as a rollback option

In conclusion, migrating from Docker Compose to Kubernetes is justified when you need multi-node resilience, auto-scaling, and rolling deployments. Use Kompose for initial manifest generation but customize every file for production — add health probes, resource limits, secrets management, and security contexts. Migrate incrementally and keep Compose as a fallback until your Kubernetes deployment proves itself under real traffic.
