How to Migrate to Kubernetes

Key idea:

Migration to Kubernetes — 6-step process: 1) Containerize app (Dockerfile), 2) Choose K8s provider (managed EKS/GKE/AKS or self-hosted), 3) Write manifests (Deployment + Service), 4) Configure ingress (nginx-ingress, Traefik), 5) Secrets management (Sealed Secrets, Vault), 6) Observability (Prometheus, Loki). Typical timeframe: 2-8 weeks for a medium app. Risks: stateful data migration, DNS/IP changes, learning curve.

Below: step-by-step, working examples, common pitfalls, FAQ.

Step-by-Step Setup

  1. Containerize: write Dockerfile, build image, push to registry (ECR, GHCR)
  2. Pick managed K8s: EKS (AWS), GKE (Google), AKS (Azure), DigitalOcean K8s, Yandex Cloud
  3. Install kubectl + connect: aws eks update-kubeconfig --name my-cluster
  4. Write Deployment + Service manifests for each microservice
  5. Install nginx-ingress controller + TLS (cert-manager for Let's Encrypt)
  6. DB: either managed (RDS Postgres) or operator (CloudNativePG)
  7. Secrets: Sealed Secrets for Git-friendly, or External Secrets → Vault
  8. Deploy observability: Prometheus + Grafana + Loki (or managed DataDog)
  9. Migration window: DNS switchover from VPS to K8s ingress IP
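The first few steps above can be sketched as shell commands. The image path, cluster name, region, and manifest directory are placeholders, not values from a real project:

```shell
# Step 1: build and push the container image (GHCR path is illustrative)
docker build -t ghcr.io/me/my-app:v1 .
docker push ghcr.io/me/my-app:v1

# Step 3: point kubectl at a managed cluster (EKS shown; GKE/AKS have equivalents)
aws eks update-kubeconfig --name my-cluster --region us-east-1
kubectl get nodes

# Step 4: apply the Deployment + Service manifests
kubectl apply -f k8s/
kubectl rollout status deployment/my-app
```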

Working Examples

Basic Deployment + Service

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: ghcr.io/me/my-app:v1
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 3000
```

Ingress with TLS

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
    - hosts: [app.example.com]
      secretName: app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```

HPA (auto-scaling)

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Resource limits (container-level fragment)

```yaml
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 1Gi
```

Liveness + readiness probes (container-level fragment)

```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 3000
  periodSeconds: 5
```

Common Pitfalls

  • Stateful data is the hardest part. Options: a managed DB, volume migration, or blue-green with data sync
  • Cost spike: a managed K8s control plane starts at roughly $70+/mo plus node costs, versus a $10/mo VPS, so budget for the step up
  • DNS TTL during migration: lower it to 60s two days before the switch so rollback is fast
  • PodDisruptionBudget is mandatory for HA: it guarantees a minimum number of replicas during node drain
  • Resource requests too low → pods evicted under load. Too high → wasted capacity. Measure and iterate
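The PodDisruptionBudget mentioned above can be sketched like this; the name and label selector are illustrative and assume a Deployment labeled app: my-app:

```yaml
# Keeps at least 2 matching pods running during voluntary disruptions
# (node drain, cluster upgrade); name and selector are placeholders
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app
```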

Frequently Asked Questions

Managed or self-hosted K8s?

Managed (EKS/GKE/AKS) saves 40+ hours/month of maintenance. Self-host only if compliance or data-residency rules disallow managed cloud.

When is K8s needed?

Microservices (5+ services), multi-region deployments, or a real auto-scaling need. For a monolith on a single VPS it's overkill.

Alternatives?

Docker Swarm (simple but declining). Nomad (HashiCorp, simpler than K8s). Managed PaaS: Vercel, Fly.io, Railway — handle everything for web apps.

How to test before migration?

Local: kind, minikube, k3d (single-node). Staging: a small managed cluster. Run in parallel for 2 weeks, compare metrics, then cut over.
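A local dry run with kind (one of the tools mentioned above) might look like this; the cluster name, image, and manifest paths are illustrative:

```shell
# Spin up a throwaway single-node cluster
kind create cluster --name migration-test

# Load a locally built image without pushing to a registry
kind load docker-image ghcr.io/me/my-app:v1 --name migration-test

# Apply the same manifests you plan to ship, then smoke-test
kubectl apply -f k8s/
kubectl wait --for=condition=available deployment/my-app --timeout=120s
kubectl port-forward svc/my-app 8080:80 &
curl -fsS http://localhost:8080/health

# Tear down
kind delete cluster --name migration-test
```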