
How to migrate to Kubernetes

In short:

Migrating to Kubernetes is a six-step process: 1) containerize the app (Dockerfile), 2) choose a K8s provider (managed EKS/GKE/AKS or self-hosted), 3) write manifests (Deployment + Service), 4) configure ingress (nginx-ingress, Traefik), 5) set up secrets management (Sealed Secrets, Vault), 6) add observability (Prometheus, Loki). Typical timeframe: 2-8 weeks for a medium-sized app. Main risks: stateful data migration, DNS/IP changes, and the learning curve.

Below: a step-by-step guide, working examples, common pitfalls, and an FAQ.

Step-by-step setup

  1. Containerize: write a Dockerfile, build the image, push it to a registry (ECR, GHCR)
  2. Choose a managed K8s provider: EKS (AWS), GKE (Google), AKS (Azure), DigitalOcean K8s, Yandex Cloud
  3. Install kubectl and connect: aws eks update-kubeconfig --name my-cluster
  4. Write Deployment + Service manifests for each microservice
  5. Install the nginx-ingress controller + TLS (cert-manager for Let's Encrypt)
  6. DB: either managed (RDS Postgres) or an operator (CloudNativePG)
  7. Secrets: Sealed Secrets for a Git-friendly workflow, or External Secrets → Vault
  8. Deploy observability: Prometheus + Grafana + Loki (or managed Datadog)
  9. Migration window: DNS switchover from the VPS to the K8s ingress IP
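
Step 5 above relies on cert-manager issuing Let's Encrypt certificates, which requires a ClusterIssuer resource. A minimal sketch, assuming the issuer is named letsencrypt (the name the Ingress annotation below expects) and using a placeholder contact email:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt          # must match the cert-manager.io/cluster-issuer annotation on the Ingress
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com # assumption: replace with a monitored contact address
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - http01:
          ingress:
            class: nginx     # route ACME HTTP-01 challenges through the nginx-ingress controller
```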

Working examples

Basic Deployment + Service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: ghcr.io/me/my-app:v1
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 3000

Ingress with TLS:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
    - hosts: [app.example.com]
      secretName: app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80

HPA (auto-scaling):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70

Resource limits (inside a container spec):

resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 1Gi

Liveness + readiness probes (inside a container spec):

livenessProbe:
  httpGet:
    path: /health
    port: 3000
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 3000
  periodSeconds: 5
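
Step 7 of the setup covers secrets tooling; it helps to see the plain Kubernetes Secret that Sealed Secrets and External Secrets ultimately produce. A minimal sketch (the secret name and key are assumptions), with the envFrom reference the consuming container would use:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-app-secrets   # assumption: named after the consuming app
type: Opaque
stringData:
  DATABASE_URL: postgres://user:pass@db:5432/app  # plaintext form; commit only the kubeseal-encrypted version to Git
# The Deployment's container spec then loads every key as an env var:
#   envFrom:
#     - secretRef:
#         name: my-app-secrets
```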

Common pitfalls

  • Stateful data is the hardest part. Options: a managed DB, volume migration, or blue-green with data sync
  • Cost spike: a managed K8s cluster starts at $70+/month for the control plane, plus node costs. Compared to a $10/month VPS, that is real upgrade overhead
  • DNS TTL during migration: lower it to 60s two days before the switch to enable fast rollback
  • A PodDisruptionBudget is mandatory for HA: it guarantees a minimum number of replicas during node drains
  • Resource requests set too low → pods get evicted under high load. Set too high → wasted capacity. Measure and iterate
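
The PodDisruptionBudget mentioned above can be sketched as follows, matching the app: my-app labels used in the examples:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app
spec:
  minAvailable: 2        # voluntary disruptions (e.g. node drain) will not take ready replicas below 2
  selector:
    matchLabels:
      app: my-app        # same label the Deployment puts on its pods
```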


Frequently asked questions

Managed or self-hosted K8s?

Managed (EKS/GKE/AKS) saves 40+ hours per month of maintenance. Self-hosted only makes sense when compliance or data-residency requirements rule out the cloud.

When do you need K8s?

Microservices (5+ services), multi-region deployments, or a genuine auto-scaling need. For a monolith on a single VPS it is overkill.

Alternatives?

Docker Swarm (simple, but declining). Nomad (HashiCorp, simpler than K8s). Managed PaaS options (Vercel, Fly.io, Railway) handle everything automatically for web apps.

How to test before migrating?

Locally: kind, minikube, k3d (single-node). Staging: a small managed cluster. Run both in parallel for 2 weeks, compare metrics, then cut over.
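
For the local option, a kind cluster can be described declaratively. A minimal single-node sketch, where the port mapping is an illustrative assumption:

```yaml
# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:      # expose a NodePort service on the host for quick smoke tests
      - containerPort: 30080
        hostPort: 8080
        protocol: TCP
```

Create the cluster with kind create cluster --config kind-config.yaml and point kubectl at the resulting context.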