Redis Cluster: native horizontal scaling for Redis. Minimum: 3 masters + 3 replicas = 6 nodes; the keyspace is split into 16384 hash slots distributed across the masters. Automatic failover, automatic sharding, transparent to the app via slot-aware "smart" clients. Not worth it for small data (<5 GB): a plain Redis master + replica is enough. Alternative: managed Redis (AWS ElastiCache, Redis Cloud, Yandex Managed Redis).
Below: step-by-step, working examples, common pitfalls, FAQ.
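Key placement among the 16384 slots is deterministic: slot = CRC16(key) mod 16384, where an optional `{hash tag}` restricts which part of the key gets hashed. A minimal sketch of the mapping (the `crc16` below is the XMODEM/CCITT variant Redis uses; ASCII keys assumed for simplicity):

```javascript
// CRC16-XMODEM: init 0, polynomial 0x1021, no reflection.
// Sanity check from the cluster spec: crc16('123456789') === 0x31c3.
function crc16(key) {
  let crc = 0;
  for (let i = 0; i < key.length; i++) {
    crc ^= key.charCodeAt(i) << 8; // assumes single-byte (ASCII) characters
    for (let j = 0; j < 8; j++) {
      crc = crc & 0x8000 ? ((crc << 1) ^ 0x1021) & 0xffff : (crc << 1) & 0xffff;
    }
  }
  return crc;
}

// Slot lookup with hash-tag handling: if the key contains a non-empty
// {...} section, only that substring is hashed.
function hashSlot(key) {
  const open = key.indexOf('{');
  if (open !== -1) {
    const close = key.indexOf('}', open + 1);
    if (close !== -1 && close !== open + 1) key = key.substring(open + 1, close);
  }
  return crc16(key) % 16384;
}
```

This is why `{user:1}:name` and `{user:1}:email` always land on the same shard: both hash only the `user:1` tag.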
TL;DR: set `cluster-enabled yes`, `cluster-node-timeout 5000`, `appendonly yes` on each node; run `redis-cli --cluster create ip1:7000 ip2:7001 ... --cluster-replicas 1`; verify with `redis-cli -p 7000 cluster info` → `cluster_state:ok`.

| Scenario | Config |
|---|---|
| redis.conf (per node) | `port 7000`<br>`cluster-enabled yes`<br>`cluster-config-file nodes.conf`<br>`cluster-node-timeout 5000`<br>`appendonly yes`<br>`protected-mode no` (intra-cluster only; protect with firewall + `requirepass`)<br>`requirepass strongpass` |
| Create cluster (CLI) | `redis-cli --cluster create 10.0.0.1:7000 10.0.0.2:7001 10.0.0.3:7002 10.0.0.4:7003 10.0.0.5:7004 10.0.0.6:7005 --cluster-replicas 1 -a strongpass` |
| ioredis cluster client (Node.js) | `const Redis = require('ioredis');`<br>`const cluster = new Redis.Cluster([`<br>`  { host: '10.0.0.1', port: 7000 },`<br>`  { host: '10.0.0.2', port: 7001 },`<br>`  // a few seed nodes suffice; the rest are discovered`<br>`], { redisOptions: { password: 'strongpass' } });`<br>`await cluster.set('key', 'value');` |
| Add new master + resharding | `# add a new node to the cluster`<br>`redis-cli --cluster add-node new:7006 existing:7000`<br>`# move slots onto the new master`<br>`redis-cli --cluster reshard existing:7000` |
| Hash tags (keep keys in one slot) | `# without hash tags, the keys usually land in different slots`<br>`SET user:1:name alice`<br>`SET user:1:email a@e.com`<br>`# with a hash tag, both go to the same shard (enabling MULTI/EXEC)`<br>`SET {user:1}:name alice`<br>`SET {user:1}:email a@e.com` |
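Tying the table's create/reshard commands to the client side: after `--cluster create` with three masters, the 16384 slots are split evenly (0-5460, 5461-10922, 10923-16383), and a smart client keeps a slot→node map that it refreshes whenever a node answers with a MOVED redirection. A sketch of that lookup, using the addresses from the example above and the assumed default even split:

```javascript
// Assumed slot ranges for a fresh 3-master cluster (default even split).
const slotMap = [
  { node: '10.0.0.1:7000', from: 0,     to: 5460  },
  { node: '10.0.0.2:7001', from: 5461,  to: 10922 },
  { node: '10.0.0.3:7002', from: 10923, to: 16383 },
];

// Route a slot (CRC16(key) mod 16384) to its owning master, roughly what a
// smart client does internally before sending the command.
function nodeFor(slot, map = slotMap) {
  const entry = map.find(r => slot >= r.from && slot <= r.to);
  if (!entry) throw new Error(`slot ${slot} is not covered by any master`);
  return entry.node;
}
```

After `reshard` moves slots to a new master, clients learn the new ownership lazily via MOVED replies and update their map; application code does not change.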
**When do you need Cluster?** Data >5 GB or >100k ops/sec. Below that, a single master + replica + Sentinel is enough.
**Sentinel vs Cluster?** Sentinel gives HA (one master + replicas, automatic failover); Cluster gives scaling (data sharded across multiple masters). Different jobs.
**Managed options?** AWS ElastiCache (cluster mode enabled), Redis Cloud (Redis Labs), Yandex Managed Redis. Typically $50-500/mo versus $0 for self-hosting, which in practice costs 20+ hours/month of maintenance.
**Zero-downtime migration?** Dual-write (old + new) while still reading from the old store → move reads over in batches → cut over writes. Or use Redis-shake for a bulk copy.
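The dual-write phase can be sketched with in-memory `Map` stand-ins for the old single Redis and the new cluster (swap in real clients in practice; all names here are illustrative):

```javascript
// Stand-ins for the old single-node Redis and the new cluster.
const oldStore = new Map();
const newStore = new Map();

const migratingClient = {
  async set(key, value) {
    // Dual-write phase: every write goes to both stores, so the new
    // cluster catches up on all keys touched after the cutover begins.
    oldStore.set(key, value);
    newStore.set(key, value);
  },
  async get(key) {
    // Read phase: prefer the new cluster, fall back to the old store for
    // keys last written before dual-writing started.
    return newStore.has(key) ? newStore.get(key) : oldStore.get(key);
  },
};
```

Once the remaining cold keys are backfilled (e.g. with a bulk copy), reads no longer hit the fallback, and the old store can be dropped from the write path.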