- helm (inspired by stable/redis-ha)
- minikube
- kubectl
kubectl create ns rhak
helm install ./src --namespace rhak --set replicas=3
# Find the master; it will be at ordinal 0 unless it has been killed (stable, ordered pod identities are what StatefulSets guarantee, not master placement)
kubectl exec --namespace=rhak redis-high-availability-kluster-server-${N} -ti -- /bin/sh
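Rather than assuming ordinal 0, you can ask a Sentinel directly which address holds the master role. The sketch below assumes the redis-ha defaults for the sentinel port (26379) and master name (`mymaster`); check the chart's values before relying on them:

```shell
# Ask any node's sentinel for the current master address (IP on the first
# line, port on the second). Port and master name are redis-ha defaults,
# not guaranteed by this chart.
kubectl exec --namespace=rhak redis-high-availability-kluster-server-0 -ti -- \
  redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
```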
redis-cli set food good
# Connected to any node
redis-cli get food
helm upgrade ${RELEASE_NAME} ./src --namespace rhak --set replicas=${N}
# After the master is killed, a replica is promoted, so the master can end up at any ordinal
kubectl delete pods/redis-high-availability-kluster-server-${N} -n rhak
helm delete ${RELEASE_NAME} --purge
If you want to remove the data you'll also need to delete the PersistentVolumeClaims (and their PersistentVolumes), since `helm delete` leaves claims created by the StatefulSet's volumeClaimTemplates behind.
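Clearing the claims could look like the following; the label selector is an assumption about how the chart labels its resources, so verify with `kubectl get pvc -n rhak` first:

```shell
# Assumed label selector; inspect the PVCs before deleting.
kubectl delete pvc --namespace=rhak -l app=redis-high-availability-kluster
```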
- checklist of the optional extras included
- A test harness (bash or something from the Kubernetes ecosystem) asserting that the master or replicas can be killed and quorum will reassemble
- Continuous Integration of this solution
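The failover part of such a harness could be sketched in shell roughly as below. It needs a live cluster, and the namespace, pod prefix, sentinel port and master name are all assumptions rather than guarantees of the chart:

```shell
#!/usr/bin/env sh
# Hypothetical failover smoke test. NS/PREFIX/port/"mymaster" are assumptions.
NS=rhak
PREFIX=redis-high-availability-kluster-server

# Ask any sentinel which IP currently holds the master role.
master_addr() {
  kubectl exec --namespace="$NS" "$PREFIX-0" -- \
    redis-cli -p 26379 sentinel get-master-addr-by-name mymaster | head -n1
}

BEFORE=$(master_addr)
# Kill the pod assumed to hold the master role.
kubectl delete pod --namespace="$NS" "$PREFIX-0"

# Wait for sentinel to promote a replica and report a different address.
for i in $(seq 1 30); do
  AFTER=$(master_addr || true)
  [ -n "$AFTER" ] && [ "$AFTER" != "$BEFORE" ] && break
  sleep 2
done

[ "$AFTER" != "$BEFORE" ] && echo "failover OK: $BEFORE -> $AFTER"
```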
a) The announce service pattern is used for intra-cluster networking, but scaling the StatefulSet does not create a Service, only the Pods. This causes initialisation to fail because the container cannot resolve its own address within the cluster.
The most immediate "workaround" is to issue scale events via helm, which will use its templating engine and ensure the Service necessary for initialisation exists. This is pretty lackluster though, as it undermines some of the value propositions Kubernetes offers, such as autoscaling.
If you modified the template to pre-create Services up to a "max-replicas" value, you'd have yet another piece of infrastructure whose capacity limit needs watching, which seems counter-intuitive to me.
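For reference, a per-pod announce Service of the kind the template renders might look roughly like this; the names, labels and ports are illustrative, not copied from the chart (though `statefulset.kubernetes.io/pod-name` is a real label the StatefulSet controller applies):

```yaml
# Hypothetical per-pod "announce" Service for ordinal 0.
apiVersion: v1
kind: Service
metadata:
  name: redis-high-availability-kluster-announce-0
  namespace: rhak
spec:
  publishNotReadyAddresses: true  # resolvable before the pod passes readiness
  selector:
    app: redis-high-availability-kluster
    statefulset.kubernetes.io/pod-name: redis-high-availability-kluster-server-0
  ports:
    - name: redis
      port: 6379
    - name: sentinel
      port: 26379
```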
I'm sure I could modify the approach to address this, but time.
b) Security of this implementation is pretty poor.