Removing a node when using kubernetes and helm charts

I am deploying Aerospike on my Kubernetes cluster using the Helm charts from the aerospike/aerospike-kubernetes-enterprise repository on GitHub (the `helm` directory), with AWS EBS volumes for persistent storage.

Currently, I have set dbReplicas: 5 (a 5-node cluster) and need to scale down to 3 nodes. What is the recommended strategy for removing nodes without data loss in this setup?

Hi @Vijendra_Singh_Tomar

You can simply change dbReplicas to 3 and run helm upgrade; the Aerospike Helm chart will automatically scale down the cluster without data loss.
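For example (release and chart names here are placeholders; substitute the ones you used at install time), the scale-down could be triggered with:

```
# Hypothetical release name and chart path -- adjust to your deployment.
helm upgrade aerospike ./aerospike-kubernetes-enterprise/helm \
  --set dbReplicas=3 \
  --reuse-values
```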

During scale-down, each departing node waits for migrations to complete and performs the quiesce procedure for a smoother master handoff, so client applications don't face timeouts or connection errors.
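Conceptually, this is the manual equivalent of what the chart does for each departing node (the chart's own script may differ in detail; commands shown are standard Aerospike tooling run against the node being removed):

```
# 1. Quiesce the departing node so it hands off its master partitions.
asinfo -v "quiesce:"

# 2. Recluster so the quiesce takes effect across the cluster.
asinfo -v "recluster:"

# 3. Watch migrations and only terminate the node once they have drained.
asadm -e "show statistics like migrate"
```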

The pod with the highest ordinal index is taken down first, i.e. among pod-0 through pod-4, pod-4 is removed first, followed by pod-3.

You need to make sure the EBS volumes and memory on the remaining 3 pods have enough capacity to accommodate all the data.
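As a rough capacity sanity check (the numbers below are purely illustrative, not from your cluster): with replication factor 2 and, say, 100 GiB of unique data, each of 5 nodes holds about 40 GiB, but after scaling to 3 nodes that rises to roughly 66.7 GiB per node.

```python
def per_node_gib(unique_data_gib: float, replication_factor: int, nodes: int) -> float:
    """Approximate evenly distributed data per node:
    total replicated data divided by node count."""
    return unique_data_gib * replication_factor / nodes

# Illustrative figures only (hypothetical cluster):
before = per_node_gib(100, 2, 5)  # ~40.0 GiB per node on 5 nodes
after = per_node_gib(100, 2, 3)   # ~66.7 GiB per node on 3 nodes
```

Each remaining node's volume and memory sizing should comfortably exceed the "after" figure, with headroom for growth and for transient overlap during migrations.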

If you expect migrations to take longer (depending on data size), it's also better to increase terminationGracePeriodSeconds (default: 120 seconds) so pods aren't force-killed mid-migration, which could cause data loss.

All these configuration changes should be applied through helm upgrade using the --set option, or by updating the values in the values.yaml file and running helm upgrade.
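Combining both settings in a single upgrade might look like this (again, the release name and chart path are placeholders; the value keys follow the chart's values.yaml):

```
helm upgrade aerospike ./aerospike-kubernetes-enterprise/helm \
  --set dbReplicas=3 \
  --set terminationGracePeriodSeconds=600 \
  --reuse-values
```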

Please check the chart's documentation for the full list of configuration options, their usage, and their default values.

To give you more detail, this is implemented through a preStop container lifecycle hook.
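A simplified sketch of what such a hook looks like in the StatefulSet pod template (the script path here is illustrative; the chart wires in its own utility script):

```yaml
# Pod template fragment -- preStop runs before the container receives SIGTERM,
# giving the node time to quiesce and drain migrations.
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "/scripts/prestop.sh"]
```

Kubernetes waits for the preStop hook to finish (up to terminationGracePeriodSeconds) before killing the container, which is why raising that value matters for long migrations.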

The preStop operation executes a utility script shipped with the chart.

Let us know if you have any more questions.

Also, if you have an Enterprise license, you can raise a ticket with our support team, which would get immediate attention.