Removing nodes from Aerospike cluster

deletion

#1

Hi,

I had a cluster of 7-8 nodes that was filling up quite fast. I decided to add 3 bigger nodes (with 4x the memory of the existing ones), with a plan to remove the older nodes.

Now, memory usage on the older nodes is 50-52% and on the new nodes 12-13%. With the high-water mark (hwm) set at 70%, I could safely remove 3 nodes without hitting it. But if I remove even one more node, the remaining older nodes will be on the verge of hitting the hwm for memory usage.
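As a rough sanity check on these numbers, here is a toy capacity model. It assumes Aerospike balances partitions evenly, so every node holds about the same amount of data regardless of its memory size (which is why the 4x nodes sit at roughly a quarter of the old nodes' usage). The node counts and the 50% starting figure are illustrative assumptions, not measured values.

```python
# Toy model: even partition balance means equal data per node,
# so old-node usage rises as nodes are removed.
OLD_CAP = 1.0   # old-node memory, arbitrary units
HWM = 0.70      # high-water mark
NODES = 11      # e.g. 8 old nodes + 3 new ones (assumed)

# each node currently holds ~50% of an old node's memory worth of data
total_data = NODES * 0.50 * OLD_CAP

for removed in range(5):
    usage_old = total_data / (NODES - removed) / OLD_CAP
    status = "over hwm" if usage_old > HWM else "ok"
    print(f"after removing {removed} node(s): old-node usage ~ {usage_old:.0%} ({status})")
```

Under these assumptions, removing 3 nodes lands just under the 70% hwm, and removing a 4th crosses it, which matches the behavior described above.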

In order to continue, I could either:

  • Vertically scale the removed nodes, add them back to the cluster one by one, and then remove the remaining older nodes, or
  • Vertically scale the remaining older nodes one by one, adding each back to the cluster and then removing the next.

Is any of the above approaches appropriate? Or can someone please suggest a better way to deal with this?

Thanks.


#2

Currently the simplest solution to this problem is to use the rack-aware feature (which is Enterprise-only).

What you would want to do is dynamically configure all of the old nodes to be in their own rack:

for each old node:
  for each namespace as ns_name:
    asinfo -h <node-address> -v "set-config:context=namespace;id=ns_name;rack-id=123"  # or any rack id not already in use

asinfo -v "recluster:"  # run once, against any node

Wait for migrations to complete, then remove the old nodes.
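The "wait for migrations" step can be checked from the namespace statistics that `asinfo -v "namespace/<ns_name>"` returns as a semicolon-delimited line. A small sketch of that check (the stat names `migrate_rx_partitions_remaining` / `migrate_tx_partitions_remaining` follow the Aerospike statistics reference; the sample line below is made up for illustration):

```python
# Parse asinfo's "key=value;key=value;..." output and sum the pending
# migration counters; it is safe to remove a node once this reaches 0
# cluster-wide.
def partitions_remaining(stats_line: str) -> int:
    stats = dict(kv.split("=", 1) for kv in stats_line.split(";") if "=" in kv)
    return (int(stats.get("migrate_rx_partitions_remaining", 0))
            + int(stats.get("migrate_tx_partitions_remaining", 0)))

sample = "objects=1000;migrate_rx_partitions_remaining=12;migrate_tx_partitions_remaining=3"
print(partitions_remaining(sample))  # 15 -> keep waiting; proceed only at 0
```

In practice you would poll this against every node (e.g. in a loop with a sleep) until the sum is zero everywhere.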

If you are on the Community Edition, currently your only option is to back up the data and restore it to the new cluster once the old nodes have been removed.
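For reference, that backup/restore route looks roughly like this, using the standard asbackup/asrestore tools. This is a sketch with placeholder host, namespace, and path values; check the tools' documentation for the exact options available in your version:

```shell
# Back up the namespace while the old cluster is still whole
asbackup --host <any-node-ip> --namespace <ns_name> --directory /path/to/backup

# ... remove the old nodes and let the cluster settle ...

# Restore the data into the remaining (new) cluster
asrestore --host <new-node-ip> --directory /path/to/backup
```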