FAQ - Why does overall cluster performance suffer when a node is slow to shut down?

Normally, an Aerospike shutdown is a very quick process, but under certain circumstances the shutdown of a node may be slower. During shutdown, the index and storage are synced so that subsequent restarts can be trusted. This sync can take significantly longer when the storage-engine is pmem, when the index is stored on disk, or when many sprigs are configured for the index. When the shutdown takes a long time, an impact on overall cluster performance may be observed. Why is that?
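
For reference, the configurations mentioned above are set per namespace. The following aerospike.conf sketch shows hypothetical stanzas of that kind; the parameter names follow the Aerospike configuration reference, but the values, mount points, and file paths are placeholders, and the options are shown together purely for illustration.

    namespace demo {
        replication-factor 2

        # Many sprigs mean more index metadata to write out during shutdown.
        partition-tree-sprigs 4096

        # Primary index on disk ("All Flash"): the index itself must be synced.
        index-type flash {
            mount /mnt/nvme-index
            mounts-size-limit 100G
        }

        # Records in persistent memory: shutdown syncs the pmem storage.
        storage-engine pmem {
            file /mnt/pmem0/demo.dat
            filesize 16G
        }
    }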


When the index and storage are syncing, all records on the node that is shutting down are locked. Any transaction from another node that needs to lock one of those records, such as a replica write, will therefore deadlock, and these deadlocks degrade overall cluster performance.

The recommended way to avoid this is to always quiesce a node prior to shutting it down. Quiescing moves the node to the end of the succession list, meaning that the node gives up partition ownership in all but very rare circumstances (see note). In most circumstances a node at the end of the succession list owns no partitions, so nothing on it requires other nodes to take a lock; there are then no deadlocks and cluster performance is not impacted.
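
As a sketch of that recommended sequence, the quiesce: and recluster: info commands can be issued with asinfo before stopping the service; the host addresses below are placeholders, and client traffic and pending transactions should be allowed to drain before the final stop.

    # 1. Quiesce the node that is about to be shut down (run against that node):
    asinfo -h 10.0.0.1 -v "quiesce:"

    # 2. Trigger a recluster so the quiesced node gives up master ownership
    #    (can be issued against any node in the cluster):
    asinfo -h 10.0.0.1 -v "recluster:"

    # 3. After traffic and pending transactions have drained from the node,
    #    stop the service:
    systemctl stop aerospike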

This behavior is addressed in version 5.3 as part of:

  • [AER-6333] - (CLUSTERING) Stop heartbeat on shutdown, to get node kicked out of the cluster in case it was not quiesced and shutdown takes a long time.

In versions 5.3 and above, the node is ejected from the cluster as part of the shutdown process, before it goes through the sync that locks down record access.




November 2020
