Bad performance after upgrade due to migrations

We are trying to upgrade our Aerospike cluster to the Enterprise version. We have a cluster of 6 nodes running aerospike-community 3.4.0. Last Thursday we upgraded 1 node to 3.5.14 Enterprise. It started on Friday, and since then a migration process has been running and our service that reads heavily from Aerospike has had poor performance.

In Zabbix, on the memory usage graph, we can see massive cached memory usage right after the restart.

The client library (Java) we use is 3.0.35. All our Java services that read from Aerospike (20K reads per second) have had performance trouble since the migrations started. Updating to 3.1.3 doesn't improve things.

Why do migrations take so long? Why do they affect the performance of the whole cluster? We need to upgrade 5 more servers, and every upgrade means a week of bad performance.

Hi Mikhail,

Can you please follow the instructions as described on

If you have persistent data storage for your namespaces, you do not have to wait for migrations to finish before upgrading the next node.



Hi samir!

Thanks for this point! Our namespace is persisted on SSD.

But the cluster's performance is very poor while there are lots of migrations, so the cluster doesn't meet our performance requirements.

Why do migrations affect reads from all nodes? Can't nodes with migrations in progress be write-only?

Migrations are explained at

Migration is a background process of moving data to other nodes, a longer-running task that is throttled down. But during migration, since the migrating data takes away a slice of your normal read/write throughput, you will see normal read/write transaction throughput a notch lower. Migrations can be tuned to run slower or faster depending on what you are trying to achieve.
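As a sketch, that throttling is controlled through the migrate settings in aerospike.conf (the same knobs that appear in the config posted later in this thread); the values below are illustrative, not recommendations:

```
service {
	migrate-threads 1      # more threads -> faster migrations, more load
	migrate-xmit-hwm 6     # pause outbound migration when the queue hits this high-water mark
	migrate-xmit-lwm 1     # resume once the queue drains to this low-water mark
}
```

Raising migrate-threads speeds migrations up at the cost of read/write throughput; lowering it does the opposite.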


I think the real question is: why are all these migrations happening at all when a node that already has the data is restarted? Or doesn't that work when moving from Community to Enterprise?


Well, a bit of good news here: as of 3.5.8, "Migration performance improved by modifying initial partition balancing scheme for nodes joining a cluster". This will drastically reduce the amount of time for migrations to complete.

  • This is a light load, can you describe your server hardware?
    • Number of nodes in the cluster?
    • Cloud provided hosts or bare metal hosts?
    • Number of CPUs per CPU Socket and number of CPU sockets?
    • Type of storage (SSD, HDD)?
      • If bare metal, is there a RAID controller?
      • If bare metal and SSD, make/model of SSDs?
    • How much RAM?
    • Other details that may be useful?
  • Can you provide your aerospike.conf?
  • What is your SLA?

Migrations will increase the number of IOPS on your storage; they will also disturb any temporal locality provided by the post-write queue (cache).

When the node returns, we need to resync its partitions with the other nodes, which may now have newer versions of the records.
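As a rough back-of-the-envelope sketch of why this takes a while (assuming Aerospike's fixed count of 4096 partitions, plus the replication-factor 2 and 6-node figures from this thread):

```python
# Estimate how many partitions a returning node must resync.
# Assumptions: Aerospike's fixed 4096 partitions, and the
# replication-factor 2 / 6-node cluster described in this thread.
PARTITIONS = 4096
REPLICATION_FACTOR = 2
NODES = 6

# Master + replica copies of partitions are spread roughly evenly
# across nodes, so each node participates in about this many:
partitions_per_node = PARTITIONS * REPLICATION_FACTOR // NODES
print(partitions_per_node)  # -> 1365
```

So over a thousand partitions, each potentially holding newer records written while the node was down, must be reconciled after a single restart.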


Thanks for your reply!

Number of nodes in the cluster?

6 nodes

Cloud provided hosts or bare metal hosts?

Bare metal hosts

Number of CPUs per CPU Socket and number of CPU sockets?

1 CPU, 6 cores, 12 threads

Type of storage (SSD, HDD)?


If bare metal, is there a RAID controller?


If bare metal and SSD, make/model of SSDs?

Unknown manufacturer

How much RAM?

128 GB (110 GB for Aerospike)

Here are Aerospike latency graphs under 8K QPS reads:

>8 ms to ≤64 ms: up to 62%
≥64 ms: up to 1.5%

Our SLA is 15 ms under at least 30K QPS of reads.
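Reading those bucket percentages against the SLA (a hypothetical worked check, using only the worst-case figures quoted above):

```python
# Check the quoted latency buckets against a 15 ms SLA.
# Percentages are the worst-case values from the graphs above.
pct_8_to_64_ms = 62.0   # reads taking >8 ms and <=64 ms
pct_over_64_ms = 1.5    # reads taking >=64 ms

# Every read in the >=64 ms bucket definitely misses 15 ms; reads in
# the 8-64 ms bucket may or may not, so the true miss rate is
# somewhere between 1.5% and this worst case:
worst_case_miss = pct_8_to_64_ms + pct_over_64_ms
print(worst_case_miss)  # -> 63.5
```

In other words, up to nearly two thirds of reads could be missing the 15 ms target during migrations, and that is at 8K QPS rather than the 30K QPS the SLA requires.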

Here is our aerospike.conf:

service {
	user root
	group root
	transaction-queues 8
	transaction-threads-per-queue 8
	service-threads 1
	fabric-workers 6
	migrate-threads 1
	migrate-xmit-hwm 6
	migrate-xmit-lwm 1
	transaction-retry-ms 1000
	transaction-max-ms 1000
	transaction-pending-limit 200  # Max # of same-key transactions on queue
	ticker-interval 10
	nsup-period 120
	nsup-queue-hwm 2
	nsup-queue-lwm 1
	nsup-startup-evict true
	defrag-queue-hwm 20
	defrag-queue-lwm 5
	defrag-queue-escape 10
	defrag-queue-priority 10
	proto-fd-max 15000
	paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
	transaction-repeatable-read false
	pidfile /var/run/aerospike/
}

# Log configuration. Log to stderr by default. Log file must be an absolute path.
logging {
	file /var/log/aerospike/aerospike.log {
		context any info
	}
}

network {
	service {
		address any
		port 3000
		network-interface-name eth0.10
	}

	heartbeat {
		mode multicast
		port 9918
		interval 150
		timeout 15
	}

	fabric {
		address any
		port 3001
	}

	info {
		address any
		port 3003
	}
}

namespace ssd {
	replication-factor 2

	memory-size 110G # 120G
	default-ttl 7776000 # 3 months
	high-water-memory-pct 98
	high-water-disk-pct 80
	stop-writes-pct 99

	# Warning - legacy data in defined raw partition devices will be erased.
	# These partitions must not be mounted by the filesystem.
	storage-engine device {
		scheduler-mode noop     # for SSD
		device /dev/sdb
		device /dev/sdc
		device /dev/sdd
		write-block-size 131072
		defrag-period 120
		defrag-lwm-pct 50
		defrag-max-blocks 4000
	}
}

What happens between 8:44 and 8:50 in the graph?

You have configured service-threads = 1.

The recommended setting for service-threads and transaction-queues is the same as the number of CPU cores. Can you please try this modification and let us know how your throughput changes?
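For the hardware described above (1 socket, 6 cores), that recommendation would map to something like this (a sketch, not tested values):

```
service {
	service-threads 6       # one per CPU core
	transaction-queues 6    # one per CPU core
	transaction-threads-per-queue 8
}
```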



Don't know. Aerospike worked, but the graph is empty.