Setting Aerospike in-memory configuration

I wanted to point out that there are no ‘quorum reads’ in Aerospike. For both reads and writes, the client talks to the node holding the master partition for the specified key. You can override this behavior for reads by changing the read replica policy.

Writes will always happen against the node holding the master partition for the key. You can change the write commit level policy to choose whether the client waits for all the replica writes to complete, or just for the master write. Either way, the master still triggers replica writes to the nodes holding the replica partitions.
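As a sketch with the official Aerospike Python client (the host address, namespace, set, key, and bin names are placeholder values; this assumes a reachable cluster):

```python
import aerospike

# Placeholder seed node; the client discovers the rest of the cluster.
client = aerospike.client({"hosts": [("127.0.0.1", 3000)]}).connect()

key = ("test", "demo", "user1")  # (namespace, set, key) - all placeholders

# Reads go to the node holding the master partition by default;
# POLICY_REPLICA_ANY would let the client read from a replica instead.
read_policy = {"replica": aerospike.POLICY_REPLICA_MASTER}
_, _, record = client.get(key, policy=read_policy)

# Writes always land on the master; commit_level only controls whether the
# client waits for replica writes too. The replica writes still happen.
write_policy = {"commit_level": aerospike.POLICY_COMMIT_LEVEL_MASTER}
client.put(key, {"visits": 1}, policy=write_policy)

client.close()
```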

Now, on to how you should set the memory-size for your namespace. Piyush already pointed you to the capacity planning article, which you should read closely.

Every object, regardless of whether the namespace stores its data in memory or on SSD, costs 64B of DRAM for its metadata entry in the primary index. If your namespace keeps its data in memory, you will also need to account for how much DRAM the data itself will cost. For the primary index alone, 1 billion objects cost 64GB across all the nodes (the record size doesn't affect the index), so with a replication factor of 2 on a 5-node cluster that's 64 * 2 / 5 ≈ 25.6GB of DRAM for the primary index per node.
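The sizing math above, as a small sketch (the numbers match the example in the text: 1 billion objects, replication factor 2, 5 nodes):

```python
INDEX_ENTRY_BYTES = 64          # DRAM per object in the primary index
objects = 1_000_000_000
replication_factor = 2
nodes = 5

# Total primary-index DRAM across the cluster, counting every replica copy.
total_gb = objects * INDEX_ENTRY_BYTES * replication_factor / 1e9

# Even spread across the nodes of the cluster.
per_node_gb = total_gb / nodes
print(per_node_gb)  # 25.6
```

Remember this is only the primary index; an in-memory namespace needs the record data on top of that.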

The high watermark for memory is associated with evictions. Read the knowledge base FAQ What are Expiration, Eviction and Stop-Writes? Evictions won’t happen at all if your records are set to never expire (a TTL of -1 in the client). Evictions also don’t happen for sets in your namespace that you’ve marked as non-evictable (set-disable-eviction). See Namespace Retention Configuration.
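The relevant knobs live in the namespace stanza of aerospike.conf. A sketch, assuming an in-memory namespace; the namespace name, set name, and sizes are placeholder values (check the configuration reference for your server version):

```
namespace myns {
    replication-factor 2
    memory-size 32G               # total DRAM budget for index + data
    high-water-memory-pct 60      # evictions begin above this
    stop-writes-pct 90            # default; writes are refused above this
    default-ttl 0                 # 0 = never expire, so no evictions

    set critical-set {
        set-disable-eviction true # never evict records in this set
    }

    storage-engine memory
}
```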

If you do want to use TTLs for your records, make sure that you don’t hit stop-writes when a node goes down. Assume that stop-writes-pct is still set to the default value of 90%. In a 5-node cluster, when one node drops out, its partitions redistribute across the remaining 4 nodes, so each surviving node’s memory use grows by a factor of 5/4. To stay under 90% with a node down, you’ll need to set your high-water-memory-pct to lower than 90% * 4 / 5 = 72%. If you’d rather stay under a more comfortable 80% after losing a node, that gives 80 * 4 / 5 = 64, so pick a value between those two points.
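The headroom arithmetic above can be sketched as a small helper; `high_water_limit` and `max_used_pct` are my own names, not Aerospike configuration parameters:

```python
def high_water_limit(max_used_pct: float, nodes: int, nodes_down: int = 1) -> float:
    """Highest safe high-water-memory-pct so that, after `nodes_down` nodes
    fail and their partitions spread over the survivors, per-node memory
    use stays below `max_used_pct`."""
    surviving = nodes - nodes_down
    return max_used_pct * surviving / nodes

print(high_water_limit(90, 5))  # 72.0 -> stay under the 90% stop-writes default
print(high_water_limit(80, 5))  # 64.0 -> stay under a more comfortable 80%
```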