Does HWM influence SSD capacity?

I’ve been using data-size * defrag lwm * rep factor to compute capacity requirements for SSD storage.

Basically, data * 2 * 3 for a cluster with rep=3.
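(For example, 500 GB of unique data would come out to 500 * 2 * 3 = 3 TB of raw SSD across the cluster.)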

How does HWM play into this? Or does it?

Thanks for your help.

Capacity planning is covered in the documentation: Capacity Planning Guide | Aerospike Documentation

Basically, you need to work out how much memory and disk you need so that usage stays below its high-water-mark (default 50% for disk and 60% for memory) on each node in the cluster. The high-water-mark configurations only control eviction (if you have data that expires).
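If you want to double-check what a node is currently running with, here's a rough sketch using the Aerospike Python client's info_all() call (it assumes a namespace named test and a seed node on localhost; adjust for your setup):

```python
import aerospike

# Connect to the cluster (hypothetical seed host; adjust for your setup).
client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()

# Ask every node for its namespace configuration; each node replies with a
# semicolon-separated list of name=value pairs.
request = 'get-config:context=namespace;id=test'
for node, (err, resp) in client.info_all(request).items():
    conf = dict(p.split('=', 1) for p in resp.strip().split(';') if '=' in p)
    print(node,
          'high-water-memory-pct:', conf.get('high-water-memory-pct'),
          'high-water-disk-pct:', conf.get('high-water-disk-pct'))

client.close()
```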

If you’re not setting any TTL for your records (your client uses TTL -1, “never expire”, or TTL 0, inheriting a default-ttl of 0 from the namespace), expirations and evictions won’t happen. As long as you don’t delete data yourself, your namespace will keep filling toward stop-writes, which by default kicks in when 90% of the namespace’s memory-size is used.
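For reference, here's roughly what those two TTL cases look like from the Python client (the key, set, and bin names here are made up):

```python
import aerospike

client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()
key = ('test', 'demo', 'user1')  # (namespace, set, user key) - made up

# TTL -1: the record never expires, regardless of the namespace default-ttl.
client.put(key, {'name': 'a'}, meta={'ttl': aerospike.TTL_NEVER_EXPIRE})

# TTL 0: the record inherits the namespace's default-ttl
# (no expiration if default-ttl is 0).
client.put(key, {'name': 'a'}, meta={'ttl': aerospike.TTL_NAMESPACE_DEFAULT})

client.close()
```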

Consider a node going down in a multi-node cluster: you want to make sure that the same data, now distributed over N-1 nodes, doesn’t immediately hit stop-writes. Let’s assume you do not want to exceed 80% of the namespace’s max memory limit when this happens. You’d set the HWM for memory to 80 * (N-1) / N. For example, for a 6-node cluster, this would be 80 * 5 / 6 ≈ 66%. That’s what you can raise high-water-memory-pct to.
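As a quick sanity check, here is that formula in a couple of lines of Python (just a sketch; plug in your own node count and target):

```python
def memory_hwm_pct(nodes, target_pct=80):
    """HWM so that losing one node keeps usage at or below target_pct.

    With the data spread over N-1 nodes instead of N, per-node usage
    grows by a factor of N / (N - 1), so the HWM must shrink by the
    inverse factor.
    """
    return int(target_pct * (nodes - 1) / nodes)

print(memory_hwm_pct(6))  # -> 66
```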

The high-water-disk-pct should remain at the default of 50%, regardless of the number of nodes. This leaves enough space for defragmentation to run continuously without affecting the latencies of other operations. If you do raise it, you should raise defrag-lwm-pct as well.
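Tying this back to the original question, here's a rough back-of-the-envelope sketch in Python (the names are made up): sizing raw SSD so that the replicated data sits right at the 50% disk HWM reproduces the data * 2 * 3 rule of thumb from the question.

```python
def raw_ssd_needed_gb(unique_data_gb, replication_factor=3,
                      high_water_disk_pct=50):
    """Raw SSD (GB) across the cluster so that the replicated data sits
    at the disk high-water-mark, leaving the rest as defrag headroom."""
    return unique_data_gb * replication_factor * 100 / high_water_disk_pct

print(raw_ssd_needed_gb(500))  # -> 3000.0, i.e. 500 * 2 * 3 from above
```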