SSD partitions



We have 4 x 6.4T = 25.6T of SSD (Micron 9200 MAX) per node and a 24-core CPU. The document suggests 16 partitions for a 16-core CPU. In our case it's 24 cores, so should we have 24 partitions of roughly 1TB each? A 1TB device would be too large for a namespace with less data. Can we instead partition into something smaller, like 32 partitions of 800GB each?


The resources that you need to distribute to your namespaces are storage capacity (a total of 25.6T per node) and IOPS.

Each of those Micron 9200 MAX SSDs was rated using ACT at 105.5X. If the namespaces' IOPS needs aren't the same, you may want to split one drive among several less-used namespaces and allocate the remaining three to the more IOPS-hungry ones (in aggregate, the equivalent of roughly 316X). Alternatively, you can let all namespaces access the 4 SSDs as a pool of IOPS and distribute partitions (devices) according to each namespace's space needs.
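To make the arithmetic behind that split concrete, here's a minimal sketch. It assumes the standard ACT definition of a 1x load (2,000 reads/sec plus 1,000 writes/sec of 1.5 KiB records); the 105.5X rating is the figure quoted above.

```python
# Rough IOPS budgeting sketch.
# Assumption: ACT 1x = 2,000 reads/sec + 1,000 writes/sec (1.5 KiB records).
ACT_READS_PER_X = 2000   # reads/sec represented by each 1x of ACT rating
ACT_WRITES_PER_X = 1000  # writes/sec represented by each 1x of ACT rating

rating_per_drive = 105.5  # ACT rating of one Micron 9200 MAX (from the post)
drives = 4

# Option from the post: dedicate 1 drive to the less-used namespaces,
# and pool the remaining 3 drives for the IOPS-hungry ones.
hungry_pool = (drives - 1) * rating_per_drive   # 316.5x, the "316X" above

print(hungry_pool)                    # aggregate ACT rating of the 3-drive pool
print(hungry_pool * ACT_READS_PER_X)  # sustainable reads/sec for that pool
print(hungry_pool * ACT_WRITES_PER_X) # sustainable writes/sec for that pool
```

This is only a capacity-planning back-of-the-envelope; real headroom should still be validated with ACT or a production-like benchmark.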

There is no reason you can't have unevenly sized partitions on each SSD. For example, if you partition each SSD into 3 x 670G + 3 x 1TB, you get a total of 24 devices to allocate to your namespaces. Make sure that a given namespace has evenly sized devices on each SSD. Another namespace that needs less storage can take one small device from each SSD.
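As an illustrative aerospike.conf fragment (the namespace names and device paths are hypothetical, and the other parameters are placeholders), such a layout could give each namespace one evenly sized partition from each of the 4 SSDs:

```
namespace big_ns {
    replication-factor 2
    memory-size 16G
    storage-engine device {
        # one 1TB partition from each of the 4 SSDs
        device /dev/nvme0n1p4
        device /dev/nvme1n1p4
        device /dev/nvme2n1p4
        device /dev/nvme3n1p4
        write-block-size 128K
    }
}

namespace small_ns {
    replication-factor 2
    memory-size 4G
    storage-engine device {
        # one 670G partition from each of the 4 SSDs
        device /dev/nvme0n1p1
        device /dev/nvme1n1p1
        device /dev/nvme2n1p1
        device /dev/nvme3n1p1
        write-block-size 128K
    }
}
```

Spreading each namespace evenly across all physical drives keeps the per-drive load balanced regardless of which namespace is busiest.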

If you’re an enterprise customer, please open a support ticket to discuss this in specific detail.


Thanks Ronen.

Can we do more than 24 partitions, since we have 24 cores?


You can go up to 128 devices per namespace in Aerospike 4.2 (64 devices per namespace in earlier versions). However, it's not recommended to have more devices than you have cores. You can benchmark and compare the two different setups and see for yourself what the impact is and whether it's acceptable to you.

Again, your 4 x 6.4T Micron 9200 MAX setup looks familiar. If you're an enterprise customer, please email to open a new support case; we can discuss your namespaces and see how the SSDs should be partitioned.


Should device partitioning be based on cores or threads? Also, you mentioned above that with Aerospike 4.2 we can go up to 128 devices per namespace. What if we have multiple namespaces? Can we have 24 partitions for each namespace (as we have 24 cores), or 24 partitions total for the whole node? Right now we have two namespaces, but in the future we might add more namespaces to the cluster. We have 24 cores and 48 threads per node.


Partitioning of the device should be based on cores. But that's a rule of thumb, a high-level suggestion for systems with a mid-level number of cores (24 would probably qualify, but 56 or more may not). It is always recommended to build a minimal system and benchmark it with loads similar to (and higher than) peak production to validate the performance.

You can have 24 partitions for each namespace, but with more namespaces the performance may depend on the workload each namespace receives (in terms of record sizes, the update/delete-to-read ratio (updates and deletes drive defragmentation), throughput, etc.). It is not possible to really guess beyond that. More devices give you more write buffers (avoiding contention at the device level) and more defragmentation threads, but having too many partitions on the same physical drive could adversely impact performance as well.