Ratio of disk to memory usage



I am storing records both in memory (RAM) and on traditional (non-SSD) disks. My RAM-to-disk usage ratio is 1:3, and I am not able to understand this large disk consumption compared to RAM.

My record size is 256 Bytes (it is actually smaller, but 256 is the nearest multiple of 128). I am using the default write-block-size (which I believe is 1 MB for non-SSD devices).

I earlier suspected that a 1 MB block might hold only one record, and that this could be the reason. But I learned from another thread on this forum that a disk block may contain multiple records.

So, what could be the reason for the large disk consumption here?

Also, while doing capacity planning, is there any rule of thumb that disk must be some multiple of RAM?

Kindly suggest. Thanks.


See the Capacity Planning Guide for reference.

Each record on disk contains metadata that its in-memory counterpart does not. This additional metadata is needed to rebuild the index in the event of a cold start, and the amount of metadata depends on the structure of your records. For instance, as you will see in the Capacity Planning Guide, the set name and the number of bins in a record have a much higher impact on disk storage than on memory storage. The reason is that we keep in-memory tables tracking set and bin names, so the structure in RAM only needs a reference into those tables rather than the full bin or set name for each record. This is a classic space/time tradeoff: the table lookup in memory is very fast and memory is typically a valued commodity, whereas on disk, reading from an on-disk table in addition to the data would be slow, and since disk space is relatively cheap, it is more cost effective to store the full names with each record.

In your case, since you are writing about 256 Bytes per record, the metadata is likely pushing your records into three 128-byte units, or 384 Bytes of storage per record.
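The arithmetic above can be sketched as a small sizing estimate. This is a rough illustration only: the 64-byte metadata figure and the 128-byte rounding granularity are assumptions for the example, not exact Aerospike internals — consult the Capacity Planning Guide for the real per-record overhead formula.

```python
import math

# Hedged sketch: estimate on-disk record size.
# ASSUMPTIONS (illustrative, not exact Aerospike internals):
#   - ~64 bytes of per-record disk metadata (varies with set/bin names)
#   - on-disk records rounded up to a multiple of 128 bytes
ALIGNMENT = 128          # bytes; assumed rounding granularity on disk
ASSUMED_METADATA = 64    # bytes; placeholder overhead value

def disk_size(data_bytes: int,
              metadata_bytes: int = ASSUMED_METADATA,
              alignment: int = ALIGNMENT) -> int:
    """Round (data + metadata) up to the next alignment boundary."""
    raw = data_bytes + metadata_bytes
    return math.ceil(raw / alignment) * alignment

# A 256-byte record plus metadata lands in three 128-byte units:
print(disk_size(256))  # 384
```

Under these assumptions, 256 bytes of data plus 64 bytes of metadata is 320 bytes, which rounds up to the next 128-byte boundary at 384 bytes per record on disk.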


Thanks @kporter. You have been a great help in understanding Aerospike :smile: