Record Size Limit in Memory Mode


Is there any limit to record size in data-in-memory mode? What if data is stored in RAM as well as on disk?


The record size limit comes from write-block-size (1 MB max, including all overhead), which applies to persistent storage, whether raw blocks on SSD or file-based storage on HDD. If you store records purely in RAM, you are limited only by the various buffers in the path from client to Aerospike. If you use disk storage with data-in-memory enabled, the 1 MB limitation still applies.
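As a sketch, a namespace using disk storage with data-in-memory might look like this in aerospike.conf (the namespace name, file path, and sizes are illustrative, not a recommendation):

```
namespace demo {
    replication-factor 2
    memory-size 4G

    storage-engine device {
        file /opt/aerospike/data/demo.dat
        filesize 16G
        data-in-memory true
        write-block-size 1M    # caps record size, including all overhead
    }
}
```

With `storage-engine memory` instead of the `storage-engine device` block, no write-block-size applies.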


I also observed that in data-in-memory mode, as the size of a map object increases, writes into the map (insert/update) slow down. I was not expecting this with data in memory. What could be the reason for this?


Any update is a full rewrite. So with a map-type bin, if you are inserting new key-value pairs into the map, the entire record has to be re-written to a new location; there is no in-place modification. The entire record is stored contiguously in memory.
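A conceptual sketch of that behavior in Python (this is an illustration, not the server's actual code path): updating a single key in a map bin still rebuilds and relocates the whole record.

```python
# Illustration of full-rewrite semantics: a record is one contiguous object,
# so a single map-key update produces a brand-new record object rather than
# modifying the existing one in place.
def map_put(storage, record_key, map_key, value):
    record = storage[record_key]              # read the full record
    new_record = {**record, map_key: value}   # rebuild it with the one change
    storage[record_key] = new_record          # write it back as a new object

store = {"rec1": {"a": 1}}
old = store["rec1"]
map_put(store, "rec1", "b", 2)
print(store["rec1"] == {"a": 1, "b": 2})  # True
print(store["rec1"] is old)               # False: the whole record moved
```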

  1. Is this storage-engine memory or storage-engine device with data-in-memory true?
  2. What is the replication factor for the namespace?


I think prior to the sorted maps introduced in 3.8.3, maps were serialized using msgpack. As for the storage-space difference between bins and maps: I think maps are still serialized with msgpack, but additional info is added for key/index-based sorting. Since msgpack offers compaction, I would expect any change to be a full rewrite regardless of whether storage is memory or disk. So I would think it is a full rewrite, but perhaps kporter may shed more light on the topic.


You are doing add() operations. Refer to the performance table in the map documentation (link above). Your performance is likely dominated by O(N) for unsorted map types and O(log N) for sorted ones.

There are a number of O(1) operations in the table. Those are still affected by record size because "every modify op has a +C for copy-on-write, allowing rollback on failure." memcpy performance scales linearly with size, but it is orders of magnitude cheaper than the map-traversal operations.
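A small Python illustration of that "+C" term (a simulation of the idea, not Aerospike internals): even when the map operation itself is cheap, the copy taken for rollback grows with the record, so per-op cost scales with record size.

```python
import json

def modify_with_copy(record, key, value):
    """Apply one map update the way a copy-on-write engine would:
    snapshot the whole record first, then mutate a copy of it."""
    snapshot = json.dumps(record).encode()  # full-record copy kept for rollback
    updated = dict(record)                  # copy on write
    updated[key] = value
    return updated, len(snapshot)           # bytes copied for this single op

small = {f"k{i}": i for i in range(10)}
large = {f"k{i}": i for i in range(10_000)}

_, small_cost = modify_with_copy(small, "new", 1)
_, large_cost = modify_with_copy(large, "new", 1)

# The update itself is identical, but the copy cost tracks record size.
print(small_cost < large_cost)  # True
```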


I am referring to storage-engine memory only. The replication factor is 2.

Basically I am trying to compare this with Redis HSET, where a hash can be as big as 100 MB to 1 GB. Can such a record be kept as a single record (map) in Aerospike with similar performance, or is that not feasible for records of this type?


Strictly speaking… 100 MB yes, 1 GB no.


I am unable to insert data greater than 10 MB per record in Aerospike with storage-engine memory. To my understanding, there isn't a record size limit when using storage-engine memory.

Is this limitation specific to any version? Currently I'm using Aerospike v3.11, with the Aerospike PHP client.


Which version of the PHP client are you using?

Also could you share the error returned by the client?

And anything you could tell us about the record you are storing could be useful as well.


The PHP version is 5.3.29. Error message: one of the tenants has breached the 10 MB per-record limit in the Aerospike cache. It's a JSON record consisting of some data stored as key-value pairs. Is there a "maxbuffersize" variable in the PHP client? Setting that resolved the issue for me in the Aerospike Go client.


Your PHP version is older than what our current PHP client supports, so you must be on our much older PHP client. I'm not sure whether it had such a restriction, but I would suggest testing with the newer PHP client to see if the issue exists there as well.