Device overload when map size is too big

map

#1

We got a “device overload” error after the program had run successfully in production for a few months. We found that some maps are very large, possibly containing more than 1,000 entries.

After inspecting the source code, I found that the cause of the “device overload” error is that the write queue exceeds its limit, and the length of the write queue is related to how efficiently writes are processed.

So I checked the “particle_map” file, and I suspect that the whole map is rewritten even if we only insert one key-value pair into the map.

But I am not sure about this. Any advice?

Cross-posted here: https://stackoverflow.com/questions/50535678/aerospike-device-overload-error-when-size-of-map-is-too-big


#2

So I checked the “particle_map” file, and I suspect that the whole map is rewritten even if we only insert one key-value pair into the map.

You are correct. When using persistence, Aerospike does not update records in place. Each update/insert is buffered into an in-memory write block which, when full, is queued to be written to disk. This queue allows for short bursts that exceed your disk's max IO, but if the burst is sustained for too long the server will begin to fail writes with the ‘device overload’ error you mentioned. How far behind the disk is allowed to get is controlled by the max-write-cache namespace storage-engine parameter.
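For reference, max-write-cache lives in the namespace's storage-engine block of aerospike.conf. The snippet below is only an illustration (the namespace name, device path, and sizes are placeholders, not recommendations); raising it buys more buffering headroom but does not fix a disk that is persistently too slow:

```
namespace test {
    storage-engine device {
        device /dev/sdb
        write-block-size 128K
        # Allow the write queue to grow larger before writes start
        # failing with 'device overload' (default is 64M).
        max-write-cache 256M
    }
}
```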

You can find more about our storage layer at https://www.aerospike.com/docs/architecture/index.html.


#3

Take a look at this article as well: Device Overload


#5

But am I right about a single put into a map? Does Aerospike really need to rewrite the whole map even if we only put one key-value pair?


#6

Yes. Records always have to be written contiguously… it would not be possible to read them efficiently otherwise…
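To see why large maps make this expensive, here is a back-of-the-envelope sketch of the resulting write amplification. All the numbers are hypothetical assumptions (entry size, insert rate) except the ~1,000-entry map size from the original post:

```python
# Illustrative only: if every single-key map insert rewrites the whole
# record contiguously, the disk load scales with the map size, not with
# the size of the inserted entry.

ENTRY_BYTES = 100        # assumed average size of one map entry
MAP_ENTRIES = 1_000      # map size mentioned in the original post
INSERTS_PER_SEC = 2_000  # assumed sustained insert rate across the cluster

# Whole record rewritten on each single-key insert.
record_bytes = ENTRY_BYTES * MAP_ENTRIES

# Sustained bytes/sec the storage device must absorb.
disk_bytes_per_sec = record_bytes * INSERTS_PER_SEC

print(f"each insert rewrites ~{record_bytes / 1024:.0f} KiB")
print(f"sustained disk write load ~{disk_bytes_per_sec / (1024 ** 2):.0f} MiB/s")
# → each insert rewrites ~98 KiB
# → sustained disk write load ~191 MiB/s
```

If that sustained rate exceeds what the device can drain, the write queue grows past max-write-cache and writes start failing with ‘device overload’, which matches the behavior described above.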