Hi,
We have a use case to store large data blobs (128-256 MB) in Aerospike, using HDDs as the storage media. Is it possible to configure Aerospike for this, or is 1 MB per record a hard limit?
Thanks, Jiten
Jiten,
This is currently a hard limit imposed by the underlying storage system. We have a variety of product plans that would allow larger objects, but they are not here yet.
At the application level, simply slice your object into smaller chunks. The async API is well suited to issuing multiple writes concurrently, and the batch read API lets you fetch all the chunks in a single call (see the sketch below).
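
Here is a minimal sketch of that chunking pattern using the Java client. The key naming scheme (`blobId + ":" + i`), the bin names (`data`, `chunks`), the header record holding the chunk count, and the chunk size are all illustrative assumptions, not a prescribed layout. It uses synchronous `put` calls for brevity; the async client's `put` overloads (which take an event loop and a listener) would let you issue the chunk writes concurrently, as suggested above.

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Bin;
import com.aerospike.client.Key;
import com.aerospike.client.Record;

import java.io.ByteArrayOutputStream;
import java.util.Arrays;

public class BlobChunker {
    // Illustrative chunk size: just under the 1 MB record limit,
    // leaving headroom for record overhead.
    private static final int CHUNK_SIZE = 1024 * 1024 - 1024;

    // Write a large blob as N chunk records, plus a small header
    // record that stores the chunk count under the bare blobId key.
    public static void writeBlob(AerospikeClient client, String ns, String set,
                                 String blobId, byte[] blob) {
        int chunks = (blob.length + CHUNK_SIZE - 1) / CHUNK_SIZE;
        for (int i = 0; i < chunks; i++) {
            int from = i * CHUNK_SIZE;
            int to = Math.min(from + CHUNK_SIZE, blob.length);
            Key chunkKey = new Key(ns, set, blobId + ":" + i);
            client.put(null, chunkKey, new Bin("data", Arrays.copyOfRange(blob, from, to)));
        }
        client.put(null, new Key(ns, set, blobId), new Bin("chunks", chunks));
    }

    // Read the header, fetch every chunk in a single batch call,
    // and reassemble in key order. (Error handling omitted for brevity.)
    public static byte[] readBlob(AerospikeClient client, String ns, String set,
                                  String blobId) {
        Record header = client.get(null, new Key(ns, set, blobId));
        int chunks = header.getInt("chunks");
        Key[] keys = new Key[chunks];
        for (int i = 0; i < chunks; i++) {
            keys[i] = new Key(ns, set, blobId + ":" + i);
        }
        Record[] records = client.get(null, keys); // batch read; results match key order
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (Record r : records) {
            byte[] part = (byte[]) r.getValue("data");
            out.write(part, 0, part.length);
        }
        return out.toByteArray();
    }
}
```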
Aerospike is also not built for rotational disks. It can't migrate and move data effectively on a system with such slow seek times. Why would you consider such a high-performance database over such a slow storage layer? It doesn't make sense. Most people would use HDFS, Gluster, or something similar for storing very large objects on rotational disks. What is the issue with that?
Thanks for the quick response. Our product is based on a microservices architecture, where service instances are created and destroyed based on load. There is only one use case where we need this blob storage feature, and adding a new component like HDFS or Gluster to the ecosystem adds to the "cost" of the product. By cost I mean the need to monitor these components and scale them independently based on load.
We can definitely try splitting the byte array.
Thanks, Jiten