I’d like to store binary blobs larger than 1 MB (usually between 1 MB and 10 MB) using the SSD storage engine. Is this possible?
You’d chop it up on the application side and store the pieces as separate records, then fetch the parts back with either a batch read or a series of single-record reads (for example, if you’re streaming it). You can store the data in a namespace that is on SSD, and the metadata about the blob in a faster storage option, such as an in-memory namespace, as sketched below.
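For illustration, here’s a minimal sketch using the Python client. The namespace names (`ssd_ns` on SSD, `mem_ns` in memory), the set names, and the chunk size are all placeholder choices of mine, not anything prescribed; in particular, pick a chunk size that keeps each record (bin data plus overhead) under your namespace’s configured write-block-size.

```python
import aerospike

# Placeholder chunk size; must keep each record under write-block-size.
CHUNK_SIZE = 128 * 1024

config = {'hosts': [('127.0.0.1', 3000)]}
client = aerospike.client(config).connect()

def put_blob(blob_id, data):
    """Split the blob into chunks, write each chunk as its own record on
    SSD, and keep a small metadata record in the in-memory namespace."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    for n, chunk in enumerate(chunks):
        client.put(('ssd_ns', 'blobs', '%s:%d' % (blob_id, n)), {'part': chunk})
    client.put(('mem_ns', 'blob_meta', blob_id),
               {'size': len(data), 'parts': len(chunks)})

def get_blob(blob_id):
    """Read the metadata record, then batch-read all chunk records."""
    _, _, meta_bins = client.get(('mem_ns', 'blob_meta', blob_id))
    keys = [('ssd_ns', 'blobs', '%s:%d' % (blob_id, n))
            for n in range(meta_bins['parts'])]
    records = client.get_many(keys)  # one batch round trip, in key order
    return b''.join(bins['part'] for _, _, bins in records)
```

For streaming, you’d replace the `get_many()` batch with a loop of single `get()` calls, yielding each chunk as it arrives instead of joining them.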
There may be simpler options for this, but it can be done in Aerospike. You’ll want a few optimizations and some attention to tuning:
- Use a REPLACE ‘exists’ write policy instead of UPDATE. This data will never change, so you might as well save the read IOP that precedes UPDATE’s merge-and-write (see the sketch after this list).
- Use the TTL wisely. Will these objects ever expire? If not, be careful with how you tune the high-water marks for memory and disk: you will not want evictions to happen.
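To make both bullets concrete, here’s a sketch of a single chunk write with the Python client; the key, bin, and namespace names are placeholders. I’m using `POLICY_EXISTS_CREATE_OR_REPLACE`, which behaves like REPLACE in that it overwrites the record without a prior read, but also lets the first write create the record:

```python
import aerospike

config = {'hosts': [('127.0.0.1', 3000)]}
client = aerospike.client(config).connect()

# Unlike UPDATE, which merges bins and so must read the existing record
# first, CREATE_OR_REPLACE writes the record wholesale with no prior read.
write_policy = {'exists': aerospike.POLICY_EXISTS_CREATE_OR_REPLACE}

client.put(
    ('ssd_ns', 'blobs', 'img42:0'),            # placeholder key
    {'part': b'\x00' * 1024},                  # placeholder chunk data
    meta={'ttl': aerospike.TTL_NEVER_EXPIRE},  # -1: record never expires
    policy=write_policy,
)
```

One caveat on the TTL point: records written to never expire are not candidates for eviction, so if a high-water mark is crossed the node can’t reclaim space that way, and sustained growth will eventually hit the stop-writes threshold. Size your capacity with that in mind.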
Thanks for the advice. The application here stores a set of images along with a bunch of associated metadata about each image. Given Aerospike’s nice scalability, I thought storing the actual image content (the compressed JPEG binary blob) in the database itself would make scaling a little simpler than keeping the metadata in Aerospike and the images in some sort of distributed filesystem.
From your answer, it looks like this is possible, but also a bit outside Aerospike’s intended use.