Aerospike for large objects - LDT & LLIST (looking for alternatives to MongoDB and S3)

If your namespace is configured to use SSD, you are limited by the write-block-size (at most 1MB) — a single record cannot exceed it. If you're using data-in-memory without persistence you can store larger objects, but be aware that such large objects will run into networking-related slowdowns as they're transferred between the client and the server cluster.
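For reference, the limit comes from the storage-engine configuration in the namespace stanza of aerospike.conf. This is an illustrative fragment (device path and namespace name are placeholders), not a drop-in config:

```
namespace test {
    replication-factor 2
    storage-engine device {
        device /dev/sdb
        # Maximum size of a single record on SSD; cannot exceed 1M
        write-block-size 1M
    }
}
```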

Using LList will not necessarily help, because a single object within the LList is still bound by the write-block-size. What happens with LList is that the list is spread across multiple physical records (subrecords), but any single element of the list must still fit in one physical record.

It makes more sense for you to chop your objects into multiple parts, each of which fits in a record, then use either batch reads or queries to fetch the components and combine them on the client side. Alternatively, you can chop them into Large Ordered List elements and assemble them in your application when you read the record.
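The chunk-and-reassemble approach can be sketched like this. It's a minimal illustration using a plain dict as a stand-in for the key-value store — in a real application each `db[key] = ...` would be a client put and each read a batch get; the key naming scheme and 1MB chunk size are my assumptions:

```python
CHUNK_SIZE = 1024 * 1024  # stay at or under the namespace write-block-size


def chunk_keys(base_key, payload):
    """Yield (key, chunk) pairs, e.g. ('photo:0', ...), ('photo:1', ...)."""
    for i in range(0, len(payload), CHUNK_SIZE):
        yield f"{base_key}:{i // CHUNK_SIZE}", payload[i:i + CHUNK_SIZE]


def store(db, base_key, payload):
    """Write each chunk as its own record, plus a header record with the count."""
    parts = list(chunk_keys(base_key, payload))
    for key, chunk in parts:
        db[key] = chunk          # one put per chunk record
    db[base_key] = len(parts)    # header record: number of chunks


def fetch(db, base_key):
    """Read the header, then fetch all chunks (a batch read in practice)."""
    count = db[base_key]
    return b"".join(db[f"{base_key}:{i}"] for i in range(count))
```

Storing the chunk count in a small header record lets the client issue a single batch read for all chunk keys instead of probing for them one by one.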

For (3), read the architecture and distribution article to understand how the cluster rebalances data automatically as it grows.

Regarding (4), asbackup can handle any namespace or set as long as you have enough disk space for the backup files it generates.

However, my real question is why you would store such incredibly large records in any database. Things such as files should live in a CDN or be served by web servers tuned for delivering files (stripped of scripting). Databases aren't very efficient at this type of work.