Replacing a BLOB that is split into several parts


I need to store a large BLOB (more than 1 MB), so I decided to split it into several parts with keys of the form "baseKey-count". But if I later replace an existing composite record with a smaller one, I lose some space that is only reclaimed after the TTL expires. Example: a BLOB with 3 parts:

Header: {key: "some-key", count: 3}
Elms: {key: "some-key0", body: "…"}, {key: "some-key1", body: "…"}, {key: "some-key2", body: "…"}

Then I replace it with another:

Header: {key: "some-key", count: 2}
Elms: {key: "some-key0", body: "…"}, {key: "some-key1", body: "…"}

So the record with key "some-key2" is left behind. How can I prevent that?


It is not clear what you are trying to prevent. You start with 4 records:

1 - master record: some-key | 3
2 - part: some-key0 | data
3 - part: some-key1 | data
4 - part: some-key2 | data

Then you update to smaller BLOB:

1 - master record: some-key | 2  (updated)
2 - part: some-key0 | data (updated)
3 - part: some-key1 | data (updated)

some-key2 is either explicitly deleted by you or is gone when its TTL expires.

So, what is it you want to prevent? (Now, there are other holes in this scheme … but that is a separate issue.)
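To make the explicit-deletion option above concrete, here is a minimal sketch in Python. A plain dict stands in for the key-value store (the `store` dict and `put_blob` helper are illustrative, not Aerospike API calls): when replacing a composite BLOB, read the old part count from the master record, write the new version, and then delete any parts beyond the new count instead of waiting for the TTL.

```python
def put_blob(store, base_key, parts):
    """Write a composite BLOB and clean up leftover parts
    from a previous, larger version."""
    # Read the previous part count from the master record (0 if absent).
    old_count = store.get(base_key, {}).get("count", 0)
    # Write the new parts, then the master record.
    for i, body in enumerate(parts):
        store[f"{base_key}{i}"] = {"body": body}
    store[base_key] = {"count": len(parts)}
    # Explicitly remove stale parts left over from the old version.
    for i in range(len(parts), old_count):
        store.pop(f"{base_key}{i}", None)

store = {}
put_blob(store, "some-key", ["a", "b", "c"])   # 3 parts
put_blob(store, "some-key", ["x", "y"])        # shrink to 2 parts
print(sorted(store))  # stale "some-key2" is gone
```

In a real cluster each of these writes and deletes is a separate single-record transaction, which is exactly why the failure scenarios discussed later in this thread matter.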


Neither of those two ways (explicit delete or waiting for TTL) seems good to me, which is why I asked for the best practice for this case.

Now, there are other holes in this scheme

What holes are there?


"Both"? We are only discussing one way here, so I am still not able to understand your initial question.

You have to think of each record update as a separate transaction that may or may not complete. When a transaction completes in Aerospike and the client gets an acknowledgement, the client can be sure that the transaction completed. If the client does not get an acknowledgement due to a network error in some segment, the client cannot be sure what the server has: it may have the record update or not. For a single-record transaction, you can design your code to read back and retry if the client times out waiting for an ack.

So think in terms of the client failing in the middle of a multi-record transaction: after updating the master record the client dies, or a network failure occurs between the individual record transactions.

The other aspect you must worry about is multiple clients updating the same group of records. How do you lock out other clients while one is doing a multi-record transaction? What if, after taking a lock, the client dies? How do you release the lock and roll back a partially completed update? How do you maintain a consistent state?

You are getting into "consistent multi-record transactions". It takes a lot more work on the client side to effectuate multi-record transactions with rollback; you will have to keep track of the entire transaction in the application. This is true for any database that does not provide multi-record transactions at the database level.
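As a rough illustration of the locking problem described above (a sketch, not a production recipe), one common client-side pattern is a separate lock record created before the multi-record update and released afterwards; an expiry on the lock record is what lets other clients recover if the lock holder dies. The dict-based helpers below (`acquire_lock`, `release_lock`, and the `.lock` key suffix are all hypothetical names) show the acquire/release logic:

```python
import time
import uuid

def acquire_lock(store, lock_key, ttl_seconds=30):
    """Try to take a lock record; return an owner token,
    or None if another client holds a live lock."""
    now = time.time()
    lock = store.get(lock_key)
    if lock is not None and lock["expires"] > now:
        return None  # someone else holds a live lock
    owner = str(uuid.uuid4())
    store[lock_key] = {"owner": owner, "expires": now + ttl_seconds}
    return owner

def release_lock(store, lock_key, owner):
    """Release only if we still own the lock (it may have
    expired and been taken over by another client)."""
    lock = store.get(lock_key)
    if lock is not None and lock["owner"] == owner:
        del store[lock_key]

store = {}
token = acquire_lock(store, "some-key.lock")
# ... perform the multi-record update here ...
release_lock(store, "some-key.lock", token)
```

Note that in a real store the check-then-write inside `acquire_lock` must itself be a single atomic operation (for example, a create-only write that fails if the record already exists); with a plain dict in one thread that race cannot occur, which is exactly the simplification this sketch makes.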