Clarification about expirations during long rolling upgrade to 4.5.1+

We currently run a 3-replica Community Edition cluster. We would like to upgrade to a recent Community version without full downtime. A rolling upgrade means the cluster will run mixed versions for about two days due to rebalancing. We use record TTLs (which implies expirations, if I understand correctly), deletions, and UDFs (existing ones only; we are not planning to modify UDFs during the upgrade, so I guess that is not a problem?). Storage devices will be rolling-wiped to minimize risk.

After reading https://www.aerospike.com/docs/operations/upgrade/aerospike/special_upgrades/4.5.1/index.html do I understand correctly that:

  1. device wiping does not help, because the problem is an internal server protocol incompatibility?
  2. does TTL get replicated between mixed versions? (for example, via ‘touch’)
  3. records with master 4.5.1+ will not be expired on <4.5.1 nodes?
    3.1 records with master <4.5.1 will send unneeded expirations to 4.5.1+ replicas, but that is not a problem?
    3.2 what are the bad consequences? Let’s imagine there is enough reserved storage space. Then let’s imagine something goes wrong and, for example, mastership changes back again. Expired records won’t be returned to a client (and won’t be re-replicated?), will they?
    3.3 after the rolling upgrade is completed, if any of these not-expired records are left, will they be expired by the new version’s mechanism? For example, imagine 2 of 3 (or 1 of 3) machines run the new version and in the meantime some records on the 3rd one don’t get expired; then I upgrade that machine. Will it expire those records?
    While writing this, I recalled that records don’t really get deleted in place until nsup reviews them, and I got even more confused in the context of rebalances: what does the act of expiration actually mean, then? I also fail to understand prole-extra-ttl and what value I should set it to. If nsup already sees the expired records, why doesn’t it delete them, and why is it a TTL rather than a flag? Why does it matter whether it is set to 2 seconds, 2 minutes, or whatever? Does it exist only in certain versions and was it removed later? Does that mean I must remain on an older version to use it, or is it somehow enabled automatically starting from some newer version?

And so, to sum it up: does enabling prole-extra-ttl for rolling restarts fully mitigate the problem with expirations and deletions during long rebalances?
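For concreteness, here is how I currently picture the void-time arithmetic. This is only my own sketch of what I think prole-extra-ttl does (defer expiration on non-master copies so the master expires first and replicates the delete); the function and numbers are mine, not anything from the docs:

```python
import time

def is_expired(void_time, now=None, extra_ttl=0):
    """A record is expired once 'now' passes its void-time.

    void_time is the absolute expiration timestamp (write time + TTL).
    extra_ttl models my understanding of prole-extra-ttl: on a
    non-master copy, expiration is deferred by this many seconds,
    so the master normally expires the record first.
    """
    now = time.time() if now is None else now
    return now > void_time + extra_ttl

write_time = 1_000_000
ttl = 300                       # record TTL in seconds
void_time = write_time + ttl

# Master view 1 second past void-time: expired.
print(is_expired(void_time, now=write_time + 301))                 # True
# Prole view with prole-extra-ttl = 60: not expired yet.
print(is_expired(void_time, now=write_time + 301, extra_ttl=60))   # False
# Prole view well past void-time + extra_ttl: expired even without
# hearing from the master.
print(is_expired(void_time, now=write_time + 400, extra_ttl=60))   # True
```

If this mental model is right, it would also explain why the value matters: it is a grace period, not an on/off flag.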

Expired records will appear deleted to the client, but will still occupy memory. Note that you can avoid this with the prole-extra-ttl configuration mentioned in the upgrade guide linked above.
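For reference, a minimal sketch of where that parameter lives on pre-4.5.1 servers; the namespace name and the 300-second value here are purely illustrative:

```
namespace test {
    replication-factor 3
    default-ttl 30d
    # Let nsup expire non-master copies once they are this many seconds
    # past their void-time (illustrative value; pre-4.5.1 servers only).
    prole-extra-ttl 300
    # storage-engine and other settings as in your existing config
}
```

The non-zero delay is what keeps the normal path intact: under healthy conditions the master expires the record and replicates the delete before the prole's grace period runs out.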

Yes, upgraded nodes will expire any missed records.

This is wrong. Aerospike doesn’t delete records from the disk in place. When a write block (wblock) on the device becomes eligible for defrag, its live records are moved to a new block, and the old block is moved to the free queue where it can be recycled for new writes. So records aren’t deleted until the wblock is recycled; additionally, there can be multiple copies of a record on the device, left over from prior defragmentation or from updates that wrote newer copies elsewhere. So without the Tomb Raider provided by Durable Deletes, you cannot know that a record has actually been removed from the device.
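A toy model of that lifecycle may make it clearer. The names and structure below are invented for illustration only; they are not the actual server internals:

```python
# Toy model: records live in write blocks (wblocks); a delete/expire only
# drops the index entry. The wblock is reclaimed later, when defrag copies
# its remaining live records forward and recycles the block.

class Device:
    def __init__(self):
        self.wblocks = []      # each wblock: list of (key, value); None = recycled
        self.index = {}        # key -> (wblock_id, slot)
        self.free_queue = []   # wblock ids available for reuse

    def write(self, key, value):
        if not self.wblocks or self.wblocks[-1] is None or len(self.wblocks[-1]) >= 4:
            self.wblocks.append([])              # open a new wblock
        wb = len(self.wblocks) - 1
        self.wblocks[wb].append((key, value))
        self.index[key] = (wb, len(self.wblocks[wb]) - 1)

    def expire(self, key):
        self.index.pop(key, None)                # index-only removal

    def on_device(self, key):
        # The old copy is still physically present until defrag.
        return any(k == key for wb in self.wblocks if wb is not None
                   for k, _ in wb)

    def defrag(self, wb):
        live = [(k, v) for k, v in self.wblocks[wb]
                if self.index.get(k, (None, None))[0] == wb]
        self.wblocks[wb] = None                  # old block content discarded
        self.free_queue.append(wb)               # ...and queued for reuse
        for k, v in live:
            self.write(k, v)                     # live records move forward

d = Device()
d.write("a", 1); d.write("b", 2)
d.expire("a")
assert d.on_device("a")        # expired but still physically on device
d.defrag(0)
assert not d.on_device("a")    # gone only after the wblock is recycled
assert d.on_device("b")        # live record was moved to a new block
```

Note that even this toy version shows the point above: between `expire` and `defrag`, the client sees the record as gone while its bytes are still on the device.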

Let me know if anything is still not clear.
