Understanding eviction behaviour with default-ttl 0

Hi,

I have an 8-node cluster running version 3.8.1 with the following namespace config:

namespace test {
  replication-factor 2
  memory-size 50G
  default-ttl 0            # use 0 to never expire/evict
  high-water-disk-pct 50   # how full the disk may become before the server
                           # begins eviction (expiring records early)
  high-water-memory-pct 60 # how full memory may become before the server
                           # begins eviction (expiring records early)
  stop-writes-pct 90       # how full memory may become before new writes
                           # are disallowed

  # storage-engine memory
  storage-engine device {
    # device /dev/sdb1
    # data-in-memory false

    file /opt/aerospike/data/test.data
    filesize 1000G         # 8 times RAM
    data-in-memory true

    # write-block-size 128K  # adjust block size for SSD efficiency;
                             # must be at least the largest record size
  }
}

The current RAM usage shown in the AMC dashboard is over ~61% on all nodes. I want to understand: I have set default-ttl to 0 and high-water-memory-pct to 60, so will the server start evicting records once RAM usage exceeds 60%? If yes, how can I stop it?

I have also sometimes found inconsistencies in records in my application (Golang) that queries Aerospike, although I am not sure whether this is caused by Aerospike or by application logic on my side.

Records stored with a TTL of 0 will never expire, and eviction (without going into detail) can be thought of as early expiration. Assuming your application doesn't override the default, records that are marked to never expire cannot be evicted (expired early).
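For reference, here is a minimal sketch of where a Go application can override that default, assuming the aerospike-client-go library and a hypothetical host, set, and bin. The Expiration field of the write policy is what matters: 0 means "use the namespace default-ttl".

package main

import (
    "log"

    as "github.com/aerospike/aerospike-client-go"
)

func main() {
    client, err := as.NewClient("127.0.0.1", 3000)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    key, err := as.NewKey("test", "demo", "user-1")
    if err != nil {
        log.Fatal(err)
    }

    // Expiration 0 means "use the namespace default-ttl"; with default-ttl 0
    // in the config above, such records never expire and cannot be evicted.
    wp := as.NewWritePolicy(0, 0)

    // Setting an explicit TTL here would override the namespace default and
    // make the record eligible for expiration/eviction again, e.g.
    // wp.Expiration = 3600 (one hour) or 0xFFFFFFFF (never expire).

    if err := client.Put(wp, key, as.BinMap{"greeting": "hello"}); err != nil {
        log.Fatal(err)
    }
}

So as long as your writes leave Expiration at 0, the never-expire behaviour from default-ttl 0 holds and nothing can be evicted.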

@kporter I have one more question. Is it possible to change the memory-size of a namespace in an existing cluster? I want to increase the memory-size of test and reduce the memory-size of another namespace.

memory-size is dynamic; the only restriction is that the server only allows you to decrease it by half.

Can we increase it?

I currently have 64 GB RAM nodes with 2 namespaces, test and test1, both in memory; test is also persisted to HDD. test has memory-size 50 GB and test1 has 10 GB.

Now I have 2 questions:

  1. Can I reduce test1's memory-size and increase test's?
  2. I have a 13-node cluster (all 64 GB, with these 2 namespaces). I want to upgrade the RAM to 256 GB and reduce the cluster size. Is this possible without a complete cluster shutdown? If yes, how?

Note: I am upgrading the server from 3.8.1 to 3.12.1.2.

Yes, you can raise the memory-size.

It depends. While adding the new nodes and eventually removing the old ones, migrations will redistribute partitions; you may eventually reach a point where the distribution overwhelms a remaining lower-capacity node.

3.14 allows rack-aware to be changed dynamically. You can sidestep this problem by dynamically placing the old nodes into rack 1 and starting all of the new nodes at once. Once the new nodes are in the cluster and migrations complete, you can stop all of rack 1 (the old nodes). The problem with this solution is that there will be a period where your replication factor is reduced by one.
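As a rough sketch (the parameter name is assumed from the 3.14 rack-aware feature; verify it against the documentation for your exact target version), the old nodes would carry a rack-id in the namespace stanza, set dynamically first and then persisted in the config file:

namespace test {
  # (existing namespace settings unchanged)
  rack-id 1   # old 64 GB nodes; the new 256 GB nodes keep the default rack-id of 0
}

The same rack-id has to be applied to every namespace on every old node so that the whole "rack" can be stopped together once migrations finish.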

Can I have a node with a larger memory-size for the test namespace than the other nodes in the cluster?

Can I add the 256 GB machines to the cluster one by one, then remove all of the old 64 GB machines one by one, and once I have only 256 GB nodes, dynamically increase the memory-size of the test namespace from 50 GB to 240 GB? If yes, do I also have to change it in the configuration files? Should I change it dynamically on all nodes at once?

The node with the lowest value for memory-size will limit the rest.

Possibly. If you will have at least as many new nodes as currently exist, then this is fine. If you want to decrease the cluster size by adding nodes with larger capacity, it is possible that while removing the old nodes the distribution begins to overwhelm the remaining old nodes.

Both: you can dynamically set the memory-size on all nodes at once, but you will also want to update the config file so that the next time a node is restarted it rejoins with the appropriate memory-size configuration.
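For example (a sketch only; verify the exact info-command syntax and accepted value format against the configuration reference for your server version), the dynamic change would be an info command along the lines of asinfo -v "set-config:context=namespace;id=test;memory-size=240G" issued against every node, paired with the matching edit in the config file (typically /etc/aerospike/aerospike.conf):

namespace test {
  replication-factor 2
  memory-size 240G   # raised from 50G once every node has 256 GB of RAM
  # (remaining settings unchanged)
}

That way a node that restarts comes back with the same memory-size it was already running with dynamically.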

Hi Team,

Any update on deleted data reappearing after a restart on Community Edition?

I am facing this issue with a single node and with a 2-node cluster as well.

Community Edition doesn't have durable deletes: Issues with cold-start resurrecting deleted records.
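For completeness, on Enterprise Edition (3.10+) durable deletes are requested per operation through the write policy. A minimal sketch with the Go client (hypothetical host and set names, assuming aerospike-client-go's DurableDelete field on WritePolicy):

package main

import (
    "log"

    as "github.com/aerospike/aerospike-client-go"
)

func main() {
    client, err := as.NewClient("127.0.0.1", 3000)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    key, err := as.NewKey("test", "demo", "user-1")
    if err != nil {
        log.Fatal(err)
    }

    // DurableDelete asks the server to write a tombstone instead of only
    // removing the in-memory index entry. It is only available on Enterprise
    // Edition servers (3.10+); without it, a cold start can read deleted
    // records back from disk.
    wp := as.NewWritePolicy(0, 0)
    wp.DurableDelete = true

    if _, err := client.Delete(wp, key); err != nil {
        log.Fatal(err)
    }
}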