Eviction Policy

Hi All,

I have a 2 node cluster with the same server configuration as mentioned below :

service {
        user root
        group root
        paxos-single-replica-limit 1
        pidfile /var/run/aerospike/asd.pid
        service-threads 4
        transaction-queues 4
        transaction-threads-per-queue 4
        proto-fd-max 15000
        nsup-period 1
}

logging {
        file /ext/log/aerospike/aerospike.log {
                context any warning
        }
}

network {
        service {
                address any
                port 3000
        }

        heartbeat {
                mode mesh
                port 3002
                mesh-port 3002
                interval 150
                timeout 10
        }

        fabric {
                port 3001
        }

        info {
                port 3003
        }
}

namespace testing {
        replication-factor 2
        memory-size 2G
        default-ttl 365d
        high-water-memory-pct 60
        high-water-disk-pct 80
        stop-writes-pct 90
        storage-engine device {
                device /dev/xvdh
                scheduler-mode noop
                write-block-size 128K
                data-in-memory false
                defrag-lwm-pct 50
                defrag-startup-minimum 10
        }
}

What we are trying to do is use the SSD for data storage. For the time being we have limited RAM to allocate, but since the policies dictate that RAM is used for indexing, the cluster stops writes rather than evicting, as the condition (memory_lwm >= index_sz) && (ssd_lwm >= data_sz) is not breached.

So, what I basically want to know is whether it's possible to have a config where data eviction from RAM can happen but not from the SSD, and where insertions to the SSD do not stop. (Pardon me if this is a weird config to want.)

We have gone through the discussion about eviction and expiration policies at the link:

And also the reasons that contribute to evictions.

I have gone through the forum and also the documentation, but I can't find a way out. Can you guys guide me through my problem and help me understand what I can do?

Best Regards!

I am not sure I fully understand the question, so let me go over some of the mechanisms relevant to your configuration (data-in-memory false):

  • For records to be inserted, you will always need room in RAM (for the index) and on disk (for the data).
  • When evictions happen, data will be expired ‘faster’ (and therefore deleted) and this will remove the related records from RAM (index) and from disk (data itself).
  • You cannot have eviction happen on RAM but not on SSD. Same with stop-writes: when in stop-writes, data will not be written to either RAM or SSD. A record will always have to have an INDEX (in RAM) and its data (on SSD in your case).
  • With your configuration, you should always be evicting (assuming your records have a TTL set) before hitting stop writes assuming your defragmentation is keeping up, since the high-water-memory-pct (60%) is below the stop-writes-pct (90%).
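The interaction between those two thresholds can be sketched as follows. This is a simplified model of the decision logic, not the server's actual nsup implementation, and it ignores other stop-writes triggers (such as running out of available device space); the percentages are the ones from the namespace config above:

```python
# Simplified model of eviction vs. stop-writes thresholds (not actual server code).
# Percentages are taken from the namespace config above.

HIGH_WATER_MEMORY_PCT = 60   # evict once index RAM use crosses this
HIGH_WATER_DISK_PCT = 80     # evict once device use crosses this
STOP_WRITES_PCT = 90         # refuse client writes past this (memory)

def namespace_state(memory_used_pct: float, disk_used_pct: float) -> str:
    if memory_used_pct >= STOP_WRITES_PCT:
        return "stop-writes"
    if memory_used_pct >= HIGH_WATER_MEMORY_PCT or disk_used_pct >= HIGH_WATER_DISK_PCT:
        return "evicting"   # records with a TTL are expired 'faster' and deleted
    return "normal"

print(namespace_state(65, 40))  # evicting
print(namespace_state(92, 40))  # stop-writes
```

Because 60 < 90, memory-driven eviction always kicks in well before stop-writes, which is the point made in the last bullet.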

Hopefully this helps, let me know otherwise.

To help with your sizing:

You need 64 bytes of RAM per record for the primary index. So, if you are not doing any secondary indexes and not storing data in memory, with 2GB of RAM and replication factor 2, you should be able to store:

  • 60% of 2GB is: 1.2GB (60% is default high-water-memory-pct above which you would evict).
  • 1.2GB can hold: (1.2 * 1024 * 1024 * 1024) / 64 = 20,132,659 records
  • Replication factor 2: 20,132,659 / 2 = 10,066,329 unique records
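The bullets above can be reproduced directly with integer arithmetic (64 bytes per primary-index entry, as quoted above):

```python
# Sizing arithmetic from the bullet list above, using integer math.

RAM_BYTES = 2 * 1024**3          # memory-size 2G
HIGH_WATER_MEMORY_PCT = 60       # evict above this fraction of RAM
INDEX_BYTES_PER_RECORD = 64      # primary-index entry size
REPLICATION_FACTOR = 2

usable = RAM_BYTES * HIGH_WATER_MEMORY_PCT // 100   # 1.2 GB of index headroom
records = usable // INDEX_BYTES_PER_RECORD          # total index entries
unique = records // REPLICATION_FACTOR              # unique records (RF 2)

print(records, unique)  # 20132659 10066329
```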

With a 2 node cluster, you could actually raise the high-water-memory-pct (for example to 80%), since losing a node will not cause any migrations as both nodes will always hold all records (half as master and half as replica). This will allow you to store more records (assuming you have enough disk storage, of course).
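For example, repeating the same sizing arithmetic with a hypothetical high-water-memory-pct of 80 (all other figures as above):

```python
# Same sizing math as above, with high-water-memory-pct raised to 80.
ram = 2 * 1024**3                 # memory-size 2G
records = ram * 80 // 100 // 64   # 64 bytes per primary-index entry
unique = records // 2             # replication-factor 2
print(unique)                     # 13421772 -- ~13.4M unique records vs ~10M at 60%
```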