Hello. Sorry if I'm duplicating an existing question, but I want to ask: is it possible to completely delete expired objects in the Aerospike Community Edition (something like a truncate command)? Without deleting expired objects we eventually run out of Avail% space, even though we have enough disk and RAM. I have an Aerospike cluster, so currently, to delete expired objects, I stop one of the nodes, delete its data files, and let the data replicate back from the other nodes. This is very inconvenient. Is there another way to delete expired objects? Thanks in advance.
Expired objects on disk should be reclaimed by the defrag thread, giving you back available space. There may be something unique about your object size or defrag settings that is preventing that space from being recovered. Temporarily bump defrag-lwm-pct up from 50 to around 80 and see if avail pct recovers. Make sure you set it back to 50 after this brief test - this is not the right long-term solution. If this helps, then we can see what is really going on.
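For reference, a sketch of how this change can be applied dynamically with asinfo, without restarting the node (run it on each node; the namespace name `test` here is a placeholder assumption):

```shell
# Raise defrag-lwm-pct dynamically on one node (repeat on each node).
# 'test' is a placeholder namespace name - substitute your own.
asinfo -v 'set-config:context=namespace;id=test;defrag-lwm-pct=80'

# Watch whether avail pct recovers.
asadm -e "show stat namespace like device_available_pct"

# Afterwards, set it back to the previous value.
asinfo -v 'set-config:context=namespace;id=test;defrag-lwm-pct=50'
```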
Hi.
Thanks for the help.
I tried the defrag-lwm-pct setting to increase Avail%, and it looks like it works for me: Avail% increased and expired objects decreased.
The only thing is that a higher defrag-lwm-pct leads to Device overload exceptions on the client side.
Is it possible to control the defragmentation intensity?
Could you share your config? Also include the size of the devices used for the namespaces.
Would be good to see the config indeed. The number of expired objects should not change based on defrag-lwm-pct.
service {
    paxos-single-replica-limit 1
    proto-fd-max 15000
}

logging {
    file /var/log/aerospike/aerospike.log {
        context any info
    }
}

network {
    service {
        address any
        port 3000
    }

    heartbeat {
        mode mesh
        port 3002
        mesh-seed-address-port 10.1.10.1 3002
        mesh-seed-address-port 10.1.10.2 3002
        interval 150
        timeout 30
    }

    fabric {
        port 3001
    }

    info {
        port 3003
    }
}

namespace test {
    memory-size 60G
    default-ttl 30d
    replication-factor 2
    high-water-memory-pct 90
    high-water-disk-pct 90

    storage-engine device {
        file /mnt/test
        filesize 420G
        defrag-lwm-pct 60
    }
}
To store Aerospike data we use a 1 TB SSD. Aerospike version: 4.3.0.10.
The expired_objects metric is the number of objects expired from this namespace on this node since the server started, i.e., expired_objects counts the objects that have been removed by the Namespace SUPervisor (NSUP) thread's periodic scan.
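If it helps, this per-node counter can be watched the same way as the other stats in this thread (a sketch using the asadm filter syntax already shown below):

```shell
# Show the cumulative per-node expiration counter for each namespace.
asadm -e "show stat namespace like expired_objects"
```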
The defrag-lwm-pct config causes a write-block to be enqueued for defragmentation when that write-block's utilization drops below the value of defrag-lwm-pct. This means that, in the worst case, the device could be defrag-lwm-pct full and have zero device_available_pct. Use caution when increasing defrag-lwm-pct, as it has a non-linearly increasing write-amplification effect. I recommend reading more about the defragmentation process and about the difference between available_percent and free_percent.
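To illustrate why the effect is non-linear, here is a rough back-of-envelope sketch. It uses the common 1/(1 - lwm) steady-state write-amplification estimate for log-structured storage cleaned at utilization lwm; this is my own approximation for intuition, not Aerospike's exact internal accounting:

```python
def defrag_write_amplification(defrag_lwm_pct: float) -> float:
    """Rough steady-state estimate: a wblock becomes eligible for defrag
    when it falls to defrag_lwm_pct full, so each user write drags along
    roughly lwm/(1 - lwm) bytes of rewritten data on top of itself."""
    lwm = defrag_lwm_pct / 100.0
    return 1.0 / (1.0 - lwm)

for pct in (50, 60, 80, 90):
    print(f"defrag-lwm-pct={pct}: ~{defrag_write_amplification(pct):.1f}x device writes")
```

Under this estimate, going from 50 to 80 raises the device write load from roughly 2x to roughly 5x, which fits the "Device overload" errors seen earlier in the thread.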
Based on the description, I believe you are likely not sized for the worst-case disk utilization of your configuration. You may want to allocate more of your SSD to this namespace. You could also lower the high-water-disk-pct config to be closer to the configured defrag-lwm-pct, to early-evict records, which encourages device write-blocks to become eligible for defragmentation. To be certain, could you share the output of:
asadm -e "show stat namespace like 'file|device'"
Thanks for the information.
Also, here is the asadm -e "show stat namespace like 'file|device'" output:
device_available_pct : 54 54
device_free_pct : 65 65
device_total_bytes : 912680550400 912680550400
device_used_bytes : 317961536896 317961537040
storage-engine.commit-to-device : false false
storage-engine.encryption-key-file : null null
storage-engine.file[0] : /mnt/test /mnt/test
storage-engine.file[0].age : -1 -1
storage-engine.file[0].defrag_q : 0 0
storage-engine.file[0].defrag_reads : 4929002 4927171
storage-engine.file[0].defrag_writes : 3004876 3002979
storage-engine.file[0].free_wblocks : 477273 477340
storage-engine.file[0].shadow_write_q: 0 0
storage-engine.file[0].used_bytes : 317961536896 317961537040
storage-engine.file[0].write_q : 0 0
storage-engine.file[0].writes : 2317242 2317241
storage-engine.filesize : 912680550400 912680550400
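As a sanity check, the reported percentages can be reproduced from the raw byte counts above (the 1 MiB write-block size is an assumption on my part; it is a common default for file-backed storage, not something stated in this output):

```python
# Figures from node 1 of the asadm output above.
total_bytes  = 912_680_550_400   # device_total_bytes
used_bytes   = 317_961_536_896   # device_used_bytes
free_wblocks = 477_273           # storage-engine.file[0].free_wblocks
wblock_size  = 1 << 20           # assumed 1 MiB write-block-size

# free = share of bytes not holding live records, in any write-block.
free_pct = int(100 * (1 - used_bytes / total_bytes))              # 65
# avail = share of bytes in completely empty write-blocks only.
avail_pct = int(100 * free_wblocks * wblock_size / total_bytes)   # 54

print(free_pct, avail_pct)
```

The 11-point gap between device_free_pct (65) and device_available_pct (54) is space sitting in partially filled write-blocks that defrag has not yet consolidated into fully free ones.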