Hi, I have a cluster of 8 nodes, and recently some problems occurred under high write pressure: the disk-avail-pct dropped extremely low and finally hit stop-writes.
Here's some info on my cluster:
The version of the cluster is “Aerospike Community Edition build 3.16.0.1”
The problems occurred on the namespace user_durable_list, which basically stores only list data.
At the beginning, the namespace was restored from a backup; its disk-usage-pct was about 14 and its disk-avail-pct about 85. Then it started processing write requests. The write pressure is high: monitoring shows disk IO utilization over 90%, and disk-avail-pct keeps dropping. You can see the values in the screenshot: on certain nodes it fell from 80+ to 10+, and it eventually drops to about 4 or 5, at which point writes stop.
I've read several forum articles about disk-usage-pct, but I still can't figure out why disk-avail-pct dropped so low. Please help.
BTW, I'm considering upgrading the cluster from 3.16 to the newest stable 4.x version. Are there any special steps I should know about?
What are your lwm and defrag-sleep set at? Can you share the output of sar -d -p so we can see disk utilization? If defrag isn't keeping up, we could make it more aggressive… but if your disks are already going over 90%, chances are you need more IO.
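For reference, something along these lines should pull that together from one node; the namespace name is taken from this thread, and the exact stat names vary a little between server versions:

    # Sample disk utilization once a second for ten samples (needs the sysstat package).
    sar -d -p 1 10

    # Current defrag-related settings for the namespace.
    asinfo -v "get-config:context=namespace;id=user_durable_list" | tr ';' '\n' | grep -E 'defrag|high-water'

    # Current usage and availability statistics for the namespace.
    asinfo -v "namespace/user_durable_list" | tr ';' '\n' | grep -E 'avail|used|free'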
When upgrading the Aerospike server from a version prior to 4.6 with the security feature enabled, make sure all Aerospike clients are running a compatible version. (Enterprise Only)
When upgrading the Aerospike server from a version prior to 4.5.1, follow the 4.5 special upgrade document, 4.5.1+ SMD protocol change.
When upgrading the Aerospike server from a version prior to 4.3, with a replication-factor of 2 or greater and the rack-aware feature in use for AP namespaces, refer to the special considerations knowledge base article for details. (Enterprise Only)
When upgrading the Aerospike server from a version prior to 4.2, follow the 4.2 special upgrade steps document, Storage Format Upgrade in 4.2 Release.
When upgrading the Aerospike server from a version prior to 3.14, please follow the upgrade and protocol-switching PREREQUISITES in the 3.13.0.11 documentation, Upgrade to 3.13.
I just noticed how differently some nodes are reporting. Are you running different hardware on various nodes? What kind of configuration is this? Any config diff? (asadm -e sh config diff)
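For anyone following along, that abbreviated asadm command expands to roughly the following; the diff modifier should show only the settings that differ between nodes:

    # Diff of configuration across all nodes in the cluster.
    asadm -e "show config diff"

    # Or scoped to namespace configuration only.
    asadm -e "show config namespace diff"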
The defrag-lwm-pct is set to 75%, and defrag-sleep is at its default of 1000.
The sar command shows that the disk util% on each node is mostly over 95%. It's true that IO is at its upper limit, but there are only a few write timeouts, and the average write latency is acceptable.
The nodes have the same hardware configuration, and there is no config diff.
I don't understand the write mechanism. Take the last node as an example: the info command shows that the disk avail% is only 13%, which I read as 87% of the disk being used. However, the disk used% is only 14%, so there is a discrepancy of 73% between the data size and the apparent disk usage.
So the question is: where has that 73% of disk space gone?
Setting defrag-lwm-pct to 75% means that Aerospike will defrag any block that is less than 75% full. Typically you want the LWM to be the same as the HWM. FAQ: Defragmentation
Basically, Aerospike will only write to free blocks, and blocks can only become free once they have been defragmented and made available. This parameter is usually set to 50%, matching high-water-disk-pct. Higher values generate more write amplification by defragging more aggressively; lower values save on IO, but if disk usage or write throughput is high, the avail% will drop.
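As a rough illustration only (the namespace name comes from this thread; the sizes and file path are placeholders, not the poster's actual config), the usual pairing looks like this in aerospike.conf:

    namespace user_durable_list {
        replication-factor 2
        memory-size 8G
        high-water-disk-pct 50      # disk eviction threshold, normally kept equal to defrag-lwm-pct

        storage-engine device {
            file /data/aerospike/user_durable_list.dat
            filesize 400G
            write-block-size 128K
            defrag-lwm-pct 50       # blocks below 50% full become defrag candidates
            defrag-sleep 1000       # microseconds the defrag thread sleeps between block reads (default)
        }
    }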
Why did you set defrag-lwm-pct to 75%? What's most likely happening is that it's queuing up more defrag work than it can process. You can find out by looking at 'defrag-q' in the Aerospike log/journal.
If defrag is not keeping up, lowering defrag-sleep should be the first step. Increasing defrag-lwm-pct has write-amplification effects, so it's best not to raise it unless there is no other option, and only if you have the IO headroom for it.
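If you do want to experiment, both knobs are dynamic, so something like the following can be applied without a restart (namespace name from this thread; values and log path are just examples):

    # Reduce the sleep between defrag block reads (the default is 1000 microseconds).
    asinfo -v "set-config:context=namespace;id=user_durable_list;defrag-sleep=500"

    # Bring defrag-lwm-pct back in line with the high-water mark.
    asinfo -v "set-config:context=namespace;id=user_durable_list;defrag-lwm-pct=50"

    # Then watch whether the defrag queue drains in the log.
    grep "defrag-q" /var/log/aerospike/aerospike.log | tail -n 5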
Actually, at the beginning defrag-lwm-pct was at the default value of 50%, but the disk avail% dropped too low. I thought that was because defrag wasn't fast enough, so I set the parameter to 75% to make defrag trigger earlier.
I have also tried setting defrag-sleep to 0, but it seems defrag writes have lower priority than data writes, and the disk avail% still dropped very low. I can't find any parameter that would raise the priority of defrag writes.
Since Aerospike only writes to free blocks, it makes sense that the disk avail% drops when defrag can't keep up with the data write speed. I originally thought the disk write mechanism might be something like "overwrite" or "merge rewrite".
So the essence of the problem is that write throughput has exceeded the IO capacity. I'll try to throttle the write speed then.
defrag-sleep=0 might disable it, I'm not sure. When you have defrag-sleep set low and lwm set to 50%, do you see anything in defrag-q? grep defrag-q aerospike.log?
I don't understand what you mean by 'printed when the data write speed has decreased'?
Is this using file-type data storage instead of raw devices? Can we get a snippet of your aerospike.conf namespace section, and the output of lsblk from one machine?
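Concretely, the pieces being asked for are roughly the following (the config path is the common default; adjust to your install):

    # Block devices, partitions and mount points on one node.
    lsblk

    # The namespace stanza from the server config.
    sed -n '/^namespace user_durable_list/,/^}/p' /etc/aerospike/aerospike.conf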
Sorry for not making it clear. What I mean is that the log shows defrag-q with a value of 65251, and that line was printed after I had already decreased the data write speed to lower the IO pressure. Before I decreased the write speed, the defrag-q value was even higher.
The namespace config:
I was curious whether you were doing something like that. Why not just pass device /dev/sdg instead of creating a filesystem and a file on it? You should get better performance that way, I believe. Also, in 4.2 they overhauled the storage mechanism to make files smaller (actually all storage, I think), so chances are you'll get a decent performance boost even if you do continue using files.
I'd recommend going to the latest version you're comfortable with (4.2+) and just using raw devices, unless there is some specific reason you're using files?
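Just to illustrate (the device name, file path and size are placeholders, not the actual config from this thread), the difference in the storage-engine stanza is only this:

    # File-backed storage on a filesystem (what the cluster appears to use today):
    storage-engine device {
        file /data/aerospike/user_durable_list.dat
        filesize 400G
        write-block-size 128K
    }

    # Raw device handed straight to Aerospike (no filesystem on the device):
    storage-engine device {
        device /dev/sdg
        write-block-size 128K
    }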
I took over several Aerospike clusters from a former colleague. These clusters have been running for at least 3 years, serving online services. Actually, I suspect these clusters were built on HDDs rather than SSDs at the very beginning.
Also, the way the clusters are deployed may prevent them from using raw devices instead of files: the clusters share each disk, meaning that on each disk there are files owned by different clusters.
I've read articles about namespace device config, and I have a few questions.
The upper size limit of a raw device is 2TB, but I couldn't find a parameter that would let me cap a device at a smaller size.
I guess a single raw device can't be shared by different namespaces, meaning a raw device can only be owned by one namespace? What would happen if two namespaces on different clusters used the same raw device?
You might want to see if you get better performance by using raw devices. The way to limit how much of a drive you give to a namespace is to use partitions: you can pass device /dev/sdx1, for example, and make sdx1 only 400GB or so; it's up to you. The only caution with multiple namespaces running off the same disk (say you give sdx1 to ns1 and sdx2 to ns2) is that if one namespace starts needing a lot of IO, it may affect the other. But it sounds like you're already in that situation.
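A rough sketch of the partitioning approach (the disk name, sizes and namespace names are hypothetical, and the partitions must not carry a filesystem, since Aerospike writes to them directly):

    # Split one empty disk into two partitions of the desired sizes.
    parted -s /dev/sdx mklabel gpt
    parted -s /dev/sdx mkpart primary 0% 400GiB
    parted -s /dev/sdx mkpart primary 400GiB 800GiB

    # Zero the partition headers so Aerospike sees clean devices.
    dd if=/dev/zero of=/dev/sdx1 bs=1M count=8
    dd if=/dev/zero of=/dev/sdx2 bs=1M count=8

Each namespace then points at its own partition in its storage-engine stanza (device /dev/sdx1 for one, device /dev/sdx2 for the other), with the caveat above that they still compete for the same physical IO.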