I have created a 3-node cluster and am monitoring it with the Python client, but I cannot figure out how to write to a node so that it goes beyond the configured max-write-cache, so that I can test my Python script. I want my write-q to increase.
I have two namespaces:

Namespace 1: write-block-size = 128K, max-write-cache = 64M
Namespace 2: write-block-size = 1M, max-write-cache = 64M
I want my write-q to go beyond 512 in the first namespace, or beyond 64 in the second.
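For reference, those two ceilings follow directly from max-write-cache divided by write-block-size. A minimal sketch of the arithmetic (plain Python, no Aerospike dependency):

```python
def write_q_limit(max_write_cache_bytes, write_block_size_bytes):
    # Number of write blocks the post-write cache can hold before
    # writes start backing up: max-write-cache / write-block-size.
    return max_write_cache_bytes // write_block_size_bytes

# Namespace 1: 64M cache, 128K blocks
print(write_q_limit(64 * 1024**2, 128 * 1024))   # 512
# Namespace 2: 64M cache, 1M blocks
print(write_q_limit(64 * 1024**2, 1 * 1024**2))  # 64
```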
I tried to use the Aerospike benchmark tool, but could not figure out how to write in a way that achieves this.
What results are you seeing when you run asbench, and what are the namespace and device configurations? One obvious suggestion is to increase the concurrency and the percentage of writes. Note also that the device speed and network speed will dictate whether it is possible to create this condition; if the device throughput is higher than the write throughput you are able to generate, you may not be able to back up writes to the device.
That could go either way. Disabling data-in-memory will increase the read pressure on this device, but would also decrease the peak write rate.
@Karan190
Your current asbench settings will generate a maximum (assuming no timeouts) of 8 write blocks (the -z parameter), but realistically 1, given the selected blob size. We could possibly reach the 8-block max by increasing the blob size from 1400 B to 128 KiB / 2 + 1 (ensuring only one record per 128 KiB write block), but this would still fall short of flooding the write-q.
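To make the blob-size reasoning concrete: ignoring per-record overhead, the number of whole records that fit in one write block is write-block-size divided by record size, so any blob larger than half a block forces exactly one record per block. A rough sketch (plain Python; sizes are illustrative and overhead is ignored):

```python
WRITE_BLOCK = 128 * 1024  # 128 KiB write-block-size

def records_per_block(blob_size, write_block=WRITE_BLOCK):
    # Whole records per write block, ignoring per-record overhead.
    return write_block // blob_size

print(records_per_block(1400))                  # 93 records per block
print(records_per_block(WRITE_BLOCK // 2 + 1))  # 1 record per block
```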
I’d suggest tweaking the asbench request as follows:
asbench -h 127.0.0.1 -p 3000 -n test -k 1000000 -o B65537 -w RU,1 -a -W8 -C700
Increased the blob size from 1400 bytes to 65537 to increase the rate at which we fill the 128KiB write blocks.
The -a puts asbench into async mode, which usually makes it easier to overwhelm the queues.
The -W is the number of event loops, which basically governs the number of threads used in async mode. This should be set to the number of cores; I assumed 8 based on your -z selection.
The -C is the maximum number of concurrent async commands. I am unsure whether this is per event loop or global; I think it is global. You need this to be greater than the number of write blocks supported by the write-q, otherwise we are unlikely to fill it.
For both -W and -C, you may need to tune these up and down until you reach peak throughput (especially since you are running on the same node, so the benchmark is competing for the same CPU resources as the server).
I would also recommend testing with the client and server each on separate server or desktop hardware (not a laptop), if possible. If that is not possible, I would not expect to extract meaningful results: laptop-based performance testing is typically unreliable and misleading (thermal throttling, Linux in a VM, etc.).
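Since you are already monitoring with the Python client, a small helper for pulling the write-q out of an info response may help while asbench runs. This is only a sketch: it assumes the statistic appears as `write_q` in a semicolon-delimited key=value info string, and the exact stat name and info command vary between server versions, so check yours:

```python
def parse_stat(info_response, stat="write_q"):
    # Info responses are semicolon-delimited key=value pairs, e.g.
    # "ns_cluster_size=3;write_q=0;defrag_q=0;...".
    for pair in info_response.split(";"):
        key, _, value = pair.partition("=")
        if key == stat:
            return int(value)
    return None  # stat not present in this response

# With the aerospike Python client (namespace name is illustrative),
# something along these lines:
#   for node, (err, resp) in client.info_all("namespace/test").items():
#       print(node, parse_stat(resp))
```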
[Edits]
Edit 1:
Increased blob size from 1400 to 65537.
Increased -C from 300 to 700 to exceed the 512 write blocks supported by the write-q.