Hi
I was trying to use this tool, and the output below looks wrong to me. Shouldn’t there be a non-zero number in at least one column?
asloglatency -l /var/log/aerospike/aerospike.log -h reads -n 10
reads
Jul 08 2015 12:42:34
% > (ms)
slice-to (sec) 1 8 64 512 4096 32768 ops/sec
-------------- ------ ------ ------ ------ ------ ------ --------
12:42:44 10 0.00 0.00 0.00 0.00 0.00 0.00 12.6
12:42:54 10 0.00 0.00 0.00 0.00 0.00 0.00 21.0
12:43:04 10 0.00 0.00 0.00 0.00 0.00 0.00 10.8
12:43:14 10 0.00 0.00 0.00 0.00 0.00 0.00 5.5
12:43:24 10 0.00 0.00 0.00 0.00 0.00 0.00 13.7
12:43:34 10 0.00 0.00 0.00 0.00 0.00 0.00 11.1
12:43:44 10 0.00 0.00 0.00 0.00 0.00 0.00 6.6
I might be interpreting it wrong; can someone please confirm?
Hi Ashish,
The way to interpret this is that each column represents the % of transactions that took longer than the column header value in ms.
For example, if you see the following, it means 2% of the total transactions took more than 1 ms to complete. Similarly, if you see an ‘x’ (> 0) value under the column heading 8, it means x% of the total transactions took more than 8 ms to complete.
asloglatency -l /var/log/aerospike/aerospike.log -h reads -n 10
reads
Jul 08 2015 12:42:34
% > (ms)
slice-to (sec) 1 8 64 512 4096 32768 ops/sec
-------------- ------ ------ ------ ------ ------ ------ --------
12:42:44 10 2.00 0.00 0.00 0.00 0.00 0.00 12.6
Hope the above helps. You can further tune the bucket display with the options described on this page:
http://www.aerospike.com/docs/tools/asloglatency/
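For example, a rough sketch of that kind of tuning (the exact flags are described on the page above; here -e would be the bucket exponent increment, -n the number of buckets displayed, and -t the analysis slice interval in seconds):
asloglatency -l /var/log/aerospike/aerospike.log -h reads -e 1 -n 4 -t 60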
Thanks @anushree for your quick response.
So, in my case, since all column values are zero, does it mean that 100% of the transactions finished in < 1 ms? Kindly confirm. Thanks!
Thanks @anushree!
One more doubt: the numbers in the last column are absolute numbers, right? Not in any other units (thousands, etc.)? Kindly confirm. Thanks!
Correct again, they are absolute numbers.
Thanks @kporter.
And I guess the histogram for ‘reads’ doesn’t include batched reads, just like they are not included in the TPS shown on AMC. Is that correct?
That is correct. Batch is being reworked a bit in 3.6.x; as a side effect of the new implementation, the read stats will include batch reads. The Java and C based clients will use the new batch by default; I suspect other clients will follow this pattern.
Thanks for the confirmation @kporter.
Stats around batched reads are a big and useful feature. It will help many souls like us who always do batched calls.
Can I take the luxury of asking one more question here, if you don’t mind? What’s the best way of measuring “only Aerospike server” latency? I couldn’t find a good way: the collectd plugin doesn’t seem to offer this, and asloglatency didn’t help. We have metrics around Aerospike calls in our app, but those include network latency as well. Is there a good way to measure just the latency on the Aerospike side for read/write operations?
Sorry for the bombardment of questions. Many thanks, always!
Have you taken a look at the latency stats from the collectd plugin? This will generate stats for reads, writes_master, proxy, writes_reply, udf, and query. You can control the granularity with the hist-track-thresholds parameter in your aerospike.conf (which can also be dynamically changed with asinfo).
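As a rough sketch (the values below are illustrative only, so check the configuration reference for your server version), the static settings would sit in the service context of aerospike.conf:
service {
    # sample the tracked histograms every 10 seconds, keep 30 minutes of history,
    # and report the 1, 8 and 64 ms thresholds
    hist-track-back 1800
    hist-track-slice 10
    hist-track-thresholds 1,8,64
}
and the equivalent dynamic change should look something like:
asinfo -v 'hist-track-start:hist=reads;back=1800;slice=10;thresholds=1,8,64'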
Here is how I access the read stats with default settings: