Server v3.12.1.3: transaction-queues not set to #CPUs & unsuccessful writes shown in AMC

I have upgraded from server version 3.8.1 to 3.12.1.2 on CentOS 6.7, then upgraded CentOS 6.7 to 7.4 (kernel 4.13.1-1.el7.elrepo.x86_64) and the server from 3.12.1.2 to 3.12.1.3. I added auto-pin cpu and, because of that, removed service-threads and transaction-queues from the service config.

service {
    user root
    group root
    paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
    pidfile /var/run/aerospike/asd.pid
    #service-threads 24
    #transaction-queues 24
    transaction-threads-per-queue 4
    proto-fd-max 30000
    transaction-pending-limit 20
    auto-pin cpu
}

logging {
    # Log file must be an absolute path.
    file /var/log/aerospike/aerospike.log {
            context any info
    }
}

network {
    service {
            address bond0
            port 3000
            access-address bond0
    }

    heartbeat {
            #mode multicast
            #address 224.0.0.116 # 239.1.99.222
            #port 9918

            mode mesh
            port 3002

            mesh-seed-address-port 10.56.101.184 3002 # a1
            # followed by 13 more machines

            #mesh-port 3002

            # To use unicast-mesh heartbeats, remove the 3 lines above, and see
            # aerospike_mesh.conf for alternative.

            interval 150
            timeout 10
    }

    fabric {
            address bond0
            port 3001
    }

    info {
            port 3003
    }
}

namespace test {
    replication-factor 2
    memory-size 1G
    default-ttl 30d # 30 days, use 0 to never expire/evict.

    storage-engine memory
}


namespace production {
  replication-factor 2
  memory-size 59G
  default-ttl 0 # use 0 to never expire/evict.
    high-water-disk-pct 50 # How full may the disk become before the
                           # server begins eviction (expiring records
                           # early)
    high-water-memory-pct 85 # How full may the memory become before the
                             # server begins eviction (expiring records
                             # early)
    stop-writes-pct 90  # How full may the memory become before
                        # we disallow new writes
  # storage-engine memory
  storage-engine device {
            #device /dev/sdb1
            #data-in-memory false

            file /opt/aerospike/data/production.data
            filesize 1000G # 8 times of RAM
            data-in-memory true

            #write-block-size 128K   # adjust block size to make it efficient for SSDs.
            # largest size of any object
    }
}

According to this document, service-threads and transaction-queues should automatically be set to the number of cores. But on a 24-core machine:

nproc --all
24

asinfo -h localhost -v "get-config:context=service"

paxos-single-replica-limit=1;pidfile=/var/run/aerospike/asd.pid;proto-fd-max=30000;advertise-ipv6=false;batch-threads=4;batch-max-buffers-per-queue=255;batch-max-requests=5000;batch-max-unused-buffers=256;batch-priority=200;batch-index-threads=24;clock-skew-max-ms=1000;cluster-name=null;enable-benchmarks-fabric=false;enable-benchmarks-svc=false;enable-hist-info=false;hist-track-back=300;hist-track-slice=10;hist-track-thresholds=null;info-threads=16;ldt-benchmarks=false;log-local-time=false;migrate-max-num-incoming=4;migrate-threads=1;node-id-interface=null;nsup-delete-sleep=100;nsup-period=120;nsup-startup-evict=true;paxos-max-cluster-size=32;paxos-protocol=v3;paxos-recovery-policy=auto-reset-master;paxos-retransmit-period=5;proto-fd-idle-ms=60000;proto-slow-netio-sleep-ms=1;query-batch-size=100;query-buf-size=2097152;query-bufpool-size=256;query-in-transaction-thread=false;query-long-q-max-size=500;query-microbenchmark=false;query-pre-reserve-partitions=false;query-priority=10;query-priority-sleep-us=1;query-rec-count-bound=18446744073709551615;query-req-in-query-thread=false;query-req-max-inflight=100;query-short-q-max-size=500;query-threads=6;query-threshold=10;query-untracked-time-ms=1000;query-worker-threads=15;respond-client-on-master-completion=false;run-as-daemon=true;scan-max-active=100;scan-max-done=100;scan-max-udf-transactions=32;scan-threads=4;service-threads=24;sindex-builder-threads=4;sindex-gc-enable-histogram=false;ticker-interval=10;transaction-max-ms=1000;transaction-pending-limit=20;transaction-queues=4;transaction-repeatable-read=false;transaction-retry-ms=1002;transaction-threads-per-queue=4;work-directory=/opt/aerospike;write-duplicate-resolution-disable=false;fabric-dump-msgs=false;max-msgs-per-type=-1;memory-accounting=false;prole-extra-ttl=0;non-master-sets-delete=false

The output shows service-threads=24 but transaction-queues=4.
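(For reference, the two relevant values can be filtered out of that output with standard shell tools:)

asinfo -h localhost -v "get-config:context=service" | tr ';' '\n' | grep -E '^(service-threads|transaction-queues)='
service-threads=24
transaction-queues=4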

Shouldn't it be 24?

Also, in the 14-node cluster, a few particular nodes have 5-10 times more client connections than the others. Is this normal?

What would be the performance impact of a low transaction-queues value?

Thank you for reporting this. This is a shortcoming in the documentation. The number of transaction queues only defaults to the number of CPUs if you have at least one namespace that's not in-memory.

In your case, your test namespace uses storage-engine memory and your production namespace uses data-in-memory true. This means that both namespaces are counted as in-memory. That’s why the number of transaction queues still defaults to 4. Yes, this should be documented in greater detail.

The reasoning is that transactions for in-memory namespaces are not dispatched to the transaction threads, i.e., the transaction queues and threads would be basically idle. Instead, the transactions for in-memory namespaces run directly on the service threads to avoid adding latency.

For non-in-memory namespaces the latency is not as critical, because the major part of the transaction latency is caused by SSD accesses. That’s why it’s OK to dispatch to the transaction threads for them.
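For completeness: if you do want the larger value despite the above, transaction-queues can still be set explicitly instead of relying on the default. This is only a sketch; whether an explicit value is worthwhile (or even honored) alongside auto-pin cpu for purely in-memory namespaces is worth confirming against the configuration reference:

service {
    ...
    auto-pin cpu
    transaction-queues 24   # explicit setting overrides the in-memory default of 4
    ...
}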

A higher number of client connections can point to a temporary latency spike on a node. Each node in a cluster gets roughly the same TPS. Suppose that we have 10,000 TPS per node. Suppose that each transaction takes 1 ms. This means that one client connection can do 1,000 transactions per second, as transactions are sequential per client connection and we’re assuming 1 ms per transaction. At 10,000 TPS, the client would thus open 10 connections, so that we’d end up with 1,000 TPS/con x 10 con = 10,000 TPS.

Now, suppose that there's a temporary latency spike to 100 ms for a node, e.g., because of a network hiccup. Then the client would open 1,000 connections, because we'd now have 10 TPS/con and we'd thus need 1,000 connections to achieve 10,000 TPS.
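As a back-of-the-envelope formula, using the numbers above:

connections per node ≈ per-node TPS × per-transaction latency
10,000 TPS × 0.001 s =    10 connections   (1 ms transactions)
10,000 TPS × 0.1 s   = 1,000 connections   (100 ms hiccup)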

Currently, the client never reduces the number of open connections, so a network hiccup has a lasting effect. This may be why you're seeing an uneven distribution of client connections across the cluster.
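You can watch this directly via the standard client_connections statistic on each node; a quick sketch (replace <node-ip> with each node's address):

asinfo -h <node-ip> -v "statistics" | tr ';' '\n' | grep '^client_connections='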

Thanks for the clarification.

I am adding a screenshot of the AMC dashboard where the "Write per second" graph shows the total as 10k but success as only 1k. The UDFs graph shows a similar gap between total and success. When I switched to "Nodewise" mode, one particular node shows a very low success rate in the "Write per second" graph, and its total is 10 times higher than the other nodes. Shouldn't it be equally distributed across all nodes?

ClusterWise: (screenshot)

Nodewise: (screenshot)

I checked the logs of that node but couldn't find anything suspicious.

Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:170) NODE-ID bb904d0657ac40c CLUSTER-SIZE 14
Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:253)    system-memory: free-kbytes 8933144 free-pct 13 heap-kbytes (54845208,56047396,61696000) heap-efficiency-pct 88.9
Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:267)    in-progress: tsvc-q 0 info-q 0 nsup-delete-q 0 rw-hash 0 proxy-hash 0 tree-gc-q 0
Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:289)    fds: proto (2474,45050512,45048038) heartbeat (13,106,93) fabric (312,967,655)
Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:298)    heartbeat-received: self 0 foreign 27275132
Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:328)    fabric-bytes-per-second: bulk (0,0) ctrl (0,0) meta (0,0) rw (371179,250497)
Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:352)    early-fail: demarshal 0 tsvc-client 2 tsvc-batch-sub 0 tsvc-udf-sub 0
Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:369)    batch-index: batches (95587,0,9)
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:145) histogram dump: batch-index (95587 total) msec
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:162)  (00: 0000000208) (01: 0000052501) (02: 0000042659) (03: 0000000169)
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:162)  (04: 0000000012) (05: 0000000009) (06: 0000000003) (07: 0000000005)
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:171)  (08: 0000000004) (09: 0000000009) (10: 0000000008)
Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:382) {test} objects: all 1 master 0 prole 1
Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:443) {test} migrations: complete
Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:462) {test} memory-usage: total-bytes 204 index-bytes 64 sindex-bytes 0 data-bytes 140 used-pct 0.00
Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:382) {bidder} objects: all 254767577 master 129867393 prole 124900184
Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:443) {bidder} migrations: complete
Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:462) {bidder} memory-usage: total-bytes 46305991922 index-bytes 16305124928 sindex-bytes 11052854 data-bytes 29989814140 used-pct 73.09
Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:492) {bidder} device-usage: used-bytes 70337293952 avail-pct 92
Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:585) {bidder} client: tsvc (0,1329) proxy (332,0,6) read (0,0,0,0) write (15454446,1380,81) delete (0,0,0,0) udf (440219082,956268650,9477) lang (20656,439780581,426994,2)
Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:613) {bidder} batch-sub: tsvc (0,2244) proxy (0,0,0) read (30271842,0,0,18736736)
Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:641) {bidder} scan: basic (5,3,0) aggr (0,0,0) udf-bg (0,0,0)
Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:666) {bidder} query: basic (1581701,0) aggr (0,0) udf-bg (0,0)
Sep 23 2017 05:14:02 GMT: INFO (info): (ticker.c:735) {bidder} retransmits: migration 3208353 client-read 0 client-write (0,0) client-delete (0,0) client-udf (0,0) batch-sub 0 udf-sub (0,0) nsup 0
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:145) histogram dump: {bidder}-write (15455826 total) msec
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:162)  (00: 0015449947) (01: 0000003013) (02: 0000001542) (03: 0000000429)
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:162)  (04: 0000000201) (05: 0000000138) (06: 0000000098) (07: 0000000102)
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:162)  (08: 0000000105) (09: 0000000124) (10: 0000000116) (11: 0000000010)
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:171)  (12: 0000000001)
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:145) histogram dump: {bidder}-udf (1396487732 total) msec
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:162)  (00: 1303173419) (01: 0038038003) (02: 0033232452) (03: 0017718619)
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:162)  (04: 0004059842) (05: 0000171057) (06: 0000031335) (07: 0000019966)
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:162)  (08: 0000014197) (09: 0000012921) (10: 0000013912) (11: 0000001806)
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:171)  (12: 0000000202) (13: 0000000001)
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:145) histogram dump: {bidder}-query (1581701 total) msec
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:162)  (00: 0000802794) (01: 0000777049) (02: 0000001121) (03: 0000000411)
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:162)  (04: 0000000071) (05: 0000000028) (06: 0000000019) (07: 0000000021)
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:162)  (08: 0000000031) (09: 0000000050) (10: 0000000048) (11: 0000000057)
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:171)  (12: 0000000001)
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:145) histogram dump: {bidder}-query-rec-count (1581563 total) count
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:162)  (01: 0000000253) (02: 0000000278) (03: 0000000618) (04: 0000002373)
Sep 23 2017 05:14:02 GMT: INFO (info): (hist.c:162)  (05: 0000014404) (06: 0000274975) (07: 0001168608) (08: 0000120054)
Sep 23 2017 05:14:03 GMT: INFO (drv_ssd): (drv_ssd.c:2118) {bidder} /opt/aerospike/data/bidder.data: used-bytes 70337255168 free-wblocks 942342 write-q 0 write (721776,0.9) defrag-q 0 defrag-read (668186,1.0) defrag-write (288844,0.4)
Sep 23 2017 05:14:07 GMT: INFO (nsup): (thr_nsup.c:1109) {bidder} Records: 129866432, 128147408 0-vt, 3316(5666745) expired, 0(0) evicted, 0(0) set deletes. Evict ttl: 0. Waits: 0,0,0. Total time: 18777 ms
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:170) NODE-ID bb904d0657ac40c CLUSTER-SIZE 14
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:253)    system-memory: free-kbytes 8933524 free-pct 13 heap-kbytes (54844747,56047532,61696000) heap-efficiency-pct 88.9
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:267)    in-progress: tsvc-q 0 info-q 0 nsup-delete-q 0 rw-hash 0 proxy-hash 0 tree-gc-q 0
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:289)    fds: proto (2474,45050514,45048040) heartbeat (13,106,93) fabric (312,967,655)
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:298)    heartbeat-received: self 0 foreign 27275998
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:328)    fabric-bytes-per-second: bulk (0,0) ctrl (0,0) meta (0,0) rw (341968,231534)
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:352)    early-fail: demarshal 0 tsvc-client 2 tsvc-batch-sub 0 tsvc-udf-sub 0
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:369)    batch-index: batches (95591,0,9)
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:145) histogram dump: batch-index (95591 total) msec
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:162)  (00: 0000000208) (01: 0000052505) (02: 0000042659) (03: 0000000169)
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:162)  (04: 0000000012) (05: 0000000009) (06: 0000000003) (07: 0000000005)
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:171)  (08: 0000000004) (09: 0000000009) (10: 0000000008)
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:382) {test} objects: all 1 master 0 prole 1
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:443) {test} migrations: complete
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:462) {test} memory-usage: total-bytes 204 index-bytes 64 sindex-bytes 0 data-bytes 140 used-pct 0.00
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:382) {bidder} objects: all 254766633 master 129866695 prole 124899938
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:443) {bidder} migrations: complete
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:462) {bidder} memory-usage: total-bytes 46305869125 index-bytes 16305064512 sindex-bytes 11052854 data-bytes 29989751759 used-pct 73.09
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:492) {bidder} device-usage: used-bytes 70337059968 avail-pct 92
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:585) {bidder} client: tsvc (0,1329) proxy (332,0,6) read (0,0,0,0) write (15455076,1380,81) delete (0,0,0,0) udf (440227103,956384416,9477) lang (20656,439788602,426994,2)
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:613) {bidder} batch-sub: tsvc (0,2244) proxy (0,0,0) read (30273154,0,0,18737412)
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:641) {bidder} scan: basic (5,3,0) aggr (0,0,0) udf-bg (0,0,0)
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:666) {bidder} query: basic (1581751,0) aggr (0,0) udf-bg (0,0)
Sep 23 2017 05:14:12 GMT: INFO (info): (ticker.c:735) {bidder} retransmits: migration 3208353 client-read 0 client-write (0,0) client-delete (0,0) client-udf (0,0) batch-sub 0 udf-sub (0,0) nsup 0
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:145) histogram dump: {bidder}-write (15456456 total) msec
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:162)  (00: 0015450577) (01: 0000003013) (02: 0000001542) (03: 0000000429)
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:162)  (04: 0000000201) (05: 0000000138) (06: 0000000098) (07: 0000000102)
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:162)  (08: 0000000105) (09: 0000000124) (10: 0000000116) (11: 0000000010)
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:171)  (12: 0000000001)
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:145) histogram dump: {bidder}-udf (1396611519 total) msec
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:162)  (00: 1303286999) (01: 0038041921) (02: 0033236094) (03: 0017720762)
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:162)  (04: 0004060334) (05: 0000171069) (06: 0000031335) (07: 0000019966)
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:162)  (08: 0000014197) (09: 0000012921) (10: 0000013912) (11: 0000001806)
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:171)  (12: 0000000202) (13: 0000000001)
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:145) histogram dump: {bidder}-query (1581751 total) msec
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:162)  (00: 0000802818) (01: 0000777075) (02: 0000001121) (03: 0000000411)
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:162)  (04: 0000000071) (05: 0000000028) (06: 0000000019) (07: 0000000021)
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:162)  (08: 0000000031) (09: 0000000050) (10: 0000000048) (11: 0000000057)
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:171)  (12: 0000000001)
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:145) histogram dump: {bidder}-query-rec-count (1581613 total) count
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:162)  (01: 0000000253) (02: 0000000278) (03: 0000000618) (04: 0000002373)
Sep 23 2017 05:14:12 GMT: INFO (info): (hist.c:162)  (05: 0000014404) (06: 0000274975) (07: 0001168658) (08: 0000120054)
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:170) NODE-ID bb904d0657ac40c CLUSTER-SIZE 14
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:253)    system-memory: free-kbytes 8933484 free-pct 13 heap-kbytes (54844261,56047300,61696000) heap-efficiency-pct 88.9
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:267)    in-progress: tsvc-q 0 info-q 0 nsup-delete-q 0 rw-hash 0 proxy-hash 0 tree-gc-q 0
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:289)    fds: proto (2474,45050523,45048049) heartbeat (13,106,93) fabric (312,967,655)
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:298)    heartbeat-received: self 0 foreign 27276864
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:328)    fabric-bytes-per-second: bulk (0,0) ctrl (0,0) meta (0,0) rw (316766,216136)
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:352)    early-fail: demarshal 0 tsvc-client 2 tsvc-batch-sub 0 tsvc-udf-sub 0
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:369)    batch-index: batches (95595,0,9)
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:145) histogram dump: batch-index (95595 total) msec
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:162)  (00: 0000000208) (01: 0000052508) (02: 0000042660) (03: 0000000169)
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:162)  (04: 0000000012) (05: 0000000009) (06: 0000000003) (07: 0000000005)
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:171)  (08: 0000000004) (09: 0000000009) (10: 0000000008)
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:382) {test} objects: all 1 master 0 prole 1
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:443) {test} migrations: complete
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:462) {test} memory-usage: total-bytes 204 index-bytes 64 sindex-bytes 0 data-bytes 140 used-pct 0.00
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:382) {bidder} objects: all 254766703 master 129866882 prole 124899821
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:443) {bidder} migrations: complete
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:462) {bidder} memory-usage: total-bytes 46305878945 index-bytes 16305068992 sindex-bytes 11052854 data-bytes 29989757099 used-pct 73.09
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:492) {bidder} device-usage: used-bytes 70337078912 avail-pct 92
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:585) {bidder} client: tsvc (0,1329) proxy (332,0,6) read (0,0,0,0) write (15455696,1380,81) delete (0,0,0,0) udf (440234927,956494870,9477) lang (20656,439796426,426994,2)
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:613) {bidder} batch-sub: tsvc (0,2244) proxy (0,0,0) read (30274466,0,0,18738088)
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:641) {bidder} scan: basic (5,3,0) aggr (0,0,0) udf-bg (0,0,0)
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:666) {bidder} query: basic (1581801,0) aggr (0,0) udf-bg (0,0)
Sep 23 2017 05:14:22 GMT: INFO (info): (ticker.c:735) {bidder} retransmits: migration 3208353 client-read 0 client-write (0,0) client-delete (0,0) client-udf (0,0) batch-sub 0 udf-sub (0,0) nsup 0
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:145) histogram dump: {bidder}-write (15457076 total) msec
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:162)  (00: 0015451197) (01: 0000003013) (02: 0000001542) (03: 0000000429)
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:162)  (04: 0000000201) (05: 0000000138) (06: 0000000098) (07: 0000000102)
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:162)  (08: 0000000105) (09: 0000000124) (10: 0000000116) (11: 0000000010)
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:171)  (12: 0000000001)
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:145) histogram dump: {bidder}-udf (1396729797 total) msec
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:162)  (00: 1303395454) (01: 0038045588) (02: 0033239656) (03: 0017722798)
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:162)  (04: 0004060871) (05: 0000171090) (06: 0000031335) (07: 0000019966)
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:162)  (08: 0000014197) (09: 0000012921) (10: 0000013912) (11: 0000001806)
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:171)  (12: 0000000202) (13: 0000000001)
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:145) histogram dump: {bidder}-query (1581801 total) msec
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:162)  (00: 0000802841) (01: 0000777102) (02: 0000001121) (03: 0000000411)
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:162)  (04: 0000000071) (05: 0000000028) (06: 0000000019) (07: 0000000021)
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:162)  (08: 0000000031) (09: 0000000050) (10: 0000000048) (11: 0000000057)
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:171)  (12: 0000000001)
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:145) histogram dump: {bidder}-query-rec-count (1581663 total) count
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:162)  (01: 0000000253) (02: 0000000278) (03: 0000000618) (04: 0000002373)
Sep 23 2017 05:14:22 GMT: INFO (info): (hist.c:162)  (05: 0000014404) (06: 0000274975) (07: 0001168708) (08: 0000120054)
Sep 23 2017 05:14:23 GMT: INFO (drv_ssd): (drv_ssd.c:2118) {bidder} /opt/aerospike/data/bidder.data: used-bytes 70337087360 free-wblocks 942341 write-q 0 write (721789,0.6) defrag-q 0 defrag-read (668198,0.6) defrag-write (288848,0.2)
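(For reference, the raw per-namespace write/UDF counters behind those AMC graphs can also be read with asinfo; a sketch, assuming the client_write_* / client_udf_* statistic names used by 3.9+ servers and the namespace name from the log lines above:)

asinfo -h localhost -v "namespace/bidder" | tr ';' '\n' | grep -E '^client_(write|udf)_'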

It should be evenly distributed, unless you have some particular records which are being continuously updated. Do you have report-data-op enabled for this namespace, or any reason to believe that might be the case?

I have not set anything in the config for report-data-op. I am not sure how to check it. I ran the command below:

$ asinfo -h localhost  -v "get-config:context=security"
enable-security=false;privilege-refresh-period=300;report-authentication-sinks=0;report-data-op-sinks=0;report-sys-admin-sinks=0;report-user-admin-sinks=0;report-violation-sinks=0;syslog-local=-1

It should be evenly distributed, unless you have some particular records which are being continuously updated.

This might be true. How can I check it?

It might not be the case, but it's not too easy to recognize. The best way is to do a rolling recycle and enable report-data-op to audit all reads (see http://www.aerospike.com/docs/guide/security/access-control.html); otherwise you'd need to figure that out from tcpdump.
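For the tcpdump route, a capture along these lines on the hot node would record the client traffic for offline analysis (a sketch, using the service interface and port from the config above; bound or rotate the capture as needed):

tcpdump -i bond0 -s 0 -w /tmp/asd-clients.pcap port 3000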