Hi,
The benchmark command: aerospike-client-c/benchmarks/target/benchmarks -h 10.55.2.166 -o B:100 -w RU,0 -z 300 -L 4,1
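For reference, the stats below were gathered with the usual tools (top with the per-CPU view toggled via '1', plus dstat and sar from sysstat), roughly like this:

top            # press '1' for the per-CPU breakdown
dstat
sar -n DEV 1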
Here’s the CPU distribution for the one-client/one-node cluster (310K TPS):
top - 01:32:45 up 7 days, 44 min, 1 user, load average: 1.97, 0.83, 0.35
Tasks: 266 total, 2 running, 264 sleeping, 0 stopped, 0 zombie
Cpu0 : 9.1%us, 8.7%sy, 0.0%ni, 3.8%id, 0.0%wa, 0.0%hi, 78.3%si, 0.0%st
Cpu1 : 7.2%us, 8.6%sy, 0.0%ni, 3.1%id, 0.0%wa, 0.0%hi, 81.0%si, 0.0%st
Cpu2 : 20.2%us, 19.1%sy, 0.0%ni, 27.0%id, 0.0%wa, 0.0%hi, 33.1%si, 0.6%st
Cpu3 : 18.8%us, 18.8%sy, 0.0%ni, 23.6%id, 0.0%wa, 0.0%hi, 37.7%si, 1.0%st
Cpu4 : 3.6%us, 5.3%sy, 0.0%ni, 89.9%id, 0.0%wa, 0.0%hi, 0.0%si, 1.2%st
Cpu5 : 6.4%us, 2.3%sy, 0.0%ni, 89.5%id, 0.0%wa, 0.0%hi, 0.0%si, 1.8%st
Cpu6 : 4.1%us, 5.2%sy, 0.0%ni, 89.5%id, 0.0%wa, 0.0%hi, 0.0%si, 1.2%st
Cpu7 : 5.1%us, 4.0%sy, 0.0%ni, 89.1%id, 0.0%wa, 0.0%hi, 0.0%si, 1.7%st
Cpu8 : 0.0%us, 0.4%sy, 0.0%ni, 98.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.9%st
Cpu9 : 0.4%us, 0.4%sy, 0.0%ni, 98.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.9%st
Cpu10 : 0.0%us, 0.4%sy, 0.0%ni, 98.2%id, 0.0%wa, 0.0%hi, 0.0%si, 1.3%st
Cpu11 : 0.4%us, 0.9%sy, 0.0%ni, 97.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.9%st
Cpu12 : 0.0%us, 0.0%sy, 0.0%ni, 99.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.9%st
Cpu13 : 0.4%us, 0.4%sy, 0.0%ni, 97.8%id, 0.0%wa, 0.0%hi, 0.0%si, 1.3%st
Cpu14 : 0.4%us, 1.3%sy, 0.0%ni, 97.0%id, 0.0%wa, 0.0%hi, 0.0%si, 1.3%st
Cpu15 : 0.9%us, 0.4%sy, 0.0%ni, 97.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.9%st
Cpu16 : 1.6%us, 2.6%sy, 0.0%ni, 94.8%id, 0.0%wa, 0.0%hi, 0.0%si, 1.0%st
Cpu17 : 2.1%us, 0.5%sy, 0.0%ni, 96.4%id, 0.0%wa, 0.0%hi, 0.0%si, 1.0%st
Cpu18 : 5.0%us, 4.5%sy, 0.0%ni, 88.9%id, 0.0%wa, 0.0%hi, 0.0%si, 1.5%st
Cpu19 : 4.6%us, 4.6%sy, 0.0%ni, 89.3%id, 0.0%wa, 0.0%hi, 0.0%si, 1.5%st
Cpu20 : 4.1%us, 2.1%sy, 0.0%ni, 92.2%id, 0.0%wa, 0.0%hi, 0.0%si, 1.6%st
Cpu21 : 2.1%us, 3.2%sy, 0.0%ni, 93.7%id, 0.0%wa, 0.0%hi, 0.0%si, 1.1%st
Cpu22 : 2.6%us, 3.1%sy, 0.0%ni, 92.8%id, 0.0%wa, 0.0%hi, 0.0%si, 1.5%st
Cpu23 : 1.6%us, 3.2%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 1.1%st
Cpu24 : 0.4%us, 0.0%sy, 0.0%ni, 99.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu25 : 0.0%us, 0.4%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.4%st
Cpu26 : 0.0%us, 0.4%sy, 0.0%ni, 99.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.4%st
Cpu27 : 0.4%us, 0.0%sy, 0.0%ni, 99.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.4%st
Cpu28 : 0.4%us, 0.4%sy, 0.0%ni, 98.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.4%st
Cpu29 : 0.4%us, 0.4%sy, 0.0%ni, 98.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.8%st
Cpu30 : 0.0%us, 0.0%sy, 0.0%ni, 99.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.4%st
Cpu31 : 0.0%us, 0.0%sy, 0.0%ni, 99.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.4%st
Mem: 251902544k total, 1624312k used, 250278232k free, 141408k buffers
And here’s the CPU distribution for the one-client/two-node cluster (82K TPS):
top - 01:38:16 up 7 days, 50 min, 1 user, load average: 3.04, 2.30, 1.19
Tasks: 266 total, 1 running, 265 sleeping, 0 stopped, 0 zombie
Cpu0 : 8.8%us, 10.2%sy, 0.0%ni, 12.1%id, 0.0%wa, 0.0%hi, 68.4%si, 0.5%st
Cpu1 : 12.4%us, 13.4%sy, 0.0%ni, 15.3%id, 0.0%wa, 0.0%hi, 58.4%si, 0.5%st
Cpu2 : 6.9%us, 9.0%sy, 0.0%ni, 74.5%id, 0.0%wa, 0.0%hi, 6.9%si, 2.8%st
Cpu3 : 8.7%us, 12.8%sy, 0.0%ni, 62.4%id, 0.0%wa, 0.0%hi, 14.1%si, 2.0%st
Cpu4 : 5.0%us, 3.0%sy, 0.0%ni, 91.1%id, 0.0%wa, 0.0%hi, 0.0%si, 1.0%st
Cpu5 : 3.5%us, 4.5%sy, 0.0%ni, 91.0%id, 0.0%wa, 0.0%hi, 0.0%si, 1.0%st
Cpu6 : 2.5%us, 3.0%sy, 0.0%ni, 93.5%id, 0.0%wa, 0.0%hi, 0.0%si, 1.0%st
Cpu7 : 2.5%us, 3.0%sy, 0.0%ni, 93.5%id, 0.0%wa, 0.0%hi, 0.0%si, 1.0%st
Cpu8 : 7.7%us, 1.8%sy, 0.0%ni, 86.9%id, 2.3%wa, 0.0%hi, 0.0%si, 1.4%st
Cpu9 : 5.9%us, 2.3%sy, 0.0%ni, 87.8%id, 2.7%wa, 0.0%hi, 0.0%si, 1.4%st
Cpu10 : 7.7%us, 2.3%sy, 0.0%ni, 86.5%id, 2.3%wa, 0.0%hi, 0.0%si, 1.4%st
Cpu11 : 5.0%us, 1.8%sy, 0.0%ni, 89.0%id, 2.7%wa, 0.0%hi, 0.0%si, 1.4%st
Cpu12 : 6.3%us, 3.2%sy, 0.0%ni, 87.3%id, 2.3%wa, 0.0%hi, 0.0%si, 0.9%st
Cpu13 : 6.4%us, 2.3%sy, 0.0%ni, 88.6%id, 1.8%wa, 0.0%hi, 0.0%si, 0.9%st
Cpu14 : 4.6%us, 1.4%sy, 0.0%ni, 91.3%id, 1.4%wa, 0.0%hi, 0.0%si, 1.4%st
Cpu15 : 7.1%us, 2.7%sy, 0.0%ni, 86.6%id, 2.2%wa, 0.0%hi, 0.0%si, 1.3%st
Cpu16 : 3.6%us, 3.6%sy, 0.0%ni, 92.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.9%st
Cpu17 : 3.2%us, 4.1%sy, 0.0%ni, 91.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.9%st
Cpu18 : 2.7%us, 1.8%sy, 0.0%ni, 94.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.9%st
Cpu19 : 2.3%us, 3.6%sy, 0.0%ni, 93.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.9%st
Cpu20 : 1.4%us, 1.4%sy, 0.0%ni, 96.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.9%st
Cpu21 : 0.9%us, 1.4%sy, 0.0%ni, 96.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.9%st
Cpu22 : 1.8%us, 3.2%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.9%st
Cpu23 : 0.9%us, 3.6%sy, 0.0%ni, 94.2%id, 0.0%wa, 0.0%hi, 0.0%si, 1.3%st
Cpu24 : 2.6%us, 0.9%sy, 0.0%ni, 94.4%id, 0.9%wa, 0.0%hi, 0.0%si, 1.3%st
Cpu25 : 1.7%us, 0.4%sy, 0.0%ni, 95.7%id, 0.9%wa, 0.0%hi, 0.0%si, 1.3%st
Cpu26 : 1.3%us, 0.4%sy, 0.0%ni, 95.2%id, 1.3%wa, 0.0%hi, 0.0%si, 1.7%st
Cpu27 : 2.1%us, 0.0%sy, 0.0%ni, 95.7%id, 0.9%wa, 0.0%hi, 0.0%si, 1.3%st
Cpu28 : 1.3%us, 1.3%sy, 0.0%ni, 95.7%id, 0.4%wa, 0.0%hi, 0.0%si, 1.3%st
Cpu29 : 3.4%us, 1.3%sy, 0.0%ni, 93.2%id, 0.9%wa, 0.0%hi, 0.0%si, 1.3%st
Cpu30 : 1.7%us, 1.3%sy, 0.0%ni, 95.2%id, 0.4%wa, 0.0%hi, 0.0%si, 1.3%st
Cpu31 : 1.3%us, 0.9%sy, 0.0%ni, 95.7%id, 0.9%wa, 0.0%hi, 0.0%si, 1.3%st
As you can see, the load (mainly the softirq CPU time on cores 0-3) is lower with two nodes, but throughput is also lower.
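A quick way to confirm where the network softirq work lands (standard /proc files plus mpstat from sysstat; interrupt names vary by driver, eth0 assumed here):

mpstat -P ALL 1                   # per-CPU %soft, same picture as the top output above
watch -n1 -d cat /proc/softirqs   # NET_RX/NET_TX counters per CPU
grep eth /proc/interrupts         # which CPUs service the NIC queue interrupts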
One-node cluster dstat:
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read writ| recv send| in out | int csw
0 0 99 0 0 0| 322k 24k| 0 0 | 0 0 | 19k 14k
3 3 86 0 0 8| 0 0 | 73M 27M| 0 0 | 336k 315k
3 3 86 0 0 8| 0 0 | 72M 27M| 0 0 | 337k 314k
3 3 86 0 0 8| 0 0 | 73M 27M| 0 0 | 338k 317k
3 3 87 0 0 7| 0 0 | 71M 26M| 0 0 | 339k 314k
2 3 87 0 0 8| 0 0 | 73M 27M| 0 0 | 335k 316k
3 3 86 0 0 8| 0 0 | 73M 27M| 0 0 | 337k 316k
3 3 85 0 0 8| 0 0 | 75M 28M| 0 0 | 333k 313k
3 3 86 0 0 8| 0 0 | 74M 28M| 0 0 | 340k 318k
3 3 86 0 0 8| 0 0 | 75M 28M| 0 0 | 347k 325k
One-node cluster sar:
sar -n DEV 1
Linux 4.4.8-20.46.amzn1.x86_64 (ip-10-55-2-166) 06/02/2016 _x86_64_ (32 CPU)
01:43:30 AM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s
01:43:31 AM eth0 322360.22 322343.01 80904.10 30219.70 0.00 0.00 0.00
01:43:31 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:43:31 AM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s
01:43:32 AM eth0 341315.91 341250.00 85642.98 31992.44 0.00 0.00 0.00
01:43:32 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:43:32 AM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s
01:43:33 AM eth0 341445.45 341497.73 85694.22 32015.78 0.00 0.00 0.00
01:43:33 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Two-node cluster dstat:
dstat
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read writ| recv send| in out | int csw
0 0 99 0 0 0| 322k 24k| 0 0 | 0 0 | 19k 13k
4 3 89 1 0 4| 897k 0 | 30M 28M| 0 0 | 425k 392k
4 3 89 0 0 4| 149k 0 | 31M 28M| 0 0 | 413k 389k
4 3 90 0 0 4| 0 32k| 31M 28M| 0 0 | 416k 391k
4 3 90 0 0 4| 0 0 | 30M 28M| 0 0 | 408k 382k
3 3 90 0 0 4| 0 0 | 30M 28M| 0 0 | 419k 384k
4 3 90 0 0 3| 0 0 | 31M 28M| 0 0 | 418k 388k
4 3 90 0 0 4| 0 0 | 30M 27M| 0 0 | 412k 380k
3 3 90 0 0 4| 0 0 | 30M 27M| 0 0 | 420k 383k
4 3 89 1 0 4| 632k 10M| 31M 28M| 0 0 | 414k 386k
4 3 88 1 0 4| 895k 16M| 31M 28M| 0 0 | 409k 387k
4 3 89 1 0 4| 904k 16M| 30M 28M| 0 0 | 417k 392k
3 3 89 1 0 4| 902k 16M| 30M 28M| 0 0 | 413k 387k
Two-node cluster sar:
01:46:48 AM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s
01:46:49 AM eth0 197400.00 191634.92 45825.60 36401.26 0.00 0.00 0.00
01:46:49 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:46:49 AM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s
01:46:50 AM eth0 212863.79 211029.31 49966.82 39905.64 0.00 0.00 0.00
01:46:50 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:46:50 AM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s
01:46:51 AM eth0 188024.62 190293.85 44468.70 35797.17 0.00 0.00 0.00
01:46:51 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:46:51 AM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s
01:46:52 AM eth0 207218.03 192340.98 47433.24 37260.85 0.00 0.00 0.00
01:46:52 AM lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00