Cluster Write latency impacted by number of client threads


#1

8-node cluster running Aerospike 3.3.17 in-memory, replication factor = 2. A single C client with multiple threads on a separate node sends write traffic (on the same internal switch).

The network round-trip latency measured by ping is a stable 0.15 ms.

#Threads    TPS      Average latency
1           1.7K     0.57 ms
2           3.6K     0.55 ms
10          37K      0.27 ms
120         307K     0.39 ms
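A side note on the table: for a synchronous (closed-loop) client, Little's law predicts TPS ≈ threads / average latency, and the measured numbers match that closely, confirming each thread issues one transaction at a time. A quick check on the figures above:

```python
# Little's law for a closed-loop (synchronous) client:
# throughput = concurrency / average latency.
# (threads, average latency in seconds), taken from the table above
rows = [(1, 0.57e-3), (2, 0.55e-3), (10, 0.27e-3), (120, 0.39e-3)]
predicted = {threads: threads / lat for threads, lat in rows}
for threads, tps in predicted.items():
    print(f"{threads:>3} threads -> {tps:,.0f} TPS predicted")
```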

Why is the latency with 1 thread so high (0.57 ms)? For comparison, a local write on a single node has an average latency of 0.07 ms.

I noticed that the number of client-side sockets to remote port 3000 changed dynamically (up to hundreds).


#2

In a normal, stable situation, where socket connections are pre-established and reused for each transaction, a lower thread count should give lower latency.
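To illustrate the stable case, here is a minimal sketch using plain TCP sockets in Python (not the Aerospike client; the local echo server merely stands in for a node listening on port 3000). One connection is established once and then reused for every transaction, so no connect cost appears on the transaction path:

```python
import socket
import threading

# Minimal sketch, not Aerospike client code: a local echo server stands in
# for a cluster node, and the client keeps ONE pre-established socket open
# and reuses it for every transaction.

def echo_server(listener):
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:            # client closed the connection
                break
            conn.sendall(data)      # echo the "transaction" back

listener = socket.socket()
listener.bind(("127.0.0.1", 0))     # ephemeral port stands in for 3000
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

# Connection established once, then reused: no per-transaction connect cost.
client = socket.create_connection(("127.0.0.1", port))
replies = []
for i in range(5):
    client.sendall(b"write-%d" % i)
    replies.append(client.recv(1024))
client.close()
print(replies)
```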

Given that you see the client side re-establishing sockets, the extra latency is most likely caused by abnormal socket closing and reconnection. A few possibilities: (1) Transactions have timed out, causing the socket to be closed and a new one opened. (2) The sockets have been idle and the connections have been shut down from the server side, so new transactions have to initiate new socket connections.
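Possibility (2) can be sketched with plain sockets as well (again, not Aerospike code). Here the server closes every connection after one request, playing the role of an idle-connection reaper; the client's next transaction must first detect the dead socket and then pay for a fresh TCP connect, which is exactly the hidden cost added to its latency:

```python
import socket
import threading

# Sketch of possibility (2), using plain TCP sockets rather than the
# Aerospike client: the server closes each connection after one request,
# standing in for an idle-connection reaper.

def one_shot_server(listener):
    while True:
        conn, _ = listener.accept()
        data = conn.recv(1024)
        conn.sendall(data)
        conn.close()                      # server-side close, as if idle-reaped

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(5)
port = listener.getsockname()[1]
threading.Thread(target=one_shot_server, args=(listener,), daemon=True).start()

def transact(sock, payload):
    """One request on an existing socket; None means the socket was dead."""
    try:
        sock.sendall(payload)
        reply = sock.recv(1024)
        return reply if reply else None   # b"" means the peer already closed
    except (ConnectionResetError, BrokenPipeError):
        return None

client = socket.create_connection(("127.0.0.1", port))
reconnects = 0
replies = []
for _ in range(3):
    reply = transact(client, b"txn")
    if reply is None:                     # dead socket: reconnect and retry
        client.close()
        client = socket.create_connection(("127.0.0.1", port))
        reconnects += 1
        reply = transact(client, b"txn")
    replies.append(reply)
print(reconnects, replies)
```

Of the three transactions, only the first rides a fresh connection; the other two each discover a server-closed socket and reconnect, which is the pattern to look for when the client shows sockets churning.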