Can't explain the spikes in writes per second


#1

Hi, I'm running Aerospike 3.6.1.

I have a 4-node cluster with 32 cores and 128GB of RAM per machine… My namespace is configured as hybrid: 30GB RAM and 30GB disk per node.

My business logic is as follows…

1- Receive an HTTPS request in my app.
2- Execute a single write to Aerospike.
3- Execute 7 queries in parallel.
4- Return the response to the web client.
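A minimal sketch of that request flow, with the Aerospike write and the seven queries stubbed out as placeholder functions (every name here is hypothetical, not the actual app code):

```python
from concurrent.futures import ThreadPoolExecutor

def write_record(request):
    # Placeholder for the single Aerospike write (step 2).
    return {"written": request["id"]}

def run_query(index, request):
    # Placeholder for one of the 7 parallel queries (step 3).
    return {"query": index, "result": f"rows-for-{request['id']}"}

def handle_request(request, pool):
    write_record(request)                        # step 2: single write
    futures = [pool.submit(run_query, i, request) for i in range(7)]
    results = [f.result() for f in futures]      # step 3: 7 queries in parallel
    return {"id": request["id"], "queries": results}  # step 4: response

with ThreadPoolExecutor(max_workers=7) as pool:
    response = handle_request({"id": 42}, pool)
```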

So far I have 60,000,000 records and I'm achieving about 5,000 full business requests per second.

But I noticed wild spikes of 12,000 writes per second, which suggests the cluster could sustain even more business requests.

Do these spikes mean that I have some tuning to do which will allow for even more performance?

CPU utilisation is about 30% per node. The data is on SanDisk Extreme Pro SSDs.


#2

Here is a more recent view…


#3

Are you using expiration for records? If a large number expire at the same time, you can see spikes like this as they're deleted.
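One common mitigation for simultaneous expiration (a generic sketch, not anything specific to this cluster; the base TTL and jitter values are made up) is to add jitter to record TTLs so records written in the same burst don't all expire together:

```python
import random

BASE_TTL = 24 * 3600   # nominal record lifetime in seconds (assumed value)
JITTER = 3600          # spread expirations over +/- one hour

def jittered_ttl(base=BASE_TTL, jitter=JITTER):
    # Randomize the TTL around the base so a batch of records written
    # at the same moment expires spread out over the jitter window.
    return base + random.randint(-jitter, jitter)

ttls = [jittered_ttl() for _ in range(1000)]
```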

Throughput is likely not limited by Aerospike; it's more likely limited by your app's threads and bandwidth, especially if you're accessing different keys in parallel (avoiding contention from multiple requests writing to the same keys).

Are you using async requests?


#4

Hi thanks,

1- Yes it’s probably the evictions causing this. I’ll look closer.

2- The app only does a single write per request, and the key is a sequence number, so no two writes will ever contend for the same key.
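That pattern (a monotonically increasing sequence number as the key) can be sketched like this; the counter below is a hypothetical in-process stand-in for however the app actually allocates sequence numbers:

```python
import itertools
import threading

class SequenceKeys:
    """Hands out unique, monotonically increasing keys across threads."""

    def __init__(self):
        self._counter = itertools.count(1)
        self._lock = threading.Lock()

    def next_key(self):
        # The lock makes the allocation explicitly thread-safe, so two
        # concurrent requests can never receive the same key.
        with self._lock:
            return next(self._counter)

seq = SequenceKeys()
keys = [seq.next_key() for _ in range(5)]
```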

3- Once the data is written, the app executes 7 queries in parallel, using 7 different secondary indexes.

4- Yes, I'm using the client in async mode. Writes are async, but not the statements. Can statements be async as well?

The app manually creates 7 Statements and executes them in parallel using 7 threads. I'm wondering: is it possible for the app to create 7 statements and batch them on 1 thread using the Aerospike client operations?
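As a sketch of the "one thread" idea using Python's asyncio rather than the Aerospike client (whether the client itself supports async queries depends on the client version, so the query here is just a stub), an event loop can interleave all seven queries on a single thread:

```python
import asyncio

async def run_query(index):
    # Stub for one query; a real async client call would go here
    # if the client supports async statement execution.
    await asyncio.sleep(0)  # yield control to the event loop
    return {"query": index, "rows": []}

async def run_all_queries():
    # gather() schedules all 7 coroutines concurrently on one thread
    # and returns their results in submission order.
    return await asyncio.gather(*(run_query(i) for i in range(7)))

results = asyncio.run(run_all_queries())
```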


#5

In the next release (3.6.2 - not out yet at the time of this writing), deletes from eviction and expiration will no longer affect write counts and histograms.