Multi-threaded performance

Hi,

I’m investigating lookup performance of Aerospike using the Java client from Spark. We have an 18-node Aerospike cluster (1 disk per node). For my use case, we have 64 processes, each with 1 client instance and 5 threads. The application sends around 800,000 requests in parallel, which takes over 3 seconds to complete, i.e. roughly 250,000 lookups/sec across the 18 nodes.

In independent testing with YCSB, we saw much higher performance: about 80,000 lookups/sec per node. Are there any configuration changes you would suggest to get better performance in this use case? I can provide more details if needed.
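
For context, here is a minimal sketch of the kind of client usage involved, assuming the Aerospike Java client 4.x API. The host name, namespace, set, and policy values are placeholders, and the batch read shown is one option we could use rather than what we currently do; sharing a single AerospikeClient per process across threads is the pattern we already follow.

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Record;
import com.aerospike.client.policy.BatchPolicy;
import com.aerospike.client.policy.ClientPolicy;

public class LookupSketch {
    public static void main(String[] args) {
        // One client instance per process, shared by all worker threads.
        ClientPolicy clientPolicy = new ClientPolicy();
        clientPolicy.maxConnsPerNode = 300;   // illustrative value, not a recommendation

        // "aerospike-host", namespace "test" and set "demo" are placeholders.
        try (AerospikeClient client = new AerospikeClient(clientPolicy, "aerospike-host", 3000)) {
            // Instead of one synchronous get() per key, a thread can group keys
            // into a batch read to amortize network round trips.
            Key[] keys = new Key[1000];
            for (int i = 0; i < keys.length; i++) {
                keys[i] = new Key("test", "demo", "user-" + i);
            }

            BatchPolicy batchPolicy = new BatchPolicy();
            Record[] records = client.get(batchPolicy, keys);  // null entries = key not found
            // ... hand records back to the Spark task here ...
        }
    }
}
```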

Hi

Your issue is with the sizing of the cluster and is unrelated to the Java client API.

Did you complete a sizing exercise with Aerospike? If not, can you answer the following questions:

  1. Average size of each object/record
  2. Replication count (the default is 2)
  3. Peak read TPS
  4. Peak write TPS
  5. Total number of objects/records to be stored at any one time
  6. Size of RAM in each node
  7. Type and size of SSDs (if used)
  8. Is your Spark application in close network proximity to the Aerospike cluster?
  9. Is each node in the Aerospike cluster dedicated, i.e. not running anything else?

If you can answer the above questions accurately, I’ll size a cluster to meet your needs.
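
As a rough illustration of how those answers feed a capacity estimate, here is a back-of-the-envelope sketch. The 64 bytes/record primary-index figure comes from Aerospike's capacity-planning guidance; the record count, size, and headroom factor below are made-up placeholders, not your actual numbers or a recommendation.

```java
public class SizingSketch {
    public static void main(String[] args) {
        long records        = 2_000_000_000L; // total unique objects (placeholder)
        int  replication    = 2;              // replication factor
        int  avgRecordBytes = 1500;           // average object size (placeholder)

        // Primary index lives in RAM: 64 bytes per record copy.
        long indexRamBytes = records * replication * 64L;

        // Data on SSD: record size times replication, with headroom so the
        // cluster stays well under its disk high-water mark (assumed ~50% here).
        long dataSsdBytes    = records * replication * (long) avgRecordBytes;
        long ssdWithHeadroom = dataSsdBytes * 2;

        System.out.printf("Index RAM (cluster-wide): %.1f GiB%n", indexRamBytes / Math.pow(1024, 3));
        System.out.printf("SSD needed incl. headroom: %.1f TiB%n", ssdWithHeadroom / Math.pow(1024, 4));
    }
}
```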

It is important to compare apples with apples. Have you set up the Aerospike cluster with hardware specs identical to those used in the YCSB benchmark you are referring to?

Regards

Peter

Hi Peter,

I went back and re-ran the YCSB tests. As it turned out, the problem was a couple of slow SSDs in the cluster, which dragged query performance down to that of the slowest node. Thus far, we haven’t been able to determine why the SSDs slow down. We are using Samsung 850 PRO 1TB SSDs, and rebooting the machine seems to fix the problem.

Hi coolfrood,

Glad to hear this solved the problem! Please feel free to post any other questions to our forum.

Regards,

Maud