Not able to get the required throughput

Hi All,

I am trying to achieve a throughput of more than 1 million TPS, but I am not able to go beyond 100,000 TPS for read operations and 13,000 TPS for write operations.

I am generating about 1 million records concurrently from Java code, and each record is inserted into Aerospike as it is generated using the AerospikeClient.put method; for reads I am using AerospikeClient.get.

A single record contains just a numeric key and a double value.
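Roughly, my load generator follows the pattern in the simplified sketch below (the host address, set name "demo", bin name "value", and thread count are placeholders, not my exact values):

import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Bin;
import com.aerospike.client.Key;
import com.aerospike.client.Record;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThroughputTest {
    public static void main(String[] args) throws InterruptedException {
        // Placeholder host/port; in my setup the client points at the cluster node.
        AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);

        int recordCount = 1_000_000;
        int workerThreads = 24; // placeholder: one worker per core

        ExecutorService pool = Executors.newFixedThreadPool(workerThreads);
        long start = System.currentTimeMillis();

        for (int i = 0; i < recordCount; i++) {
            final int id = i;
            pool.submit(() -> {
                // Each record: a numeric key and a single double value.
                Key key = new Key("test", "demo", id);
                Bin value = new Bin("value", (double) id);
                client.put(null, key, value);          // write path
                Record record = client.get(null, key); // read path
            });
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        long elapsedMs = System.currentTimeMillis() - start;
        System.out.printf("Processed %d records in %d ms%n", recordCount, elapsedMs);
        client.close();
    }
}

Each task does one synchronous put followed by one synchronous get, so throughput is bounded by the per-operation round-trip latency multiplied by the number of worker threads.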

Hardware: a server with 24 cores and 280 GB of RAM.

Can you please suggest any improvements?

We are comparing BigMemory, GigaSpace, and Aerospike in this benchmark.

BigMemory takes just 2 seconds for read/write operations on 1 million records, and GigaSpace takes 6-8 seconds.

The configuration I am using with Aerospike is as follows:

# Aerospike database configuration file.

service {
	user root
	group root
	paxos-single-replica-limit 1
	pidfile /var/run/aerospike/asd.pid
	service-threads 4
	transaction-queues 4
	transaction-threads-per-queue 4
	proto-fd-max 15000
}

logging {
	file /var/log/aerospike/aerospike.log {
		context any info
	}
}

network {
	service {
		address any
		port 3000
	}

	heartbeat {
		mode multicast
		address 239.1.99.222
		port 9918
		interval 150
		timeout 10
	}

	fabric {
		port 3001
	}

	info {
		port 3003
	}
}

namespace test {
	replication-factor 2
	memory-size 4G
	default-ttl 30d
}

namespace bar {
	replication-factor 2
	memory-size 4G
	default-ttl 30d
	storage-engine memory
}