This piece of code is called 20,000 times. But due to the default limit of 300 async connections, I am not able to call it correctly. I want to use this in an async manner. Thanks for helping out.
I am using the Node.js client to connect. Right now I create batches of 300 requests and, once all the promises have resolved, I insert the next set of records. This gives me decent throughput, but I am looking for a better/more optimized solution. I am using the latest version of the Node.js client.
Also, for now I’ve set maxConnsPerNode to 1000 and am using the approach mentioned earlier, with batches of 1000 async requests.
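For reference, maxConnsPerNode is part of the client config object passed at connect time. A minimal sketch (the host address is a placeholder for your cluster):

```javascript
const Aerospike = require('aerospike');

// Raise the per-node connection cap from the default (300) so more
// concurrent puts can be in flight. Host address is a placeholder.
const config = {
  hosts: '127.0.0.1:3000',
  maxConnsPerNode: 1000
};

Aerospike.connect(config)
  .then(client => {
    // use client.put(...) etc., then client.close() when done
  })
  .catch(error => console.error('Connect failed:', error));
```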
Here is my implementation of batch inserting in an async manner: I create 1000 async requests and wait for all of them to finish before sending the next batch. I am looking for a better approach.
async function batchUpload(packets) {
  const batchSize = 1000;
  for (let batch = 0; batch < packets.length; batch += batchSize) {
    const promises = [];
    for (let x = 0; x < batchSize && batch + x < packets.length; x++) {
      promises.push(loadData(packets[batch + x]));
    }
    // Send the next batch only after all promises have settled;
    // map rejections to the sentinel 'failed' so Promise.all never rejects.
    const results = await Promise.all(promises.map(p => p.catch(() => 'failed')));
    console.log(results.filter(x => x === 'failed').length, 'requests failed');
  }
}
async function loadData(packet) {
  packet['_id'] = packet['_id'].toString();
  const key = new Aerospike.Key('test', 'my_set', packet['_id']);
  return aerospikeClient.put(key, packet, {}, policy)
    .catch(error => {
      console.log('Failed to insert');
      console.log(error);
      throw error; // re-throw so the caller can count this as a failure
    });
}
Thanks for sharing the code… I don’t know much about Node.js, but I’m curious whether you have tried running a similar workload using the Java or C benchmark tool (just to get some idea of your setup’s performance with a simple benchmark tool). The benchmark tools let you simulate a write workload with fairly fine-grained tuning of record size, number of bins, etc…