Asbackup taking similar time on local backup as full backup when using node-list

Hi all, I have approx 8.6 crore (86 million) records on my 3-node cluster. When I run asbackup against the full cluster it takes approx 16 min 23 sec. When I do a local backup of a single node using the node-list option, it again takes approximately the same time, with only a minor difference. The node I am backing up holds approx 3.2 crore (32 million) records. Can anyone please help me understand why there is no difference in backup time between single-node and full-cluster backup?
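For reference, the two invocations I am comparing look roughly like this (host, namespace and output paths are placeholders, not my actual values):

asbackup -h <any-node-ip> -n <namespace> -d /backup/full
asbackup -h <node-ip> -n <namespace> -l <node-ip>:3000 -d /backup/node1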

Thanks in advance.

@Albot @pgupta

It’s probably limited by your scan tuning or machine speed, based on what you’re describing. See the Managing Scans doc here. What I think is happening is that a cluster-wide backup starts a scan against 3 nodes, each pumping out a peak of, say, ~10,000 records/s because they are limited by config/threads/media?/CPU? – and a single-node backup still only pulls ~10,000 records/s from that one node. Even though it’s pulling 1/3 of the data, it will still only go as fast as the per-node limit allows. Is 16 minutes not fast enough? Be safe when tuning: check that you have sufficient headroom on media (if persistent), network, and CPU before cranking things up.
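As a rough back-of-envelope with the numbers from your post (assuming records are spread fairly evenly across the 3 nodes):

~86,000,000 records / ~983 s ≈ ~87,000 records/s cluster-wide, i.e. roughly ~29,000 records/s per node
~32,000,000 records on one node / ~29,000 records/s ≈ ~1,100 s ≈ 18 min

So if each node has a per-node throughput ceiling, a single-node backup landing in the same ballpark as the full-cluster one is exactly what you’d expect.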

@Albot thanks for your response. I am using the 3.15.1 build of Aerospike. Could you please let me know how to check these configs, as the option on the above-mentioned link is not available in this version? We have approx 3.2 billion records spread across a 27-node cluster and we want to perform this activity as fast as possible. All these boxes are r5ad.xlarge AWS instances with 32 GB RAM and network speeds up to 10 Gbps.

Perhaps this? scan-threads (3.6.0–4.7) – you might need to check the ‘show removed parameters’ option and search for it. I haven’t touched 3.x in years though. On AWS I’ve found it works very well to dump a local backup from a node through zstd compression onto an ST1 EBS volume provisioned for maximum write throughput. They actually do better than the GP2/IO series…
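If that parameter is still present and dynamic on your build, something along these lines should let you check and adjust it (the value 8 is just an example; verify the parameter name against the docs for your exact version and watch CPU/disk headroom before raising it):

asinfo -h 127.0.0.1 -v 'get-config:context=service'
asinfo -h 127.0.0.1 -v 'set-config:context=service;scan-threads=8'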

asbackup -n <namespace> -l localhost:3000 -o - | pzstd -1 | cat > /mnt/st1/backupfile.zst

Later on I sync it to S3.
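(The <namespace> above is just a placeholder for whatever namespace you’re backing up.) For the S3 step I just use the AWS CLI; the bucket and key here are placeholders too:

aws s3 cp /mnt/st1/backupfile.zst s3://<your-backup-bucket>/aerospike/backupfile.zst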