I’ve configured a three-node cluster where each machine has 64GB of memory and a 1TB SSD. Starting the cluster fresh with my namespace, allocated 60GB of memory, I seem to be able to load 1 billion records with no problem (though I haven’t fully verified this). However, if I stop the service and restart it, the log indicates that it loads about half the records and then spends eternity trying to make more room. It never succeeds.
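For context, the namespace stanza looks roughly like this (a sketch using standard aerospike.conf directives; the device path and threshold values are illustrative, not my exact settings):

```
namespace test {
  replication-factor 2
  memory-size 60G              # budget for primary index (and data, if in memory)
  default-ttl 0                # records never expire

  high-water-memory-pct 60     # start evicting above this memory usage
  stop-writes-pct 90           # refuse writes above this memory usage

  storage-engine device {
    device /dev/sdb            # the 1TB SSD (illustrative path)
    write-block-size 128K
    data-in-memory false       # keep only the index in RAM
  }
}
```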
Has anyone loaded a billion records into an Aerospike cluster, and if so, what configuration and machine specs did you use?