I can store 1 billion records if the cluster is new but can't reload it

You’d have to share your config, and also clarify whether the capacity you described is for the whole cluster or per node.

1 billion objects * 64B * replication-factor 2 = 128GB of memory. That wouldn’t fit on a single node, so I believe the capacity you described is per node: something like 60GB of usable DRAM per node on a three-node cluster (about 180GB total), with that 128GB spread across it.
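For reference, here is that footprint math as a quick sketch (the record count, the 64 bytes of memory per record, and the replication factor are just the numbers above; treat it as back-of-the-envelope only):

```python
# Back-of-the-envelope memory footprint for the data set described above.
records = 1_000_000_000          # 1 billion objects
bytes_per_record = 64            # 64B of memory per record, as in the post
replication_factor = 2           # each record is stored twice

total_gb = records * bytes_per_record * replication_factor / 1e9
print(f"total memory needed: {total_gb:.0f}GB")   # -> 128GB
```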

Assuming an even distribution of the data across the nodes, that’s 42.66GB per node. That is over the 60% high-water mark for memory (the default high-water-memory-pct), which for a 60GB node is (60GB * 0.6 =) 36GB, so you’ll be kicking into evictions. It gets worse when you take down a node: with the data redistributed over the two remaining machines you’d need 128GB out of the 120GB available to the cluster, so it simply doesn’t fit.
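As a rough illustration of that per-node math (assuming three nodes with about 60GB of usable DRAM each and the default 60% high-water mark, per the figures above):

```python
# Per-node load vs. the memory high-water mark, full cluster and with one node down.
data_gb = 128.0               # from the footprint calculation above
node_ram_gb = 60.0            # assumed usable DRAM per node
hwm_gb = node_ram_gb * 0.60   # default high-water-memory-pct of 60%

for nodes in (3, 2):          # healthy cluster vs. one node taken down
    per_node_gb = data_gb / nodes
    status = "over" if per_node_gb > hwm_gb else "under"
    print(f"{nodes} nodes: {per_node_gb:.2f}GB per node, "
          f"high-water mark {hwm_gb:.0f}GB -> {status} the watermark")
```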

Take a look at the capacity planning article in the deployment guide. You want to make sure you have enough capacity to hold all of your data on an N-1 cluster without breaching high-water-memory-pct.
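If it helps, here is a small sketch of that N-1 sizing rule (my own rough helper, not a formula from the guide): solve for the per-node memory you’d need so the whole data set stays under the high-water mark with one node down.

```python
# Rough N-1 sizing: per-node memory needed so the full data set stays under
# the high-water mark even after losing one node. A sketch, not an official formula.
def required_node_memory_gb(data_gb, nodes, hwm_pct=0.60):
    return data_gb / (nodes - 1) / hwm_pct

print(required_node_memory_gb(128.0, 3))   # ~106.7GB per node on a 3-node cluster
print(required_node_memory_gb(128.0, 5))   # ~53.3GB per node on a 5-node cluster
```

In other words, for this workload you’d need either much larger nodes (on the order of 107GB each with three nodes) or more nodes (around 53GB each with five).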

Also see: