Data integrity (incorrect select aggregation?) during node restart


Hi all, I have a cluster with 3 nodes.

I inserted 1 million records into a Set.

With AMC I see that:

Node1 has 340000 Master Objects and 337000 Replica Objects

Node2 has 340000 Master Objects and 335000 Replica Objects

Node3 has 320000 Master Objects and 328000 Replica Objects

Then I stop the aerospike process (service aerospike stop) on Node3.

Immediately I see:

Node1 has 510000 Master Objects and few Replica Objects

Node2 has 490000 Master Objects and few Replica Objects

In this phase I also see the Replica Objects increasing (at the end of the migration phase the Replica Objects are, as expected, 490000 for Node1 and 510000 for Node2).

During the migration phase a select aggregation (a select count) of the records in the Set returns the correct value: 1 million.

When I restart Node3, the Master Objects immediately decrease for Node1 and Node2 (340000 and 330000) and are 2000 for Node3.

In this situation the “select aggregation” returns about 700000 records, and the result stays wrong until migration is over.

My question is: why are the data immediately “consistent” when a node is stopped (in that case I don’t need to wait until the migration phase is over), but “inconsistent” when a node is restarted?

Many thanks for any help.




This is expected behavior.

Look at recommendation 2 at the bottom. It is not advisable to run a scan while migrations are going on. We are working on improving the behavior there … will keep you posted.
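To see why a scan-based count can drift while migrations are in flight, here is a toy model (plain Python, not actual Aerospike code, and the names are made up for illustration): while a partition is being handed from one node to another, its records can momentarily exist on both the donor and the recipient, so a scan that visits every node can count some records twice (and, symmetrically, a scan can miss records whose ownership just moved).

```python
# Toy cluster: node name -> {partition_id: set of record keys}
nodes = {
    'node1': {0: {'a', 'b'}},
    'node2': {1: {'c', 'd'}},
}

def naive_scan_count(nodes):
    # Visits every node and counts everything it holds, like a
    # cluster-wide scan that doesn't account for in-flight migrations.
    return sum(len(recs) for parts in nodes.values() for recs in parts.values())

print(naive_scan_count(nodes))  # 4 -- correct while the cluster is stable

# Mid-migration: partition 1 is being handed to node1, so its records
# are momentarily present on both nodes at once.
nodes['node1'][1] = {'c', 'd'}
print(naive_scan_count(nodes))  # 6 -- the scan double-counts partition 1
```

This is only a sketch of the failure mode, not how the server counts internally; the real scan logic is more involved, which is why the recommendation is simply to wait for migrations to finish.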

– R


Thank you, Ray (as usual a diligent and accurate answer!)

Two more little questions (the last ones :slight_smile: ):

  1. During migration, is a correct result guaranteed if I insert/delete/select a specific record?

  2. When a node is restarting, is it possible to prevent insertions of new records from involving that node (until the end of migration)?

(I’d like new records, master and backup copies, to be stored on the other nodes of the cluster.)

Thanks again


  1. Yes, when you do a key-based read.
  2. No! As soon as the node starts and joins the cluster it starts taking reads/writes like the others.
  3. There is no way to explicitly control which node data is stored on. It is determined by the system based on a hash.
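As background for answer 3: record placement is a pure function of a hash of the record's key, so neither the client nor the operator can steer new records toward or away from a particular node. A rough sketch follows (the real server uses a RIPEMD-160 digest and a more involved digest layout; sha1 and the string concatenation here are portable stand-ins, and `partition_for` is a made-up name):

```python
import hashlib

N_PARTITIONS = 4096  # Aerospike's fixed number of partitions

def partition_for(set_name: str, user_key: str) -> int:
    # Sketch only: the set name and key are hashed into a digest
    # (RIPEMD-160 in the real server; sha1 used here for portability),
    # and a partition id is derived from the digest bytes. The
    # partition -> node mapping then follows from the partition id and
    # the current cluster membership, so the client has no say in
    # which node ends up storing a record.
    digest = hashlib.sha1((set_name + ':' + user_key).encode()).digest()
    return int.from_bytes(digest[:2], 'little') % N_PARTITIONS

print(partition_for('myset', 'user-42'))  # some id in [0, 4096)
```

Because the mapping is deterministic and depends only on the key and cluster membership, a restarted node immediately resumes ownership of its share of partitions, which is why it starts taking writes right away.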

– R


Thank you again, Raj!

Have a good weekend.