Rack Aware version C-3.13.0.10

Here is a strange thing I observed after the following steps:

  1. Upgraded the cluster to 3.13.
  2. Two nodes are in partition 1 and the other two nodes are in partition 2.
  3. Switched the heartbeat protocol version to v3 and the paxos-protocol version to v5.
  4. Performed a rolling restart of the cluster after updating the config.
  5. asinfo -v "racks:" shows: ns=test:rack_0=BB9CE6A9F390006,BB98E5D013AD406,BB9DCA5DC559106,BB94E9A22170D06
  6. Dynamically assigned rack-id 1 to the two nodes in partition 1 and rack-id 2 to the two nodes in partition 2.
  7. Added these rack-ids to the config and restarted one node to trigger migrations (see the config sketch after this list).
  8. The rack assignments of all 4 nodes in the cluster changed. asinfo -v "racks:" now shows: ns=test:rack_1=BB9CE6A9F390006,BB98E5D013AD406:rack_2=BB9DCA5DC559106,BB94E9A22170D06
  9. With RF=2, stopping the two nodes on rack 1 still causes no data loss, which suggests the cluster has become rack aware.
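For context, here is a minimal sketch of what steps 3, 6 and 7 look like in config/command form. The namespace name test comes from the racks: output above; the exact stanza layout and the set-config/recluster calls are illustrative, not copied verbatim from my files:

    # aerospike.conf fragments (illustrative sketch)
    service {
        paxos-protocol v5            # step 3: switch to paxos protocol v5
    }
    network {
        heartbeat {
            protocol v3              # step 3: switch to heartbeat protocol v3
            # (other heartbeat settings unchanged)
        }
    }
    namespace test {
        replication-factor 2
        rack-id 1                    # step 7: rack-id 2 on the two nodes in partition 2
        # (other namespace settings unchanged)
    }

    # step 6: dynamic assignment on a rack-1 node, followed by a recluster
    asinfo -v "set-config:context=namespace;id=test;rack-id=1"
    asinfo -v "recluster:"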

The strange thing is that when I run asadm -e "info", the output is:

NODE                BUILD         Rack Aware
10.0.49.131:3000    C-3.13.0.10   none
10.0.49.134:3000    C-3.13.0.10   none
10.0.49.158:3000    C-3.13.0.10   none
10.0.49.203:3000    C-3.13.0.10   none

This shows the Rack Aware mode as none for all nodes, and the logs also display these lines:

Jul 03 2019 06:42:01 GMT: INFO (socket): (socket.c:2567) Node port 3001, node ID bb9ce6a9f390006, rack-aware 10.0.49.203
Jul 03 2019 06:42:01 GMT: INFO (config): (cfg.c:3755) Rack Aware mode not enabled

So the rack-ids differ per node and there is no data loss after shutting down 2 nodes, yet asadm -e "info" still shows rack aware as not enabled. Is the cluster actually rack aware or not? If not, why are the rack-ids different, and why is there no data loss after shutting down the 2 nodes that share a rack-id?
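For reference, one way to cross-check the rack-id each node has actually applied (assuming get-config on the namespace context reports the rack-id item on this build) would be:

    # run against each node; look for rack-id=1 or rack-id=2 in the output
    asinfo -h 10.0.49.131 -v "get-config:context=namespace;id=test" | tr ';' '\n' | grep rack-id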

See response in Cluster upgrade for rack awareness - #7 by Arpan_Jhalani.