We have a cluster with two nodes and the replication factor set to 2, which means every node holds the complete data set. But when we shut down one node, the application goes down and does not use the other node.
We configured only one node as the seed (the node that was shut down), but the same configuration has been used in another cluster and everything works fine there: when we shut down one node in that other cluster, the application keeps running. The only difference between the two clusters is the replication factor. The other cluster's replication factor is 1, which means some partitions' data cannot be reached during the downtime.
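If this is Apache Cassandra (the "seed node" and "replication factor" terminology suggests it), a common recommendation is to list more than one node as a seed so that a restarting node can bootstrap even when one seed is down. A minimal sketch of the relevant `cassandra.yaml` fragment, with hypothetical IP addresses standing in for the two nodes:

```yaml
# cassandra.yaml (sketch, hypothetical addresses)
# Listing both nodes as seeds; with only one seed configured,
# a node restart while that seed is down can fail to bootstrap.
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.0.1,10.0.0.2"
```

Note that the seed list affects node-to-node gossip and bootstrap, not the client driver's failover behavior at query time, so it may be a separate issue from the application outage described above.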
My understanding is that the client caches the cluster's node list obtained from the seed (contact point) it connects to, and when one node cannot be reached it switches to the other nodes. Is there something wrong with this understanding?
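For completeness, a sketch of what the client side might look like, assuming the DataStax Java driver 4.x with its file-based configuration (the addresses and datacenter name are hypothetical). The driver does discover the rest of the cluster from any contact point it can reach, but if the only configured contact point is down at application startup, the initial connection itself can fail:

```hocon
# application.conf (sketch, hypothetical values)
datastax-java-driver {
  basic {
    # Listing both nodes so startup does not depend on a single node
    contact-points = [ "10.0.0.1:9042", "10.0.0.2:9042" ]
    load-balancing-policy.local-datacenter = "dc1"
  }
}
```

Whether queries succeed with one node down also depends on the consistency level: with replication factor 2, a consistency level requiring both replicas (e.g. `ALL`, or `QUORUM` which needs 2 of 2 here) will fail during the outage, while `ONE` should succeed.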