Cluster (Error: (1) unstable-cluster)

FAIL_ON_CLUSTER_CHANGE is a scan policy option that simply lets you abort the scan if the underlying cluster changes while the scan is running. If the cluster has no clustering events through the completion of a long-running scan, the results are dependable. If the cluster changes - nodes leave or join - a long-running scan may not reliably return all eligible records. Depending on what you are doing with the scan, you may or may not care about using the fail-on-cluster-change option.
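
For illustration, here is a minimal sketch of setting that option with the Aerospike Java client, assuming a client version that still exposes the ScanPolicy.failOnClusterChange flag; the seed host, namespace ("test"), and set ("demo") are placeholders.

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Record;
import com.aerospike.client.ScanCallback;
import com.aerospike.client.policy.ScanPolicy;

public class ScanWithFailOnClusterChange {
    public static void main(String[] args) {
        // Placeholder seed node address.
        AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);

        ScanPolicy scanPolicy = new ScanPolicy();
        // Abort the scan with an error if nodes join or leave while it runs,
        // instead of silently returning a possibly incomplete result set.
        scanPolicy.failOnClusterChange = true;

        client.scanAll(scanPolicy, "test", "demo", new ScanCallback() {
            @Override
            public void scanCallback(Key key, Record record) {
                System.out.println(key.userKey + " -> " + record.bins);
            }
        });

        client.close();
    }
}
```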

Technically this setup should work as long as there is no network address or port translation going on in the middle and the IP addresses are resolvable on both ends - i.e., no private IPs locally that are exposed as public IPs through NAT - but the risk is that all your data moving on the fabric is open to the cloud, unencrypted. No one deploys like this, and I don't have an easy way to test it and figure out where your problems are originating.

For the use case you are implementing, you may be better off running these three nodes as single-node instances and keeping them in sync from the application: instantiate three separate client objects, one per node, write to all three on every write, and do single local reads at the closest node (see the sketch below). Your deployment already keeps a full copy of the data on each node, so reads lose nothing. Even then your data is exposed to snooping. So overall, this is not how one should deploy. Your application and cluster should be in a LAN, secure behind a firewall at a minimum, or use a Virtual Private Cloud.
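
A rough sketch of that pattern with the Aerospike Java client, assuming hypothetical node addresses 10.0.0.1-3 and a placeholder "test"/"demo" namespace and set. Since the three instances know nothing about each other, the application has to decide how to handle a write that succeeds on some nodes and fails on others.

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Bin;
import com.aerospike.client.Key;
import com.aerospike.client.Record;
import com.aerospike.client.policy.WritePolicy;

public class TripleWriteLocalRead {
    // Hypothetical addresses: one client per standalone single-node instance.
    private final AerospikeClient nodeA = new AerospikeClient("10.0.0.1", 3000);
    private final AerospikeClient nodeB = new AerospikeClient("10.0.0.2", 3000);
    private final AerospikeClient nodeC = new AerospikeClient("10.0.0.3", 3000);

    // The node closest to this application instance; reads go only here.
    private final AerospikeClient localNode = nodeA;

    // Write the same record to all three nodes so each keeps the full data set.
    public void put(Key key, Bin... bins) {
        WritePolicy wp = new WritePolicy();
        nodeA.put(wp, key, bins);
        nodeB.put(wp, key, bins);
        nodeC.put(wp, key, bins);
    }

    // Read from the closest node only.
    public Record get(Key key) {
        return localNode.get(null, key);
    }

    public void close() {
        nodeA.close();
        nodeB.close();
        nodeC.close();
    }

    public static void main(String[] args) {
        TripleWriteLocalRead store = new TripleWriteLocalRead();
        Key key = new Key("test", "demo", "user1");  // placeholder namespace/set/key
        store.put(key, new Bin("name", "Alice"));
        System.out.println(store.get(key));
        store.close();
    }
}
```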