Moving an existing AWS setup into an AWS VPC

The Aerospike Knowledge Base has moved to https://support.aerospike.com. Content on https://discuss.aerospike.com is being migrated to either https://support.aerospike.com or https://docs.aerospike.com. Maintenance on articles stored in this repository ceased on December 31st 2022 and this article may be stale. If you have any questions, please do not hesitate to raise a case via https://support.aerospike.com.

Synopsis:

If you have an existing AWS setup that you want to move into a VPC configuration, there are a few things to consider.

Example:

I am running a single-node cluster on AWS and want to move it into a VPC. After adding a node in the VPC and configuring the mesh-address with the public IP of the original node, I expect a cluster to form and data to migrate. Instead, I get this error message:

Jan 08 2015 10:20:15 GMT: INFO (paxos): (paxos.c::2290) Cluster Integrity Check: Detected succession list discrepancy between node bb98ff1d9b33b06 and self bb92b690bbb2802
Jan 08 2015 10:20:15 GMT: INFO (paxos): (paxos.c::2335) CLUSTER INTEGRITY FAULT. [Phase 1 of 2] To fix, issue this command across all nodes: dun:nodes=bb98ff1d9b33b06,bb92b690bbb2802
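
For context, the mesh setup described above lives in the heartbeat stanza of aerospike.conf. The following is a minimal sketch with a placeholder seed address; exact parameter names (mesh-seed-address-port here, mesh-address/mesh-port in older releases) vary by Aerospike version:

network {
    heartbeat {
        mode mesh
        port 3002
        # Placeholder: public IP of the original (non-VPC) node
        mesh-seed-address-port 54.0.0.10 3002
        interval 150
        timeout 10
    }
}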

Explanation:

Aerospike is designed to deliver the IP of each node to the other nodes in the cluster. Because the new node is running inside a VPC, the address it advertises by default is its private address; Aerospike cannot see the virtual public IP that AWS assigns to the node. The node outside the VPC cannot route to that private address, so the cluster can never form.
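
You can observe the discrepancy directly. Assuming the asinfo tool is available and 10.0.1.5 stands in for the VPC node's private address, asking each node what it advertises shows an address the node outside the VPC cannot route to:

asinfo -h 10.0.1.5 -v "service"    # the VPC node reports its private address, e.g. 10.0.1.5:3000
asinfo -v "services"               # the peer list as seen from the local node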

Solution:

Run a backup of your existing cluster and restore it on the new node. See Backing up and Restoring Data with asbackup and asrestore in the Aerospike documentation for more information.
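
As a sketch of that process, with placeholder hosts, namespace, and backup directory (adjust to your own cluster):

# On a node of the existing cluster: back up the namespace to a local directory
asbackup --host 127.0.0.1 --namespace test --directory ~/aerospike-backup

# Copy the backup files to the new VPC node (e.g. with scp), then restore them there
asrestore --host 127.0.0.1 --directory ~/aerospike-backup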


Hi,

there is no need to back up and restore; you need to check cluster visibility. If the nodes are not visible to each other, execute the dun and undun commands as listed below:

asmonitor -e "info"

Execute the command above to get the cluster's principal node ID.

asinfo -v "dun:nodes=principalID"

This should resolve the cluster visibility issue.

If that does not work, run the following command on all nodes. Be careful when executing it on production nodes; try it first in a test environment to get familiar with it.

asadm -e "cluster dun all; shell sleep 5; cluster undun all"
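
Putting those steps together, a hypothetical end-to-end sequence for the manual route might look like this; the node IDs are taken from the log lines above and stand in for the ones your own CLUSTER INTEGRITY FAULT message lists, and the dun/undun commands should be issued across all nodes:

asmonitor -e "info"                                       # note the principal node ID in the output
asinfo -v "dun:nodes=bb98ff1d9b33b06,bb92b690bbb2802"     # phase 1: dun the listed nodes
asinfo -v "undun:nodes=bb98ff1d9b33b06,bb92b690bbb2802"   # phase 2: undun the same nodes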