AWS EC2 IP Addressing for Aerospike

The Aerospike Knowledge Base has moved to https://support.aerospike.com. Content on https://discuss.aerospike.com is being migrated to either https://support.aerospike.com or https://docs.aerospike.com. Maintenance on articles stored in this repository ceased on December 31st 2022 and this article may be stale. If you have any questions, please do not hesitate to raise a case via https://support.aerospike.com.


Amazon provides multiple types of IP addresses for deploying EC2 instances on two platforms, EC2-Classic and EC2-VPC: private IP addresses, public IP addresses, and Elastic IP addresses. This article discusses IP addressing in the context of the EC2-VPC platform.

A private IP address cannot be reached from the Internet. By default, EC2-VPC instances receive a static private IP address from your VPC's address range.

A public IP address can be reached from the Internet and is assigned by default to default-VPC instances; nondefault-VPC instances must have public IP address assignment explicitly enabled. A public IP address is disassociated from an instance when the instance is stopped or when an ENI or EIP is added to the instance.

An Elastic IP address is a static public IP address that remains associated with an instance even when the instance is stopped and restarted.

With the various types of IP addresses available on AWS, Aerospike features can be configured in a number of ways. The following guidelines can assist you in configuring Aerospike on AWS.

Configure Aerospike to use the AWS Private IP Address for Client Access
Configure Aerospike to use the Elastic/Public IP Address for Client Access
Configure the Aerospike Heartbeat Interface on AWS
Configure Aerospike XDR to Use Locally Accessible IP Addresses
Configure Aerospike XDR to Use Elastic/Public IP Addresses

Configuring Aerospike Server to Use AWS Private IP Addresses

What

Configure the Aerospike Server so that Aerospike clients can access a cluster using the AWS private IP addresses.

How To

Aerospike uses the private IP address by default, so only the default settings need to be verified.

 1. Edit the Aerospike configuration file and verify the network stanza below.


network {
    service {
        address any
        port 3000
    } ...

2. Start the Aerospike server.

	    sudo service aerospike restart

If there are multiple network interfaces (ENIs) on the instance and only one interface provides client access to the node, then the access-address value in the network stanza should be configured.

1. Log in to each node in the cluster.

2. Use ifconfig to list the interfaces and IP addresses.

ifconfig -a

eth0      Link encap:Ethernet  HWaddr 12:CC:47:86:8F:AF  
          inet addr:172.18.10.189  Bcast:172.18.10.255  Mask:255.255.255.0
          inet6 addr: fe80::10cc:47ff:fe86:8faf/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:832 errors:0 dropped:0 overruns:0 frame:0
          TX packets:694 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:98191 (95.8 KiB)  TX bytes:81771 (79.8 KiB)

eth1      Link encap:Ethernet  HWaddr 12:DF:E3:5B:FB:69  
          inet addr:172.18.224.190  Bcast:172.18.224.255  Mask:255.255.255.0
          inet6 addr: fe80::10df:e3ff:fe5b:fb69/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:716 (716.0 b)  TX bytes:1206 (1.1 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:140 (140.0 b)  TX bytes:140 (140.0 b)

3. Edit the /etc/aerospike/aerospike.conf file.

4. Add the access-address line in the network stanza with the private IP address of the selected interface. In this example, the IP address of interface eth1 is used:

network {
    service {
        address any
        port 3000
        access-address 172.18.224.190
    } ...
        

 5. Restart the Aerospike server.

	    sudo service aerospike restart

Why

This configuration is used when both the Aerospike clients and servers are on the same private subnet or routable AWS private subnets.

More Info

You can confirm the access-address by using the Aerospike asinfo command. After the server is restarted, use the following command:

asinfo -v service
172.18.224.190:3000	

You can verify the access-address of the other nodes in the cluster by entering the following command:

asinfo -v services
172.18.224.189:3000,172.18.224.194:3000,172.18.224.195:3000

For more information on the asinfo command:

asinfo
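As a quick sanity check, the services response can also be parsed programmatically. A minimal sketch in Python, assuming the comma-separated host:port format shown above (the helper name parse_services is illustrative, not part of any Aerospike tool):

```python
def parse_services(services: str) -> list[tuple[str, int]]:
    """Split an asinfo 'services' response into (host, port) pairs."""
    nodes = []
    for entry in services.split(","):
        # rpartition handles the host:port split from the right.
        host, _, port = entry.strip().rpartition(":")
        nodes.append((host, int(port)))
    return nodes

if __name__ == "__main__":
    # Example response, copied from the asinfo output above.
    response = "172.18.224.189:3000,172.18.224.194:3000,172.18.224.195:3000"
    print(parse_services(response))
```

Checking that every expected node appears in this list is an easy way to confirm the whole cluster is advertising the addresses you intend.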

Configuring the Aerospike Server to Use Elastic IP Addresses

What

Configure the Aerospike Server so that Aerospike clients can access a cluster using the Elastic IP addresses of the nodes in the cluster.

How To

For each node in the cluster:

 1. Edit the /etc/aerospike/aerospike.conf file.

2. Add the access-address line in the network stanza with the Elastic IP address of the node. The address 54.208.32.99 is the Elastic IP in the example configuration below.


network {
    service {
        address any
        port 3000
        access-address 54.208.32.99 virtual
    } ...
       

3. Restart the Aerospike server.

	   sudo service aerospike restart

Why

An Elastic/Public IP address should be configured when Aerospike clients access the cluster from a public network. The Elastic/Public IP addressing for Aerospike clusters should be configured carefully. If configured incorrectly, the cluster can still appear to function: because an AWS instance can have both a public and a private IP address on an ENI, database operations may actually be forwarded through the seed node acting as a proxy.

For example, assume we have a cluster of three servers, all part of the same AWS VPC. The addresses for each of the servers are as follows.

Node      Internal IP    Public IP
Server_1  172.18.10.76   52.91.243.125
Server_2  172.18.10.82   52.105.13.44
Server_3  172.18.10.26   54.91.34.242

If you perform an ifconfig on the EC2 instance, the network interfaces will display only the private IP addresses. The public IP (Elastic IP) is mapped to the private IP address through network address translation (NAT) and is not part of the server configuration.

ifconfig
eth0      Link encap:Ethernet  HWaddr 12:5A:18:B8:AD:15 
    inet addr:172.18.10.76 Bcast:172.18.10.255 Mask:255.255.255.0 
    inet6 addr: fe80::105a:18ff:feb8:ad15/64 Scope:Link 
    UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1 
    RX packets:809 errors:0 dropped:0 overruns:0 frame:0 
    TX packets:515 errors:0 dropped:0 overruns:0 carrier:0 
    collisions:0 txqueuelen:1000 
    RX bytes:534945 (522.4 KiB)  TX bytes:52512 (51.2 KiB) 

lo        Link encap:Local Loopback  
    inet addr:127.0.0.1  Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING  MTU:65536  Metric:1
    RX packets:2 errors:0 dropped:0 overruns:0 frame:0
    TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0 
    RX bytes:140 (140.0 b)  TX bytes:140 (140.0 b)

You can find out the public IP of your instance by looking on the EC2 Dashboard or by running the aws command on your instance.

[ec2-user@ip-172-18-10-76 ~]$ aws ec2 describe-instances
"NetworkInterfaceId": "eni-b51f0394",
"PrivateIpAddresses": [
    {
        "PrivateDnsName": "ip-172-18-10-76.ec2.internal",
        "Association": {
            "PublicIp": "52.91.243.125",
            "PublicDnsName": "ec2-52-91-243-125.compute-1.amazonaws.com",
            "IpOwnerId": "amazon"
        },
        "Primary": true,
        "PrivateIpAddress": "172.18.10.76"
    } ...
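The public IP can also be extracted programmatically from the describe-instances JSON. A minimal sketch, using an illustrative JSON fragment modeled on the truncated output above (in the real response this object is nested under Reservations[].Instances[].NetworkInterfaces[]):

```python
import json

# Illustrative fragment shaped like the aws ec2 describe-instances output above.
doc = json.loads("""
{
  "NetworkInterfaceId": "eni-b51f0394",
  "PrivateIpAddresses": [
    {
      "PrivateDnsName": "ip-172-18-10-76.ec2.internal",
      "Association": {
        "PublicIp": "52.91.243.125",
        "PublicDnsName": "ec2-52-91-243-125.compute-1.amazonaws.com",
        "IpOwnerId": "amazon"
      },
      "Primary": true,
      "PrivateIpAddress": "172.18.10.76"
    }
  ]
}
""")

# Collect the NAT-mapped public IPs; entries without an Association have no
# public IP and are skipped.
public_ips = [
    addr["Association"]["PublicIp"]
    for addr in doc["PrivateIpAddresses"]
    if "Association" in addr
]
print(public_ips)  # ['52.91.243.125']
```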

When the Aerospike client connects to a seed node in the cluster through its public IP address, it receives the internal IP addresses of all the other nodes in the cluster. The client then attempts to add each node by its internal IP address and times out for every node other than the seed node (see the client stdout below).

2015-11-03 17:55:53.055 INFO Thread 1 Add node BB9A1541F839E12 54.172.74.76:3000
2015-11-03 17:55:54.322 WARN Thread 1 Add node 172.18.10.82:3000 failed: Error Code 11: java.net.SocketTimeoutException: connect timed out
2015-11-03 17:55:55.324 WARN Thread 1 Add node 172.18.10.26:3000 failed: Error Code 11: java.net.SocketTimeoutException: connect timed out

All database operations will succeed at this point because the client uses the seed node as a proxy for the other nodes. However, performance will be poor because every operation is routed through the seed node. Below are the timeout errors when the client attempts to connect to one of the private IP addresses.

2015-11-03 17:55:56.417 WARN Thread 8 Add node 172.18.10.82:3000 failed: Error Code 11: java.net.SocketTimeoutException: connect timed out
2015-11-03 17:55:56.442 write(tps=12 timeouts=0 errors=0) read(tps=62 timeouts=0 errors=0) total(tps=74 timeouts=0 errors=0)
2015-11-03 17:55:57.418 WARN Thread 8 Add node 172.18.10.26:3000 failed: Error Code 11: java.net.SocketTimeoutException: connect timed out

The client will continue trying to add the nodes by their internal IP addresses and will keep receiving timeouts. Even though operations are succeeding, the timeouts are a sign that there may be a network problem. You can search the Aerospike log file (/var/log/aerospike/aerospike.log) to confirm that operations are being proxied to the nodes with internal IP addresses.

grep  proxy /var/log/aerospike/aerospike.log

Nov 06 2015 01:07:26 GMT: INFO (info): (hist.c::137) histogram dump: proxy (3251 total) msec

Under normal conditions the number of proxy operations should be 0; here the seed node has proxied 3251 transactions. If you are using AMC with a public IP, you may also notice that only one node is listed, yet when you use the info command from the asadm tool on one of the nodes, all nodes are listed.
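The grep check above can be automated. A minimal sketch, assuming the "histogram dump: proxy (N total) msec" line format shown above (the function name proxy_total is illustrative):

```python
import re

# Matches Aerospike histogram-dump lines like:
#   ... histogram dump: proxy (3251 total) msec
PROXY_RE = re.compile(r"histogram dump: proxy \((\d+) total\)")

def proxy_total(log_lines) -> int:
    """Return the last proxy histogram total seen, or 0 if none is found."""
    total = 0
    for line in log_lines:
        m = PROXY_RE.search(line)
        if m:
            total = int(m.group(1))
    return total

if __name__ == "__main__":
    sample = [
        "Nov 06 2015 01:07:26 GMT: INFO (info): (hist.c::137) "
        "histogram dump: proxy (3251 total) msec",
    ]
    print(proxy_total(sample))  # 3251 indicates operations are being proxied
```

A nonzero result on a steadily running cluster is the signal to review the access-address configuration described below.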

To correct the problem of an external Aerospike client using internal IP addresses, add the access-address parameter to the network stanza of the aerospike.conf file.

network {
    service {
        address any
        port 3000
        access-address 54.208.32.99 virtual
    } ...

When the access-address parameter is used, the database verifies that the address is a valid address for one of the network interfaces. If it is not, the database will not start (Aerospike log output below).

Nov 04 2015 01:15:31 GMT: INFO (cf:misc): (id.c::119) Node ip: 172.18.10.26
Nov 04 2015 01:15:31 GMT: INFO (cf:misc): (id.c::327) Heartbeat address for mesh: 172.18.10.26
Nov 04 2015 01:15:31 GMT: INFO (config): (cfg.c::3231) Rack Aware mode not enabled
Nov 04 2015 01:15:31 GMT: INFO (config): (cfg.c::3234) Node id bb96f7dbef24f12
Nov 04 2015 01:15:31 GMT: CRITICAL (config): (cfg.c:3265) external address '54.208.32.99' does not match service addresses '172.18.10.26:3000'
Nov 04 2015 01:15:31 GMT: WARNING (as): (signal.c::135) SIGINT received, shutting down
Nov 04 2015 01:15:31 GMT: WARNING (as): (signal.c::138) startup was not complete, exiting immediately

The additional keyword "virtual" must be appended to the access-address line so that the database starts properly. With access-address set along with the keyword virtual, an Aerospike client using public IPs will have no problem communicating with all of the nodes in the cluster, and since operations are no longer proxied through the seed node, performance will be significantly better.

More Info

You can confirm the access-address by using the Aerospike asinfo command. After the server is restarted, use the following command:

asinfo -v service
54.208.32.99:3000

You can verify the access-address of the other nodes in the cluster by entering the following command:

asinfo -v services
52.105.13.44:3000,54.91.34.242:3000

For more information on the asinfo command:

asinfo

Configuring Aerospike Heartbeat on AWS

What

Configure the Aerospike server to use the private IP address for transmitting and receiving heartbeats.

How To

1. For each node edit the /etc/aerospike/aerospike.conf file.


2. Modify the mode line in the heartbeat stanza to mesh.

    network {
         service {
            address any
            port 3000
         }

        heartbeat {
            mode mesh
            port 3002 # Heartbeat port for this node.
        }
    }   ...


3. Add a mesh-seed-address-port line with the private IP address of each seed node in the cluster. Save the changes.

    network {
        service {
            address any
            port 3000
        }

        heartbeat {
            mode mesh
            port 3002 # Heartbeat port for this node.

            # List one or more other nodes, one ip-address & port per line:
            mesh-seed-address-port 172.18.10.76 3002
            mesh-seed-address-port 172.18.10.82 3002
        }
    } ...

4. Restart the Aerospike server.
	sudo service aerospike restart

Why

Amazon Web Services does not allow multicast addressing, so the default heartbeat mode must be changed from multicast to mesh. After changing the mode, the port and the mesh-seed-address-port settings must be set. The port should be set to the port used for the heartbeat. The mesh-seed-address-port setting should be set to the IP address and heartbeat port of a node in the cluster. It is a good idea to list more than one mesh seed node, because if the only seed node goes offline, a restarting node may not be able to rejoin the cluster.
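Before restarting the cluster, you may want to confirm that the mesh seed addresses are reachable on the heartbeat port. A minimal sketch, assuming a plain TCP connection is representative of heartbeat reachability (the seed list mirrors the example mesh-seed-address-port entries above; the helper name is illustrative):

```python
import socket

def seed_reachable(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Illustrative seed list, taken from the mesh-seed-address-port example above.
    seeds = [("172.18.10.76", 3002), ("172.18.10.82", 3002)]
    for host, port in seeds:
        status = "reachable" if seed_reachable(host, port) else "UNREACHABLE"
        print(f"{host}:{port} {status}")
```

An unreachable seed usually points at a security group or network ACL that does not allow the heartbeat port between the nodes.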

In the current versions of Aerospike (heartbeat-protocol=v3), the heartbeat cannot be configured to use the Elastic or Public IP address because the node does not have any interface that binds that IP; the public IP is made available via NAT in the EC2 infrastructure, so the node does not see it directly.

More Info

You can confirm the heartbeat configuration by using the Aerospike asinfo command. After the server is restarted, use the following command:

asinfo -v  'get-config:context=network.heartbeat' -l
heartbeat-mode=mesh
heartbeat-protocol=v2
heartbeat-address=172.18.10.76
heartbeat-port=3002
heartbeat-interval=150
heartbeat-timeout=10

For more information on the asinfo command:
asinfo

Configuring Aerospike XDR IP Addresses on AWS

Aerospike XDR networking can be configured in a couple of different ways that integrate XDR with AWS networking.

Configuring Aerospike XDR with locally accessible IP Addresses on AWS

What

Configure XDR when the remote cluster’s IP addresses are locally accessible.

How To

1. For each node edit the /etc/aerospike/aerospike.conf file.

2. Configure the dc-node-address-port parameter in the datacenter sub-stanza of the XDR stanza. Set dc-node-address-port to the local IP address of one of the nodes of the remote cluster.
xdr {
    # http://www.aerospike.com/docs/operations/configure/cross-datacenter
    enable-xdr true # Globally enable/disable XDR on local node.
    namedpipe-path /tmp/xdr_pipe # XDR to/from Aerospike communications channel.
    digestlog-path /opt/aerospike/digestlog 100G # Track digests to be shipped.
    errorlog-path /var/log/aerospike/asxdr.log # Log XDR errors.
    xdr-pidfile /var/run/aerospike/asxdr.pid # XDR PID file location.
    local-node-port 3000 # Port on local node used to read records etc.
    info-port 3004 # Port used by tools to monitor XDR health, current config, etc.
    xdr-compression-threshold 1000

    # http://www.aerospike.com/docs/operations/configure/cross-datacenter/network
    # Canonical name of the remote datacenter.
    datacenter REMOTE_DC_1 {
        dc-node-address-port 172.18.224.189 3000
    } ...

3.  Restart the Aerospike server.

sudo service aerospike restart

Why

A simple example of this configuration on AWS occurs when the two clusters are in the same virtual private cloud (VPC), but each cluster is in a different availability zone. The local and remote clusters can both be reached on the local internal network.

The following is an example of two clusters of two servers each. Below is a table of the AWS configuration of the servers.

Cluster  AZ          VPC   Internal IP     Ext IP
Local    us-east-1a  VPC1  172.18.10.189   54.175.209.97
Local    us-east-1a  VPC1  172.18.10.190   54.164.210.61
Remote   us-east-1b  VPC1  172.18.224.190  52.90.56.52
Remote   us-east-1b  VPC1  172.18.224.68   52.23.188.21

Since all the instances are accessible on the same local network, the XDR IP addressing for the first configuration is simple. There is a single line in the XDR stanza of the Aerospike configuration file that has to be set for each node in the local cluster: dc-node-address-port must be set to the internal IP address of one of the nodes of the remote cluster.

After setting the dc-node-address-port set on each of the local nodes, XDR can be started to begin replication between the two clusters.

Configuring Aerospike XDR with Public/Elastic IP Addresses on AWS by Using the access-address Setting

Communications between the local and remote clusters for XDR can be configured in one of two ways. The first method is to utilize the access-address setting on the remote cluster; the second is to map the private IP addresses to the public IP addresses in the XDR datacenter stanza of the local cluster's Aerospike configuration files.

What

Configuring XDR by setting access-address on the remote Aerospike servers to the public (Elastic) IP address.

How To

Local Cluster Configuration

1. Edit the /etc/aerospike/aerospike.conf file on each of the local cluster nodes.


2. Configure the dc-node-address-port parameters in the datacenter sub-stanza of the XDR stanza with the addresses of the remote cluster nodes.

xdr {
      # http://www.aerospike.com/docs/operations/configure/cross-datacenter

    enable-xdr true # Globally enable/disable XDR on local node.
    namedpipe-path /tmp/xdr_pipe # XDR to/from Aerospike communications channel.
    digestlog-path /opt/aerospike/digestlog 100G # Track digests to be shipped.
    errorlog-path /var/log/aerospike/asxdr.log # Log XDR errors.
    xdr-pidfile /var/run/aerospike/asxdr.pid # XDR PID file location.
    local-node-port 3000 # Port on local node used to read records etc.
    info-port 3004 # Port used by tools to monitor XDR health, current config, etc.
    xdr-compression-threshold 1000

    # http://www.aerospike.com/docs/operations/configure/cross-datacenter/network

    # Canonical name of the remote datacenter.
    datacenter REMOTE_DC_1 {
            dc-node-address-port 10.2.0.154 3000
            dc-node-address-port 10.2.0.201 3000

    }
} ...

Remote Cluster Configuration

1. For each node edit the /etc/aerospike/aerospike.conf file.

2. Configure the access-address of each node in the remote cluster using the Elastic IP and the keyword virtual.  

network {
    service {
            address any
            port 3000
            access-address 52.90.223.53 virtual
    } ... 
    

3. Restart the Aerospike server.

	sudo service aerospike restart 

Why

The previous example demonstrates the use of access-address to set up communications for XDR. This configuration is more appropriate when the two clusters cannot communicate over a private network and the Aerospike clients access the clusters over the public network.

Configuring Aerospike XDR with Public/Elastic IP Addresses on AWS by Mapping Internal to External IP Addresses

What

Configuring XDR by mapping the private IP addresses of the remote cluster nodes to the public IP addresses of the remote cluster nodes.

How To

1. For each node in the local cluster, edit the /etc/aerospike/aerospike.conf file.

2. Configure the dc-int-ext-ipmap entries in the datacenter sub-stanza of the XDR stanza, mapping the private IP address of each remote node to its public (Elastic) IP address.


xdr {
    # http://www.aerospike.com/docs/operations/configure/cross-datacenter

    enable-xdr true # Globally enable/disable XDR on local node.
    namedpipe-path /tmp/xdr_pipe # XDR to/from Aerospike communications channel.
    digestlog-path /opt/aerospike/digestlog 100G # Track digests to be shipped.
    errorlog-path /var/log/aerospike/asxdr.log # Log XDR errors.
    xdr-pidfile /var/run/aerospike/asxdr.pid # XDR PID file location.
    local-node-port 3000 # Port on local node used to read records etc.
    info-port 3004 # Port used by tools to monitor XDR health, current config, etc.
    xdr-compression-threshold 1000

    # http://www.aerospike.com/docs/operations/configure/cross-datacenter/network

    # Canonical name of the remote datacenter.
    datacenter REMOTE_DC_1 {
        dc-node-address-port 10.2.0.154 3000
        dc-node-address-port 10.2.0.201 3000

        # Remote nodes' internal-to-external ip map - include all remote nodes.
        # These are needed only when there are multiple NICs.
        dc-int-ext-ipmap 10.2.0.154 52.90.223.53
        dc-int-ext-ipmap 10.2.0.201 52.23.170.121
    }
} ...
     

3. Restart the Aerospike server.

sudo service aerospike restart

Each remote node should also have its access-address value set to its external IP address in the network stanza of its aerospike.conf file.

Why

This XDR configuration maps the local addresses to external IP addresses that the two clusters can use to communicate. A simple example of the second configuration can be created on AWS by placing the local cluster in one availability zone with its own VPC and the remote cluster in another availability zone with a second VPC. In this case, either the external IP addresses can be used for communication between the two clusters, or VPC peering can be implemented to create routing between the VPCs. For this example, the external IP addresses are used to enable XDR between the two clusters. This configuration is more appropriate when the Aerospike clients access the cluster through the private network and XDR communications are over the public network.

The second configuration introduces a second VPC, which places each cluster on a separate network. The following is a table of the public and private IP addresses for the example.

Cluster  AZ          VPC   Internal IP    Ext IP
Local    us-east-1a  VPC1  172.18.10.18   54.175.209.97
Local    us-east-1a  VPC1  172.18.10.190  54.164.210.61
Remote   us-east-1b  VPC2  10.2.0.15      52.90.223.53
Remote   us-east-1b  VPC2  10.2.0.201     52.23.170.121

The second XDR network configuration requires mapping the internal to external IP addresses of all the nodes of the remote cluster. First, the local address of each remote node should have a dc-node-address-port entry in the aerospike.conf file. After all the nodes have been added, each local address is mapped with a dc-int-ext-ipmap line in the XDR stanza of the aerospike.conf file. With the IP addresses mapped on the local cluster and access-address set to the external addresses on the remote cluster, XDR can be started to begin replication between the two clusters.
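The per-node entries in the datacenter sub-stanza can be generated mechanically from the address table. A minimal sketch, using the example addresses above (the helper datacenter_lines and the REMOTE_NODES mapping are illustrative, not part of any Aerospike tooling):

```python
# Internal -> external IP map for the remote cluster, taken from the
# example address table above.
REMOTE_NODES = {
    "10.2.0.15": "52.90.223.53",
    "10.2.0.201": "52.23.170.121",
}

def datacenter_lines(nodes: dict, port: int = 3000) -> list[str]:
    """Emit dc-node-address-port and dc-int-ext-ipmap lines for a datacenter stanza."""
    lines = [f"dc-node-address-port {ip} {port}" for ip in nodes]
    lines += [f"dc-int-ext-ipmap {ip} {ext}" for ip, ext in nodes.items()]
    return lines

if __name__ == "__main__":
    print("\n".join(datacenter_lines(REMOTE_NODES)))
```

Generating the lines this way keeps the dc-node-address-port list and the dc-int-ext-ipmap entries consistent, which matters because every remote node must appear in the map.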