One Aerospike cluster for two AWS regions



We have one cluster in a single AWS region and clients in both the local and a non-local region. All our Aerospike nodes communicate with each other over their internal IP addresses, but each node has an external IP as well.

In the local region the clients connect to the cluster via the internal IPs and everything looks fine. The external IPs of all the nodes are aggregated in a DNS pool; this was done for fault-tolerance purposes.

But when we connected the clients from the non-local region via this pool, we noticed that the PROXY-REQUESTS count increased.

So, I have two questions:

  1. Why has the proxy-requests count increased?

  2. What is the best practice for setting up a cluster with different types of clients (some using local IPs, some using external IPs)?

Best regards, Yaroslav.


Which client are you using? Some of our clients support internal to external server IP address mappings.

You may also configure all of your clients to communicate with the public interface of each node. To do this, you would need to set access-address to the public IP as a virtual address:

network {
  service {
    access-address PUBLIC_IP virtual
  }
}

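After restarting a node with access-address set, you can confirm which address it advertises to clients. A minimal sketch, using a sample response rather than a live node (the real command would be `asinfo -h NODE_IP -v service`, where NODE_IP is a placeholder; the response format addr:port is assumed):

```shell
# Sample response an 'asinfo -v service' call might return (assumed format).
service_addr='54.203.17.5:3000'

# Strip the port to get the advertised IP -- this should now be the public IP.
echo "${service_addr%:*}"
# → 54.203.17.5
```

Against a live cluster you would run this check on every node to verify the whole DNS pool advertises public addresses.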
I suspect that the remote clients are only able to connect to the node you have configured as the seed. Since this node does not own all of the data, it will need to proxy requests to the appropriate nodes.


We are using the libevent client now. Does it support this?


No, I do not believe the libevent client has the feature to translate internal IP addresses to external IP addresses.

You could use the virtual addressing that I mentioned above. You could also use AWS Direct Connect to create a routable link between the VPCs.

I would caution against exposing the database layer on a public, internet-facing IP address.