Hi, I want to deploy multiple Aerospike instances on a single machine to reduce the size of each individual instance. However, the official documentation does not recommend deploying multiple instances. I would like to know the reasons behind this recommendation and what the drawbacks of such a deployment are.
asd --help | grep -A 2 multiple
(Enterprise edition only.) If running multiple instances of Aerospike on one
machine (not recommended), each instance must be uniquely designated via this
option.
I had never seen this output (I had not run the help command on the asd binary itself). I am not sure the --instance option would be needed, though (but I may be wrong):
--instance <0-15>
(Enterprise edition only.) If running multiple instances of Aerospike on one
machine (not recommended), each instance must be uniquely designated via this
option.
I thought one could simply make sure the configurations (ports) do not clash between instances.
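For illustration, a minimal sketch of what I mean, two config files on one host whose ports and paths do not overlap (all port numbers, paths, and the mesh heartbeat choice here are arbitrary examples, not a tested layout):

```
# /etc/aerospike/aerospike-a.conf  (instance A, illustrative values)
service {
    pidfile        /var/run/aerospike/asd-a.pid
    work-directory /opt/aerospike-a
}
network {
    service   { address any  port 3000 }   # client port
    heartbeat { mode mesh    port 3002 }
    fabric    { port 3001 }
    info      { port 3003 }
}

# /etc/aerospike/aerospike-b.conf (instance B) would use e.g. ports
# 4000-4003 and its own pidfile/work-directory, so nothing clashes.
```

Each daemon would then be started against its own file, e.g. `asd --config-file /etc/aerospike/aerospike-a.conf`.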
Having said that, to your question: I think the ‘not recommended’ is just that it adds complexity to the deployment and to managing the configuration. As far as I know, this is definitely something some users do to take full advantage of multi-socket systems without having to spin up VMs or Pods to make use of larger hosts. As a matter of fact, Aerospike even has a configuration parameter, auto-pin, that can be set to numa for this very purpose. Unfortunately there isn’t good documentation on this, which is probably another reason it isn’t recommended: it keeps less experienced users from running into random issues.
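For reference, auto-pin lives in the service context of the config file; a minimal sketch (the thread count is illustrative, and the exact interaction with --instance is an Enterprise-edition detail, so treat this as an assumption to verify against the docs for your server version):

```
service {
    auto-pin numa    # pin service threads and memory to a NUMA node;
                     # other accepted values include none and cpu
}
```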
Can you talk more about the problem you’re trying to solve? What does running multiple instances of Aerospike solve that you can’t solve with 1 single instance?
Hi Albot, thanks for your time.
Let me describe our Aerospike usage scenario:
We have many customers using Aerospike, and they hope to scale down and decentralize the deployed instances. This approach offers two benefits:
1. Since there are many customers, one customer's Aerospike deployment can be distributed across multiple machines.
2. When a machine fails, the impact on any single customer is minimized, and data resynchronization is faster.
While containers could achieve similar functionality, they would introduce additional operational overhead for container management.
So we are attempting multi-instance deployment, but it seems that this deployment approach is rarely used.
Any reason you don’t want to try isolating customers by using dedicated namespaces or sets with set quotas? I frequently use namespaces as a resource container/limiter.
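As an illustration of the namespace approach, a sketch of per-customer namespaces in one instance (the names, sizes, and device path are made up, and exact storage parameters vary between server versions, so this is a shape to adapt rather than a drop-in config):

```
namespace customer_a {
    replication-factor 2
    memory-size 4G                 # per-namespace memory budget
    storage-engine device {
        device /dev/nvme0n1p1      # dedicated device for this customer
    }
}

namespace customer_b {
    replication-factor 2
    memory-size 8G
    storage-engine memory          # smaller, in-memory-only tenant
}
```

Each namespace gets its own memory and storage budget, so one tenant filling up does not starve the others.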
If I wanted to use dedicated daemons to split resources, I would probably choose to use their Kubernetes operator and run this as a Kubernetes deployment instead. Is that an option?
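For reference, deployments with the Aerospike Kubernetes Operator are driven by an AerospikeCluster custom resource; a heavily trimmed sketch of one per-customer cluster (the API version, image tag, sizes, and field layout are examples from memory and may differ by operator release, so check the operator's CRD reference before using):

```yaml
apiVersion: asdb.aerospike.com/v1
kind: AerospikeCluster
metadata:
  name: customer-a
spec:
  size: 3                                    # three pods, one asd each
  image: aerospike/aerospike-server-enterprise:7.0.0.0
  aerospikeConfig:
    service:
      feature-key-file: /etc/aerospike/secret/features.conf
    namespaces:
      - name: customer_a
        replication-factor: 2
        storage-engine:
          type: memory
```

The operator then handles rolling restarts, scaling, and config changes, which is most of the operational overhead you would otherwise take on by hand.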
The option of running multiple Aerospike daemons per node is only used as a hyperscale-type problem solver, where we need to run one daemon per NUMA node, for example. This introduces a lot of complexity out of necessity for scale… if you can solve this another way, I would steer clear of it.