Is it possible for me to create a cluster environment on OS X using Vagrant, by possibly creating multiple Vagrant boxes each with aerospike running on them for example. If so, can anyone point me in the direction of how to set up the configs for each box etc - very very new to Aerospike
Using Vagrant boxes on the same Mac for an Aerospike cluster is fairly common for a development environment.
A few pointers:
Make sure the Vagrant boxes can see each other (ping each other).
The node ID that establishes an Aerospike node's cluster identity is a combination of MAC address and port. If you have 2 or more Vagrant images in the same OS X environment, the cluster may not be able to distinguish between the 2 nodes, since they have the same MAC address and port. You should use different ports in the aerospike.conf file, installed by default in the /etc/aerospike folder.
Mesh configuration is the easiest way to make the cluster work in this scenario. You should be able to get multicast to work too; make sure multicast is enabled and that you specify interface-address in the aerospike.conf file.
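As a sketch, a mesh heartbeat stanza in each node's aerospike.conf might look like the following (the IP addresses and heartbeat port are illustrative placeholders; list every node's address as a seed):

```
network {
  heartbeat {
    mode mesh
    port 3002

    # hypothetical addresses for the two Vagrant boxes
    mesh-seed-address-port 33.33.33.91 3002
    mesh-seed-address-port 33.33.33.92 3002

    interval 150
    timeout 10
  }
}
```

Each node can list itself as a seed too; nodes simply skip their own address when connecting.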
Thanks Samir. I was looking for more of an idiot's guide, but I worked it all out in the end - created a Vagrant multi-machine setup with instances of Aerospike - all runs well.
Another way to create a cluster is to run multiple Aerospike daemons in the same instance, each on a separate port. This offers less performance but is easier, and is adequate in a dev context.
Initialize the server a couple of times (or more), each on a unique port.
Open a terminal into each of the initialized directories and start the instance.
The cluster will form automatically using multicast. Tail the log.
wget -O aerospike.tgz http://www.aerospike.com/download/server/latest/artifact/tgz
tar zxvf aerospike.tgz
cd aerospike-server
# you may want to edit the default config inside share/etc/aerospike.conf
./bin/aerospike init --home ./i1 --instance i1 --service-port 3000
./bin/aerospike init --home ./i2 --instance i2 --service-port 3010
# you can initialize more instances. now in each instance:
cd i1
sudo ./bin/aerospike start
tail -f var/log/aerospike.log
# in another terminal window
cd i2
sudo ./bin/aerospike start
tail -f var/log/aerospike.log
The example above created a cluster of two Aerospike daemons running in the Vagrant instance on ports 3000 and 3010. You can connect to those with the client from the OS X side once you find the IP address of the Vagrant instance.
In this case the client can connect to either ('192.168.119.3', 3000) or ('192.168.119.3', 3010) as the seed node.
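As a hedged sketch with the Aerospike Python client (the IP address is the example value from above and will differ on your machine; the connect and put calls are commented out since they need a live server):

```python
# Sketch of connecting to the two-daemon cluster via a single seed node.
# Assumes the `aerospike` Python client package is installed.
config = {'hosts': [('192.168.119.3', 3000)]}  # or port 3010 - either seed works

# import aerospike
# client = aerospike.client(config).connect()  # discovers the rest of the cluster
# client.put(('test', 'demo', 'key1'), {'greeting': 'hello'})
# client.close()
print(config['hosts'])
```

The client only needs one reachable seed; it learns the other cluster members from that node.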
This cluster will by default contain an in-memory namespace called test. If you want to nuke what is on it, make sure to take down all the nodes in the cluster and remove any artifacts.
# in each aerospike server instance directory
sudo ./bin/aerospike stop
rm var/smd/*
rm var/udf/lua/*
rm var/log/*
Ha ha, okay - well - I have a Vagrant setup as follows…
This is my Vagrantfile:
Vagrant.configure(2) do |config|

  #### DEFINE BOX 1 ####
  config.vm.define "aerospike_vm_1" do |aerospike_vm_1|
    aerospike_vm_1.vm.box = "aerospike/centos-6.5"
    aerospike_vm_1.vm.provider :virtualbox do |vb|
      vb.name = "aerospike_vm_1"
    end
    # Network settings for this box:
    aerospike_vm_1.vm.network "private_network", ip: "33.33.33.91"
  end
  #### END OF BOX 1 ####

  #### DEFINE BOX 2 ####
  config.vm.define "aerospike_vm_2" do |aerospike_vm_2|
    aerospike_vm_2.vm.box = "aerospike/centos-6.5"
    aerospike_vm_2.vm.provider :virtualbox do |vb|
      vb.name = "aerospike_vm_2"
    end
    # Network settings for this box:
    aerospike_vm_2.vm.network "private_network", ip: "33.33.33.92"
  end
  #### END OF BOX 2 ####

  #### DEFINE BOX 3 ####
  config.vm.define "aerospike_vm_3" do |aerospike_vm_3|
    aerospike_vm_3.vm.box = "aerospike/centos-6.5"
    aerospike_vm_3.vm.provider :virtualbox do |vb|
      vb.name = "aerospike_vm_3"
    end
    # Network settings for this box:
    aerospike_vm_3.vm.network "private_network", ip: "33.33.33.93"
  end
  #### END OF BOX 3 ####

end
I then have this as the identical aerospike.conf file on each VM box…
# Aerospike database configuration file.

service {
    user root
    group root
    paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
    pidfile /var/run/aerospike/asd.pid
    service-threads 4
    transaction-queues 4
    transaction-threads-per-queue 4
    proto-fd-max 15000
}

logging {
    file /var/log/aerospike/aerospike.log {
        context any info
    }
}

network {
    service {
        address any
        port 3000
        access-address 33.33.33.91 # (92 / 93 on the other servers)
        network-interface-name eth2 # Needed for Node ID
    }
    heartbeat {
        mode multicast
        address 239.1.99.2
        port 9918
        interface-address 33.33.33.91
        interval 150
        timeout 10
    }
    fabric {
        port 3001
    }
    info {
        port 3003
    }
}

namespace temp01 {
    replication-factor 2
    memory-size 1G
    default-ttl 0
    storage-engine device {
        file /opt/aerospike/data/temp01.dat
        filesize 2G
        data-in-memory false
    }
}
That seems to work really well, and using Vagrant Manager (http://www.vagrantmanager.com) I can spin them up individually or all at once as and when I need them. Works great for DEV, and you can of course set up the namespaces however you want.
While using the above combination, I got the error below:
Jul 16 2016 12:59:07 GMT: CRITICAL (config): (cfg.c:3375) external address 'ACCESS' does not match service addresses '10.0.2.15:3000;172.28.128.8:3000;33.33.33.95:3000'
Thanks for your example. I set up your config in 3 VirtualBox VMs.
But I don't know how to use the Aerospike client.
I tried to create a client with an array of the hosts of the 3 Aerospike servers; when I put data, it saves to all of them. Is that correct, or am I doing something wrong?
I read the Aerospike docs, which said something like: "Aerospike doesn't have a master node". I think I can just create a client with one of them, and the data will be balanced and distributed between the 3 Aerospike servers.
Can you help me with using the Aerospike client in this case?
Yes, you are right: if you have 3 servers in your cluster, then you can connect and write to any single one of them, and the data will be replicated across however many you have set as your replication factor. Sounds as if it is working fine.
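As a hedged sketch with the Python client (the namespace, set, and IP are illustrative values from this thread; the client calls are commented out since they need a reachable cluster):

```python
# Connect through any single node; the client routes each read and write
# to the node that owns that record's partition, and replicas follow
# replication-factor automatically. Assumes the `aerospike` package.
seed_config = {'hosts': [('33.33.33.91', 3000)]}  # any one node is enough

# import aerospike
# client = aerospike.client(seed_config).connect()
# key = ('temp01', 'demo', 'user1')
# client.put(key, {'name': 'test'})
# (key, meta, record) = client.get(key)  # succeeds no matter which node holds it
# client.close()
print(seed_config['hosts'])
```

If a get through a different seed node returns the same record, distribution is working; you do not need to write to each server yourself.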
Thank you for your reply.
Can you give me some command or code to check whether my cluster setting is correct? Because when I try putting a record into one of the servers' IPs, just one server has the data.
I see there is an "Add Node" button; I tried using that button to add the IPs of the other two servers. But I don't know whether that is right or wrong, because I see 1 node up, 2 nodes down, and my data doesn't replicate to another node.