Redis was not designed to be a distributed database, so any replication has to be provided by a module on the application side or by middleware. Aerospike was designed as a distributed database and creates replica copies at the server layer according to your namespace definition. It also automatically handles nodes joining or leaving the cluster, rebalancing the data when that happens so it stays in compliance with your namespace configuration.
In your case, it seems you want three copies of the data: one in a master partition and two in replica partitions. That is replication-factor 3 in that namespace. There is no 'master' node and there are no 'slave' nodes. Each node in Aerospike holds an even share of the master partitions (there are 4096 partitions per namespace), and if you set a replication factor greater than 1, each node also holds an even share of the replica partitions. This means no node in Aerospike is special and there is no single point of failure. All the nodes take an even load of reads and writes thanks to this distribution.
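For illustration, a namespace stanza along these lines in aerospike.conf would give you three copies of every record. This is only a sketch: the namespace name, memory size, and file path are placeholders, and the exact parameters depend on your server version, so check the configuration reference for your release.

```
# aerospike.conf (same stanza on every node) -- names and sizes are placeholders
namespace mydata {
    replication-factor 3              # 1 master copy + 2 replica copies per record
    memory-size 4G                    # size to your working set
    storage-engine device {
        file /opt/aerospike/data/mydata.dat
        filesize 16G
    }
}
```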
By default the Aerospike client reads from the node holding the record's master partition. The replica copies are there for durability, so data doesn't become unavailable if you lose a node (or two nodes, in this case). You can change the read replica policy from MASTER to ANY to let the client distribute reads across the nodes.
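As a sketch with the Python client, assuming a local cluster and placeholder namespace/set/key names, the read policy would look something like this:

```python
import aerospike

# Connect to the cluster (host and port are placeholders).
config = {'hosts': [('127.0.0.1', 3000)]}
client = aerospike.client(config).connect()

# Allow reads to be served by master or replica partitions,
# instead of the default master-only behaviour.
read_policy = {'replica': aerospike.POLICY_REPLICA_ANY}

key = ('mydata', 'users', 'user-1')  # (namespace, set, user key)
_, meta, bins = client.get(key, policy=read_policy)
print(meta, bins)

client.close()
```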
One more thing: Redis doesn't have persistent storage out of the box. If you want to use Aerospike similarly, as a distributed cache, configure the storage engine of your namespace to be data in memory without persistence.
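A minimal sketch of such a cache-style namespace, again with placeholder name and size, and subject to the syntax of your server version:

```
# aerospike.conf -- pure in-memory cache, no persistence
namespace mycache {
    replication-factor 3
    memory-size 4G                # all data held in RAM
    storage-engine memory         # no storage device or file behind it
}
```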