Handling timeouts with a counter bin

For counters you’d use the data-in-index storage optimization, which places the 8-byte integer directly in the record’s primary index entry. This means you don’t spend an extra 8 bytes on top of the initial 64 bytes of index metadata. It also means that reads and writes don’t have to first find the record’s metadata in the primary index and then jump to a separate storage location in DRAM; everything happens in place, which means much lower latency.
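
As a rough sketch, a namespace set up this way could look like the following in aerospike.conf. The namespace name, sizes, and file path are placeholders; data-in-index needs a single-bin namespace whose data is kept in memory:

```
namespace counters {
    replication-factor 2
    memory-size 4G
    single-bin true        # data-in-index requires a single-bin namespace
    data-in-index true     # keep the 8-byte counter inside the 64-byte index entry
    default-ttl 0          # counters typically shouldn't expire

    storage-engine device {
        file /opt/aerospike/data/counters.dat
        filesize 16G
        data-in-memory true   # data-in-index also requires the data to be in memory
    }
}
```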

For in-memory namespaces, if you want to push latencies down further, you should make use of more index sprigs in your configuration (partition-tree-sprigs) and CPU pinning (auto-pin cpu).
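
As a sketch, these live in the service and namespace contexts of aerospike.conf; the sprig count is illustrative (size it to your record count), and check the auto-pin prerequisites for your kernel and NIC setup:

```
service {
    auto-pin cpu    # pin service threads to CPUs for lower, steadier latency
}

namespace counters {
    partition-tree-sprigs 4096    # power of 2; more sprigs mean shallower index trees and less lock contention
    # ... rest of the namespace stanza as above
}
```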

Read up on the request flow: the standard Aerospike configuration is such that every update locks the record before modifying it. The node holding the record’s master partition communicates the write to the node(s) holding the record’s replica partitions. Only after both the local write and the replica write(s) are acknowledged does the client get back a success on the write.
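
Tying that back to the timeout in the title, here is a minimal Java client sketch; the host, namespace, set, bin, and timeout values are assumptions, not recommendations. It increments a counter bin while bounding how long the client waits for that master-plus-replica round trip:

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.AerospikeException;
import com.aerospike.client.Bin;
import com.aerospike.client.Key;
import com.aerospike.client.policy.CommitLevel;
import com.aerospike.client.policy.WritePolicy;

public class CounterIncrement {
    public static void main(String[] args) {
        // Host, port, namespace, set and key are placeholders for your cluster.
        try (AerospikeClient client = new AerospikeClient("127.0.0.1", 3000)) {
            WritePolicy policy = new WritePolicy();
            policy.commitLevel = CommitLevel.COMMIT_ALL; // the default: success only after replica writes
            policy.socketTimeout = 50;   // ms per network attempt
            policy.totalTimeout = 200;   // ms overall budget, including retries
            policy.maxRetries = 2;

            Key key = new Key("test", "counters", "page:home");

            try {
                // add() increments the integer bin atomically on the master, then replicates.
                // In a single-bin namespace, use the empty bin name: new Bin("", 1).
                client.add(policy, key, new Bin("count", 1));
            } catch (AerospikeException.Timeout e) {
                // The increment may or may not have landed; treat it as in doubt.
                System.err.println("Increment timed out: " + e.getMessage());
            }
        }
    }
}
```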

If you have multiple Aerospike nodes and are concerned about consistency in the rare event of a cluster split, you should read up on the strong consistency mode of Aerospike Enterprise Edition 4. It gives you two consistency modes to choose from: Linearizable Strong Consistency and Sequential (session) Strong Consistency.
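
Which of the two you get is chosen per read through the client policy. A minimal Java sketch, assuming a namespace already configured with strong-consistency true and the same placeholder names as above:

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Record;
import com.aerospike.client.policy.Policy;
import com.aerospike.client.policy.ReadModeSC;

public class StrongConsistencyReads {
    public static void main(String[] args) {
        try (AerospikeClient client = new AerospikeClient("127.0.0.1", 3000)) {
            Key key = new Key("counters", null, "page:home");

            // Linearizable: the read reflects the latest committed write cluster-wide,
            // at the cost of an extra round trip among the replicas.
            Policy linearize = new Policy();
            linearize.readModeSC = ReadModeSC.LINEARIZE;
            Record strict = client.get(linearize, key);

            // Sequential (session): reads never go backwards within this client session,
            // with no extra round trip.
            Policy session = new Policy();
            session.readModeSC = ReadModeSC.SESSION;
            Record relaxed = client.get(session, key);

            System.out.println(strict + " / " + relaxed);
        }
    }
}
```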

If you feel skeptical about a system you don’t know, you should be doing two things. First, educate yourself: I provided documentation for you to read. Second, test it for yourself. The community can help you with configuration tips, and the asbenchmark tool (in the tools package) can assist you with testing your cluster’s performance after you configure it.
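
For example, a run along these lines exercises a read/update workload on 8-byte integer values; the host, key count, and thread count are placeholders, and you should double-check the flags against the usage output of your asbenchmark version:

```
asbenchmark -h 127.0.0.1 -p 3000 -n test -k 1000000 -o I -w RU,50 -z 16
```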