Cluster with Error 503 AEROSPIKE_ERR_SERVER_FULL

Aerospike 3.3.17, in-memory, with 8 nodes (grown from 5 nodes), each node with a 28 GB DB size.

The C client gets Error 503 AEROSPIKE_ERR_SERVER_FULL when two nodes exceed 90% DB usage, while the other nodes are still at lower usage (83%-90%):

Monitor> info
===NODES===
2014-09-08 18:10:46.641218
Sorting by IP, in Ascending order: 
ip:port   Build   Cluster      Cluster   Free   Free   Migrates              Node         Principal   Replicated    Sys
              .      Size   Visibility   Disk    Mem          .                ID                ID      Objects   Free
              .         .            .    pct    pct          .                 .                 .            .    Mem
node2    3.3.17         8         true      0     22      (0,0)   BB9C006DE924A3C   BB9E8F5DD924A3C      5197035     44
node3    3.3.17         8         true      0     20      (0,0)   BB9A886DE924A3C   BB9E8F5DD924A3C      5317413     43
node4    3.3.17         8         true      0     19      (0,0)   BB96088DE924A3C   BB9E8F5DD924A3C      5350882     42
node5    3.3.17         8         true      0     26      (0,0)   BB900C5DD924A3C   BB9E8F5DD924A3C      4887975     47
node6    3.3.17         8         true      0     28      (0,0)   BB9E8F5DD924A3C   BB9E8F5DD924A3C      4812299     48
node7    3.3.17         8         true      0     23      (0,0)   BB9A086DE924A3C   BB9E8F5DD924A3C      5122786     44
node8    3.3.17         8         true      0     21      (0,0)   BB9D077DE924A3C   BB9E8F5DD924A3C      5237100     44
node9    3.3.17         8         true      0     23      (0,0)   BB90097DE924A3C   BB9E8F5DD924A3C      5128620     44
Number of nodes displayed: 8


 ===NAMESPACE===
Total (unique) objects in cluster for test : 20527055
Total (unique) objects in cluster for bar : 0
Note: Total (unique) objects is an under estimate if migrations are in progress.


ip/namespace  Avail   Evicted    Master     Repl     Stop   Used   Used      Used   Used    hwm   hwm
                Pct   Objects   Objects   Factor   Writes   Disk   Disk       Mem    Mem   Disk   Mem
                  .         .         .        .        .      .      %         .      %      .     .
node3/test      n/a         0   2615581        2     true    n/a    n/a   25.45 G     91     50    60
node3/bar       n/a         0         0        2    false    n/a    n/a    0.00 B      0     50    60
node8/test      n/a         0   2566922        2    false    n/a    n/a   25.06 G     90     50    60
node8/bar       n/a         0         0        2    false    n/a    n/a    0.00 B      0     50    60
node4/test      n/a         0   2812519        2     true    n/a    n/a   25.61 G     92     50    60
node4/bar       n/a         0         0        2    false    n/a    n/a    0.00 B      0     50    60
node9/test      n/a         0   2599654        2    false    n/a    n/a   24.53 G     88     50    60
node9/bar       n/a         0         0        2    false    n/a    n/a    0.00 B      0     50    60
node7/test      n/a         0   2575775        2    false    n/a    n/a   24.52 G     88     50    60
node7/bar       n/a         0         0        2    false    n/a    n/a    0.00 B      0     50    60
node5/test      n/a         0   2353940        2    false    n/a    n/a   23.39 G     84     50    60
node5/bar       n/a         0         0        2    false    n/a    n/a    0.00 B      0     50    60
node6/test      n/a         0   2265229        2    false    n/a    n/a   23.03 G     83     50    60
node6/bar       n/a         0         0        2    false    n/a    n/a    0.00 B      0     50    60
node2/test      n/a         0   2737435        2    false    n/a    n/a   24.87 G     89     50    60
node2/bar       n/a         0         0        2    false    n/a    n/a    0.00 B      0     50    60
Number of rows displayed: 16

Shouldn't the cluster auto-balance? If only one or two nodes are full, the cluster as a whole should not respond with SERVER_FULL.

Hi Hanson,

Data is distributed randomly across 4096 partitions, and the partitions are distributed randomly across all nodes; this is how data is auto-balanced across the cluster. Auto-rebalancing redistributes partitions when a node is removed from or added to the cluster.

http://www.aerospike.com/docs/architecture/data-distribution.html
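To make the mapping concrete: each record's key is hashed into a 20-byte RIPEMD-160 digest, and 12 bits of that digest select one of the 4096 partitions, so records land on partitions (and therefore nodes) essentially uniformly at random. A minimal illustration of the idea in C (the digest bytes and the exact bit selection below are illustrative placeholders, not the client's actual internals):

#include <stdint.h>
#include <stdio.h>

#define N_PARTITIONS 4096   /* fixed partition count in an Aerospike cluster */

/* Illustrative only: the client hashes (set, key) into a 20-byte RIPEMD-160
 * digest and uses 12 bits of it to pick a partition. Which bits are used is
 * an internal detail; here we simply take the low 12 bits of the first two
 * digest bytes. */
static uint16_t partition_id_from_digest(const uint8_t digest[20])
{
    return (uint16_t)((digest[0] | (digest[1] << 8)) & (N_PARTITIONS - 1));
}

int main(void)
{
    uint8_t digest[20] = { 0x3a, 0x7c };   /* made-up digest bytes */
    printf("partition id: %u\n", partition_id_from_digest(digest));
    return 0;
}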

Two of your nodes are currently full, and since there aren't any evictions taking place, the TTL for your objects must be 0 (never expire). Partitions owned by these nodes will not accept new writes until enough space is freed to drop below the 90% default stop-writes threshold. Space can be freed by issuing deletes, or capacity can be added by increasing memory-size or by adding more nodes to the cluster.
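On the client side this shows up per write: aerospike_key_put() returns AEROSPIKE_ERR_SERVER_FULL whenever the key hashes to a partition owned by a node that is in stop-writes, even though writes landing on the other nodes still succeed. A rough sketch of detecting it with the C client (the host address, set, key, and bin names are placeholders):

#include <stdio.h>
#include <aerospike/aerospike.h>
#include <aerospike/aerospike_key.h>
#include <aerospike/as_config.h>
#include <aerospike/as_error.h>
#include <aerospike/as_key.h>
#include <aerospike/as_record.h>
#include <aerospike/as_status.h>

int main(void)
{
    as_config config;
    as_config_init(&config);
    as_config_add_host(&config, "127.0.0.1", 3000);     /* placeholder seed node */

    aerospike as;
    aerospike_init(&as, &config);

    as_error err;
    if (aerospike_connect(&as, &err) != AEROSPIKE_OK) {
        fprintf(stderr, "connect failed: %d %s\n", err.code, err.message);
        return 1;
    }

    as_key key;
    as_key_init_str(&key, "test", "demo", "some-key");  /* placeholder set/key */

    as_record rec;
    as_record_inita(&rec, 1);
    as_record_set_int64(&rec, "value", 42);

    as_status rc = aerospike_key_put(&as, &err, NULL, &key, &rec);
    if (rc == AEROSPIKE_ERR_SERVER_FULL) {
        /* The node owning this key's partition has hit stop-writes.
         * Retrying immediately will keep failing until usage on that
         * node drops below the stop-writes threshold. */
        fprintf(stderr, "stop-writes on owning node: %d %s\n", err.code, err.message);
    }

    as_record_destroy(&rec);
    aerospike_close(&as, &err);
    aerospike_destroy(&as);
    return 0;
}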

Memory size can be adjusted across the cluster with asmonitor:

asmonitor -e "asinfo -v 'set-config:context=namespace;id=test;memory-size=<adjusted memory size>'"

Assuming you have 32 GB of memory on these nodes, I would not recommend exceeding a 30 GB setting.
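For reference, the arithmetic behind those numbers (assuming the default stop-writes-pct of 90% mentioned above): with memory-size 28G, writes stop at roughly 0.9 x 28 GB = 25.2 GB used per node, which matches the two nodes above showing Stop Writes = true at 25.45 G and 25.61 G used. Raising memory-size to 30G would move that per-node threshold to about 27 GB.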

Understood the rebalancing behavior. What is the cause of such unbalanced DB usage (83%-92%)?

What I did:

  1. Started with a 5-node cluster, continuously sending 5K TPS of writes
  2. Removed one node via “service aerospike stop”
  3. Added that node back after migrations completed (0, 0)
  4. Added 3 more nodes, one by one, waiting for migrations to complete (0, 0) after each (one way to check this is sketched after the list)
  5. Changed to 200K TPS of writes, then removed and re-added a node
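For steps 3 and 4, one way to confirm migrations have fully drained before touching the next node is to poll each node's statistics from the C client. A sketch assuming aerospike_info_foreach() and the 3.x-era statistic names migrate_progress_send / migrate_progress_recv (the same pair asmonitor reports as the Migrates (tx,rx) column; other server versions may use different names):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <aerospike/aerospike.h>
#include <aerospike/aerospike_info.h>
#include <aerospike/as_error.h>
#include <aerospike/as_node.h>
#include <aerospike/as_status.h>

/* Called once per node with its raw "statistics" response. */
static bool stats_cb(const as_error* err, const as_node* node,
                     const char* req, char* res, void* udata)
{
    bool* all_done = (bool*)udata;

    if ((err && err->code != AEROSPIKE_OK) || !res) {
        *all_done = false;           /* treat an unreachable node as "not done" */
        return true;
    }
    /* Simplified check: a node is still migrating unless both counters are 0. */
    if (!strstr(res, "migrate_progress_send=0") ||
        !strstr(res, "migrate_progress_recv=0")) {
        printf("node %s still migrating\n", node->name);
        *all_done = false;
    }
    return true;                     /* keep iterating over the remaining nodes */
}

/* Returns true when every node reports (0, 0) migrations, i.e. it should be
 * safe to remove or add the next node. `as` must already be connected. */
bool migrations_complete(aerospike* as)
{
    as_error err;
    bool all_done = true;

    if (aerospike_info_foreach(as, &err, NULL, "statistics",
                               stats_cb, &all_done) != AEROSPIKE_OK) {
        return false;
    }
    return all_done;
}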