Hello,
I have installed Aerospike Community 4.5.0.5 using the helm chart from the stable repository (charts/stable/aerospike on GitHub, chart v0.2.3).
I have 3 dedicated nodes (running 6 pod replicas) of machine type n1-highmem-2, on Kubernetes (GKE) 1.12.5-gke.5 with Container-Optimized OS, and I use the following parameters:
replicaCount: 6
resources:
  requests:
    cpu: 740m
    memory: 5G
namespace ssd {
    replication-factor 2
    memory-size 4G
    default-ttl 0
    high-water-memory-pct 80
    stop-writes-pct 90
    storage-engine device {
        file /opt/aerospike/data/ssd.dat
        filesize 200G
    }
}
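As a back-of-the-envelope sanity check on this sizing (my own arithmetic, assuming the usual meaning of the high-water and stop-writes percentages), the thresholds implied by `memory-size 4G` sit below the 5G pod request:

```shell
# Illustrative arithmetic only: thresholds implied by the namespace config above.
awk 'BEGIN {
  mem_gib = 4    # memory-size 4G
  hwm  = 0.80    # high-water-memory-pct 80 -> eviction threshold
  stop = 0.90    # stop-writes-pct 90 -> stop-writes threshold
  printf "eviction at %.1f GiB, stop-writes at %.1f GiB\n", mem_gib*hwm, mem_gib*stop
}'
```

So in theory the namespace should never need more than 3.6 GiB for data before writes stop, which is why the pod-level growth surprises me.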
I keep other deployments on these nodes to a minimum (using tolerations) to be safe. In the Kubernetes memory monitoring, pod memory usage keeps increasing on its own (there has been no increase in the number of records since the beginning of the curve).
The memory usage as seen by Aerospike looks fine:
$ asadm -e "info"
Seed: [('127.0.0.1', 3000, None)]
Config_file: /root/.aerospike/astools.conf, /etc/aerospike/astools.conf
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Network Information (2019-03-25 10:33:17 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node Node Ip Build Cluster Migrations Cluster Cluster Principal Client Uptime
. Id . . Size . Key Integrity . Conns .
aero-aerospike-0.aero-aerospike-mesh.default.svc.cluster.local:3000 BB9068A340A580A 10.52.138.6:3000 C-4.5.0.5 6 0.000 8F4213AA394F True BB9078B340A580A 584 67:45:00
aero-aerospike-1.aero-aerospike-mesh.default.svc.cluster.local:3000 BB9068B340A580A 10.52.139.6:3000 C-4.5.0.5 6 0.000 8F4213AA394F True BB9078B340A580A 580 67:44:09
aero-aerospike-2.aero-aerospike-mesh.default.svc.cluster.local:3000 BB9058C340A580A 10.52.140.5:3000 C-4.5.0.5 6 0.000 8F4213AA394F True BB9078B340A580A 585 67:43:18
aero-aerospike-3.aero-aerospike-mesh.default.svc.cluster.local:3000 *BB9078B340A580A 10.52.139.7:3000 C-4.5.0.5 6 0.000 8F4213AA394F True BB9078B340A580A 582 67:42:31
aero-aerospike-4.aero-aerospike-mesh.default.svc.cluster.local:3000 BB9078A340A580A 10.52.138.7:3000 C-4.5.0.5 6 0.000 8F4213AA394F True BB9078B340A580A 584 67:41:53
aero-aerospike-5.aero-aerospike-mesh.default.svc.cluster.local:3000 BB9068C340A580A 10.52.140.6:3000 C-4.5.0.5 6 0.000 8F4213AA394F True BB9078B340A580A 587 67:41:17
Number of rows: 6
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Namespace Usage Information (2019-03-25 10:33:17 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Namespace Node Total Expirations,Evictions Stop Disk Disk HWM Avail% Mem Mem HWM Stop
. . Records . Writes Used Used% Disk% . Used Used% Mem% Writes%
ssd aero-aerospike-0.aero-aerospike-mesh.default.svc.cluster.local:3000 30.999 M (0.000, 0.000) false 31.156 GB 16 50 80 1.848 GB 47 80 90
ssd aero-aerospike-1.aero-aerospike-mesh.default.svc.cluster.local:3000 31.994 M (0.000, 0.000) false 32.153 GB 17 50 79 1.907 GB 48 80 90
ssd aero-aerospike-2.aero-aerospike-mesh.default.svc.cluster.local:3000 32.264 M (0.000, 0.000) false 32.427 GB 17 50 79 1.923 GB 49 80 90
ssd aero-aerospike-3.aero-aerospike-mesh.default.svc.cluster.local:3000 31.966 M (0.000, 0.000) false 32.126 GB 17 50 79 1.905 GB 48 80 90
ssd aero-aerospike-4.aero-aerospike-mesh.default.svc.cluster.local:3000 31.822 M (0.000, 0.000) false 31.981 GB 16 50 79 1.897 GB 48 80 90
ssd aero-aerospike-5.aero-aerospike-mesh.default.svc.cluster.local:3000 33.645 M (0.000, 0.000) false 33.817 GB 17 50 78 2.005 GB 51 80 90
ssd 192.690 M (0.000, 0.000) 193.660 GB 11.485 GB
Number of rows: 7
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Namespace Object Information (2019-03-25 10:33:17 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Namespace Node Total Repl Objects Tombstones Pending Rack
. . Records Factor (Master,Prole,Non-Replica) (Master,Prole,Non-Replica) Migrates ID
. . . . . . (tx,rx) .
ssd aero-aerospike-0.aero-aerospike-mesh.default.svc.cluster.local:3000 30.999 M 2 (15.592 M, 15.407 M, 0.000) (0.000, 0.000, 0.000) (0.000, 0.000) 0
ssd aero-aerospike-1.aero-aerospike-mesh.default.svc.cluster.local:3000 31.994 M 2 (15.482 M, 16.511 M, 0.000) (0.000, 0.000, 0.000) (0.000, 0.000) 0
ssd aero-aerospike-2.aero-aerospike-mesh.default.svc.cluster.local:3000 32.264 M 2 (16.320 M, 15.944 M, 0.000) (0.000, 0.000, 0.000) (0.000, 0.000) 0
ssd aero-aerospike-3.aero-aerospike-mesh.default.svc.cluster.local:3000 31.966 M 2 (15.756 M, 16.210 M, 0.000) (0.000, 0.000, 0.000) (0.000, 0.000) 0
ssd aero-aerospike-4.aero-aerospike-mesh.default.svc.cluster.local:3000 31.822 M 2 (15.825 M, 15.997 M, 0.000) (0.000, 0.000, 0.000) (0.000, 0.000) 0
ssd aero-aerospike-5.aero-aerospike-mesh.default.svc.cluster.local:3000 33.645 M 2 (17.369 M, 16.276 M, 0.000) (0.000, 0.000, 0.000) (0.000, 0.000) 0
ssd 192.690 M (96.345 M, 96.345 M, 0.000) (0.000, 0.000, 0.000) (0.000, 0.000)
Number of rows: 7
Result of the distribution command:
$ asadm -e "show distribution"
Seed: [('127.0.0.1', 3000, None)]
Config_file: /root/.aerospike/astools.conf, /etc/aerospike/astools.conf
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ssd - TTL Distribution in Seconds (2019-03-25 10:39:52 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Percentage of records having ttl less than or equal to value measured in Seconds
Node 10% 20% 30% 40% 50% 60% 70% 80% 90% 100%
aero-aerospike-0.aero-aerospike-mesh.default.svc.cluster.local:3000 0 0 0 0 0 0 0 0 0 0
aero-aerospike-1.aero-aerospike-mesh.default.svc.cluster.local:3000 0 0 0 0 0 0 0 0 0 0
aero-aerospike-2.aero-aerospike-mesh.default.svc.cluster.local:3000 0 0 0 0 0 0 0 0 0 0
aero-aerospike-3.aero-aerospike-mesh.default.svc.cluster.local:3000 0 0 0 0 0 0 0 0 0 0
aero-aerospike-4.aero-aerospike-mesh.default.svc.cluster.local:3000 0 0 0 0 0 0 0 0 0 0
aero-aerospike-5.aero-aerospike-mesh.default.svc.cluster.local:3000 0 0 0 0 0 0 0 0 0 0
Number of rows: 6
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ssd - Object Size Distribution in bytes (2019-03-25 10:39:52 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Percentage of records having objsz less than or equal to value measured in bytes
Node 10% 20% 30% 40% 50% 60% 70% 80% 90% 100%
aero-aerospike-0.aero-aerospike-mesh.default.svc.cluster.local:3000 1023 1023 1023 1023 2047 2047 2047 2047 2047 88063
aero-aerospike-1.aero-aerospike-mesh.default.svc.cluster.local:3000 1023 1023 1023 1023 2047 2047 2047 2047 2047 89087
aero-aerospike-2.aero-aerospike-mesh.default.svc.cluster.local:3000 1023 1023 1023 1023 2047 2047 2047 2047 2047 79871
aero-aerospike-3.aero-aerospike-mesh.default.svc.cluster.local:3000 1023 1023 1023 1023 2047 2047 2047 2047 2047 57343
aero-aerospike-4.aero-aerospike-mesh.default.svc.cluster.local:3000 1023 1023 1023 1023 2047 2047 2047 2047 2047 89087
aero-aerospike-5.aero-aerospike-mesh.default.svc.cluster.local:3000 1023 1023 1023 1023 2047 2047 2047 2047 2047 60415
Number of rows: 6
It looks like a lot of memory is being allocated, and I am quite worried about pod eviction, since one pod already exceeds its requested memory.
Do you recommend requesting more memory per pod? What memory margin is recommended? Does Aerospike read /sys/fs/cgroup/memory/memory.limit_in_bytes (where the memory limit is set), and would you therefore suggest setting a memory limit in the Kubernetes spec (or could that be dangerous)?
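If a limit is the way to go, I imagine the chart values would look something like this (the 6G figure is just my guess at memory-size plus some headroom for index and overhead, not something taken from the docs):

```yaml
resources:
  requests:
    cpu: 740m
    memory: 5G
  limits:
    memory: 6G   # illustrative guess, not a documented recommendation
```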
Thanks for your help!