Helm: issue creating a trial enterprise cluster

Hello, I was trying to create a cluster with Helm:

wget https://aerospike-trial-download.s3.us-west-2.amazonaws.com/trial-features.conf -O features.conf
kubectl -n aerospike create secret generic aerospike-license --from-file=features.conf
helm upgrade -i aerospike -n aerospike aerospike/aerospike-cluster -f values_cluster.yaml
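A couple of sanity checks may be worth running before the install. Note that the values file below references an `auth-secret` under `aerospikeAccessControl`, but the commands above never create it. The `password` key and the `admin123` value in the sketch are assumptions based on the operator's quick-start examples, not something confirmed by this setup:

```shell
# Confirm the license secret actually landed in the aerospike namespace
# and contains the downloaded file.
kubectl -n aerospike get secret aerospike-license \
  -o jsonpath='{.data.features\.conf}' | base64 -d | head

# Create the admin password secret referenced by aerospikeAccessControl.
# (Key name `password` and value `admin123` are assumptions -- adjust.)
kubectl -n aerospike create secret generic auth-secret \
  --from-literal=password='admin123'
```

If the first command prints nothing, the chart will mount an empty `features.conf` and the server cannot start.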

values_cluster.yaml

commonName: aerocluster
replicas: 1
image:
  repository: aerospike/aerospike-server-enterprise
  tag: 6.0.0.1
imagePullSecrets: {}
podSpec:
  multiPodPerHost: true

aerospikeAccessControl:
  users:
    - name: admin
      secretName: auth-secret
      roles:
        - sys-admin
        - user-admin

aerospikeConfig:
  service:
    feature-key-file: /etc/aerospike/secret/features.conf
  security: {}
  network:
    service:
      port: 3000
    fabric:
      port: 3001
    heartbeat:
      port: 3002
  namespaces:
    - name: gluu
      memory-size: 100000 # bytes (1 GiB = 1073741824)
      replication-factor: 1
      storage-engine:
        type: memory

aerospikeNetworkPolicy: {}
#rackConfig:
#  namespaces:
#    - gluu
#  racks:
#    - id: 1
#      zone: us-east-2
validationPolicy: {}
seedsFinderServices: {}
operatorClientCert: {}
devMode: false


storage:
  filesystemVolumePolicy:
    cascadeDelete: true
    initMethod: deleteFiles
  volumes:
    - name: workdir
      source:
        persistentVolume:
          storageClass: ebs-sc
          volumeMode: Filesystem
          size: 3Gi
      aerospike:
        path: /opt/aerospike
    - name: aerospike-config-secret
      source:
        secret:
          secretName: aerospike-license
      aerospike:
        path: /etc/aerospike/secret
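One thing worth double-checking in the file above: judging by the inline comment, `memory-size` appears to be specified in bytes, so `100000` is only about 100 KB, while the comment suggests 1 GiB was intended. The conversion is easy to verify:

```shell
# memory-size in aerospikeConfig appears to be in bytes
# (per the inline comment in the values file).
# 1 GiB = 1024^3 bytes:
echo $((1024 * 1024 * 1024))   # prints 1073741824
```

If 1 GiB is the intent, `memory-size: 1073741824` would match the comment.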

Logs

2022-09-06T17:16:14.230Z INFO controllers.AerospikeCluster Updating ConfigMap {"aerospikecluster": "default/aerocluster", "ConfigMap": {"namespace": "default", "name": "aerocluster-0"}}
2022-09-06T17:16:14.236Z DEBUG lib.asconfig AerospikeConfig {"aerospikecluster": "default/aerocluster", "config": {"logging":[{"any":"info","name":"console"}],"namespaces":[{"memory-size":100000,"name":"gluu","replication-factor":1,"storage-engine":{"type":"memory"}}],"network":{"fabric":{"port":3001},"heartbeat":{"mode":"mesh","port":3002},"service":{"access-addresses":["<access-address>"],"access-port":3000,"alternate-access-addresses":["<alternate-access-address>"],"alternate-access-port":3000,"port":3000}},"security":{},"service":{"cluster-name":"aerocluster","feature-key-file":"/etc/aerospike/secret/features.conf","node-id":"ENV_NODE_ID"}}, "image": "aerospike/aerospike-server-enterprise:6.0.0.1"}
2022-09-06T17:16:14.236Z DEBUG lib.asconfig AerospikeConfig {"aerospikecluster": "default/aerocluster", "conf": "\nlogging {\n\n    console {\n        context any    info\n    }\n}\n\nnamespace gluu {\n    memory-size    100000\n    replication-factor    1\n    storage-engine    memory\n}\n\nnetwork {\n\n    fabric {\n        port    3001\n    }\n\n    heartbeat {\n        mode    mesh\n        port    3002\n    }\n\n    service {\n        access-address    <access-address>\n        access-port    3000\n        alternate-access-address    <alternate-access-address>\n        alternate-access-port    3000\n        port    3000\n    }\n}\n\nsecurity {\n}\n\nservice {\n    cluster-name    aerocluster\n    feature-key-file    /etc/aerospike/secret/features.conf\n    node-id    ENV_NODE_ID\n}\n"}
2022-09-06T17:16:14.262Z INFO controllers.AerospikeCluster Scaling up pods {"aerospikecluster": "default/aerocluster", "currentSz": 0, "desiredSz": 1}
2022-09-06T17:16:14.268Z INFO controllers.AerospikeCluster Removing pvc for removed pods {"aerospikecluster": "default/aerocluster", "pods": []}
2022-09-06T17:16:14.326Z INFO controllers.AerospikeCluster Waiting for statefulset to be ready {"aerospikecluster": "default/aerocluster", "WaitTimePerPod": "3m0s"}
2022-09-06T17:16:16.343Z DEBUG controllers.AerospikeCluster Check statefulSet pod running and ready {"aerospikecluster": "default/aerocluster", "pod": "aerocluster-0-0"}
2022-09-06T17:16:26.350Z DEBUG controllers.AerospikeCluster Check statefulSet pod running and ready {"aerospikecluster": "default/aerocluster", "pod": "aerocluster-0-0"}
2022-09-06T17:16:36.359Z DEBUG controllers.AerospikeCluster Check statefulSet pod running and ready {"aerospikecluster": "default/aerocluster", "pod": "aerocluster-0-0"}
2022-09-06T17:16:36.365Z ERROR controllers.AerospikeCluster Failed to scaleUp StatefulSet pods {"aerospikecluster": "default/aerocluster", "stsName": "aerocluster-0", "error": "failed to wait for statefulset to be ready: statefulSet pod aerocluster-0-0 failed: pod failed message in container aerospike-server: back-off 10s restarting failed container=aerospike-server pod=aerocluster-0-0_default(f107c6f5-ee45-4110-80a1-a0bf5e24cd32) reason: CrashLoopBackOff"}
github.com/aerospike/aerospike-kubernetes-operator/controllers.(*SingleClusterReconciler).reconcileRacks
 /workspace/controllers/rack.go:79
github.com/aerospike/aerospike-kubernetes-operator/controllers.(*SingleClusterReconciler).Reconcile
 /workspace/controllers/reconciler.go:72
github.com/aerospike/aerospike-kubernetes-operator/controllers.(*AerospikeClusterReconciler).Reconcile
 /workspace/controllers/aerospikecluster_controller.go:121
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.9.2/pkg/internal/controller/controller.go:298
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.9.2/pkg/internal/controller/controller.go:253
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.9.2/pkg/internal/controller/controller.go:214
2022-09-06T17:16:36.365Z ERROR controller-runtime.manager.controller.aerospikecluster Reconciler error {"reconciler group": "asdb.aerospike.com", "reconciler kind": "AerospikeCluster", "name": "aerocluster", "namespace": "default", "error": "failed to wait for statefulset to be ready: statefulSet pod aerocluster-0-0 failed: pod failed message in container aerospike-server: back-off 10s restarting failed container=aerospike-server pod=aerocluster-0-0_default(f107c6f5-ee45-4110-80a1-a0bf5e24cd32) reason: CrashLoopBackOff"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.9.2/pkg/internal/controller/controller.go:253
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
 /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.9.2/pkg/internal/controller/controller.go:214
2022-09-06T17:16:36.365Z INFO controllers.AerospikeCluster Reconciling AerospikeCluster {"aerospikecluster": "default/aerocluster"}

Hmm… I am not very familiar with Helm. Are you able to get a server to start up on its own with that feature file? Is there anything in the logs from the db node itself? It seems Helm is just reporting that the server didn't start (or that's how I am interpreting it).
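For example, something like this should pull the output from the crashing server container itself (the pod and container names are taken from the operator log above; the `default` namespace is what the log reports):

```shell
# Logs from the current run of the aerospike-server container:
kubectl -n default logs aerocluster-0-0 -c aerospike-server

# Logs from the previously crashed run (often where the real error is):
kubectl -n default logs aerocluster-0-0 -c aerospike-server --previous

# Pod events, which usually show why the container keeps restarting:
kubectl -n default describe pod aerocluster-0-0
```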

Since you are trying out enterprise, it is likely more straightforward to use the Aerospike Kubernetes Operator to spin up your cluster.
