Changing storage-engine value from device to memory (and vice versa)

The Aerospike Knowledge Base has moved. Maintenance on articles stored in this repository ceased on December 31st, 2022, and this article may be stale. If you have any questions, please do not hesitate to raise a support case.

Problem Description

Is a full cluster restart necessary when modifying the storage-engine parameter in the namespace stanza of the configuration, or is a rolling restart possible?
Also, what is the impact during any downtime when changing the storage-engine value?


  1. Prior to performing any changes:
  • Verify there will be no sizing issues when changing the storage-engine setting.
  • Create a backup of your data.
  2. You can change the storage-engine on a running cluster without cluster downtime, proceeding one node at a time (rolling restart).
  • As storage-engine is a static configuration parameter, it can only be changed by restarting the node.
  • Changing storage-engine can be completed with a rolling restart, which means temporarily having non-matching storage engines across your cluster. For example, some nodes in a cluster can have persistence on SSD while others hold data in memory only.
  3. Data persistence with downtime versus no downtime:
  • If you want to keep data persistence when going from SSD to data in memory, you need to include the parameter data-in-memory true in the namespace stanza of the configuration file and cold start from the data on the SSD.
  • If you do not want to keep data persistence when going from SSD to data in memory, the node will be repopulated through migrations, so you should wait for migrations to complete between each node.
  4. Changing from memory only to persistence:
  • When going from memory only to persistence on a device or file, all records must be smaller than the configured write-block-size of the device; otherwise, replica writes and migrations will fail.
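
The change described above amounts to swapping the storage-engine sub-stanza in the namespace configuration. The namespace name, device path, and sizes below are placeholders, not values from this article; a minimal before/after sketch:

```
# Before: data persisted on SSD (namespace name, device, and sizes are examples)
namespace test {
    replication-factor 2
    memory-size 4G
    storage-engine device {
        device /dev/sdb
        write-block-size 128K
        data-in-memory true    # keep a full copy in memory; allows cold start from SSD
    }
}

# After: data held in memory only, no persistence
namespace test {
    replication-factor 2
    memory-size 4G
    storage-engine memory
}
```

Each node would be restarted with the new stanza in turn, waiting for migrations to complete before moving to the next node.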







Hi. What is the proper way to do this (storage-engine memory) within k8s? I presume I cannot merely log into the shell of the container and modify aerospike.conf directly within the pod. Wouldn’t those changes get reverted when the pod reboots? It feels like some higher-level k8s orchestration needs to be performed, but I can’t find any documentation for managing an Aerospike cluster in k8s. I am using GKE.

You may have seen the Deploying Aerospike clusters in Kubernetes guide; not sure if that helps much, but I would expect the config to be what you provide and fully under your control…
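
In other words, the aerospike.conf inside the pod is typically templated from the resource you manage, so that is where the storage-engine change should be made. If you are using the Aerospike Kubernetes Operator, the namespace storage engine is declared in the AerospikeCluster custom resource; a hedged sketch, where the cluster name, namespace name, and all values are illustrative and field names should be verified against your operator version's CRD:

```yaml
# Illustrative AerospikeCluster fragment -- names and values are placeholders;
# verify the schema against your Aerospike Kubernetes Operator version's CRD.
apiVersion: asdb.aerospike.com/v1
kind: AerospikeCluster
metadata:
  name: aerocluster
spec:
  size: 3
  aerospikeConfig:
    namespaces:
      - name: test
        replication-factor: 2
        storage-engine:
          type: memory   # was: type: device, with a list of devices
```

Applying an updated custom resource lets the operator roll the change through the pods, rather than editing the file in a running container.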