Help understanding data-in-memory true and evictions

I keep getting conflicting information about this, so I’m bringing it to the forums.

Here is what I’m trying to achieve.

  • All data gets written to both SSD and memory.
  • As data reaches ~80G, I hit high-water-memory-pct and start evicting objects from RAM (I still want my data on disk).
  • Next, at ~90G, I hit stop-writes. The idea is that I never hit high-water-disk-pct, and thus my data on disk is safe and sound.

Is this valid? I’m not sure whether the eviction process actually deletes data or not. If it simply drops the object’s reference from the primary index, then the data is as good as gone (from both memory and device).

namespace test_ssd {
  high-water-memory-pct 80    # evict once memory use passes 80% of memory-size (~80G here)
  high-water-disk-pct   50
  stop-writes-pct       90    # refuse new writes past 90% of memory-size (~90G here)

  memory-size 100G

  storage-engine device {
    device /dev/sdb1          # 200G
    write-block-size 128K
    data-in-memory true
  }
}

Not sure if AS supports this kind of setup. My understanding from the docs is that you can either persist a namespace to SSD (with no copy in memory other than the write cache) OR keep a complete namespace in memory with persistence to a file (namespace size limited by memory-size). The concept is different from the one Couchbase & Co. use.
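
To make those two modes concrete, something like the following is what I mean (namespace names, device/file paths and sizes are just placeholders, not tested config):

namespace ssd_persisted {
  memory-size 16G                      # holds the indexes only
  storage-engine device {
    device /dev/sdb1                   # records live on the SSD, not in RAM
    data-in-memory false
  }
}

namespace ram_with_persistence {
  memory-size 16G                      # the whole data set has to fit in here
  storage-engine device {
    file /opt/aerospike/data/ram_ns.dat
    filesize 32G
    data-in-memory true                # every record is kept in RAM and also persisted
  }
}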

But I’m sure that sooner or later AS will support this kind of setup too. Maybe this helps. Correct me if any of my knowledge about this is out of date.

Cheers, Manuel

This is correct.

There are three main configuration options: SSD-persisted, RAM only, or RAM with persistence (on HDD or SSD). You cannot have ‘some of the data in RAM’ and some (more) only on SSD. And yes, there is a ‘write cache’, configurable through the post-write-queue configuration parameter.
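
As an illustration of where that sits (values are placeholders and this is my reading of the docs, so double-check for your version): post-write-queue lives under storage-engine device, is counted in write blocks rather than bytes, and as far as I know it only matters when data-in-memory is false.

namespace cache_demo {
  memory-size 100G
  storage-engine device {
    device /dev/sdb1
    write-block-size 128K
    data-in-memory false        # records are read back from SSD; only the indexes stay in RAM
    post-write-queue 512        # the 'write cache': write blocks kept in RAM per device after
                                # flushing, i.e. roughly 512 x 128K = 64M with this block size
  }
}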

The confusion may come from the delete mechanism, which removes the entry from the primary index in RAM only (no tombstoning on SSD) and leaves the data on the device (until the write block goes through defragmentation and new data is written over it). So, in the case of a cold start, some of the deleted data may come back.
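
If deleted records coming back after a cold start is a concern, one related knob (assuming your server version has it, and with an obvious trade-off) is cold-start-empty, which tells the node not to read records back from the device at all when it cold starts:

namespace test_ssd {
  memory-size 100G
  cold-start-empty true          # skip reading records back from the device on cold start:
                                 # deleted data cannot come back, but nothing else on that
                                 # node survives the restart either
  storage-engine device {
    device /dev/sdb1
    write-block-size 128K
    data-in-memory true
  }
}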