Move data from a mem-backed namespace to a disk-backed namespace based on TTL

How would I move data from a memory-backed namespace to a disk-backed namespace when records in the memory-backed namespace hit their TTL expiry? I don't want to do this with a scan UDF and all its race conditions. Can I do this natively, or with a UDF callback, in Aerospike?

You will need to write a program to do it, such as a Python script, or something more sophisticated depending on the volume of data to move. You can use Aerospike expressions to filter on TTL if you have an idea of the buckets you want to split records into, but there is no native option to move a record to a new namespace: the record has to go out to a client program and back into the server.
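To make the client-side approach concrete, here is a minimal sketch of the migration loop. It is deliberately simulated with plain dicts so it is self-contained; none of the names here are Aerospike APIs. A real version would use the Aerospike client, filter the scan/query with an expression on record TTL, and delete from the source namespace with a generation check (check-and-set) to narrow the race window the question worries about:

```python
import time

# Each simulated namespace maps key -> (bins, generation, expiry_timestamp).
# In a real deployment these dicts would be the mem-backed and disk-backed
# namespaces, accessed through the Aerospike client; everything here is an
# illustrative stand-in, not an Aerospike API.

def migrate_expiring(mem_ns, disk_ns, ttl_threshold, now=None):
    """Copy records whose remaining TTL is below ttl_threshold seconds into
    disk_ns, then delete them from mem_ns only if the generation is
    unchanged -- the client-side analogue of a check-and-set delete."""
    now = time.time() if now is None else now
    for key in list(mem_ns):
        bins, gen, expiry = mem_ns[key]
        if expiry - now >= ttl_threshold:
            continue  # plenty of life left; skip this record
        disk_ns[key] = bins  # write to the disk-backed namespace first
        current = mem_ns.get(key)
        if current is not None and current[1] == gen:
            del mem_ns[key]  # delete only if not concurrently updated
    return mem_ns, disk_ns
```

Writing to the destination before deleting from the source means a crash mid-migration leaves a duplicate (which the TTL will clean up) rather than a lost record; the generation check skips the delete when a concurrent writer touched the record after it was copied.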

The problem I am trying to solve: a use case with very short-lived data is initially hitting write-optimised, enterprise-grade SSDs in a reasonably sized cluster too frequently. I'm thinking of using a memory-backed namespace with a short TTL to cache this initial write load, then moving the data out of the memory-backed namespace into the disk-backed namespace; admittedly not a unique solution to the issue.

What I was thinking to solve this issue: I was hoping for something that eliminates races and avoids constantly creating extra work for the database via an external piece of software that regularly scans the entire namespace. A callback, or whatever works efficiently in the underlying implementation that is already tracking TTLs, would let us define some behaviour with Aerospike itself as the trigger.

High-level logic: when a record is about to TTL out of the memory-backed namespace, Aerospike calls the registered UDF with a reference to the record. In our use case we would define it to insert that record into the disk-backed namespace and then return control; in our case, an unmutated record ready for release.
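The requested semantics can be sketched as a toy model. To be clear, no such hook exists in Aerospike today (as the next reply confirms); this is purely an illustration of the *desired* behaviour, and every name in it is hypothetical:

```python
# A toy model of the requested (non-existent) feature: a store that, when a
# record's TTL fires, invokes a registered callback with the record before
# releasing it. Nothing here is an Aerospike API.

class TTLStore:
    def __init__(self):
        self._records = {}   # key -> (bins, expiry_timestamp)
        self._on_expire = None

    def register_expiry_udf(self, fn):
        """Hypothetical analogue of registering a UDF eviction callback."""
        self._on_expire = fn

    def put(self, key, bins, ttl, now):
        self._records[key] = (bins, now + ttl)

    def sweep(self, now):
        """Server-side TTL sweep: invoke the callback, then release the record."""
        expired = [k for k, (_, exp) in self._records.items() if exp <= now]
        for key in expired:
            bins, _ = self._records.pop(key)
            if self._on_expire:
                self._on_expire(key, bins)  # e.g. write into the disk namespace

disk_ns = {}
mem = TTLStore()
mem.register_expiry_udf(lambda key, bins: disk_ns.__setitem__(key, bins))
mem.put('r1', {'v': 42}, ttl=10, now=0)
mem.sweep(now=20)  # 'r1' expires; the callback moves it into disk_ns
```

Because the sweep that already tracks TTLs drives the callback, there is no external scanner and no window in which a second process can race the expiry, which is exactly the property the post is asking for.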

Right, yes, that makes sense. I also pitched this as a solution to an endurance/IO problem in the past (think exabyte-scale endurance problems). We ended up aggregating writes higher up in the application using Spark and custom logic. I've been bugging Srini and the other Aerospike folks to give us some decent write caching, but it's complicated, of course. What you're suggesting isn't possible right now without an intermediary application, and it would take one or more feature requests :confused:

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.