Millisecond precision on TTL

We use PSETEX on Redis for caches, and we’d like to move that to an Aerospike namespace, but for it to work we need shorter cache lifetimes. Even if we set a TTL of 1, the records seem to exist for almost 2 seconds.

It doesn’t need to be down to the millisecond level like Redis, but precision to at least tenths of a second would allow us to move these caches to Aerospike.
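For reference, the Redis side of this today is just PSETEX with a millisecond TTL. A minimal sketch with redis-py (the key and payload are hypothetical):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Cache a payload for 800 ms; Redis honors the TTL with millisecond precision.
r.psetex("hot:item:42", 800, b"cached-payload")
```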

In Aerospike, there is a periodic background process which removes records that have expired. However, if a record is accessed before that background process has kicked in, Aerospike will still check the TTL of the record and remove it as appropriate.
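To illustrate, here is a minimal sketch with the Python client (namespace, set, and key are just placeholders): a record read well after its TTL should come back as not found even if the background process hasn’t swept it yet.

```python
import time
import aerospike
from aerospike import exception as ex

client = aerospike.client({"hosts": [("127.0.0.1", 3000)]}).connect()
key = ("test", "demo", "ttl-check")

client.put(key, {"v": 1}, meta={"ttl": 1})  # TTL is specified in whole seconds
time.sleep(2.5)  # comfortably past the TTL

try:
    client.get(key)
    print("record still visible")
except ex.RecordNotFound:
    # Even before the background expiration sweep runs, the read path
    # checks the TTL and reports the record as gone.
    print("record expired")
```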

Can you give more details on your case? Are you doing a read and expecting the record to no longer exist? Are there multiple nodes in the system? Could the clocks possibly be skewed?

Thanks.

Yes, we’re expecting the record to no longer exist. Really, 1-second precision is an incredibly blunt instrument for the use case we’re considering, so even if a record existed for exactly 1 second from a put operation until a subsequent get, that would still be too long.

We get heavy hotspots of many thousands of requests per second, and the data can’t really be more than 1 second stale. The solution is very short-lived caches (under 1 second). Redis works fantastically for this at the moment with 800 ms caches: if the time from set to get exceeds the TTL at all, the record is gone.

If we test against Aerospike and Redis with a 1-second TTL on the data, we don’t see the data in Redis after 1.1 seconds, while with Aerospike it’s a bit more inconsistent, but it seems to always be >1 and <2 seconds.

The clocks are synced, but interestingly, even if the clock is skewed on Redis we still get accurate expiration.

It’s ok if this kind of sub-1 s TTL precision isn’t planned; it’s clear from the minimum value of 1 that it’s not really intended for incredibly short-lived memory caches right now. We’d love to move those caches to Aerospike if/when TTL precision increases, though. In the meantime, longer-lived caches (10 s+) will definitely be moved.

> The clocks are synced, but interestingly, even if the clock is skewed on Redis we still get accurate expiration.

Clock sync between the client and server is not required for expiration. The client only deals with the delta; the server takes care of absolute time. Of course, in an Aerospike cluster the server nodes should be in sync with each other.

> If we test against Aerospike and Redis with a 1-second TTL on the data, we don’t see the data in Redis after 1.1 seconds, while with Aerospike it’s a bit more inconsistent, but it seems to always be >1 and <2 seconds.

Assuming the insert starts at time T1, finishes at time T2, and the read happens at T3: are you measuring T3 - T1 = 1 sec or T3 - T2 = 1 sec?

– R

I think this is getting a little off what I initially intended here 🙂

Even if the expiration were exactly 1 second, it would still be nice to have ms precision on the TTL. We’re experimenting with storing a TTL in milliseconds and letting the application expire items in the sub-second range to work around that.
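Roughly this shape, as a sketch with the Python client (the `expires_at` bin name and the helper functions are just illustrative, not what we’ve settled on):

```python
import time
import aerospike
from aerospike import exception as ex

client = aerospike.client({"hosts": [("127.0.0.1", 3000)]}).connect()

def put_with_ms_ttl(key, bins, ttl_ms):
    # Store a millisecond deadline in the record itself, and round the
    # server-side TTL up to whole seconds so Aerospike never expires the
    # record before our deadline does.
    record = dict(bins, expires_at=int(time.time() * 1000) + ttl_ms)
    client.put(key, record, meta={"ttl": ttl_ms // 1000 + 1})

def get_with_ms_ttl(key):
    try:
        _, _, bins = client.get(key)
    except ex.RecordNotFound:
        return None
    # The application-side check covers the sub-second remainder.
    if bins.pop("expires_at") <= int(time.time() * 1000):
        return None
    return bins
```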

To answer the above question anyway: it’s basically T3 - T1. The time between T2 and T1 is around a millisecond or less, so it wouldn’t explain a 500 ms+ overage.

Basically doing this:

  • put with a 1 s TTL (asynchronously)
  • wait 1.5 s
  • read

Doing that, we generally get a result back even when the client is local to a single-node cluster with an in-memory namespace.
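In code it’s something like this sketch with the Python client (the put is synchronous here for simplicity, and T2 - T1 is negligible either way):

```python
import time
import aerospike
from aerospike import exception as ex

client = aerospike.client({"hosts": [("127.0.0.1", 3000)]}).connect()
key = ("test", "cache", "probe")

client.put(key, {"v": "x"}, meta={"ttl": 1})  # T1 ~ T2: put with a 1 s TTL
time.sleep(1.5)                               # T3 - T1 ~ 1.5 s

try:
    _, _, bins = client.get(key)
    print("still there:", bins)  # what we generally see
except ex.RecordNotFound:
    print("expired")
```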

I’m not super concerned with the above. It’s not that surprising (to me) that, with as blunt a metric as seconds, the record isn’t removed at exactly 1 second.

Got it. Sorry I missed the point initially. We will take having more granularity on the TTL into consideration. Thanks.

This feature request would require additional index bits for the TTL to increase its precision. An index entry is very compact, and every bit needs to be carefully allocated. Sorry, I cannot justify spending these bits on this feature.
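For a rough sense of the cost (a back-of-the-envelope estimate, not an official figure): keeping the same maximum TTL at tenth-of-a-second granularity multiplies the number of representable values by 10, which costs about ceil(log2(10)) = 4 extra bits per index entry.

```python
import math

# Extra bits needed to represent the same maximum TTL at 10x finer
# (tenth-of-a-second) granularity.
print(math.ceil(math.log2(10)))  # 4
```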
