Is Lua the ideal solution, and is it faster than a write operation with policy set to AS_POLICY_EXISTS_CREATE, followed by multiple increment and read operations?
The only way I can think of is to store the read count in another bin on the record and increment it via a client operation or a server UDF. This would not work for single-bin namespaces, nor for the "x seconds" range requirement.
Is there a reasonable maximum number of reads that you expect in the last x seconds, and what is that number? For example, if it is something like a maximum of 20 reads in the last 10 seconds, then I can think of a way to do it; the method has to fit within the record size limitation. Is the record stored on persistent storage (SSD/HDD), where the maximum record size is 1 MB or the write-block-size, whichever is smaller, or in RAM, where the record can be much larger? The way I would implement it: keep a bounded list in a bin holding the timestamp (in seconds) of each read, and use Operate() to update the list and do the read in a single call.
So, another bin in the record (hence it won't work with a single-bin namespace) which holds a list of timestamps, prepended on each read and then trimmed to the maximum size allowed by the record size limit. The count of reads within the last x seconds can then be obtained from the list values.
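To make the prepend-trim-count idea above concrete, here is a minimal Python sketch of the list logic only. It simulates what the server-side list operations would do to the bin (prepend the read timestamp, trim to a size cap, count entries in the window); the size cap `MAX_LIST_SIZE` and the helper names are illustrative assumptions, not Aerospike API calls, and in a real deployment this would be done atomically in one Operate() call.

```python
import time

# Assumption: a cap chosen so the list fits the record size limit.
MAX_LIST_SIZE = 20

def record_read(timestamps, now=None, max_size=MAX_LIST_SIZE):
    """Prepend the read time (seconds) and trim the list to max_size,
    mirroring a server-side list-insert-at-0 followed by a list-trim."""
    now = int(time.time()) if now is None else now
    timestamps.insert(0, now)
    del timestamps[max_size:]   # keep only the newest max_size entries
    return timestamps

def reads_in_last(timestamps, x_seconds, now=None):
    """Count reads whose timestamp falls within the last x_seconds."""
    now = int(time.time()) if now is None else now
    return sum(1 for t in timestamps if now - t <= x_seconds)

# Example: five reads at t = 100..104, then count a 3-second window.
ts = []
for t in (100, 101, 102, 103, 104):
    record_read(ts, now=t)
print(ts)                          # newest first: [104, 103, 102, 101, 100]
print(reads_in_last(ts, 3, now=104))  # t in [101..104] -> 4
```

Because the real list update and the read happen in one Operate() call, the update is atomic per record; the client only has to scan the returned list to compute the window count.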
I don't quite understand what that gets you. Also, if the number of reads is high, the UDF approach will not work. Each UDF module gets its own state cache in memory, and each module can consume up to 128 such caches at any given time on a node. So if your concurrent reads using this UDF exceed 128 per node, UDF performance will degrade once all 128 concurrent invocations per node are in use.