Hello,
we have a double-typed bin in Aerospike that needs to be incremented under heavy load.
So far this has worked fine using the atomic increment operation Aerospike provides natively.

Unfortunately, we now have to use BigInteger instead of double, and Aerospike has no atomic increment for that.

Which approach would you recommend if performance is the most important factor and updates are likely to arrive concurrently?
Thank you in advance.

We don’t support BigInt as a native type. However, we do support 8-byte longs, which allow storage of integers between -9,223,372,036,854,775,808 and 9,223,372,036,854,775,807. These are quite generous limits - for instance, you can store time as epoch nanoseconds at this level of accuracy. So although we don’t support BigInt, are you sure the 8-byte long type won’t work for you? It supports the atomic increments you need.

Hi Ken
Thank you for the response.
Unfortunately the long type is not enough :(
The value we have to store and increment is already more than 18 digits, and it could grow bigger in a year or two.

A trick you can use is to store the counter in more than one location and increment one of them at random. If you add the values together you always get the correct total, and each individual increment remains atomic. This is a technique we use when building counters that are updated at high frequency - it also helps avoid ‘hot key’ errors.

However, I would ask whether the number you are storing could be simplified in some way. It takes a long time to count to 9 × 10^18. If you are starting from some initially high number, perhaps subtract that baseline away first.

So imagine you are counting the number of people entering a football stadium. You can’t have just one person doing that - you have many, one at each entrance. Each of those people increments their own count by one when a new person enters. You can find the total number of people in the stadium by asking each of the ‘counters’ for their current value. Yes, there are potentially point-in-time issues here, but it may well be ‘good enough’.

The idea of sharding a counter is analogous. Rather than updating one counter, you update one of n, chosen at random. This arrangement gives you atomic increments, and lets you get the total at any time by reading all the counters and adding the results. You could put the counters in a set, for example, and scan the set to make sure you ‘get’ them all, or do a batch get iterating from 1 to n if your key is something like my-counter-<i> where i ranges over 1 … n.
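To make the sharding idea concrete, here is a rough sketch in plain Java. An in-memory AtomicLongArray stands in for the n Aerospike records (in a real deployment each slot would be a record keyed my-counter-<i>, incremented with Aerospike’s atomic add); the class name and shard count are just illustrative.

```java
import java.math.BigInteger;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLongArray;

public class ShardedCounter {
    // One slot per shard; in Aerospike each slot would be its own
    // record, e.g. key "my-counter-<i>" for i = 1 .. n.
    private final AtomicLongArray shards;

    public ShardedCounter(int n) {
        this.shards = new AtomicLongArray(n);
    }

    // Pick a shard at random and increment it atomically -
    // the equivalent of Aerospike's atomic add on one record.
    public void increment(long delta) {
        int i = ThreadLocalRandom.current().nextInt(shards.length());
        shards.addAndGet(i, delta);
    }

    // Read every shard and sum - the equivalent of a batch get over
    // my-counter-1 .. my-counter-n. Summing into a BigInteger avoids
    // overflow when the combined total exceeds Long.MAX_VALUE.
    public BigInteger total() {
        BigInteger sum = BigInteger.ZERO;
        for (int i = 0; i < shards.length(); i++) {
            sum = sum.add(BigInteger.valueOf(shards.get(i)));
        }
        return sum;
    }
}
```

The random shard choice is what spreads write load: no single record is contended by every client, which is exactly the ‘hot key’ avoidance mentioned above.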

I think I’ve got your idea. It’s interesting… a few details on why it might not be final yet:

if one of the counters is already at long.MAX - do I have to create an additional one, or have them created ahead of time?

the value can be both incremented and decremented

if a decrement operation would take the result below 0, I have to decline the operation

Maybe more context will help. This “value” is a kind of user balance, and for some users the value could be much bigger than long.MAX in the future.
Does this change your proposal?
Thank you.

You need to size the number of counters correctly. What is your likely maximum value? Call this GLOBAL_MAX. The number of required counters is then GLOBAL_MAX / long.MAX - maybe add 10% to this number to allow for variation.
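That sizing arithmetic can be sketched as follows; GLOBAL_MAX is whatever ceiling you estimate for your data, and the 10% headroom is the rule of thumb from the reply above, not a hard requirement.

```java
import java.math.BigInteger;

public class CounterSizing {
    // Number of shards needed so that no single long counter
    // overflows, with ~10% headroom for uneven random spread.
    public static int requiredCounters(BigInteger globalMax) {
        BigInteger longMax = BigInteger.valueOf(Long.MAX_VALUE);
        // Ceiling division: GLOBAL_MAX / long.MAX, rounded up.
        BigInteger[] qr = globalMax.divideAndRemainder(longMax);
        int base = qr[0].intValueExact() + (qr[1].signum() > 0 ? 1 : 0);
        // Add ~10% (at least one extra counter) for variation.
        return base + Math.max(1, base / 10);
    }
}
```

For example, a GLOBAL_MAX of ten times long.MAX would need 10 base counters plus 1 spare. Sizing ahead of time is simpler than growing the shard set later, since readers must always know which keys to sum.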

With decrements - you could make use of our Expression language in such a way that an error is thrown if the resulting value would be < 0.
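In the Aerospike Java client that server-side guard would typically be an expression on the write policy; as a client-neutral illustration of the same reject-if-negative semantics, here is an in-memory sketch using a compare-and-set loop (the class and method names are invented for the example):

```java
import java.util.concurrent.atomic.AtomicLong;

public class GuardedCounter {
    private final AtomicLong value = new AtomicLong();

    public void increment(long delta) {
        value.addAndGet(delta);
    }

    // Decrement only if the result stays >= 0; returns false
    // (the "declined" case) otherwise. The CAS loop stands in for
    // the atomic server-side expression check.
    public boolean tryDecrement(long delta) {
        while (true) {
            long current = value.get();
            if (current - delta < 0) {
                return false; // would go negative: decline
            }
            if (value.compareAndSet(current, current - delta)) {
                return true;
            }
        }
    }

    public long get() {
        return value.get();
    }
}
```

One caveat with sharded counters: a per-shard check can decline a decrement even though the global total is still positive, because the randomly chosen shard happens to be near zero. A common workaround is to retry the decrement against a different shard before declining.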