Mongo upsert-like capability?



Cool, I’ll give that a try. In case I wasn’t clear earlier, I planned on keeping every tag as another bin and adding secondary indexes on them… querying this system by the tags’ values is the whole point. (In case you’re curious, we’re trying to implement something very similar to KairosDB, InfluxDB, or Prometheus, but we need these atomic increments, which none of these systems have).

Which leads me to my question about increment behavior… can I increment records that have multiple bins, or only records with the one bin that stores the number I’m incrementing? To highlight what I’m asking, here’s some code I came up with that I thought would work but didn’t:

brand = 'awesomesauce'
metric = 'updates'
ts = 1425412470000
attributes = {'passed': 'F', 'channel': 'WEB', 'mode': 'DEFAULT',
              'guid': 'cmE2SAhY', 'updated': 'F'}
uniq_string_ts = 'awesomesauce:updates:1425412470000:passed=F,channel=WEB,mode=DEFAULT,guid=cmE2SAhY,updated=F'
as_key = (AS_NS, AS_SET, uniq_string_ts)
value = 3
record = attributes
record['value'] = value
record['ts'] = ts
record['brand'] = brand
record['metric'] = metric

def put():
  try:
    client.put(as_key, record)
    client.increment(as_key, 'value', value)
  except Exception as e:
    print "Problem incrementing in Aerospike.. exception was: %s" % str(e)

def inc():
  try:
    client.increment(as_key, 'value', value)
  except Exception as e:
    print "Problem incrementing in Aerospike.. exception was: %s" % str(e)

def read():
  (key, meta, r) = client.get(as_key)
  print "record of key: %s" % r

Is there a way I can combine the increment and the put? When I try something like passing the entire record as another parameter to the increment call, it doesn’t seem to include any more data in the record…


The increment() method is a special case of operate(), AKA record multi-ops. The operate() method lets you apply several increments to the same record atomically, along with several other operations combined into a single call: touching the record, writing to a bin, appending, prepending, incrementing, and reading a bin (reads happen after writes).

See the API documentation for the method, and the multi-ops sample in our Quick Guide. So in reference to your scenario, you can put, increment, and read all in one. Does that help?
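To make that concrete, here is an untested sketch of a single operate() call that writes the attribute bins, increments the counter, and reads the new total back. The operation dicts use the Python client’s OPERATOR_* constants; the helper name `upsert_and_count` is made up for illustration:

```python
def upsert_and_count(client, key, attributes, value):
    import aerospike  # deferred so the sketch parses without the client installed

    # Write every attribute bin (this also creates the record if absent)...
    ops = [{'op': aerospike.OPERATOR_WRITE, 'bin': b, 'val': v}
           for b, v in attributes.items()]
    # ...atomically add to the counter bin...
    ops.append({'op': aerospike.OPERATOR_INCR, 'bin': 'value', 'val': value})
    # ...and read the resulting total in the same call (reads run after writes).
    ops.append({'op': aerospike.OPERATOR_READ, 'bin': 'value'})

    _, _, bins = client.operate(key, ops)
    return bins['value']
```

Because all three steps go through one operate() call, the server applies them to the record as a single atomic transaction, so there is no window for another writer between the put and the increment.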


Uh… I think so. I’ll look at the docs a little more closely and try what you’re suggesting.

After my post I came up with a variation that checks for the key, then increments value if the record exists, or puts the entire record if it doesn’t. That seems to be working for now. My main problem now is on the read path: when I try to query for the records by other bins, I get AEROSPIKE_ERR_INDEX_NOT_FOUND, even though I can clearly see the secondary index on my namespace/set in AMC…
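For reference, the check-then-increment-or-put variation just described could look like this sketch (hypothetical helper name; note that the check and the write are two separate calls, so unlike operate() it is not atomic and can race with another writer):

```python
def inc_or_put(client, key, record, value):
    # In the client version used here, exists() returns (key, meta) with
    # meta None when the record is absent; newer clients may instead
    # raise a RecordNotFound exception.
    _, meta = client.exists(key)
    if meta is None:
        client.put(key, record)                # first sighting: store all bins
    else:
        client.increment(key, 'value', value)  # later writes: just bump the counter
```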

>>> query = as_client.query(AS_NS, AS_SET)
>>>'value', 'ts')

>>> from aerospike import predicates as p
>>> query.where( p.equals('channel', "WEB") )

>>> def print_result((key, metadata, record)):
...   print (key, record)
>>> query.foreach( print_result)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
Exception: (201L, 'AEROSPIKE_ERR_INDEX_NOT_FOUND', 'src/main/aerospike/aerospike_query.c', 223)
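For what it’s worth, AEROSPIKE_ERR_INDEX_NOT_FOUND usually means the server has no index matching the queried bin in that namespace/set. An index on the channel bin can be created from the client; the helper and the index name 'channel_idx' below are made up, and the positional signature follows recent Python client docs (older clients may also take a policy dict):

```python
def ensure_channel_index(client, ns, set_name):
    # Create a string secondary index on the 'channel' bin so that
    # predicates.equals('channel', "WEB") queries can be served.
    client.index_string_create(ns, set_name, 'channel', 'channel_idx')
```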

Read with secondary indexes

Hey Joshua, can you open this last part in the Operations side of the discussion forum?

It’s a bit off the initial topic :wink:


Absolutely. Read with secondary indexes