Issues with Lua UDFs when calling the map module

udf

#1

I am using the Ruby client and facing issues with the map module in Aerospike, which always returns a nil value.

I am referring to the documentation at http://www.aerospike.com/docs/udf/api/map.html. When I call `local m = map()`, Aerospike reports `attempt to call global 'map' (a nil value)`. The same happens with list, llist, etc.

Also, I tried different clients, but the issue remains the same.

When I try calling map, I get this response:

```
> local m=map()
stdin:1: attempt to call global 'map' (a nil value)
stack traceback:
        stdin:1: in main chunk
        [C]: in ?
```

The issue clearly involves the Aerospike extensions to Lua. I want to know where I am going wrong.


#2

Harshita,

Are you running a normal record UDF or an aggregation? Which version of the server are you using, and on which platform?

Can you try executing this UDF from aql? This will help eliminate the client as the cause:

```
aql> execute <udfname>.<function>(<params>) on <ns>.<set> where PK=<pk>
```

– Raj


#3

@raj

Version: Aerospike Community Edition build 3.8.2.3
Platform: Ubuntu 15.10

I just want to run record UDFs. I tried using aql too, but it is also not returning anything.

Thanks!


#4

Would it be possible to share the Lua code?

– R


#5

@raj

It turned out that the problem was in my UDF code itself. I tried running the UDF below, and now it is working fine.

```lua
function readBin(r, name)
    local function mapper(rec)
        local element = map()
        element[name] = rec[name]
        return element
    end
    return mapper(r)
end
```

Thanks!

Still I have a query.

As the Ruby client doesn't support aggregation, is there any way I can use it to find all activities of a subscriber_key in descending order of timestamp_key, where my PK is a combination of (activity_key, timestamp_key)?


#6

@harshita23sharma,

Since the Ruby client currently does not support aggregations, you'll need to scan the whole set and handle the aggregation logic within your application. You can limit the scan to return only the subscriber_key for each record to minimize data transfer. Also, to ensure you receive the full PK (and not just the key hash) in the scan results, make sure send_key is set to true in the write policy when the records are created. With the Ruby client, this is the default.
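
As a rough sketch of the client-side aggregation step: assuming each scanned record's bins have been collected into a plain hash (the connection and scan code from the aerospike gem is omitted here, and the bin names subscriber_key, activity_key, and timestamp_key follow the question above), the filter-and-sort logic could look like this. This is an illustration, not the client's API.

```ruby
# Hypothetical bin layout, following the question in this thread.
# In a real application these hashes would be built from the bins of
# records returned by a set scan; here they stand in for scan results.

# Select one subscriber's activities, newest first.
def activities_for(records, subscriber)
  records
    .select { |rec| rec["subscriber_key"] == subscriber }
    .sort_by { |rec| -rec["timestamp_key"] }
end

records = [
  { "subscriber_key" => "s1", "activity_key" => "login",  "timestamp_key" => 100 },
  { "subscriber_key" => "s2", "activity_key" => "login",  "timestamp_key" => 150 },
  { "subscriber_key" => "s1", "activity_key" => "logout", "timestamp_key" => 200 },
]

puts activities_for(records, "s1").map { |r| r["activity_key"] }.inspect
# => ["logout", "login"]
```

Note that this pulls every record of the set across the network before filtering, which is the cost of doing the aggregation client-side.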