How to bypass timeout with a UDF in a scan?

Hello, I need to execute a UDF and stream over all the records, but both return a timeout. I increased the timeout to 300000 but it is not enough time. Is there an alternative, such as running queries in the background, that bypasses the timeout or has none?

Thanks

Can you give us details on how you are executing the scan: code snippets, versions, etc.? I ran into a similar issue using AQL, I think, but there isn't much detail here to say for certain.

I run this scan: AGGREGATE myspace.sum() ON namespace_.campaigns
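One thing worth checking before restructuring the job: aql has its own client-side timeout, separate from any server setting, and it defaults to a low value. Raising it may be all that's needed (value in milliseconds; exact syntax may vary by tools version):

```sql
aql> SET TIMEOUT 300000
aql> AGGREGATE myspace.sum() ON namespace_.campaigns
```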

I'm planning to run a separate aggregation for each possible key in my “segments” map to work around the timeout, but that is not the solution I want. Any ideas?

The code is a simple map aggregation:

function sum(s)
    -- Keep only records that actually have a 'segments' bin
    local function filterStatus(rec)
        return rec['segments'] ~= nil
    end

    -- Count every record in 'total' and each segment key individually
    local function my_aggregate_fn(mapa, rec)
        mapa['total'] = (mapa['total'] or 0) + 1
        if rec['segments'] ~= nil then
            for key in map.keys(rec['segments']) do
                mapa[key] = (mapa[key] or 0) + 1
            end
        end
        return mapa  -- was 'return map', which returned the map module, not the result
    end

    -- Merge every key from the partial results, not just 'total';
    -- the original reduce dropped all the per-segment counts
    local function my_reduce_fn(a, b)
        return map.merge(a, b, function(v1, v2) return v1 + v2 end)
    end

    -- filterStatus was defined but never applied; wire it into the pipeline.
    -- The aggregate seed must be an Aerospike map, not a plain Lua table.
    return s : filter(filterStatus) : aggregate(map{['total'] = 0}, my_aggregate_fn) : reduce(my_reduce_fn)
end
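To see why the reduce step has to merge every key rather than only 'total', here is the same counting logic sketched in plain Python (hypothetical records, not the Aerospike Lua API): each partial result from a node carries per-segment counts that a 'total'-only reduce would silently drop.

```python
from collections import Counter

# Hypothetical records: each may carry a 'segments' map, as in the UDF above.
records = [
    {'segments': {'a': 1, 'b': 2}},
    {'segments': {'a': 5}},
]

def aggregate(mapa, rec):
    """Mirror of my_aggregate_fn: count the record and each of its segment keys."""
    mapa['total'] += 1
    for key in rec.get('segments', {}):
        mapa[key] += 1
    return mapa

def reduce_fn(a, b):
    """Mirror of my_reduce_fn: merge *all* keys; Counter addition sums matching keys."""
    return a + b

# Simulate two partial aggregations (as two nodes would produce) and merge them.
part1 = aggregate(Counter(total=0), records[0])
part2 = aggregate(Counter(total=0), records[1])
merged = reduce_fn(part1, part2)
print(dict(merged))  # {'total': 2, 'a': 2, 'b': 1}
```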

I assume you're using AQL then? What version of the server/tools? Can you try executing it through the client API instead? AQL is a bit funky and really isn't meant for much except convenient debugging. It acts funny on certain object types, sizes, and UDF invocations in my experience.
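As a sketch of what "through the client API" might look like, here is one way to invoke the same stream UDF from the Python client, with the client-side timeout disabled. The host, namespace, and policy key are assumptions (policy field names have changed between client versions, so check the docs for yours):

```python
# Sketch only: requires a running Aerospike server and the 'aerospike' client package.
try:
    import aerospike
except ImportError:
    aerospike = None  # client not installed; this remains a sketch

def run_sum_aggregation(host=('127.0.0.1', 3000)):
    """Run the 'sum' stream UDF from module 'myspace' over namespace_.campaigns."""
    client = aerospike.client({'hosts': [host]}).connect()
    query = client.query('namespace_', 'campaigns')
    query.apply('myspace', 'sum')  # the UDF registered above
    # Assumption: a total_timeout of 0 disables the client-side timeout.
    return query.results({'total_timeout': 0})
```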

© 2015 Copyright Aerospike, Inc. | All rights reserved. Creators of the Aerospike Database.