How to set up a timeout policy for an AGGREGATE query?

It seems that foreach has a policy param.

I tried:

policy = { AS_POLICY_W_TIMEOUT: 5000, AS_POLICY_W_RETRY: AS_POLICY_RETRY_ONCE }
result = []
query.foreach(lambda v: result.append(v), policy)

But the query, which works locally, still fails when I switch to a remote node (it returns lots of data). Any thoughts?

Hello,

Do other operations, such as get(), work against that remote node without any policy options passed? Does the query work for you without those policies?
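
For example, something along these lines (a minimal sketch; the host comes from your example, while the record key here is hypothetical) would confirm that plain single-record reads against the remote node work with no policy arguments at all:

import aerospike

# hypothetical remote node address and record key; substitute your own
client = aerospike.client({'hosts': [ ('192.168.119.23', 3000) ]}).connect()
key = ('test', 'demo', 'some-known-key')

# read one record with no policy passed
(key, meta, bins) = client.get(key)
print(meta, bins)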

Ronen

I assume that what is happening is that you're getting so many results back that the process bloats: you are appending each record to a list, which is essentially what aerospike.query.results() does, only slower.

For starters, the timeout for a query has to do with the time the client will wait until the first result streams back from the cluster. It does not limit how long the query will run.
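
As an aside (a sketch, not from your snippet): the Python client expects the policy as a plain dict rather than the C client's AS_POLICY_* constants, and the exact key name depends on the client version ('timeout' on older releases, 'total_timeout'/'socket_timeout' on newer ones), so check the docs for the version you're running:

import aerospike

client = aerospike.client({'hosts': [ ('192.168.119.23', 3000) ]}).connect()
q = client.query('test', 'demo')

# the policy is a plain dict; the key is 'timeout' (ms) on older client versions
# and 'total_timeout' on newer ones - check your version's documentation
policy = {'timeout': 5000}

# this bounds how long the client waits for results to stream back;
# it does not cap the total runtime of the query on the server
q.foreach(lambda record: None, policy)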

Next, think about what you're doing with the records. Each one of those takes up memory. If you buffer the results and don't constrain them, you may run into significant memory usage on the application side. Instead, handle each record as you need in the callback (extract the data you need, aggregate it, etc.):

import aerospike
from aerospike import predicates as p
import pprint
import sys

pp = pprint.PrettyPrinter(indent=2)
try:
  client = aerospike.client({'hosts': [ ('192.168.119.23', 3000) ]}).connect()
  q = client.query('test', 'demo')
  q.where(p.equals('gender', 'f'))
  q.select('name', 'age')
  # prints out each record as it streams back from the query
  q.foreach(lambda r: pp.pprint(r))
except Exception as e:
  print("error: {0}".format(e), file=sys.stderr)
  sys.exit(1)
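
If what you actually need is a summary rather than the raw records, one option (a sketch along the same lines, not something you must do) is to aggregate inside the callback so nothing gets buffered:

import aerospike
from aerospike import predicates as p

client = aerospike.client({'hosts': [ ('192.168.119.23', 3000) ]}).connect()
q = client.query('test', 'demo')
q.where(p.equals('gender', 'f'))
q.select('age')

# keep a running count and sum instead of holding every record in memory
totals = {'count': 0, 'age_sum': 0}

def collect(record):
  (key, meta, bins) = record
  totals['count'] += 1
  totals['age_sum'] += bins.get('age', 0)

q.foreach(collect)
if totals['count']:
  print("average age: {0}".format(totals['age_sum'] / float(totals['count'])))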