SIGSEGV during get_many when concurrent is True

Hi, we’re using the Python client in a C++ program that runs Python modules. We were on Python client version 2.0.3 and everything was fine. We have a few clients communicating with clusters running version 3.13.0.4. After upgrading to the latest Python client (2.2.3), a SIGSEGV occurred while executing the get_many method with a policy that has 'concurrent' set to True. After setting 'concurrent' to False, everything is fine. I’m not able to reproduce this behavior with a simple Python script and stable data.
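Roughly, the calls look like the sketch below (the host, namespace, set and keys here are placeholders for illustration, not our real configuration):

```python
import aerospike

# Placeholder cluster address; our real deployment differs.
config = {'hosts': [('127.0.0.1', 3000)]}
client = aerospike.client(config).connect()

# Placeholder keys: (namespace, set, primary key) tuples.
keys = [('test', 'demo', 'key-%d' % i) for i in range(40)]

# With 'concurrent': True the batch nodes are queried in parallel;
# this is the setting under which we see the SIGSEGV.
records = client.get_many(keys, policy={'concurrent': True})

client.close()
```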

Here is the backtrace from the core file:

Program terminated with signal SIGSEGV, Segmentation fault.

#0  0x00002ad6c46f9ff4 in as_command_parse_bins (pp=pp@entry=0x2ad6d05ff2a8, err=err@entry=0x2ad6d0601a20, rec=rec@entry=0x7ffcdb702c18, n_bins=<optimized out>,  
    deserialize=deserialize@entry=true) at src/main/aerospike/as_command.c:955 
955     src/main/aerospike/as_command.c: No such file or directory. 
(gdb) bt 
#0  0x00002ad6c46f9ff4 in as_command_parse_bins (pp=pp@entry=0x2ad6d05ff2a8, err=err@entry=0x2ad6d0601a20, rec=rec@entry=0x7ffcdb702c18, n_bins=<optimized out>,  
    deserialize=deserialize@entry=true) at src/main/aerospike/as_command.c:955 
#1  0x00002ad6c46e6abd in as_batch_parse_record (deserialize=<optimized out>, rec=<optimized out>, msg=<optimized out>, err=<optimized out>, pp=<optimized out>) 
    at src/main/aerospike/aerospike_batch.c:118 
#2  as_batch_parse_records (err=err@entry=0x2ad6d0601a20, buf=buf@entry=0x2ad6d05ff3f0 "\026", size=size@entry=8108, task=task@entry=0x7ffcdb701f10) 
    at src/main/aerospike/aerospike_batch.c:307 
#3  0x00002ad6c46e6e30 in as_batch_parse (err=0x2ad6d0601a20, sock=0x2ad6d06014b0, node=<optimized out>, max_idle=<optimized out>, deadline_ms=522997532, udata=<optimized out>) 
    at src/main/aerospike/aerospike_batch.c:359 
#4  0x00002ad6c46f8d21 in as_command_execute (cluster=0xb5c0ca0, err=err@entry=0x2ad6d0601a20, policy=policy@entry=0x2ad6d06019e0, cn=cn@entry=0x2ad6d0601a00,  
    command=command@entry=0x2ad6d0601530 "\002\003", command_len=command_len@entry=1174, parse_results_fn=parse_results_fn@entry=0x2ad6c46e6d41 <as_batch_parse>,  
    parse_results_data=parse_results_data@entry=0x7ffcdb701f10) at src/main/aerospike/as_command.c:473 
#5  0x00002ad6c46e7dd1 in as_batch_index_records_execute (task=0x7ffcdb701f10) at src/main/aerospike/aerospike_batch.c:518 
#6  0x00002ad6c46e7ea4 in as_batch_command_execute (task=0x7ffcdb701f10) at src/main/aerospike/aerospike_batch.c:722 
#7  as_batch_worker (data=0x7ffcdb701f10) at src/main/aerospike/aerospike_batch.c:744 
#8  0x00002ad6c4712fd0 in as_thread_worker (data=0xb5c0d10) at src/main/aerospike/as_thread_pool.c:53 
#9  0x00002ad611ce1064 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0 
#10 0x00002ad611fde62d in clone () from /lib/x86_64-linux-gnu/libc.so.6

I’ve also tried compiling the Python client against C client 4.1.10, but it doesn’t help.

I’ll ping the client devs, but I’m also really curious: why would you use the Python client from C++ instead of the C client?

Well, maybe I wasn’t so clear. The C++ program processes data. As part of the processing “pipeline” it runs Python modules that use the Python client.

Could you share the approximate number of keys you are passing in the calls to get_many? A representative key would be useful as well; I’m mainly interested in the type of the primary key. The version of Python you are running would also be helpful.

@rmakrs:

Could you share the approximate number of keys you are passing in the calls to get_many? 20-60

representative key (whole tuple): ('scan', 'realtime-reputations', 'header_from_fulldomain|info.problem.cc')

version of Python: 2.7.9

example of stored data: https://pastebin.com/hDY8UkB4

@manana thanks for all that information. We’ll look into it and see if we can find a reason for that occurring.

For now I would just recommend setting 'concurrent': False since, as you indicated, that prevents the issue.
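Something along these lines (the connection settings and keys below are placeholders, just to illustrate the policy change):

```python
import aerospike

# Placeholder connection and keys for illustration only.
client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()
keys = [('test', 'demo', 'key-%d' % i) for i in range(40)]

# 'concurrent': False makes the client query batch nodes sequentially
# instead of in parallel, which sidesteps the reported SIGSEGV.
records = client.get_many(keys, policy={'concurrent': False})

client.close()
```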