LDT Allocation error

Hi,

I stumbled upon the error below today:

Feb 10 2015 11:45:27 GMT: WARNING (udf): (udf_aerospike.c::178) udf__aerospike_get_particle_buf: Invalid Operation [Bin myldtbin data too big size=136648]... Fail
Feb 10 2015 11:45:27 GMT: WARNING (udf): (udf_aerospike.c::459) udf_aerospike_setbin: Allocation Error [Map-List: bin myldtbin data size too big: pbytes 136648]... Fail
Feb 10 2015 11:45:27 GMT: WARNING (udf): ([C]::-1) [ERROR]<lib_lmap_2014_12_14.B:lmap.put_all()>TopRec Update Error rc(-1) 

Is there any limitation here?

Server : 3.4.1


What is your write block size for the namespace?

– R

Oh… Thanks!
It is set to 128 KB.

It was set because it was recommended by this page : https://www.aerospike.com/docs/operations/configure/namespace/storage/

What happens if I set it to the default of 1 MB? Since 99% of my data will be less than 128 KB, will it have any effect on performance (using SSD)? Does Aerospike internally create a block of the given size (1 MB) even if the data is much smaller than that?



Write block size defines the size of each write to the device. Even if your records are small, the system bundles multiple records into a single write block while flushing to storage. The performance characteristics may change. What exact behavior are you worried about? Please check this.
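To make the bundling concrete, here is a small illustrative sketch (not Aerospike source code; the record sizes are hypothetical) of how many records pack into a write block, and why the 136648-byte LDT bin in the log above fails against a 128 KB block:

```java
// Illustrative only: shows write-block packing arithmetic, not the
// actual Aerospike storage engine.
public class WriteBlockPacking {
    // Block sizes discussed in this thread.
    static final int SMALL_BLOCK = 128 * 1024;   // 128 KB
    static final int LARGE_BLOCK = 1024 * 1024;  // 1 MB

    // How many records of a given size fit into one write block
    // (ignoring per-record overhead for simplicity).
    static int recordsPerBlock(int blockSize, int recordSize) {
        return blockSize / recordSize;
    }

    public static void main(String[] args) {
        int recordSize = 2 * 1024; // hypothetical 2 KB record
        System.out.println("128 KB block holds " + recordsPerBlock(SMALL_BLOCK, recordSize) + " records");
        System.out.println("1 MB block holds " + recordsPerBlock(LARGE_BLOCK, recordSize) + " records");
        // A single record larger than the write block cannot be written at
        // all: 136648 bytes (the failing LDT bin) vs. 131072 bytes (128 KB).
        System.out.println("136648-byte bin fits in 128 KB block: " + (136648 <= SMALL_BLOCK));
    }
}
```

So a larger write block does not waste space for small records; they are packed together. The limit that matters here is that no single record (or LDT sub-record page) can exceed the write block size.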

The other option would be to change the default value for the LDT. You can do that using the configurator: whenever performing an operation, pass this map as a parameter. I am assuming you are using the Java client.

configMap = {
    'Compute': "compute_settings",
    'WriteBlockSize': 1024*1024
}

A sample request in aql would look like:

aql> execute llist.add(‘bin’, 1, ‘JSON{‘Compute’:”compute_settings”, “WriteBlockSize":1048576}’) on test.demo where PK=‘1'

Note that the configurator takes effect only at create time; it will only affect future creates.

BTW, which LDT (llist?) are you using?

– R

Hi Raj,

Thanks again for the info. If the system bundles multiple records, then that is great. But why did you recommend that specific 128K size for SSDs?

I am using LMAP :

LargeMap lmap = client.getLargeMap(writePolicy, key, ldtBinName, null);
lmap.put(ldtBinValue); // Here ldtBinValue is a Map<String,Object>

It would be great if I could just change the block size for this LDT instead of for all bins. I am mainly worried that increasing the block size will reduce my read/write speed, or that it will unnecessarily consume more space on the Aerospike servers.

You mentioned that the performance characteristics may change. What characteristics are you referring to here?

Regards, Holmes


The performance difference would be in terms of the IOPS load on the storage device. I will grab the exact details.

What version of the server are you using?

– R

I am using aerospike-server-3.4.1

Regards, Holmes


There is no performance difference between 128 KB and 1 MB; this is old information. We will fix the documentation.

– R

How do I configure LargeMap via the Java client? Could you post an example?


Please consider using llist (a sorted map); lmap has been deprecated. llist is now auto-configuring, so you need not configure anything per llist.

llist is a B+ tree. The only remaining configuration is the size of each B+ tree node, which you can set through:

asinfo -v 'set-config:context=namespace;id=test;ldt-page-size=1024' 

Please refer to the Java client examples for sample programs.

– R
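As a starting point, a minimal sketch of using llist from the Java client, mirroring the getLargeMap call earlier in the thread (the namespace, set, key, and bin names here are placeholders, and this requires a running server; entries are stored as maps keyed by "key" so the list behaves like a sorted map):

```java
import java.util.HashMap;
import java.util.Map;

import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Value;
import com.aerospike.client.large.LargeList;
import com.aerospike.client.policy.WritePolicy;

public class LlistExample {
    public static void main(String[] args) {
        AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);
        try {
            Key key = new Key("test", "demo", "1");
            WritePolicy writePolicy = new WritePolicy();
            // Same shape as the getLargeMap call above; the last argument
            // (user module) is null since llist is auto-configuring.
            LargeList llist = client.getLargeList(writePolicy, key, "myllistbin", null);

            // llist entries are sorted by the "key" field of the stored map.
            Map<String, Object> entry = new HashMap<String, Object>();
            entry.put("key", "user1");
            entry.put("value", "some data");
            llist.add(Value.get(entry));

            // Point lookups return a List of matching stored entries.
            System.out.println(llist.find(Value.get("user1")));
        } finally {
            client.close();
        }
    }
}
```

This is only a sketch against the 3.x-era client API; check the Java client examples linked below for the exact signatures in your client version.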



There are some examples in Java that use a LargeList as a Map, a Stack, and a Queue on GitHub: helipilot50/aerospike-LDT-techniques (Large Data Type techniques with Aerospike).