LDT Allocation error

Holmes,

Write block size defines the write size on the device. Even if your record sizes are small, the system bundles multiple records into a single write block while flushing to storage, so the performance characteristics may change. What exact behavior are you worried about? Please check this.
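
For reference, a minimal sketch of where this is set in aerospike.conf (the device path below is just a placeholder, and 1M is the usual default):

    storage-engine device {
        device /dev/sdb          # hypothetical device path
        write-block-size 1M      # size of each block flushed to the device
    }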

Another option would be to change the default value for the LDT. You can do that using the configurator: whenever performing an operation, pass this map as a parameter. I am assuming you are using the Java client.

configMap = {
    'Compute': 'compute_settings',
    'WriteBlockSize': 1024 * 1024
}
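
A rough sketch with the Java client, assuming the llist UDF accepts the creation spec as a third argument the way the aql example below does; the host, namespace, set, and key here are placeholders:

    import java.util.HashMap;
    import java.util.Map;

    import com.aerospike.client.AerospikeClient;
    import com.aerospike.client.Key;
    import com.aerospike.client.Value;

    public class LdtConfigExample {
        public static void main(String[] args) {
            AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);
            try {
                // Creation-spec map mirroring configMap above.
                Map<String, Object> configMap = new HashMap<String, Object>();
                configMap.put("Compute", "compute_settings");
                configMap.put("WriteBlockSize", 1024 * 1024);

                Key key = new Key("test", "demo", "1");

                // Invoke the llist UDF directly and pass the creation spec as
                // the last argument, matching the aql call below.
                client.execute(null, key, "llist", "add",
                        Value.get("bin"), Value.get(1), Value.get(configMap));
            } finally {
                client.close();
            }
        }
    }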

A sample request in aql would look like:

aql> execute llist.add('bin', 1, 'JSON{"Compute":"compute_settings", "WriteBlockSize":1048576}') on test.demo where PK='1'

Note that the configurator takes effect only at the time of the create; it will only affect future creates.

BTW, which LDT (llist?) are you using?

– R