How to solve the problem "Error Code 13: Record too big"

I am new to Aerospike. Whenever my Java client writes a large record, it gives me the error "Error Code 13: Record too big".

Here’s my namespace configuration:

namespace uapi {
        replication-factor 1
        memory-size 40G
        default-ttl 1d # 1 day; use 0 to never expire/evict.
        ldt-enabled true
        storage-engine device {
                # file /opt/aerospike/data/uapi.dat
                file /cache/aerospike/data/uapi.dat
                filesize 937G
                write-block-size 1048576
                data-in-memory false # Do not also keep data in memory.
        }
}

You can write that as write-block-size 1M (it is the same value), but either way this error means that your record is larger than the 1M write block size. When your data is on SSD this is a hard limit on the maximum record size. To store data larger than that you will need to split it across composite keys, such as your-record-1 and your-record-2, then use a batch read to fetch the pieces and combine them on the application side.
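Not from the reply above, just a rough sketch of that composite-key approach with the Aerospike Java client. The namespace uapi comes from the question; the set name demo, the bin names payload and chunks, and the chunk size are made up for the example:

import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Bin;
import com.aerospike.client.Key;
import com.aerospike.client.Record;

import java.io.ByteArrayOutputStream;
import java.util.Arrays;

public class ChunkedRecord {
    // Stay safely below the 1M write-block-size (the record also carries overhead).
    static final int CHUNK_SIZE = 900 * 1024;

    // Split a large byte[] across keys baseKey-1, baseKey-2, ...
    static void writeChunks(AerospikeClient client, String baseKey, byte[] data) {
        int chunks = (data.length + CHUNK_SIZE - 1) / CHUNK_SIZE;
        for (int i = 0; i < chunks; i++) {
            byte[] part = Arrays.copyOfRange(data, i * CHUNK_SIZE,
                    Math.min(data.length, (i + 1) * CHUNK_SIZE));
            Key key = new Key("uapi", "demo", baseKey + "-" + (i + 1));
            client.put(null, key, new Bin("payload", part), new Bin("chunks", chunks));
        }
    }

    // Fetch all the pieces with one batch read and reassemble them in the application.
    static byte[] readChunks(AerospikeClient client, String baseKey, int chunks) throws Exception {
        Key[] keys = new Key[chunks];
        for (int i = 0; i < chunks; i++) {
            keys[i] = new Key("uapi", "demo", baseKey + "-" + (i + 1));
        }
        Record[] records = client.get(null, keys); // batch read
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (Record r : records) {
            out.write((byte[]) r.getValue("payload"));
        }
        return out.toByteArray();
    }
}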

Thanks for the help. I have solved the problem with LargeList, by breaking my object down into pieces.
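For anyone else landing here, a very rough sketch of what that can look like with the Java client's LargeList API (LDTs were later deprecated and removed, so the exact method signatures depend on the client version; the set, key, and bin names below are made up):

import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Value;
import com.aerospike.client.large.LargeList;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LargeListSketch {
    public static void main(String[] args) {
        AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);
        LargeList llist = client.getLargeList(null,
                new Key("uapi", "demo", "big-object"), "pieces");

        // Each entry is a map that must contain a unique "key" field;
        // the other fields carry one piece of the broken-up object.
        Map<String, Object> piece = new HashMap<>();
        piece.put("key", 1);
        piece.put("data", new byte[]{1, 2, 3});
        llist.add(Value.get(piece));

        // Read all the pieces back and reassemble them in the application.
        List<?> pieces = llist.scan();
        System.out.println(pieces.size());
        client.close();
    }
}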

I noticed that the limit does not apply to in-memory namespaces. Any chance this 1M limit for disk-backed namespaces gets increased in a future Aerospike version?

That’s a write-block-size limit, and it only applies to namespaces whose storage-engine is of type device.

I understand it is a configuration limit. I would like to understand why it is limited to 1M, and whether there are any plans to increase this limit in future Aerospike versions.

Redis allows me to store up to 512MB of XML, JSON, or any serialized object per record. Since Aerospike does not allow that, I have to set up two cache managers in Spring: aerospikeCacheManager and redisCacheManager. Bigger objects go to Redis; smaller objects that need to persist across reboots go to Aerospike. Hopefully Aerospike will improve this in future versions.
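For what it's worth, a minimal sketch of how that per-cache routing can look in Spring, assuming caching is enabled (@EnableCaching) and the aerospikeCacheManager and redisCacheManager beans are already defined elsewhere; the service, cache, and method names here are made up:

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class DocumentService {

    // Large serialized XML/JSON blobs go to Redis (no 1M record limit there).
    @Cacheable(cacheNames = "bigDocs", cacheManager = "redisCacheManager")
    public String loadBigDocument(String id) {
        return fetchFromBackend(id);
    }

    // Small objects that must survive a reboot go to the persistent Aerospike cache.
    @Cacheable(cacheNames = "smallDocs", cacheManager = "aerospikeCacheManager")
    public String loadSmallDocument(String id) {
        return fetchFromBackend(id);
    }

    private String fetchFromBackend(String id) {
        // Placeholder for the real data source.
        return "...";
    }
}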

13 bits in the primary index entry are currently used to record how many 128-byte chunks to read. Reads on device are in 128-byte units, and 2^13 = 8K, so 8K x 128B = 1MB. My guess is that increasing the maximum record size would be rather non-trivial.
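Spelling that arithmetic out (the numbers come from the reply above; the snippet itself is only an illustration):

public class RecordSizeLimit {
    public static void main(String[] args) {
        int sizeBits = 13;    // primary-index bits that store the record size
        int readUnit = 128;   // device reads are done in 128-byte units
        long maxBytes = (1L << sizeBits) * readUnit;
        System.out.println(maxBytes); // 8192 * 128 = 1048576 bytes = 1 MB
    }
}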
