A growing list (max ~150kB): single-record list vs bucketing

I want to efficiently store a growing list (max ~150 kB) in the database. The options I considered were:

  1. A single-record list (Aerospike’s native list complex datatype) with write-block-size = 256kB; or
  2. A custom, multi-record, bucketing implementation.

I implemented a custom bucketing scheme as follows (a rough Java sketch follows the list):

  1. A list is represented by (i) one metadata record and (ii) one or more data buckets.
  2. The metadata record has a list of all data buckets belonging to that list.
  3. After each ListOperation.append, the data bucket’s new size is returned to the client. If the size threshold is reached, the client creates a new bucket and registers it in the metadata record for future appends. This check should be cheap because, most of the time, the threshold (a value stored on the client side) has not been reached and no extra operations are required.
  4. In a get-all-values operation, read the bucket list from the metadata record and then do a batch get for all buckets.
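
Here is a minimal sketch of what I mean, using the Aerospike Java client. The namespace, set and bin names, the threshold, and the key layout are all illustrative, not my actual code:

import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Record;
import com.aerospike.client.Value;
import com.aerospike.client.cdt.ListOperation;
import com.aerospike.client.policy.BatchPolicy;
import com.aerospike.client.policy.WritePolicy;

import java.util.ArrayList;
import java.util.List;

public class BucketedList {
    // Hypothetical names and threshold -- adjust to your own schema.
    private static final String NS = "test";
    private static final String META_SET = "list_meta";
    private static final String BUCKET_SET = "list_bucket";
    private static final int BUCKET_THRESHOLD = 1000;   // items per bucket before rolling over

    private final AerospikeClient client;
    private final String listId;
    private int currentBucket;                           // cached on the client side

    public BucketedList(AerospikeClient client, String listId) {
        this.client = client;
        this.listId = listId;
        this.currentBucket = 0;
        // For a brand-new list: register the first bucket in the metadata record.
        client.operate(new WritePolicy(), metaKey(), ListOperation.append("buckets", Value.get(0)));
    }

    private Key metaKey()          { return new Key(NS, META_SET, listId); }
    private Key bucketKey(long id) { return new Key(NS, BUCKET_SET, listId + ":" + id); }

    // Append one value; ListOperation.append already returns the bucket's new size.
    public void append(String value) {
        Record r = client.operate(new WritePolicy(), bucketKey(currentBucket),
                ListOperation.append("items", Value.get(value)));
        long newSize = r.getLong("items");
        if (newSize >= BUCKET_THRESHOLD) {
            // Roll over: register a fresh bucket in the metadata record and use it from now on.
            currentBucket++;
            client.operate(new WritePolicy(), metaKey(),
                    ListOperation.append("buckets", Value.get(currentBucket)));
        }
    }

    // Get-all-values: read the metadata record, then batch-get every bucket and concatenate.
    public List<Object> getAll() {
        Record meta = client.get(null, metaKey());
        List<?> bucketIds = meta.getList("buckets");
        Key[] keys = new Key[bucketIds.size()];
        for (int i = 0; i < keys.length; i++) {
            keys[i] = bucketKey(((Number) bucketIds.get(i)).longValue());
        }
        Record[] buckets = client.get(new BatchPolicy(), keys);
        List<Object> all = new ArrayList<>();
        for (Record b : buckets) {
            if (b != null) all.addAll(b.getList("items"));
        }
        return all;
    }
}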

I then tested it with the following setup:

Hardware. MacBook Pro 13 (2015) host with a Debian 7 VM limited to 80% of one core and 256 MB RAM.

Aerospike namespace. Data not held in memory; writes go to disk (a 2 GB, fixed-size VirtualBox disk file). 256K write-block-size, LDT enabled (true).
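
For reference, the namespace stanza looked roughly like this (path and exact values are illustrative, recalled from memory rather than copied from the actual config):

namespace test {
    replication-factor 1
    memory-size 256M
    ldt-enabled true

    storage-engine device {
        file /opt/aerospike/data/test.dat
        filesize 2G
        write-block-size 256K
        data-in-memory false
    }
}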

Test Parameters. All permutations of the following parameter variations were tested at least 3 times:

  1. Iteration count: 250, 500, 1000, 2000, 4000, 6000, 8000 (each iteration = one append + one getAll operation)
  2. Bucket size: 100, 500, 1000, 2000, 3000
  3. Append payload: (i) 1x 20-byte value (via ListOperation.append); and (ii) 10x 20-byte values (via ListOperation.appendItems)
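
For clarity, the two payload variants were driven by calls along these lines (host, key and bin names are placeholders, not my benchmark code):

import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Value;
import com.aerospike.client.cdt.ListOperation;

import java.util.ArrayList;
import java.util.List;

public class AppendVariants {
    public static void main(String[] args) {
        AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);
        Key key = new Key("test", "bench", "list-1");   // hypothetical key
        byte[] payload = new byte[20];                  // one 20-byte value

        // Variant (i): one 20-byte value per iteration.
        client.operate(null, key, ListOperation.append("items", Value.get(payload)));

        // Variant (ii): ten 20-byte values appended in a single appendItems call.
        List<Value> ten = new ArrayList<>();
        for (int i = 0; i < 10; i++) ten.add(Value.get(payload));
        client.operate(null, key, ListOperation.appendItems("items", ten));

        client.close();
    }
}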

Results. In almost all cases, a simple Aerospike list CDT was as fast as or faster than the bucketed list for both reads and writes. Possible explanations:

  1. Caching by the streaming write buffer skews results in favour of the simple list type, and would not happen in a production environment where many other parallel database operations would render the caching ineffective.
  2. The copy-on-write savings (copying one bucket per append instead of the whole list) do not offset the additional querying cost (reading the metadata record before inserting into the identified bucket, plus the occasional bucket creation).
  3. Some other condition beyond my understanding.

Question. I’ve seen bucketing suggested on the Aerospike website (I forget where, but it suggested time-based bucketing). Is bucketing only for avoiding the write-block-size record size limitation, or are there supposed to be performance benefits that my benchmark does not reveal?

**On the go currently, but can post implementation and test code later.**

Hey! I just bumped into your post while I was looking for a fix for one of my queries. I want to use a single bin in my set which is a list. When I append to this list, the list in every record maxes out at 222 bytes. I can’t enter more items into the list, and I don’t get any errors in AQL either.

I see that you have talked about a growing list of up to 150 kB, which is similar to what I need to do. Here is the config for my namespace:

namespace test {
    replication-factor 2
    memory-size 4G
    default-ttl 30d

    storage-engine device {
        file /opt/aerospike/data/bar.dat
        filesize 16G
        data-in-memory true
    }
}

What am I missing here?

@anil_khuteta how do you know your list is maxed out at 222 B? If you’re saying an AQL SELECT query doesn’t return more than that, that’s because it isn’t supposed to beyond a certain list size (afaik). To get the whole list, use a Java client to get() the entire record, as in the sketch below.
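
Something like this (host, set, key and bin names are placeholders for whatever you are actually using):

import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Record;

import java.util.List;

public class ReadWholeList {
    public static void main(String[] args) {
        // Hypothetical connection details -- adjust to your cluster.
        AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);
        Key key = new Key("test", "myset", "mykey");

        Record record = client.get(null, key);        // fetches the full record
        List<?> items = record.getList("mylistbin");  // the whole list bin, not truncated like AQL output
        System.out.println("list size = " + items.size());

        client.close();
    }
}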

For further questions, please start your own thread and private-message me about it so my own thread does not get buried. Thanks, anil.