I want to efficiently store a growing list (max ~150 kB) in a database. The options I considered were:
- A single-record list (Aerospike’s native list complex datatype) with write-block-size = 256kB; or
- A custom, multi-record, bucketing implementation.
I implemented a custom bucketing type as follows:
- A list is represented by (i) one metadata record and (ii) one or more data buckets.
- The metadata record has a list of all data buckets belonging to that list.
- After each ListOperation.append, the data bucket’s new size is returned to the client. If the size threshold is reached, the client creates a new bucket and registers it in the metadata record for future appends. This check is cheap: the threshold is stored client-side, so most of the time (while it has not been met) no extra operations are required.
- For a get-all-values operation, read the bucket list from the metadata record, then do a batch get for all buckets.
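The steps above can be sketched in plain Java. This is an illustrative model only: a HashMap stands in for the Aerospike namespace, and the class/method names (BucketedList, append, getAll, etc.) are my own, not from the actual implementation. In the real client, the per-bucket append would be a ListOperation.append whose returned list size drives the rollover decision.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the client-side bucketing logic described above.
// All names are hypothetical; a HashMap stands in for the Aerospike namespace.
public class BucketedList {
    private final Map<String, List<String>> store = new HashMap<>(); // fake "namespace"
    private final String listId;
    private final int threshold;      // bucket-size threshold, known client-side
    private String currentBucket;     // key of the bucket currently receiving appends

    public BucketedList(String listId, int threshold) {
        this.listId = listId;
        this.threshold = threshold;
        // Metadata record: holds the keys of all data buckets for this list.
        store.put(metaKey(), new ArrayList<>());
        currentBucket = newBucket(0);
    }

    private String metaKey() { return listId + ":meta"; }

    // Create a data bucket and register it in the metadata record.
    private String newBucket(int index) {
        String key = listId + ":bucket:" + index;
        store.put(key, new ArrayList<>());
        store.get(metaKey()).add(key);
        return key;
    }

    // Append one value; mirrors ListOperation.append returning the new list size.
    public void append(String value) {
        List<String> bucket = store.get(currentBucket);
        bucket.add(value);
        int size = bucket.size();     // returned to the client by the append op
        if (size >= threshold) {      // threshold reached: roll over to a new bucket
            int nextIndex = store.get(metaKey()).size();
            currentBucket = newBucket(nextIndex);
        }
    }

    // Get-all-values: read the bucket list from metadata, then (batch-)get each bucket.
    public List<String> getAll() {
        List<String> out = new ArrayList<>();
        for (String key : store.get(metaKey())) {
            out.addAll(store.get(key));
        }
        return out;
    }

    public int bucketCount() { return store.get(metaKey()).size(); }
}
```

With a threshold of 3, seven appends land in three buckets (3 + 3 + 1); the rollover cost is paid only on every third append, which is the efficiency argument made above.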
I then tested it with the following settings/config:
Hardware. MacBook Pro 13 (2015) host with a Debian 7 VM limited to 80% of one core and 256 MB RAM.
Aerospike namespace. In-memory = false. Writes to disk (a 2 GB, fixed-size VirtualBox disk file). 256 kB write block size, ldt-enabled = true.
Test parameters. All permutations of the following parameter variations were tested at least 3 times:
- Iteration count: 250, 500, 1000, 2000, 4000, 6000, 8000 (each iteration = one append)
- Bucket size: 100, 500, 1000, 2000, 3000
- Append payload: (i) 1x 20-byte value (via ListOperation.append); and (ii) 10x 20-byte values (appended in a single operation)
Results. In almost all cases, a simple Aerospike list CDT was as fast as or faster than the bucketed list in both read and write situations. Possible explanations:
- Caching by the streaming write buffer skews results in favour of the simple list type, and wouldn’t happen in the production environment where many other parallel database ops would render the caching ineffective.
- The copy-on-write savings (copying one bucket per append instead of the whole list) do not offset the additional query cost (reading the metadata before inserting into the identified bucket, plus the occasional extra bucket creation).
- Some other condition beyond my understanding.
Question. I’ve seen bucketing suggested on the Aerospike website (I forget where, but it suggested time-based bucketing). Is bucketing only for avoiding the write-block-size record size limit, or are there supposed to be performance benefits that my benchmark does not reveal?
**On the go currently, but can post implementation and test code later.**