Can one record span two write blocks?

I set the write block size to 1 MB. If each record is 750 KB, can one record span two write blocks, or does each 750 KB record take one write block of its own?

You can’t give multiple blocks to a single record. You can, however, fragment your data across multiple records.

Example: 1 metadata record pointing to 3 sub-records:

Record pk=foo, bin SubRecs: 3

Record pk=foo1, bin holding 1/3 of the data

Record pk=foo2, pk=foo3, and so on. This of course has transactional problems, but it’s a workaround.
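A minimal sketch of that fragmentation scheme in Python (the bin names, key-derivation convention, and helper names are illustrative, and the actual Aerospike client calls are left out - this only builds the record payloads):

```python
def fragment(pk: str, data: bytes, chunk_size: int):
    """Split one oversized value into a metadata record plus sub-records.

    The metadata record stores only the sub-record count under a SubRecs bin;
    each sub-record holds one chunk under a derived key (foo1, foo2, ...),
    mirroring the scheme described above.
    """
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    meta = (pk, {"SubRecs": len(chunks)})
    subs = [(f"{pk}{n}", {"data": chunk})
            for n, chunk in enumerate(chunks, start=1)]
    return meta, subs

def reassemble(meta, subs):
    """Reverse of fragment(): read the count, concatenate the chunks in order."""
    count = meta[1]["SubRecs"]
    return b"".join(bins["data"] for _, bins in subs[:count])
```

Each tuple would become one `put()` against the cluster; as noted above, the writes are separate transactions, so a reader can observe a partially written set.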

Or just store your data in memory. No limit there.

Multiple records may reside in a single wblock but a single record cannot span multiple wblocks.

So, if the record size is 750 KB and the write block is 1 MB, what happens to the remaining 250 KB? Does it store other smaller records, or some metadata? I find it is not easy to estimate the disk size. Is there a good way to estimate it?

Thanks Albot, I know what you mean, but my question is: I set the write block size to 1 MB and my actual record is around 750 KB, so that is fine, but there are 250 KB remaining. What can that 250 KB be used for?

The wblock will flush when a write arrives which cannot fit in the remaining space. So if all records were 750KB then there would always be 250KB of unused space per wblock.

I have 12,000 records, each larger than 500 KB, and the write block size is 1 MB. In theory, they should take 12,000 MB, around 12 GB. Is my estimation correct?
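The arithmetic behind that estimate can be checked with a short helper (the function name and units are mine, and this deliberately ignores per-record overhead and defragmentation headroom, which a real capacity plan should include):

```python
import math

def estimate_device_bytes(num_records: int, record_size: int,
                          wblock_size: int) -> int:
    """Estimate raw device usage when a record cannot span write blocks."""
    if record_size > wblock_size:
        raise ValueError("a record cannot exceed the write block size")
    # Only whole records fit in a wblock; the remainder of each block is unused
    # unless a smaller record happens to fit there.
    records_per_block = wblock_size // record_size
    blocks_needed = math.ceil(num_records / records_per_block)
    return blocks_needed * wblock_size

KB = 1024
MB = 1024 * KB

# 12,000 records of 750 KB with 1 MB write blocks: only one record fits per
# block, so usage is 12,000 blocks = 12,000 MB (~11.7 GiB), as estimated above.
print(estimate_device_bytes(12_000, 750 * KB, 1 * MB) // MB)  # -> 12000
```

With 500 KB records the same helper gives half that, since two records pack into each 1 MB block.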

From a purist point of view, wblock size can be set to multiples of 128 bytes. For current SSDs, 128 KB is optimal and highly recommended. You could set it to 512 KB or 768 KB if your record is consistently 750 KB and you want to optimize space. If you are storing into a filesystem on HDD, that should be OK. On SSDs with these odd block sizes, you are recommended to characterize performance using the ACT tool for that block size. I don’t know of anyone who has done this, i.e. used a 768 KB write-block on SSDs. If someone has, please chime in! Or is there any other reason why this is a bad idea?

Tested - I was incorrect. It looks like write-block-size must be a power of 2, with a minimum of 128 bytes. So you can set it to 128, 256, 512, 1K, 2K, 4K, 8K, 16K, 32K, 64K, 128K, 256K, 512K or 1M. I tried setting 768K and got:

Jun 07 2017 12:57:22 GMT: FAILED ASSERTION (drv_ssd): (drv_ssd.c:3485) attempted to configure non-round write block size 786432

This is in ver. I did a quick validation with 128, 1K, 4K & 256K settings - the server did start up. I did not try loading any data.
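The server's check can be mirrored in a few lines. The 1 MiB ceiling comes from the assertion quoted below; the 128-byte floor follows the stated minimum, even though a later post notes the server did start with 64 (while unable to store records). The function name is mine:

```python
MIN_WBLOCK = 128          # bytes - the documented minimum
MAX_WBLOCK = 1024 * 1024  # bytes - drv_ssd asserts on anything larger

def is_valid_write_block_size(size: int) -> bool:
    """True if size is a power of two within [128 B, 1 MiB]."""
    power_of_two = size > 0 and (size & (size - 1)) == 0
    return power_of_two and MIN_WBLOCK <= size <= MAX_WBLOCK

print(is_valid_write_block_size(768 * 1024))  # False: 786432 is not "round"
print(is_valid_write_block_size(128 * 1024))  # True: the recommended 128K
```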

Just to test the upper limit, I also tried 2M - it failed.

FAILED ASSERTION (drv_ssd): (drv_ssd.c:3479) attempted to configure write block size in excess of 1048576

On the lower limit, I could not start with 32 or lower, but it actually did start up with 64 - though I am sure it cannot save any records with that setting. So I tried with AQL and it promptly failed (write-block-size = 64 bytes):

aql> insert into ns1.testset (PK, mybin) values ('k1', 2)

Note: I had to wipe out the storage (I am only testing with file storage, not SSD) when I changed write-block-size and restarted the node - the existing data format on storage was incompatible. The above tests (lower than 128K) may not be valid for SSD - I did not test.

==> For SSDs, 128K, 256K, 512K and 1M are the only values to consider for write-block-size, with 128K recommended.
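For reference, the setting lives in the namespace's storage-engine stanza of aerospike.conf (the namespace name matches the AQL example above; the device path and memory-size are placeholders):

```
namespace ns1 {
    memory-size 4G
    storage-engine device {
        device /dev/sdb            # placeholder device path
        write-block-size 128K      # recommended value for SSDs
    }
}
```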

You do not need to wipe storage if the new write block size is divisible by the prior.
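That divisibility rule reduces to a one-line check (the function name is mine):

```python
def can_reuse_storage(old_wblock: int, new_wblock: int) -> bool:
    """True if existing device data stays readable after the change:
    the new write-block-size must be an exact multiple of the prior one."""
    return new_wblock % old_wblock == 0

print(can_reuse_storage(128 * 1024, 1024 * 1024))  # True: 1M is 8 x 128K
print(can_reuse_storage(1024 * 1024, 128 * 1024))  # False: shrinking needs a wipe
```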