FAQ - Will the truncate command free up space immediately?


The truncate command frees up memory (the index) immediately, but the available disk space (contiguous free blocks) increases gradually as defragmentation proceeds.

Examples

To illustrate the effect of truncate, we use a two-node cluster.

The initial object count:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Namespace Usage Information~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Namespace                Node     Total   Expirations,Evictions     Stop       Disk    Disk     HWM   Avail%          Mem     Mem    HWM      Stop
        .                   .   Records                       .   Writes       Used   Used%   Disk%        .         Used   Used%   Mem%   Writes%
test        10.0.100.135:3000   888.960 K   (0.000,  22.931 K)      false    1.378 GB   35      50      51       54.417 MB   6       60     90
test        10.0.100.235:3000   718.119 K   (0.000,  26.234 K)      false    1.113 GB   28      50      65       43.989 MB   5       60     90
test                              1.607 M   (0.000,  49.165 K)               2.491 GB                            98.406 MB

We issue the truncate command against one of the nodes:

asinfo -v "truncate:namespace=test;"

And eventually, the device_available_pct goes back up to 99:

test        10.0.100.135:3000   0.000     (0.000,  22.931 K)      false    0.000 B    0       50      99       162.750 KB   1       60     90
test        10.0.100.235:3000   0.000     (0.000,  26.234 K)      false    0.000 B    0       50      99       162.750 KB   1       60     90
test                            0.000     (0.000,  49.165 K)               0.000 B                             325.500 KB
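The Avail% column in the asadm output corresponds to the per-namespace device_available_pct statistic, which can also be read directly from a node via the namespace info command. A minimal sketch of parsing the semicolon-separated response follows; the resp string below is an illustrative stand-in for a live node's reply, not output captured from a real cluster:

```shell
# On a live node the stats would come from:
#   asinfo -v "namespace/test"
# The sample response below is an illustrative stand-in:
resp="objects=0;device_used_bytes=0;device_available_pct=99;memory_used_bytes=166656"
echo "$resp" | tr ';' '\n' | grep device_available_pct
```

This prints `device_available_pct=99` for the sample response; running the same pipeline against the live asinfo output would show the value climbing as defragmentation proceeds.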

From the logs:

egrep 'master|device|memory|truncate' aerospike.log
.
.
.
May 30 2018 19:41:54 GMT: INFO (info): (ticker.c:241)    system-memory: free-kbytes 2526764 free-pct 81 heap-kbytes (252814,257288,530432) heap-efficiency-pct 47.7
May 30 2018 19:41:54 GMT: INFO (info): (ticker.c:371) {test} objects: all 718119 master 339318 prole 378801 non-replica 0
May 30 2018 19:41:54 GMT: INFO (info): (ticker.c:444) {test} memory-usage: total-bytes 46126272 index-bytes 45959616 sindex-bytes 166656 used-pct 4.30
May 30 2018 19:41:54 GMT: INFO (info): (ticker.c:483) {test} device-usage: used-bytes 1194950016 avail-pct 65 cache-read-pct 0.00
May 30 2018 19:42:04 GMT: INFO (info): (ticker.c:241)    system-memory: free-kbytes 2526756 free-pct 81 heap-kbytes (252815,257292,530432) heap-efficiency-pct 47.7
May 30 2018 19:42:04 GMT: INFO (info): (ticker.c:371) {test} objects: all 718119 master 339318 prole 378801 non-replica 0
May 30 2018 19:42:04 GMT: INFO (info): (ticker.c:444) {test} memory-usage: total-bytes 46126272 index-bytes 45959616 sindex-bytes 166656 used-pct 4.30
May 30 2018 19:42:04 GMT: INFO (info): (ticker.c:483) {test} device-usage: used-bytes 1194950016 avail-pct 65 cache-read-pct 0.00
May 30 2018 19:42:10 GMT: INFO (truncate): (truncate.c:213) {test} got command to truncate to now (265405330155)
May 30 2018 19:42:10 GMT: INFO (truncate): (truncate.c:456) {test} truncating to 265405330155
May 30 2018 19:42:10 GMT: INFO (truncate): (truncate.c:467) {test} starting truncate
May 30 2018 19:42:10 GMT: INFO (truncate): (truncate.c:456) {test} truncating to 265405330157
May 30 2018 19:42:10 GMT: INFO (truncate): (truncate.c:471) {test} flagging truncate to restart
May 30 2018 19:42:10 GMT: INFO (truncate): (truncate.c:574) {test} truncated records (718119,2055444)
May 30 2018 19:42:10 GMT: INFO (truncate): (truncate.c:582) {test} restarting truncate
May 30 2018 19:42:10 GMT: INFO (truncate): (truncate.c:574) {test} truncated records (0,2055444)
May 30 2018 19:42:10 GMT: INFO (truncate): (truncate.c:578) {test} done truncate
May 30 2018 19:42:14 GMT: INFO (info): (ticker.c:241)    system-memory: free-kbytes 2530772 free-pct 81 heap-kbytes (252813,257296,530432) heap-efficiency-pct 47.7
May 30 2018 19:42:14 GMT: INFO (info): (ticker.c:371) {test} objects: all 0 master 0 prole 0 non-replica 0
May 30 2018 19:42:14 GMT: INFO (info): (ticker.c:444) {test} memory-usage: total-bytes 166656 index-bytes 0 sindex-bytes 166656 used-pct 0.02
May 30 2018 19:42:14 GMT: INFO (info): (ticker.c:483) {test} device-usage: used-bytes 0 avail-pct 66 cache-read-pct 0.00
May 30 2018 19:42:24 GMT: INFO (info): (ticker.c:241)    system-memory: free-kbytes 2530776 free-pct 81 heap-kbytes (252814,257296,530432) heap-efficiency-pct 47.7
May 30 2018 19:42:24 GMT: INFO (info): (ticker.c:371) {test} objects: all 0 master 0 prole 0 non-replica 0
May 30 2018 19:42:24 GMT: INFO (info): (ticker.c:444) {test} memory-usage: total-bytes 166656 index-bytes 0 sindex-bytes 166656 used-pct 0.02
May 30 2018 19:42:24 GMT: INFO (info): (ticker.c:483) {test} device-usage: used-bytes 0 avail-pct 66 cache-read-pct 0.00
.
.
.
May 30 2018 19:45:45 GMT: INFO (info): (ticker.c:241)    system-memory: free-kbytes 2530784 free-pct 81 heap-kbytes (252819,257296,530432) heap-efficiency-pct 47.7
May 30 2018 19:45:45 GMT: INFO (info): (ticker.c:371) {test} objects: all 0 master 0 prole 0 non-replica 0
May 30 2018 19:45:45 GMT: INFO (info): (ticker.c:444) {test} memory-usage: total-bytes 166656 index-bytes 0 sindex-bytes 166656 used-pct 0.02
May 30 2018 19:45:45 GMT: INFO (info): (ticker.c:483) {test} device-usage: used-bytes 0 avail-pct 84 cache-read-pct 0.00
May 30 2018 19:45:55 GMT: INFO (info): (ticker.c:241)    system-memory: free-kbytes 2530776 free-pct 81 heap-kbytes (252815,257296,530432) heap-efficiency-pct 47.7
May 30 2018 19:45:55 GMT: INFO (info): (ticker.c:371) {test} objects: all 0 master 0 prole 0 non-replica 0
May 30 2018 19:45:55 GMT: INFO (info): (ticker.c:444) {test} memory-usage: total-bytes 166656 index-bytes 0 sindex-bytes 166656 used-pct 0.02
May 30 2018 19:45:55 GMT: INFO (info): (ticker.c:483) {test} device-usage: used-bytes 0 avail-pct 85 cache-read-pct 0.00
.
.
.
May 30 2018 19:48:35 GMT: INFO (info): (ticker.c:241)    system-memory: free-kbytes 2530784 free-pct 81 heap-kbytes (252816,257296,530432) heap-efficiency-pct 47.7
May 30 2018 19:48:35 GMT: INFO (info): (ticker.c:371) {test} objects: all 0 master 0 prole 0 non-replica 0
May 30 2018 19:48:35 GMT: INFO (info): (ticker.c:444) {test} memory-usage: total-bytes 166656 index-bytes 0 sindex-bytes 166656 used-pct 0.02
May 30 2018 19:48:35 GMT: INFO (info): (ticker.c:483) {test} device-usage: used-bytes 0 avail-pct 98 cache-read-pct 0.00
May 30 2018 19:48:40 GMT: INFO (nsup): (thr_nsup.c:898) {test} nsup-done: master-objects (0,0) expired (0,0) evicted (26234,0) evict-ttl 0 waits (0,0) total-ms 4
May 30 2018 19:48:45 GMT: INFO (info): (ticker.c:241)    system-memory: free-kbytes 2530788 free-pct 81 heap-kbytes (252820,257296,530432) heap-efficiency-pct 47.7
May 30 2018 19:48:45 GMT: INFO (info): (ticker.c:371) {test} objects: all 0 master 0 prole 0 non-replica 0
May 30 2018 19:48:45 GMT: INFO (info): (ticker.c:444) {test} memory-usage: total-bytes 166656 index-bytes 0 sindex-bytes 166656 used-pct 0.02
May 30 2018 19:48:45 GMT: INFO (info): (ticker.c:483) {test} device-usage: used-bytes 0 avail-pct 99 cache-read-pct 0.00

In this case, it took more than 6 minutes for the avail-pct to climb back to 99%, while used-bytes and index-bytes dropped to 0 immediately.
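As a quick sanity check on that figure, the gap between the truncate command (19:42:10 in the log) and the first ticker line showing avail-pct 99 (19:48:45) can be computed directly (this assumes GNU date for the -d option):

```shell
# Timestamps taken from the log excerpt above (GNU date assumed):
start=$(date -u -d "19:42:10" +%s)
end=$(date -u -d "19:48:45" +%s)
echo "$(( (end - start) / 60 ))m $(( (end - start) % 60 ))s"
```

This prints `6m 35s`, i.e. just over six and a half minutes.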

Notes

  • Examining the defragmentation activity around that time:
grep 19:4[1-8]: aerospike.log | grep defrag | grep test
May 30 2018 19:41:39 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 1194950016 free-wblocks 689126 write-q 0 write (1412002,0.0) defrag-q 0 defrag-read (1052808,0.0) defrag-write (359973,0.0)
May 30 2018 19:41:59 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 1194950016 free-wblocks 689126 write-q 0 write (1412002,0.0) defrag-q 0 defrag-read (1052808,0.0) defrag-write (359973,0.0)
May 30 2018 19:42:19 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 696950 write-q 0 write (1412029,1.4) defrag-q 351112 defrag-read (1411771,17948.2) defrag-write (360000,1.4)
May 30 2018 19:42:39 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 714968 write-q 0 write (1412029,0.0) defrag-q 333094 defrag-read (1411771,0.0) defrag-write (360000,0.0)
May 30 2018 19:42:59 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 732986 write-q 0 write (1412029,0.0) defrag-q 315076 defrag-read (1411771,0.0) defrag-write (360000,0.0)
May 30 2018 19:43:19 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 751057 write-q 0 write (1412029,0.0) defrag-q 297005 defrag-read (1411771,0.0) defrag-write (360000,0.0)
May 30 2018 19:43:39 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 769122 write-q 0 write (1412029,0.0) defrag-q 278940 defrag-read (1411771,0.0) defrag-write (360000,0.0)
May 30 2018 19:43:59 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 787149 write-q 0 write (1412029,0.0) defrag-q 260913 defrag-read (1411771,0.0) defrag-write (360000,0.0)
May 30 2018 19:44:19 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 805188 write-q 0 write (1412029,0.0) defrag-q 242874 defrag-read (1411771,0.0) defrag-write (360000,0.0)
May 30 2018 19:44:39 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 823222 write-q 0 write (1412029,0.0) defrag-q 224840 defrag-read (1411771,0.0) defrag-write (360000,0.0)
May 30 2018 19:44:59 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 841251 write-q 0 write (1412029,0.0) defrag-q 206811 defrag-read (1411771,0.0) defrag-write (360000,0.0)
May 30 2018 19:45:19 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 859339 write-q 0 write (1412029,0.0) defrag-q 188723 defrag-read (1411771,0.0) defrag-write (360000,0.0)
May 30 2018 19:45:39 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 877416 write-q 0 write (1412029,0.0) defrag-q 170646 defrag-read (1411771,0.0) defrag-write (360000,0.0)
May 30 2018 19:45:59 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 895463 write-q 0 write (1412029,0.0) defrag-q 152599 defrag-read (1411771,0.0) defrag-write (360000,0.0)
May 30 2018 19:46:19 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 913487 write-q 0 write (1412029,0.0) defrag-q 134575 defrag-read (1411771,0.0) defrag-write (360000,0.0)
May 30 2018 19:46:39 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 931514 write-q 0 write (1412029,0.0) defrag-q 116548 defrag-read (1411771,0.0) defrag-write (360000,0.0)
May 30 2018 19:46:59 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 949590 write-q 0 write (1412029,0.0) defrag-q 98472 defrag-read (1411771,0.0) defrag-write (360000,0.0)
May 30 2018 19:47:19 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 967631 write-q 0 write (1412029,0.0) defrag-q 80431 defrag-read (1411771,0.0) defrag-write (360000,0.0)
May 30 2018 19:47:39 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 985637 write-q 0 write (1412029,0.0) defrag-q 62425 defrag-read (1411771,0.0) defrag-write (360000,0.0)
May 30 2018 19:47:59 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 1003664 write-q 0 write (1412029,0.0) defrag-q 44398 defrag-read (1411771,0.0) defrag-write (360000,0.0)
May 30 2018 19:48:19 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 1021727 write-q 0 write (1412029,0.0) defrag-q 26335 defrag-read (1411771,0.0) defrag-write (360000,0.0)
May 30 2018 19:48:39 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 1039758 write-q 0 write (1412029,0.0) defrag-q 8304 defrag-read (1411771,0.0) defrag-write (360000,0.0)
May 30 2018 19:48:59 GMT: INFO (drv_ssd): (drv_ssd.c:2072) {test} /opt/aerospike/test.dat: used-bytes 0 free-wblocks 1048062 write-q 0 write (1412029,0.0) defrag-q 0 defrag-read (1411771,0.0) defrag-write (360000,0.0)

Notice that the defrag-q piles up, but there is no actual disk activity, as entire blocks are freed when the whole namespace is truncated.
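The drain rate is visible in those numbers: between 19:42:19 and 19:42:39 the defrag-q drops from 351,112 to 333,094, roughly 18,000 wblocks per 20-second interval. A back-of-the-envelope estimate from those two samples gives the time for the full queue to clear, which matches the observed timeline:

```shell
# Values taken from the drv_ssd log lines above:
initial_q=351112                     # defrag-q at 19:42:19
drained=$(( 351112 - 333094 ))       # 18018 wblocks drained in 20 s
eta=$(( initial_q * 20 / drained ))  # seconds to drain the full queue
echo "~$(( eta / 60 )) min $(( eta % 60 )) s to drain the defrag queue"
```

This prints `~6 min 29 s to drain the defrag queue`, consistent with avail-pct reaching 99 a bit over six minutes after the truncate.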

Note: To fully remove a namespace, one can instead proceed with a rolling restart, removing the namespace from the configuration one node at a time.

Keywords

TRUNCATE AVAIL-PCT FREE-PCT

Timestamp

05/31/2018