Defrag not keeping up

Well, at 71,000 write TPS you are definitely filling multiple entire 128 KB write blocks every second, so freshly written blocks start out fully packed and there should be no need to defrag them.
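
To make that concrete, here is the back-of-the-envelope arithmetic, assuming an average record size of around 1 KiB (an assumption for illustration; substitute your actual record size):

```
#   128 KiB write block / ~1 KiB per record  ≈ 128 records per block
#   71,000 writes/s ÷ 128 records per block  ≈ 555 freshly packed blocks/s
#   spread across 5 nodes                    ≈ 110 blocks/s per node
```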

What is the record TTL? Do you have records that are expiring, causing blocks to fragment and then need defragmenting?

When you initially create records, you get fully populated blocks of data, each comprising as many records as can fit in the write-block-size. When some of those records are either replaced by an update (which rewrites them to a new block) or expire (via the namespace default-ttl or a per-record TTL), the block becomes fragmented. Once the useful records in a block drop below defrag-lwm-pct (default 50%), the block becomes a candidate for defragging: its remaining good records are rewritten into a new block along with the good records from other blocks being defragmented. That is how defrag works. So if you were just creating new records at a good rate, and 71K TPS on 5 nodes is more than adequate, you should see no defrag activity until records start expiring or being updated.
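
If you want to verify the thresholds in play and watch for defrag activity, something along these lines should work; asinfo ships with the Aerospike tools package, `<ns>` is a placeholder for your namespace name, and exact stat names can vary between server versions:

```
# Configured block size and defrag threshold for the namespace:
asinfo -v 'namespace/<ns>' | tr ';' '\n' | grep -E 'write-block-size|defrag-lwm-pct'

# Defrag-related stats (queue depth, defrag reads/writes) and free blocks:
asinfo -v 'namespace/<ns>' | tr ';' '\n' | grep -E 'defrag|free'
```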

So, what is the default-ttl of your namespace, or the TTL you set on records when you create them?
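
You can read the effective value the same way (0 means records never expire; `<ns>` is again a placeholder):

```
asinfo -v 'namespace/<ns>' | tr ';' '\n' | grep default-ttl
```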