I think the answer is a little different, although the result is the same.
Aerospike has a very effective system for batching writes (128 KB to 1 MB blocks, depending on configuration), and you can configure caching of those writes as well.
We have found some applications and customer installations where the write cache is very effective - that’s why we offer it at the database level. That said, the working set size can be hard to predict, so we recommend not counting on the cache effect when sizing.
It will always be more effective to use the database’s own mechanism for write coalescing and write cache management, rather than pushing that work down to the storage system. At the database layer, we have the ability to use transaction semantics to decide whether to buffer, and to adapt continually to interesting device features (like transactional memory blocks and SSDs with write capacitors).
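To make the coalescing idea concrete, here is a minimal sketch (not Aerospike’s actual code; all names are illustrative): small records are appended into a large block buffer, 128 KiB here to match the low end of the configurable block size mentioned above, and the device sees only full-block writes.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of database-level write coalescing: small records
 * accumulate in one large block buffer, and the device only ever sees
 * whole-block writes. */
#define BLOCK_SIZE (128 * 1024)

typedef struct {
    unsigned char buf[BLOCK_SIZE];
    size_t used;    /* bytes currently buffered                  */
    int flushes;    /* how many full blocks have hit the device  */
} write_buf;

/* Stand-in for the real device write (e.g. pwrite to a raw device). */
static void device_write(write_buf *w) {
    w->flushes++;
    w->used = 0;
}

/* Append one record; flush the block first if it would overflow. */
static void coalesce_write(write_buf *w, const void *rec, size_t len) {
    if (w->used + len > BLOCK_SIZE)
        device_write(w);
    memcpy(w->buf + w->used, rec, len);
    w->used += len;
}
```

The point of the sketch is the ratio: a thousand 1 KiB records become a handful of large sequential device writes, which is exactly the work a RAID controller’s cache would otherwise try (redundantly) to do.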
It is common for high-performance databases to want as much direct hardware control as possible - these are old tricks.
With Aerospike, the extra work done by the RAID layer serves no purpose - at best it is neutral; at worst, there are corner cases that add latency, memory-management overhead, and RAID-card internal bus traffic. In some RAID cards, we have observed significant performance bottlenecks even in JBOD mode.
Lower-level (kernel and device) cache optimizations are effective when the application can’t be rewritten, when multiple applications access the same data, or when you need a reliable single-server storage system. RAID optimizations have their place - just not with Aerospike, which uses distribution for HA and is the only process managing its data (and thus the locks and consistency).
The use of high-performance databases is why RAID controller vendors created features like “fast path”. These effects are more noticeable at Flash/SSD throughput: at higher speeds, the overhead of the RAID cache algorithms becomes a larger fraction of overall transaction time, which is why the need for “fast path” rose alongside SSDs.
Thus, this isn’t an issue about SSDs per se, but it is made worse by SSD velocities. Direct control (like O_DIRECT and O_SYNC) is common for high-performance databases - the database can cache better.
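For anyone unfamiliar with the pattern, here is a minimal sketch of direct control on Linux (illustrative only; path, sizes, and the fallback policy are my assumptions, not Aerospike’s code): O_DIRECT bypasses the kernel page cache so the database’s own cache is the only cache, and O_SYNC makes write() return only after the data reaches stable media. O_DIRECT requires aligned buffers and lengths (4096 bytes is typical), and some filesystems (e.g. tmpfs) reject it, so the sketch falls back to plain O_SYNC in that case.

```c
#define _GNU_SOURCE          /* exposes O_DIRECT on Linux */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Write `len` bytes of dummy data to `path`, bypassing the page cache
 * where the filesystem allows it.  Returns 0 on success, -1 on error. */
static int direct_write(const char *path, size_t len) {
    int fd = open(path, O_WRONLY | O_CREAT | O_DIRECT | O_SYNC, 0644);
    if (fd < 0)  /* e.g. EINVAL on filesystems without O_DIRECT */
        fd = open(path, O_WRONLY | O_CREAT | O_SYNC, 0644);
    if (fd < 0)
        return -1;

    /* O_DIRECT needs an aligned buffer; 4096 covers typical devices. */
    void *buf;
    if (posix_memalign(&buf, 4096, len) != 0) {
        close(fd);
        return -1;
    }
    memset(buf, 0xAB, len);

    ssize_t n = write(fd, buf, len);  /* O_SYNC: durable when it returns */
    free(buf);
    close(fd);
    return (n == (ssize_t)len) ? 0 : -1;
}
```

With flags like these, the database, not the kernel and not the RAID card, decides what is cached and when data is durable.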
It is ironic that someone might buy an expensive RAID controller, then have to pay even more money to turn off its RAID features (“fast path”).