Is Aerospike good for handling less than 500 GB of data?

Aerospike would be a perfect fit, rather than overkill.

First, it’s one of the only distributed databases that supports strong consistency, and it has done so since version 4.0 (March 2018). That mode is verified with Jepsen testing and has a long track record at financial institutions (actual financial use cases), instant payment systems, and so on. To my knowledge, it’s the only database that delivers strong consistency at high performance, regardless of scale.
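Strong consistency is turned on per namespace in the server config. Here’s a minimal sketch of what that stanza looks like; the namespace name and replication factor are illustrative, not a recommended production setup:

```
namespace demo {
    replication-factor 2
    strong-consistency true    # SC mode, available since server 4.0
    # storage-engine and sizing settings omitted for brevity
}
```

Keep in mind that an SC namespace also needs its node roster set before it will accept writes, so check the strong-consistency docs for the full setup steps.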

One main advantage is being able to start with a small dataset and grow it by many orders of magnitude, from GiBs to PiBs, without modifying your application. There’s no need to bolt on caches or complex retrieve-from-storage logic. Sure, the cluster will need to grow vertically (bigger machines), horizontally (more machines), or both, but that can be done on a live cluster, without taking down your application. “Write once, scale to any size” is not an official slogan here, but it’s the reality of using Aerospike (with the caveat, of course, that you should always do proper capacity planning). See the sketch below.
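To illustrate the “application code doesn’t change” point, here’s a minimal sketch using the Aerospike Python client. The host address, namespace, set, and bin names are made up for the example; the same put/get code runs unchanged whether the cluster is three small nodes holding a few GiB or dozens of nodes holding much more:

```python
import aerospike

# Seed node(s) only; the client discovers the rest of the cluster on its own.
config = {"hosts": [("10.0.0.1", 3000)]}
client = aerospike.client(config).connect()

# A record key is (namespace, set, user key).
key = ("test", "profiles", "user-42")

# Write a record, then read it back.
client.put(key, {"name": "Ada", "visits": 1})
_, _, bins = client.get(key)
print(bins)   # {'name': 'Ada', 'visits': 1}

client.close()
```

Scaling the cluster up or out changes the capacity behind that code, not the code itself.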

By the way, I wrote an article about a different use case that explains why performance doesn’t degrade as the dataset scales up (again, as long as the cluster has adequate capacity).
