Standard i3 vs i3.metal

Hi,

Our Aerospike cluster currently runs v3.12 on 15 i3.4xlarge instances. From a cost perspective we could easily move to 4 i3.metal servers, keep the same storage and CPU, and even gain a little memory.
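
For reference, here's the back-of-the-envelope math behind that claim (a minimal sketch; the per-instance figures are the nominal AWS specs for these types as I understand them, so double-check against current AWS documentation):

```python
# Rough capacity comparison: 15 x i3.4xlarge vs. 4 x i3.metal.
# Per-instance figures are nominal AWS specs (vCPU, RAM in GiB, local NVMe in TB)
# as I understand them -- verify against current AWS documentation.
specs = {
    "i3.4xlarge": {"vcpu": 16, "ram_gib": 122, "nvme_tb": 3.8},   # 2 x 1.9 TB NVMe
    "i3.metal":   {"vcpu": 72, "ram_gib": 512, "nvme_tb": 15.2},  # 8 x 1.9 TB NVMe
}

def cluster_totals(instance_type, count):
    """Multiply a single instance's specs by the node count."""
    return {k: v * count for k, v in specs[instance_type].items()}

print("15 x i3.4xlarge:", cluster_totals("i3.4xlarge", 15))
print(" 4 x i3.metal:  ", cluster_totals("i3.metal", 4))
```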

We are currently setting up benchmarks to see whether there is a large difference between virtualized and bare-metal performance, but I'm curious whether anyone has experience moving to i3.metal instances. Was there a large increase in I/O throughput? Any downsides? Any hints on tuning that would differ from the virtualized instances?

Thanks in advance!

From the datasheet I have, the i3.metal should get about 32X in total with the drives combined, while the i3.4xlarge only gets 8X. So 15 × 8 ≈ 120, which means you'd need 3.75 (round up to 4) i3.metal instances, and the RAM/disk math also works out to about 4 instances. Make sure you test, though, and consider that if you lose a node you'll be losing 25% of your workhorses. A big benefit is that you shouldn't be 'sharing' an i3.metal host with anyone: your network is guaranteed rather than an 'up to' figure, and you won't have the latency or other issues that come with a hypervisor.
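
To make that arithmetic concrete, here's a minimal sketch; the 8X/32X figures are just the datasheet numbers quoted above, not measurements:

```python
import math

# Node-count estimate from the quoted per-instance throughput figures
# (8X per i3.4xlarge, 32X per i3.metal), plus the blast radius of one node failing.
current_nodes = 15
per_node_current = 8    # "X" per i3.4xlarge, per the datasheet quoted above
per_node_metal = 32     # "X" per i3.metal

total_needed = current_nodes * per_node_current    # 15 * 8 = 120
metal_exact = total_needed / per_node_metal        # 120 / 32 = 3.75
metal_nodes = math.ceil(metal_exact)               # round up to 4

print(f"Equivalent i3.metal nodes: {metal_exact:.2f} -> provision {metal_nodes}")
print(f"Capacity lost if 1 of {metal_nodes} nodes dies: {1 / metal_nodes:.0%}")
print(f"Capacity lost if 1 of {current_nodes} nodes dies: {1 / current_nodes:.1%}")
```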


One more note: the r5d/c5d instances are usually faster and a better deal overall… Curious if you've tested them?

I have not tried the other instance types, but they look promising.

At first glance, the cost per disk byte looks prohibitive for our current workload, which relies heavily on disk-based (vs. RAM-based) storage: >1 TB RAM vs. 24 TB disk. We don't have much in the way of on-cluster CPU demands either.
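
For what it's worth, this is roughly how we're sanity-checking the cost-per-disk-byte point (a minimal sketch; the NVMe sizes are the nominal AWS specs as I recall them and the replication factor is an assumption, so substitute your own figures and current pricing):

```python
import math

# How many instances of each type it would take just to hold our ~24 TB footprint
# on local NVMe. NVMe sizes are the nominal AWS specs as I recall them -- double-check;
# the replication factor is an assumption, substitute your namespace's setting.
nvme_tb = {
    "i3.4xlarge": 3.8,    # 2 x 1.9 TB
    "i3.metal": 15.2,     # 8 x 1.9 TB
    "r5d.4xlarge": 0.6,   # 2 x 300 GB
    "c5d.4xlarge": 0.4,   # 1 x 400 GB
}

required_tb = 24         # our current disk footprint
replication_factor = 2   # assumption

for name, tb in nvme_tb.items():
    nodes = math.ceil(required_tb * replication_factor / tb)
    print(f"{name:>12}: {nodes} nodes for {required_tb} TB at RF {replication_factor}")
    # multiply `nodes` by your region's hourly price to compare cost per usable TB
```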

However, we are looking at splitting the data model, storing frequently used data in RAM with secondary indexes, and pushing more work to the cluster, so these are interesting options for that. Thanks for the tip!
