Certification results for SSDs installed with RAID vs. directly attached storage


#1

Hi

We are moving to a new datacenter that only supports servers with attached RAID controllers. Our current Aerospike servers have SSDs directly attached to the motherboard. (Both the current and new servers use the same type of SSDs.)

The new server has been configured as JBOD. Although the certification results show that it passes the certification test criteria, they also show that the RAID setup is slower.

Is this normal, or could something be wrong in the configuration?

JBOD configuration on the new server:

/opt/MegaRAID/storcli/storcli64 -LDGetProp -Cache -Lall -aAll
Adapter 0-VD 0(target id: 0): Cache Policy:WriteBack, ReadAhead, Direct, No Write Cache if bad BBU
Adapter 0-VD 1(target id: 1): Cache Policy:WriteBack, ReadAdaptive, Direct, No Write Cache if bad BBU
Adapter 0-VD 2(target id: 2): Cache Policy:WriteBack, ReadAdaptive, Direct, No Write Cache if bad BBU
Adapter 0-VD 3(target id: 3): Cache Policy:WriteBack, ReadAdaptive, Direct, No Write Cache if bad BBU
Adapter 0-VD 4(target id: 4): Cache Policy:WriteBack, ReadAdaptive, Direct, No Write Cache if bad BBU

New server cert results for one disk at 6x load:

 ./latency_calc/act_latency.py -l sdb_6x_output.txt
data is act version 3.0

        trans                                              device
        %>(ms)                                             %>(ms)

avg     8.89   7.27   6.46   5.19   1.12   0.01   0.00     7.93   4.70   2.98   1.81   0.11   0.00   0.00

max     8.94   7.31   6.48   5.22   1.16   0.02   0.00     7.98   4.74   3.00   1.82   0.11   0.00   0.00

Old server cert results for one disk at 6x load:

./latency_calc/act_latency.py -l Server3_actconfig_6x_1d_sdc_last.txt
data is act version 3.0

        trans                                              device
        %>(ms)                                             %>(ms)

avg     5.57   1.22   0.19   0.13   0.02   0.00   0.00     5.50   1.13   0.09   0.05   0.01   0.00   0.00

max     7.45   1.66   0.28   0.18   0.02   0.00   0.00     7.37   1.56   0.14   0.07   0.01   0.00   0.00
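For a quick side-by-side, the transaction `avg` rows from the two runs above can be compared column by column. This is a throwaway Python sketch; the millisecond threshold each column corresponds to depends on the `-t` option passed to act_latency.py, so the columns are labeled only by index here:

```python
# Transaction "avg" rows copied from the two ACT runs above.
# Each value is the percentage of transactions slower than that
# column's latency threshold (thresholds increase left to right).
new_avg = [8.89, 7.27, 6.46, 5.19, 1.12, 0.01, 0.00]
old_avg = [5.57, 1.22, 0.19, 0.13, 0.02, 0.00, 0.00]

for i, (new, old) in enumerate(zip(new_avg, old_avg)):
    ratio = new / old if old else float("inf")
    print(f"column {i}: new={new:.2f}%  old={old:.2f}%  ratio={ratio:.1f}x")
```

The gap widens sharply at the higher thresholds, which is consistent with the RAID controller adding latency in the tail rather than uniformly.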


#2

At first glance, the cache policies don’t look correct. The recommended cache settings are:

  • read: NoReadAhead
  • write: WriteThrough
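On MegaRAID hardware, these policies can usually be switched with StorCLI. A hedged sketch follows; the controller index `/c0` and the `/vall` (all virtual drives) wildcard are assumptions, so verify them against your own topology before running:

```shell
# Disable read-ahead and switch to write-through on all virtual
# drives of controller 0 (adjust /c0 and /vall for your setup).
/opt/MegaRAID/storcli/storcli64 /c0/vall set rdcache=nora
/opt/MegaRAID/storcli/storcli64 /c0/vall set wrcache=wt

# Confirm the new cache policy took effect.
/opt/MegaRAID/storcli/storcli64 /c0/vall show all | grep -i cache
```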

These results depend strongly on the RAID controller. Can you share its model? The server manufacturer would also help.

Also, in some cases performance may be acceptable with a single SSD, but the results may not scale linearly as you add disks. Again, this is because of the RAID controller.