Hardware upgrade - is it possible to mix SSD sizes in a namespace?


#1

Hi, we have a two-node cluster with a single SSD namespace. The documentation describes the upgrade process of adding new SSD disks, but it does not say anything about disk sizes. Is it possible to upgrade the namespace by adding disks of mixed sizes? For example, is it possible to create a namespace with storage defined like this?:

    storage-engine device {
            # device /dev/sda1 - system disk
            device /dev/sdb1   # /dev/sdb1 - 400GB
            device /dev/sdc1   # /dev/sdc1 - 400GB
            device /dev/sdd1   # /dev/sdd1 - 800GB
            device /dev/sde1   # /dev/sde1 - 1.1TB
            data-in-memory false
    }

– Thank You in advance


#2

Each disk will be limited to the capacity of the smallest disk in the namespace. You could partition the larger disks into equal-size partitions instead.
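A minimal sketch of what that could look like for the disks in the original post, assuming the 800GB and 1.1TB drives are first split into roughly 400GB partitions with a partitioning tool such as parted or fdisk (the partition names below are illustrative only):

    storage-engine device {
            device /dev/sdb1   # 400GB disk
            device /dev/sdc1   # 400GB disk
            device /dev/sdd1   # ~400GB partition of the 800GB disk
            device /dev/sdd2   # ~400GB partition of the 800GB disk
            device /dev/sde1   # ~400GB partition of the 1.1TB disk
            device /dev/sde2   # ~400GB partition of the 1.1TB disk
            # remaining ~300GB of /dev/sde left unused or used for other purposes
            data-in-memory false
    }

This way every device the namespace sees has roughly the same capacity, so little physical space is wasted by the smallest-disk limit.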


#3

Okay, so as an extension:

Can I have different SSDs, in terms of sizes and device names, for a single namespace on different nodes?

For example:

    Node 1 :: Namespace 1 {
            device /dev/sdb1   # 300GB
            device /dev/sdb2   # 500GB
    }

    Node 2 :: Namespace 1 {
            device /dev/sdb3   # 600GB
            device /dev/sdb8   # 800GB
    }

    Node 3 :: Namespace 1 {
            device /dev/sdb4   # 450GB
            device /dev/sdb5   # 550GB
    }

Can I have something like this, or do the conf files have to be identical on all the nodes?

Regards


#4

Again, each disk on each node will be limited to the lowest-capacity disk on that node.

Data is distributed across all nodes on the assumption that the cluster's storage resources are homogeneous. So if you are using replication, Node 2 will be limited by Node 1 when replica writes begin to fail due to lack of resources.

Similarly, if your data is evictable and your disk high-water mark is breached, Node 1 will evict its own master records as well as the replica copies it holds for Nodes 2 and 3. Node 2 already receives more writable records because of its larger capacity, and this eviction causes it to accept even more, since the evictions on the other nodes free up space for its replica writes to succeed. This feedback loop will cause Node 2 to handle a very disproportionate share of all write traffic (and possibly reads, depending on client configuration).
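For reference, a minimal per-node namespace sketch, assuming a server version contemporary with this thread, showing the parameters behind that behaviour: replication-factor drives the replica writes mentioned above, and high-water-disk-pct is the disk high-water mark that triggers eviction. The namespace name, devices, and values are illustrative, not recommendations:

    namespace test {
            replication-factor 2       # each record also gets a replica on another node
            memory-size 8G
            default-ttl 30d            # records need a TTL to be evictable
            high-water-disk-pct 50     # eviction starts once device usage crosses 50%

            storage-engine device {
                    device /dev/sdb1   # keep total capacity as even as possible across nodes
                    device /dev/sdb2
                    data-in-memory false
            }
    }

As the answers above imply, the per-node device lists can differ; what matters is keeping the total usable capacity per node roughly equal to avoid the imbalance described above.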