Systems moving to a 10G network may benefit from tweaking their TCP settings.
Here are some TCP settings that can be tweaked while benchmarking your use case during the move to 10G.
When you switch to 10G NICs, we would also recommend starting to use jumbo frames. This is a little tricky because all nodes in the cluster need to be changed at the same time. Also, if XDR is used, you will need to make sure the remote cluster is also using jumbo frames.
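As a rough sketch, enabling jumbo frames on one node might look like the following. The interface name eth0, the MTU of 9000, and the peer address 10.0.0.2 are assumptions; substitute the values for your environment, and note that persisting the MTU across reboots varies by distribution.

```shell
# Set the MTU to 9000 (jumbo frames) on the assumed interface eth0.
ip link set dev eth0 mtu 9000

# Verify the new MTU on the interface.
ip link show dev eth0

# Test end to end: ping a peer with an 8972-byte payload (9000 minus
# 28 bytes of IP + ICMP headers) with fragmentation forbidden (-M do).
# 10.0.0.2 stands in for another node in the cluster.
ping -c 3 -M do -s 8972 10.0.0.2
```

If the ping fails with a "message too long" error, some device on the path is not yet configured for jumbo frames.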
# This sets the max OS receive buffer size for all types of connections.
net.core.rmem_max=33554432
# This sets the max OS send buffer size for all types of connections.
net.core.wmem_max=33554432
# This sets the default OS receive buffer size for all types of connections.
net.core.rmem_default=65536
# This sets the default OS send buffer size for all types of connections.
net.core.wmem_default=65536
# The first value tells the kernel the minimum receive buffer for each TCP
# connection; this buffer is always allocated to a TCP socket, even under
# high memory pressure on the system.
# The second value tells the kernel the default receive buffer allocated for
# each TCP socket. This value overrides the net.core.rmem_default value used
# by other protocols.
# The third value specifies the maximum receive buffer that can be allocated
# for a TCP socket.
net.ipv4.tcp_rmem='4096 87380 33554432'
# The first value tells the kernel the minimum TCP send buffer space
# available for a single TCP socket.
# The second value tells the kernel the default send buffer space allowed
# for a single TCP socket.
# The third value tells the kernel the maximum TCP send buffer space.
net.ipv4.tcp_wmem='4096 65536 33554432'
# This ensures that immediately subsequent connections use these values.
net.ipv4.route.flush=1
net.ipv4.tcp_window_scaling=1
net.ipv4.tcp_timestamps=1
net.ipv4.tcp_sack=1
# Increase the length of the processor input queue.
## For 10G
net.core.netdev_max_backlog=30000
## For 1G
net.core.netdev_max_backlog=5000
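One way to apply and verify these settings is sketched below; the file path /etc/sysctl.d/ is an assumption based on typical Linux distributions, so check where your distribution expects sysctl drop-in files.

```shell
# Apply a single setting immediately (does not persist across reboot).
sysctl -w net.core.rmem_max=33554432

# Read back the running value to verify it took effect.
sysctl net.core.rmem_max

# To persist the settings, place the lines above in /etc/sysctl.conf
# (or a file such as /etc/sysctl.d/99-tcp-10g.conf) and reload:
sysctl -p
```

Benchmark with your own workload after each change; the values above are starting points, not universal optima.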