Increasing traffic throughput using bonding / teaming of interfaces
You may require more network throughput than your server allows on a single interface.
In this case, one solution could be to use interface bonding, also called teaming or link aggregation. Interface bonding presents two (or more) physical links as a single virtual interface for communications.
For this to work (in most modes), both the switch and the server must be aware of the bond and of the mode being used. As such, a managed (smart) switch is required for those modes. The server's network interfaces and drivers must support bonding in the chosen mode as well.
The way this works is as follows: say you have two interfaces on the server, em1 and em2. These interfaces are connected with physical cables to a switch, say to switch ports 5 and 6. For this to work, you need to configure bonding for em1 and em2 on your server. Once done, you also need to connect to your smart switch and configure physical ports 5 and 6 to be presented as a bonded interface (bonding must be configured on both ends).
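The server-side steps above can be sketched with iproute2 commands (interface names em1/em2 and the LACP mode are assumptions carried over from this example; this requires root and the bonding kernel module, and is a sketch rather than a complete, persistent configuration):

```shell
# Load the bonding driver and create the virtual interface in LACP mode
modprobe bonding
ip link add bond0 type bond mode 802.3ad

# Member links must be down before they can be enslaved
ip link set em1 down
ip link set em2 down
ip link set em1 master bond0
ip link set em2 master bond0

# Bring the bond up; switch ports 5 and 6 must be configured
# as an aggregation group in the same mode on the switch side
ip link set bond0 up
```

Changes made this way do not survive a reboot; the persistent configuration file approach is shown later in this article.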
With this configuration, the server will advertise a single MAC address for the new interface, bond0, across the two physical links. The switch will be aware of the bonding and will use both physical links for that single MAC address.
You could say that bonding joins two physical links (Layer 2) and presents them as a single Layer 2 interface with a single MAC. Note that this sits below Layer 3 (IP addressing). As such, your em1 and em2 interfaces will not be assigned IP addresses; they are treated as physical links for the bond0 interface. The bond0 interface, with the assigned and advertised MAC address, is where you will assign IP addresses (or create further bridges or VLANs). em1 and em2 now serve only as links for the bond0 interface, which is what you will be using.
Note that if you use teaming, you should always capture traffic (e.g. with tcpdump) on the bonded interface, bond0 (or on interfaces further created from it, bridged or VLAN), not on em1/em2. Capturing on just one physical interface will not catch all of the traffic.
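For example, a capture session might look like this (interface name as above; requires root):

```shell
# Capture on the bond itself - frames from both member links appear here
tcpdump -ni bond0

# Capturing on em1 alone would miss whatever the bond sent or received via em2
```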
This works because the MAC address this bond presents to the outside world is that of bond0. Bonding has multiple modes, and the same mode must be configured on both sides, switch and server. In short, the following modes are available:
- Round-robin (balance-rr) - the simplest way of balancing. It transmits packets alternately over the physical links in round-robin fashion. A fault will cause packet loss until the interfaces are reconfigured.
- Active-backup (active-backup) - in this mode, the primary link is active and the second is kept as a backup. If the physical connection on the primary link fails, traffic fails over to the second physical link. There is no throughput gain in this mode.
- XOR (balance-xor) - transmits packets based on a hash of the packet's source and destination. This selects the same slave NIC for each destination MAC address (or IP, or IP/port, depending on whether the hash policy is Layer 2 based or Layer 2+3 / 3+4 based; Layer 3 is the IP address, Layer 4 is the protocol - TCP/UDP/etc. - and port). A fault will cause packet loss until the interfaces are reconfigured.
- Broadcast (broadcast) - transmits all traffic on all interfaces at once. This provides the highest level of fault tolerance, but no throughput gain.
- IEEE 802.3ad Dynamic link aggregation (802.3ad, LACP) - the LACP mode is the most advanced. LACP provides dynamic link aggregation negotiation and mode switching. With this mode, load balancing similar to balance-xor occurs. In case of a link failure, failover to single-link use is automatic, so this mode provides both balancing and fault tolerance. The link is set up dynamically between two LACP-supporting peers. Note that in the past there have been interoperability issues between manufacturers not implementing 802.3ad LACP properly, with different cards not working as expected. Test before use.
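As a rough sketch of how a Layer 2 hash policy (used by balance-xor and, similarly, 802.3ad) picks a link: the kernel XORs bytes of the source and destination MAC addresses and takes the result modulo the slave count. The MAC bytes below are hypothetical, and the real kernel hash also mixes in the Ethernet protocol type, so treat this as an illustration only:

```shell
# Simplified Layer 2 transmit hash: XOR the last byte of the
# source and destination MACs, then take it modulo the slave count.
src_last=$(( 0x0a ))   # last byte of the source MAC (hypothetical)
dst_last=$(( 0x1f ))   # last byte of the destination MAC (hypothetical)
slaves=2               # em1 and em2

index=$(( (src_last ^ dst_last) % slaves ))
echo "slave index: $index"   # -> slave index: 1
```

Because the hash depends only on the addresses, all traffic between one pair of hosts always travels over the same physical link; throughput gains appear only across many flows to different destinations.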
Special modes in the Linux kernel - these modes do not require switch support:
- Adaptive transmit load balancing (balance-tlb) - handled by the Linux kernel directly: outbound load is distributed over the links according to the current load on each slave (outbound balancing only). Receiving is done over one link only. If the receiving physical link fails, another interface takes over the MAC address of the failed interface and starts receiving the traffic.
- Adaptive load balancing (balance-alb) - this is balance-tlb plus receive load balancing (rlb). Receive load balancing is achieved by ARP negotiation: the kernel answers ARP requests for the bond's IP with the MAC addresses of different slave interfaces, depending on who is asking (somewhat like balance-xor). For this to work, the switch must not have a sticky ARP table cache.
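Whichever mode you choose, you can verify what the kernel actually set up after configuration (the bond0 name is assumed; these are standard kernel interfaces for the bonding driver):

```shell
# Active mode, MII link status, and per-slave details
cat /proc/net/bonding/bond0

# The mode alone is also exposed via sysfs
cat /sys/class/net/bond0/bonding/mode
```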
Example: interface bonding on Ubuntu using the /etc/network/interfaces file and LACP.
$ echo "bonding" >> /etc/modules
$ cat <<EOF > /etc/network/interfaces
auto em1
iface em1 inet manual
    bond-master bond0

auto em2
iface em2 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    address 192.168.0.10
    gateway 192.168.0.1
    netmask 255.255.255.0
    bond-mode 4
    bond-miimon 100
    bond-lacp-rate 1
    bond-slaves em1 em2
EOF
As you can see, em1 and em2 simply become the physical member links of the aggregation, and we won't be using them for anything else. Our primary interface is now bond0, which we will use for everything (Layer 3 IP assignment, Layer 4 communications, or VLANs/bridging).
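Note that /etc/network/interfaces (ifupdown) is the legacy mechanism; current Ubuntu releases describe networking with netplan instead. A sketch of the same LACP bond as a netplan file follows (the file name is hypothetical, the addressing matches the example above, and exact keys may vary with your netplan version):

```shell
$ cat <<EOF > /etc/netplan/01-bond0.yaml
network:
  version: 2
  ethernets:
    em1: {}
    em2: {}
  bonds:
    bond0:
      interfaces: [em1, em2]
      addresses: [192.168.0.10/24]
      routes:
        - to: default
          via: 192.168.0.1
      parameters:
        mode: 802.3ad
        mii-monitor-interval: 100
        lacp-rate: fast
EOF
$ netplan apply
```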
- Ubuntu bonding and explanation of bond-mode modes: https://help.ubuntu.com/community/UbuntuBonding
- General information on link aggregation: https://en.wikipedia.org/wiki/Link_aggregation
BOND BONDING TEAMING INTERFACES INCREASE THROUGHPUT