Ethernet Channel Bonding aka NIC Teaming on Linux Systems


Ethernet Channel Bonding combines two or more Network Interface Cards (NICs) into a single virtual interface, which can increase bandwidth and provides redundancy between the physical NICs. This is a great way to achieve redundant links, fault tolerance, or load balancing in a production system. If one physical NIC goes down or is unplugged, traffic automatically fails over to the other NIC. Channel/NIC bonding is handled by the bonding driver in the Linux kernel. We’ll be using two NICs to demonstrate it.
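
Before starting, it is worth confirming that the bonding driver is available on your system. A minimal check, assuming a typical RHEL/CentOS-style installation (module handling may differ slightly on other distributions):

# modprobe bonding                # load the bonding driver if it is not already loaded
# lsmod | grep bonding            # confirm the module is now present
# modinfo bonding | grep -i mode  # list the mode-related module parameters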

 

The Linux bonding driver supports seven bonding modes (numbered 0 through 6). Here, we’ll review only the two that are most popular and widely used.

  1. Mode 0: Load balancing (round-robin): Traffic is transmitted sequentially, in round-robin fashion, across all slave NICs. This mode provides both load balancing and fault tolerance.
  2. Mode 1: Active-backup: Only one slave NIC is active at any given time. Another interface becomes active only if the currently active slave fails.

Creating Ethernet Channel Bonding

We have two Ethernet network cards, eth1 and eth2, on top of which bond0 will be created. You need superuser privileges to execute the commands below.
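
This walkthrough assumes the NICs are named eth1 and eth2; check which interfaces actually exist on your system before editing any files:

# ip link show    # list all network interfaces and their current state
# ifconfig -a     # older equivalent that also shows interfaces which are down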

Load Balancing (Round-Robin)

Configure eth1

Set the MASTER=bond0 parameter and mark the eth1 interface as a SLAVE in its configuration file, as shown below.

# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
USERCTL=no
MASTER=bond0
SLAVE=yes
Configure eth2

Here too, set MASTER=bond0 and mark the eth2 interface as a SLAVE.

# vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE="eth2"
TYPE="Ethernet"
ONBOOT="yes"
USERCTL=no
#NM_CONTROLLED=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
Create bond0 Configuration

Create the channel bonding interface by adding a file called ifcfg-bond0 in the “/etc/sysconfig/network-scripts/” directory.

The following is a sample channel bonding configuration file.

# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
IPADDR=192.168.246.130
NETMASK=255.255.255.0
BONDING_OPTS="mode=0 miimon=100"

Note: In the above configuration we have chosen the bonding options mode=0, i.e. round-robin, and miimon=100 (a link-monitoring polling interval of 100 ms).
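
On some older systems the BONDING_OPTS line is ignored and the options have to be passed to the bonding module instead. A sketch of that alternative, assuming a modprobe.d-style configuration (the file name bonding.conf is just a convention):

# vi /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bonding mode=0 miimon=100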

Let’s look at the interfaces using the ifconfig command, which shows “bond0” running as the MASTER and both “eth1” and “eth2” running as SLAVEs.

# ifconfig
bond0     Link encap:Ethernet  HWaddr 00:0C:29:57:61:8E
          inet addr:192.168.246.130  Bcast:192.168.246.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe57:618e/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:17374 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16060 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1231555 (1.1 MiB)  TX bytes:1622391 (1.5 MiB)

eth1      Link encap:Ethernet  HWaddr 00:0C:29:57:61:8E
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:16989 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8072 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1196931 (1.1 MiB)  TX bytes:819042 (799.8 KiB)
          Interrupt:19 Base address:0x2000

eth2      Link encap:Ethernet  HWaddr 00:0C:29:57:61:8E
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:385 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7989 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:34624 (33.8 KiB)  TX bytes:803583 (784.7 KiB)
          Interrupt:19 Base address:0x2080
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:480 (480.0 b)  TX bytes:480 (480.0 b)

Restart the network service, and the interfaces should come up fine.

# service network restart
Shutting down interface bond0:                             [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface bond0:                               [  OK  ]
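
You can also confirm that bond0 came up with the address configured earlier:

# ip addr show bond0    # should show state UP and 192.168.246.130/24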

Checking the status of the bond.

# watch -n .1 cat /proc/net/bonding/bond0

Sample Output

The output below shows that the Bonding Mode is load balancing (round-robin) and that both eth1 and eth2 are up.

Every 0.1s: cat /proc/net/bonding/bond0                         Thu Sep 12 14:08:47 2013

Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 2
Permanent HW addr: 00:0c:29:57:61:8e
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 2
Permanent HW addr: 00:0c:29:57:61:98
Slave queue ID: 0

Creating an Active-Backup Bond

In this scenario the slave interface configurations remain the same; the only change is in the bond interface file ifcfg-bond0, where the mode changes from ‘0’ to ‘1’, as shown below.

# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
IPADDR=192.168.246.130
NETMASK=255.255.255.0
BONDING_OPTS="mode=1 miimon=100"

Restart the network service and check the bonding status again.

# service network restart
Shutting down interface bond0:                             [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface bond0:                               [  OK  ]

Check the status of the bond with the same command.

# watch -n .1 cat /proc/net/bonding/bond0

Sample Output

The Bonding Mode now shows fault-tolerance (active-backup), the currently active slave is eth1, and both slave interfaces are up.

Every 0.1s: cat /proc/net/bonding/bond0                         Thu Sep 12 14:40:37 2013

Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 00:0c:29:57:61:8e
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 00:0c:29:57:61:98
Slave queue ID: 0

Note: To verify that channel bonding is working, manually bring a slave interface down and then back up using the commands below.

# ifconfig eth1 down
# ifconfig eth1 up
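
While a slave is down you can confirm that the failover happened by looking at the same /proc file; the grep patterns below match the fields shown in the sample output above:

# grep "Currently Active Slave" /proc/net/bonding/bond0   # should report eth2 while eth1 is down
# grep "Link Failure Count" /proc/net/bonding/bond0       # counters increase on every failover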
