Link aggregation, also known as interface bonding, joins multiple physical interfaces together into a virtual interface, known as a bond interface. A bond interface can be configured for High Availability redundancy or for load sharing, which increases connection throughput above that which is possible using one physical interface.
For more about Link Aggregation, see the R76 ClusterXL Administration Guide.
A bond interface has a name of the form bondN (for example, bond0). A bond contains a minimum of one and a maximum of eight slave interfaces. All slave interfaces contained in a bond share a common IP address and may share the same MAC address. We recommend that each cluster member contain the same number of identical slave interfaces.
You can configure Link Aggregation using one of the following strategies:
Clusters, by definition, provide redundancy and high availability at the gateway level. Link Aggregation, however, adds interface and switch redundancy by providing automatic failover to a standby interface card within the same VSX Gateway.
In a High Availability deployment, only one interface is active at a time. If an interface or connection fails, the bond fails over to a standby slave interface. Bonding High Availability failover occurs in one of these cases:
The Link Aggregation High Availability mode, when deployed with ClusterXL, enables a higher level of reliability by providing granular redundancy in the network. This granular redundancy is achieved by using a fully meshed topology, which provides for independent backups for both NICs and switches.
In this scenario:
Load sharing provides the ability to spread traffic over multiple slave interfaces, in addition to providing interface redundancy. All interfaces are always active.
Traffic is balanced between interfaces in a manner similar to the way load sharing balances traffic between cluster members. Load sharing operates according to either the IEEE 802.3ad or the XOR standard.
In Load Sharing mode, each individual connection is assigned to a specific slave interface. For a specific connection, only the designated slave interface is active. If the designated slave interface fails, the traffic fails over to another slave interface, which adds that connection's traffic to the traffic it is already handling.
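The per-connection assignment and failover described above can be sketched as follows. This is a simplified illustration, not the actual bonding driver logic; the hash function, connection key, and interface names are assumptions:

```python
# Simplified sketch of Load Sharing slave selection: each connection hashes
# to one designated slave interface; when that slave fails, the connection
# moves to a surviving slave, which then carries that traffic in addition
# to its own.
def select_slave(conn_key, slaves):
    """Pick the designated slave for a connection (hash mod slave count)."""
    up = sorted(s for s, ok in slaves.items() if ok)   # only slaves that are up
    if not up:
        raise RuntimeError("bond is down: no slave interfaces available")
    return up[hash(conn_key) % len(up)]

slaves = {"eth0": True, "eth1": True}                  # illustrative names
conn = ("10.0.0.1", 40000, "10.0.0.2", 443)            # src ip/port, dst ip/port
before = select_slave(conn, slaves)

slaves[before] = False                                 # designated slave fails
after = select_slave(conn, slaves)                     # connection fails over
assert after != before and slaves[after]
```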
Either of the following failure scenarios can induce bond failover:
Either of these occurrences will induce a failover, either to another slave interface within the bond, or between cluster members, depending on the circumstances.
Note - The bond failover operation requires a network interface card that supports the Media-Independent Interface (MII) standard.
Link-state initiated failover occurs in this sequence:
When the number of available slave interfaces is fewer than the critical minimum number of interfaces, failover to other cluster members occurs.
CCP failover occurs only when other cluster members are not down, in this sequence.
ClusterXL monitors VLAN IDs for connectivity failure or miscommunication, and initiates failover when necessary. By default, both the highest and the lowest VLAN IDs are monitored for failure. This is done by sending ClusterXL Control Protocol (CCP) packets on round-trip paths at a set interval.
You can configure VSX to monitor all VLANs.
When a failure is detected, a log of the failure is recorded in SmartView Tracker.
By default, the highest and lowest VLAN IDs indicate the status of the physical connection. These VLAN IDs are always monitored and a connectivity failure in either initiates a failover. In most deployments this is the desired setting, as it supports the primary purpose of the feature (detecting a connectivity failure) and the traffic generated on the network is light. However, this setting only detects VLAN configuration problems on the switch for the highest and lowest VLAN IDs.
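The default monitoring scope described above can be sketched as a simple selection rule. The function name and VLAN ID values are illustrative, not part of the product:

```python
# Simplified sketch: by default ClusterXL probes only the lowest and highest
# VLAN IDs on a monitored interface with CCP packets; configuring VSX to
# monitor all VLANs probes every ID.
def vlans_to_monitor(vlan_ids, monitor_all=False):
    if monitor_all:
        return sorted(vlan_ids)
    return sorted({min(vlan_ids), max(vlan_ids)})

assert vlans_to_monitor([10, 20, 30, 40]) == [10, 40]
assert vlans_to_monitor([10, 20, 30], monitor_all=True) == [10, 20, 30]
assert vlans_to_monitor([100]) == [100]   # single VLAN: one ID monitored
```

This shows why a VLAN configuration problem on an intermediate VLAN ID is not detected by default: only the extremes carry CCP probes.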
This section explains how to configure High Availability on a bond interface. Run the CLI commands from the VSX Gateway (VS0) context. For a cluster configuration, run these commands on each cluster member.
Use the active-backup value for the mode parameter to configure High Availability.
This is a workflow of CLI commands to configure Link Aggregation in High Availability mode.
When you are enslaving configured interfaces, make sure that these interfaces are not used in other configurations.
To configure High Availability:
add bonding group <bond id>
set bonding group <bond id> mode active-backup
add bonding group <bond id> interface <IF name>
Repeat this command for each slave interface.
show bonding group <bond id>
To show more information about the bond, from Expert mode run cat /proc/net/bonding/<bond id>
When you are updating an existing configuration to Link Aggregation, it is necessary to reconfigure the relevant objects to connect to the newly created bond. This includes Virtual Systems, Virtual Routers and Virtual Switches. You can perform these actions using SmartDashboard. In most cases, these definitions can be found in the object Properties window.
For large existing VSX deployments containing many Domain Management Servers and virtual devices, use the vsx_util change_interfaces command to reconfigure existing object topologies. For example, in a Multi-Domain Security Management deployment with 200 Domains, each with many virtual devices, it is faster to use vsx_util change_interfaces. This command automatically replaces the interface with the new bond on all relevant objects.
To configure the newly created bond:
The Physical Interface Properties window opens.
The Interface Properties window opens.
You can also replace an interface that is in use with a bond interface.
To reconfigure objects with vsx_util change_interfaces:
Important - In a Multi-Domain Security Management environment, all Domain Management Servers must be unlocked in order for this operation to succeed.
This section explains how to configure Load Sharing on a bond interface. Run the CLI commands from the VSX Gateway (VS0) context. For a cluster configuration, run these commands on each cluster member.
Configure one of these Load Sharing modes for the bond interface:
This is a workflow of CLI commands to configure Link Aggregation in Load Sharing mode.
When you are enslaving configured interfaces, make sure that these interfaces are not used in other configurations.
To configure Load Sharing:
add bonding group <bond id>
set bonding group <bond id> mode <round-robin|xor|8023AD>
add bonding group <bond id> interface <IF name>
Repeat this command for each slave interface.
show bonding group <bond id>
To show more information about the bond, from Expert mode run cat /proc/net/bonding/<bond id>
Note - The Critical Required Interfaces feature is supported for ClusterXL only.
A bond in Load Sharing mode is considered to be down when fewer than a critical minimum number of slave interfaces remain up. When not explicitly defined, the critical minimum number of interfaces in a bond of n interfaces is n-1. Failure of a second interface will cause the entire bond to be considered down, even if the bond contains more than two interfaces.
If a smaller number of interfaces will be able to handle the expected traffic, you can increase redundancy by explicitly defining the number of critical interfaces. Divide your maximal expected traffic speed by the speed of your interfaces and round up to a whole number to determine an appropriate number of critical interfaces.
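As a worked example of this sizing rule (the traffic and interface speeds below are assumed values, not from the text):

```python
import math

# Rule of thumb from the text: the number of critical interfaces is the
# maximal expected traffic divided by the speed of one interface, rounded
# up to a whole number.
max_traffic_gbps = 4.5      # assumed peak traffic for the bond
interface_gbps = 1.0        # assumed speed of each slave interface

critical = math.ceil(max_traffic_gbps / interface_gbps)
assert critical == 5        # at least 5 slave interfaces must stay up
```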
To explicitly define the number of critical interfaces, create and edit the following file:
$FWDIR/conf/cpha_bond_ls_config.conf
Each line of the file should be of the following syntax:
<bondname> <critical#>
For example, if bond0 has seven interfaces and bond1 has six interfaces, file contents could be:
bond0 5
bond1 3
In this case bond0 would be considered down when three of its interfaces have failed. bond1 would be considered down when four of its interfaces have failed.
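The file format and the resulting behavior can be sketched as follows. This is a simplified model of the rule, with assumed function names, not the ClusterXL implementation:

```python
# Sketch of the critical-minimum rule: a bond is considered down when the
# number of slave interfaces still up falls below its critical minimum.
# Bonds not listed in the file get the default minimum of n-1.
def parse_critical(conf_text):
    """Parse lines of the form '<bondname> <critical#>'."""
    crit = {}
    for line in conf_text.splitlines():
        if line.strip():
            name, num = line.split()
            crit[name] = int(num)
    return crit

def bond_is_down(name, total, failed, crit):
    minimum = crit.get(name, total - 1)   # default critical minimum: n-1
    return total - failed < minimum

crit = parse_critical("bond0 5\nbond1 3\n")
assert not bond_is_down("bond0", total=7, failed=2, crit=crit)
assert bond_is_down("bond0", total=7, failed=3, crit=crit)   # 4 up < 5
assert bond_is_down("bond1", total=6, failed=4, crit=crit)   # 2 up < 3
assert bond_is_down("bond2", total=4, failed=2, crit=crit)   # default n-1
```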
If you are running Performance Pack in a multi-core system, after you define bonds, set the affinities manually. Use the -s parameter of the sim affinity command. For details, see the R76 Performance Pack Administration Guide.
Note - sim affinity commands take effect only if Performance Pack is enabled and actually running. Performance Pack begins running when you install a policy for the first time.
For optimal performance, set affinities according to the following guidelines:
Set the interface affinities with the sim affinity command, using the -s option. To see which slave interfaces belong to each bond, run: cat /proc/net/bonding/<bond name>
For example, you might have four processing cores (0-3) and six interfaces (0-5), distributed among two bonds:
| bond0 | bond1 |
|---|---|
| eth0 | eth3 |
| eth1 | eth4 |
| eth2 | eth5 |
Two of the cores will need to handle two interfaces each. An optimal configuration might be:
| bond0 | | bond1 | |
|---|---|---|---|
| eth0 | core 0 | eth3 | core 0 |
| eth1 | core 1 | eth4 | core 1 |
| eth2 | core 2 | eth5 | core 3 |
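One way to produce a distribution like the one above is a simple round-robin assignment of interfaces to cores. This only illustrates the balancing goal (no core handles more than two of the six interfaces); it is not how sim affinity itself works, and the exact pairing differs from the table:

```python
from collections import Counter
from itertools import cycle

# Spread six interfaces over four cores as evenly as possible: with more
# interfaces than cores, some cores must handle two interfaces each.
def assign_affinities(interfaces, cores):
    return {iface: core for iface, core in zip(interfaces, cycle(cores))}

ifaces = ["eth0", "eth1", "eth2", "eth3", "eth4", "eth5"]
affinity = assign_affinities(ifaces, cores=[0, 1, 2, 3])

assert affinity == {"eth0": 0, "eth1": 1, "eth2": 2,
                    "eth3": 3, "eth4": 0, "eth5": 1}
# Cores 0 and 1 handle two interfaces each; no core handles more than two.
assert max(Counter(affinity.values()).values()) == 2
```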
These are sample configuration commands for Cisco switches.
Switch#conf t
Switch(config)#port-channel load-balance src-dst-ip
Switch(config)#interface FastEthernet <all the participating interfaces>
Switch(config-if)#channel-group 1 mode active
Switch(config-if)#channel-protocol lacp
Switch(config-if)#exit
Switch(config)#interface port-channel 1
Switch(config-if)#switchport access vlan <the wanted vlan number>
Switch(config-if)#end
Switch#write
Switch#conf t
Switch(config)#port-channel load-balance src-dst-ip
Switch(config)#interface FastEthernet <all the participating interfaces>
Switch(config-if)#channel-group 1 mode on
Switch(config-if)#exit
Switch(config)#interface port-channel 1
Switch(config-if)#switchport access vlan <the wanted vlan number>
Switch(config-if)#end
Switch#write
cat /proc/net/bonding/<bond id>
cphaconf show_bond <bond-name>
Make sure that no slave interface shows its link as no.

cphaprob state
If any of the cluster members have a Firewall State other than active, see Monitoring Cluster Status (cphaprob state) in the R76 ClusterXL Administration Guide.
When using certain switches, connectivity delays may occur during some internal bond failovers. With the various features that are now included on some switches, it can take close to a minute for a switch to begin servicing a newly connected interface. The following are suggestions for reducing the startup time after link failure.
Note - PortFast is not applicable if the bond group on the switch is configured as Trunk.
The PortFast feature should never be used on ports that connect to other switches or hubs. It is important that Spanning Tree complete its initialization procedure in these situations. Otherwise, these connections can create physical loops in which packets are continuously forwarded (or even multiplied) until the network ultimately fails.
The following are the commands necessary to enable PortFast on a Gigabit Ethernet 1/0/15 interface of a Cisco 3750 switch running IOS.
cisco-3750A#conf t
cisco-3750A(config)#interface gigabitethernet1/0/15
cisco-3750A(config-if)#spanning-tree portfast
This section contains a summary of the Gaia CLI commands that configure Link Aggregation.
This section is a quick reference for Link Aggregation commands. The next sections include procedures for different tasks, including explanations of the configuration options.
Use these commands to configure link aggregation.
Syntax:
{add | delete} bonding group <bondID> interface <IFName>
set bonding [group <bondID>] [primary <IFName>] [mii-interval <ms>] [up-delay <ms> | down-delay <ms>] [mode {round-robin | active-backup | xor [xmit-hash-policy {layer2 | layer3+4}]| 8023AD [lacp-rate {slow | fast}]}]
show bonding {group <bondID> | groups}
Parameters
| Parameter | Description |
|---|---|
| bondID | ID of the bond, an integer between 1 and 1024 |
| IFName | Name of an interface to add to the bond |
| primary | Name of the primary interface in the bond |
| mii-interval | Frequency at which the system polls the Media Independent Interface (MII) to get status |
| up-delay, down-delay | Waiting time to confirm the interface status before taking the specified action (0-5000 ms, default = 200 ms) |
| lacp-rate | Link Aggregation Control Protocol packet transmission rate (slow or fast) |
| xmit-hash-policy | Algorithm for interface selection, by the specified TCP/IP layer (layer2 or layer3+4) |
Example
show bonding group 20
Output
Bonding Interface: 20
Bond Configuration
xmit_hash_policy Not configured
down-delay 200
primary Not configured
mode round-robin
up-delay 200
mii-interval 100
lacp_rate Not configured
Bond Interfaces
eth2
eth3
To add a new bond interface:
add bonding group <bondID>
Example:
add bonding group 777
To delete a bond interface:
delete bonding group <bondID>
Define how interfaces are activated in a bond:
round-robin - Interfaces activated in order by ID (default)

active-backup - On active interface failure, fail over to the primary interface first, and to other interfaces if the primary is down

xor - Interface activation by TCP/IP layer (layer2 or layer3+4)

8023AD - Link Aggregation Control Protocol load shares traffic by dynamic interface activation, with full interface monitoring between gateway and switch

You can set the LACP packet transmission rate for xor mode or 8023AD mode. After you set one of these Load Sharing modes, enter this option: lacp-rate {slow | fast}, where slow is every 30 seconds and fast is every one second. You can also set the algorithm for interface selection, according to the specified TCP/IP layer: xmit-hash-policy {layer2 | layer3+4}

To define the bond operating mode:
set bonding group <bondID> mode <mode> [<options>]
Example:
set bonding group 777 mode xor xmit-hash-policy layer3+4
A bond interface typically contains between two and eight slave interfaces. This section shows how to add and remove a slave interface. The slave interface must not have IP addresses assigned to it.
To add a slave interface to a bond:
add bonding group <bondID> interface <IFName>
Example:
add bonding group 777 interface eth4
Note - Do not change the bond state manually. This is done automatically by the bonding driver.
To delete a slave interface from a bond:
delete bonding group <bondID> interface <IFName>
Example:
delete bonding group 777 interface eth4
Note - You must delete all non-primary slave interfaces before you remove the primary slave interface.
With the Active-Backup operating mode, the system automatically fails over to the primary slave interface, if available. If the primary interface is not available, the system fails over to a different slave interface. By default, the first slave interface that you define is the primary interface. You must define the slave interfaces and set the operating mode as Active-Backup before doing this procedure.
Note - You must delete all non-primary slave interfaces before you remove the primary slave interface.
To define the primary slave interface:
set bonding group <bondID> mode active-backup primary <IFName>
Example
add bonding group 777 interface eth4
set bonding group 777 mode active-backup primary eth4
This parameter sets the frequency of requests sent to the Media Independent Interface (MII) to confirm that a slave interface is up. The valid range is 1-5000 ms. The default is 100 ms.
To configure the monitoring interval:
set bonding group <bondID> mii-interval <ms>
Example:
set bonding group 777 mii-interval 500
To disable monitoring:
set bonding group <bondID> mii-interval 0
This parameter defines the waiting time, in milliseconds, to confirm the slave interface status before taking the specified action. Valid values are 0 to 5000 ms. The default is 200 ms.
To configure the up and down delay times:
set bonding group <bondID> down-delay <ms>
set bonding group <bondID> up-delay <ms>
Example:
set bonding group 777 down-delay 500
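The effect of the delay can be sketched as a debounce on link-state changes: a change takes effect only after it persists for the configured delay, sampled once per mii-interval poll. The function name and sampling model are assumptions for illustration; the actual bonding driver logic is more involved:

```python
# Simplified debounce model of up-delay / down-delay.
def confirm_change(samples, mii_interval, delay):
    """samples: consecutive link readings (True = up) taken after a state
    change is first seen. Returns True if the new state held for the full
    delay window (delay // mii_interval consistent polls)."""
    polls_needed = delay // mii_interval
    if len(samples) < polls_needed:
        return False                      # delay window not yet elapsed
    window = samples[:polls_needed]
    return all(s == window[0] for s in window)

# down-delay 500 ms with mii-interval 100 ms => 5 consistent polls required
assert confirm_change([False] * 5, mii_interval=100, delay=500)
assert not confirm_change([False] * 3, mii_interval=100, delay=500)
assert not confirm_change([False, False, True, False, False], 100, 500)
```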
When using Load Sharing modes (XOR or 802.3ad), you can configure these parameters:
To set the LACP rate:
set bonding group <bondID> lacp-rate {slow | fast}
Example: set bonding group 777 mode 8023AD lacp-rate slow
To set the Transmit Hash Policy:
set bonding group <bondID> xmit-hash-policy <layer>
Example: set bonding group 777 mode xor xmit-hash-policy layer2
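The practical difference between the two hash policies can be sketched as follows. This is a simplified model of which packet fields feed the hash under each policy, not the exact arithmetic used by the bonding driver; the packet fields and values are assumptions:

```python
# layer2 hashes only MAC addresses, so all traffic between the same two
# hosts uses one slave interface; layer3+4 also mixes in IP addresses and
# ports, so different connections between the same hosts can spread across
# slave interfaces.
def slave_index(policy, pkt, n_slaves):
    if policy == "layer2":
        key = (pkt["src_mac"], pkt["dst_mac"])
    elif policy == "layer3+4":
        key = (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"])
    else:
        raise ValueError(policy)
    return hash(key) % n_slaves

a = {"src_mac": "aa:aa", "dst_mac": "bb:bb",
     "src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
     "src_port": 40000, "dst_port": 443}
b = dict(a, src_port=40001)   # a second connection between the same hosts

# Same hosts always map to the same slave under layer2:
assert slave_index("layer2", a, 4) == slave_index("layer2", b, 4)
# Under layer3+4 the two connections may hash to different slaves:
spread = slave_index("layer3+4", a, 4) != slave_index("layer3+4", b, 4)
```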
To make sure that Link Aggregation is working for a bond interface, run this command in Expert mode:
cat /proc/net/bonding/<bondID>
Example with output:
cat /proc/net/bonding/bond666
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 100
Down Delay (ms): 200

Slave Interface: eth2
MII Status: up
Link Failure Count: 2
Permanent HW addr: 00:50:56:94:11:de