This section explains how to configure Load Sharing on a bond interface. Run the CLI commands from the VSX Gateway (VS0) context. For a cluster configuration, run these commands on each cluster member.
This is a workflow of CLI commands to configure Link Aggregation in Load Sharing mode.
When you enslave configured interfaces, make sure that these interfaces are not used in other configurations.
To configure Load Sharing (a worked example follows these steps):
1. Create a new bond:
   add bonding group <bond id>
2. Configure one of these Load Sharing modes for the bond interface:
   set bonding group <bond id> mode <round-robin|xor|8023AD>
3. Add a slave interface to the bond:
   add bonding group <bond id> interface <IF name>
   Run this command again for each of the slave interfaces.
4. Examine the bond configuration:
   show bonding group <bond id>
   To show more information about the bond, from Expert mode run cat /proc/net/bonding/<bond id>
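For example, a bond with ID 1 in XOR mode over two slave interfaces could be configured like this (the bond ID, mode, and interface names eth2 and eth3 are placeholder values; substitute your own):
add bonding group 1
set bonding group 1 mode xor
add bonding group 1 interface eth2
add bonding group 1 interface eth3
show bonding group 1
From Expert mode, the matching kernel file is then typically /proc/net/bonding/bond1, which shows the bond mode and the state of each slave interface.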
Note - The Critical Required Interfaces feature is supported for ClusterXL only.
A bond in Load Sharing mode is considered down when fewer than a critical minimum number of slave interfaces remain up. When not explicitly defined, the critical minimum number of interfaces in a bond of n interfaces is n-1. Therefore, by default, the failure of a second slave interface causes the entire bond to be considered down, even if the bond contains more than two interfaces.
If a smaller number of interfaces can handle the expected traffic, you can increase redundancy by explicitly defining the number of critical interfaces. To determine an appropriate number, divide your maximal expected traffic speed by the speed of one interface and round up to a whole number.
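For example, if the maximal expected traffic is 25 Gbps and each slave interface is a 10 Gbps link (hypothetical values), 25 / 10 = 2.5, which rounds up to 3 critical interfaces. A bond of five such interfaces would then tolerate the failure of two slave interfaces before it is considered down.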
To explicitly define the number of critical interfaces, create and edit the following file:
$FWDIR/conf/cpha_bond_ls_config.conf
Each line of the file should be of the following syntax:
<bondname> <critical#>
For example, if bond0 has seven interfaces and bond1 has six interfaces, the file contents could be:
bond0 5
bond1 3
In this case, bond0 is considered down when three of its interfaces have failed, and bond1 is considered down when four of its interfaces have failed.
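As a minimal sketch, the file can be created from Expert mode; the bond names and thresholds below are the example values above, so adjust them to your own bonds, and in a cluster repeat this on each cluster member:
# From Expert mode
echo "bond0 5" >> $FWDIR/conf/cpha_bond_ls_config.conf
echo "bond1 3" >> $FWDIR/conf/cpha_bond_ls_config.conf
# Verify the result
cat $FWDIR/conf/cpha_bond_ls_config.conf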
If you are running Performance Pack in a multi-core system, set the affinities manually after you define the bonds. Use the -s parameter of the sim affinity command. For details, see the R77 Performance Tuning Administration Guide.
Note - sim affinity commands take effect only if Performance Pack is enabled and actually running. Performance Pack begins running when you install a policy for the first time.
For optimal performance, set affinities according to these guidelines: whenever possible, dedicate a separate processing core to each slave interface, and set the affinities with sim affinity using the -s option. If there are more interfaces than cores, some cores must handle more than one interface, as in the example below. To see which slave interfaces belong to a bond, from Expert mode run cat /proc/net/bonding/<bond name>.
For example, you might have four processing cores (0-3) and six interfaces (0-5), distributed among two bonds:
bond0 | bond1
---|---
eth0 | eth3
eth1 | eth4
eth2 | eth5
Two of the cores will need to handle two interfaces each. An optimal configuration can be:
bond0 | Core | bond1 | Core
---|---|---|---
eth0 | core 0 | eth3 | core 0
eth1 | core 1 | eth4 | core 1
eth2 | core 2 | |
| | eth5 | core 3
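As a minimal sketch, assuming the bond and interface names above and that Performance Pack is already running (a policy has been installed at least once), you can inspect the slave interfaces and then start the manual assignment from Expert mode; the exact sim affinity dialog is described in the R77 Performance Tuning Administration Guide:
# List the slave interfaces of each bond
cat /proc/net/bonding/bond0
cat /proc/net/bonding/bond1
# Set the affinities manually, assigning each interface to a core
# according to the distribution in the table above
sim affinity -s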