This section explains how to configure Load Sharing on a bond interface. Run the CLI commands from the VSX Gateway (VS0) context. For a cluster configuration, run these commands on each cluster member.
Configure one of these Load Sharing modes for the bond interface: Round Robin, XOR, or 802.3ad.
This is a workflow of CLI commands to configure Link Aggregation in Load Sharing mode.
Before you enslave configured interfaces, make sure that these interfaces are not used in other configurations.
To configure Load Sharing:
1. Create a new bond interface:
   add bonding group <bond id>
2. Set the Load Sharing mode of the bond:
   set bonding group <bond id> mode <round-robin|xor|8023AD>
3. Add a slave interface to the bond:
   add bonding group <bond id> interface <IF name>
   Repeat this command for each slave interface.
4. Show the bond configuration to verify it:
   show bonding group <bond id>
To show more information about the bond, run this command from Expert mode: cat /proc/net/bonding/<bond id>
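For example, this is a minimal sketch of the workflow for a hypothetical bond with ID 1 in XOR mode and two slave interfaces, eth2 and eth3 (the bond ID and interface names are placeholders for illustration):

```
add bonding group 1
set bonding group 1 mode xor
add bonding group 1 interface eth2
add bonding group 1 interface eth3
show bonding group 1
```

After this, the bond appears as interface bond1, and you can examine it from Expert mode with cat /proc/net/bonding/bond1.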
Note - The Critical Required Interfaces feature is supported for ClusterXL only.
A bond in Load Sharing mode is considered to be down when fewer than a critical minimal number of slave interfaces remain up. If this number is not explicitly defined, the critical minimal number of slave interfaces that must remain up in a bond of n interfaces is n-1. Therefore, the failure of a second slave interface (leaving n-2 slave interfaces up) causes the entire bond interface to be considered down, even if the bond contains more than two slave interfaces.
If a smaller number of slave interfaces can handle the expected traffic, you can increase redundancy by explicitly defining a lower critical minimal number of slave interfaces. To determine an appropriate value, divide the maximal expected traffic speed by the speed of one slave interface and round up to a whole number. For example, if the maximal expected traffic is 4.5 Gbps and each slave interface runs at 1 Gbps, the critical minimal number of slave interfaces is 5.
To define the critical number of slave interfaces explicitly, create and edit the following file:
$FWDIR/conf/cpha_bond_ls_config.conf
Each line of the file should be written in the following syntax:
<Name_of_Bond> <critical_minimal_number_of_slaves>
For example, if bond0 has 7 slave interfaces and bond1 has 6 slave interfaces, the file contents could be:

bond0 5
bond1 3
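As a sketch, you could create the file from Expert mode like this (the hostname GW in the prompt is a placeholder; the values match the example above):

```
[Expert@GW:0]# printf 'bond0 5\nbond1 3\n' >> $FWDIR/conf/cpha_bond_ls_config.conf
[Expert@GW:0]# cat $FWDIR/conf/cpha_bond_ls_config.conf
bond0 5
bond1 3
```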
In this example:

- bond0 would be considered down when 3 of its slave interfaces have failed.
- bond1 would be considered down when 4 of its slave interfaces have failed.

If you are running SecureXL in a multi-core system, set the affinities manually after you define the bonds. Use the sim affinity -s command.
Note - The sim affinity commands take effect only if SecureXL is enabled and actually running. SecureXL begins running when you install a Policy for the first time.
For optimal performance, set the affinities so that the slave interfaces are distributed as evenly as possible among the processing cores:

- Use the sim affinity -s command to set the affinities.
- To see which slave interfaces belong to each bond, run this command from Expert mode: cat /proc/net/bonding/<bond name>
For example, you might have four processing cores (0-3) and six interfaces (0-5), distributed among two bonds:
| bond0 | bond1 |
|---|---|
| eth0 | eth3 |
| eth1 | eth4 |
| eth2 | eth5 |
Two of the CPU cores will need to handle two interfaces each. An optimal configuration can be:
| bond0 | Core | bond1 | Core |
|---|---|---|---|
| eth0 | core 0 | eth3 | core 0 |
| eth1 | core 1 | eth4 | core 1 |
| eth2 | core 2 | | |
| | | eth5 | core 3 |
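To apply a distribution like this, a minimal sketch from Expert mode could look as follows. How sim affinity -s accepts the interface-to-core assignments depends on your version, so only the commands are shown, and the hostname GW is a placeholder:

```
[Expert@GW:0]# cat /proc/net/bonding/bond0   # shows the slave interfaces of bond0 (eth0, eth1, eth2 in this example)
[Expert@GW:0]# cat /proc/net/bonding/bond1   # shows the slave interfaces of bond1 (eth3, eth4, eth5 in this example)
[Expert@GW:0]# sim affinity -s               # set the affinities manually, as in the table above
```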