This section explains how to configure High Availability on a bond interface. Run the CLI commands from the VSX Gateway (VS0) context. For a cluster configuration, run these commands on each VSX Cluster Member.
Use the active-backup value for the mode parameter to configure High Availability.
This is a workflow of CLI commands to configure Link Aggregation in High Availability mode.
Notes:

- When you update an existing configuration to Link Aggregation, you must reconfigure the relevant objects (Virtual Systems, Virtual Routers, and Virtual Switches) to connect to the newly created bond. You can perform these actions in SmartConsole. In most cases, these definitions are in the object Properties window.
- For large existing VSX deployments that contain many Domain Management Servers and Virtual Devices, use the vsx_util change_interfaces command on the Management Server to reconfigure existing object topologies. For example, in a Multi-Domain Server deployment with 200 Domains, each with many Virtual Devices, it is faster to use vsx_util change_interfaces. This command automatically replaces the interface with the new bond on all relevant objects.

To configure Link Aggregation in High Availability mode:
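A minimal sketch of such a workflow in Gaia Clish, assuming bond group 1 and slave interfaces eth1 and eth2 (the group ID and interface names are illustrative, not taken from this guide):

```
# Create the bond interface (group ID 1 is an example)
add bonding group 1

# Enslave the physical interfaces; they must not carry IP addresses
add bonding group 1 interface eth1
add bonding group 1 interface eth2

# Set High Availability mode
set bonding group 1 mode active-backup

# Save the Gaia configuration
save config
```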
To configure the newly created bond for a cluster:
The Physical Interface Properties window opens.
The Interface Properties window opens.
Important - In a Multi-Domain Server environment, all Domain Management Servers must be unlocked for this operation to succeed. That is, you must disconnect all SmartConsole clients from all Domain Management Servers.
To reconfigure objects with vsx_util change_interfaces:
Run the vsx_util change_interfaces command and follow the on-screen instructions.
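As an illustrative sketch only (the exact prompts depend on your version), the command is run from Expert mode on the Management Server and guides you interactively through replacing the old interface with the new bond:

```
# On the Management Server / Multi-Domain Server, in Expert mode:
vsx_util change_interfaces
# Follow the interactive prompts to select the VSX object and replace
# the old interface with the newly created bond on all relevant objects.
```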
This section explains how to configure Load Sharing on a bond interface. Run the CLI commands from the VSX Gateway (VS0) context. For a cluster configuration, run these commands on each VSX Cluster Member.
Configure one of these Load Sharing modes for the bond interface:
This is a workflow of CLI commands to configure Link Aggregation in Load Sharing mode.
Notes:
To configure Link Aggregation in Load Sharing mode:
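A minimal sketch of such a workflow in Gaia Clish, assuming bond group 2, slave interfaces eth3 and eth4, and the 8023AD (LACP) mode (the group ID, interface names, and chosen mode are illustrative assumptions):

```
# Create the bond interface (group ID 2 is an example)
add bonding group 2

# Enslave the physical interfaces; they must not carry IP addresses
add bonding group 2 interface eth3
add bonding group 2 interface eth4

# Set a Load Sharing mode (8023AD is one example of a traffic-distributing mode)
set bonding group 2 mode 8023AD

# Save the Gaia configuration
save config
```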
Note - The Critical Required Interfaces feature is supported for ClusterXL only.
A bond in Load Sharing mode is considered down when fewer than a critical minimal number of slave interfaces remain up. If not explicitly defined, the critical minimal number of slave interfaces that must remain up in a bond of n interfaces is n-1. The failure of one more slave interface (leaving n-2 up) causes the entire bond to be considered down, even if the bond contains more than two slave interfaces.
If a smaller number of slave interfaces can handle the expected traffic, you can increase redundancy by explicitly defining the critical minimal number of slave interfaces. To determine an appropriate number, divide your maximal expected traffic speed by the speed of one slave interface and round up to a whole number.
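For example (an illustrative calculation, not from this guide): with a maximal expected traffic of 25 Gbps and 10 Gbps slave interfaces, 25 / 10 = 2.5, which rounds up to 3, so 3 is an appropriate critical minimal number of slave interfaces.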
To define the critical number of slave interfaces explicitly, create and edit the following file:
$FWDIR/conf/cpha_bond_ls_config.conf
Each line of the file should be written in the following syntax:
<Name_of_Bond> <Critical_Minimal_Number_of_Slaves>
For example, if bond0 has 7 slave interfaces and bond1 has 6 slave interfaces, the file contents could be:
bond0 5
bond1 3
In this example:

- bond0 would be considered down when 3 of its slave interfaces have failed (fewer than 5 remain up).
- bond1 would be considered down when 4 of its slave interfaces have failed (fewer than 3 remain up).

If you are running Performance Pack on a multi-core system, set affinities manually after you define bonds. Use the sim affinity -s command.
Note - The sim affinity commands take effect only if Performance Pack is enabled and actually running. Performance Pack begins running when you install a Policy for the first time.
For optimal performance, set affinities according to the following guidelines:
sim affinity -s
cat /proc/net/bonding/<bond name>
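A minimal sketch of how these two commands might be used together, with bond0 as an illustrative bond name:

```
# List the slave interfaces and their state for a bond (bond0 is an example name)
cat /proc/net/bonding/bond0

# Start the interactive SecureXL affinity tool and assign the slave
# interfaces to CPU cores manually
sim affinity -s
```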
For example, you might have four processing cores (0-3) and six interfaces (0-5), distributed among two bonds:
| bond0 | bond1 |
|---|---|
| eth0 | eth3 |
| eth1 | eth4 |
| eth2 | eth5 |
Two of the CPU cores will need to handle two interfaces each. An optimal configuration can be:
| bond0 | core | bond1 | core |
|---|---|---|---|
| eth0 | core 0 | eth3 | core 0 |
| eth1 | core 1 | eth4 | core 1 |
| eth2 | core 2 | eth5 | core 3 |
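A small hedged sketch of how you might verify the resulting distribution after assigning the cores with sim affinity -s (the output format varies by version):

```
# Show which CPU core handles each interface after manual assignment
sim affinity -l

# Cross-check the overall affinity settings
fw ctl affinity -l
```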