
Configuring Load Sharing Mode

This section explains how to configure Load Sharing on a bond interface. Run the CLI commands from the VSX Gateway (VS0) context. For a cluster configuration, run these commands on each cluster member.

Configure one of these Load Sharing modes for the bond interface: Round Robin, XOR, or 802.3AD (LACP).

Configuring the Load Sharing Bond

This is a workflow of CLI commands to configure Link Aggregation in Load Sharing mode. A worked example of the commands appears after the procedure.

When you are enslaving configured interfaces, make sure that these interfaces are not used in other configurations.

To configure Load Sharing:

  1. Create the Load Sharing bond. Run:

    add bonding group <bond id>

    set bonding group <bond id> mode <round-robin|xor|8023AD>

  2. Define the slave interfaces. Run add bonding group <bond id> interface <IF name>

    Repeat this command for each slave interface.

  3. Define the number of critical required interfaces. See Setting Critical Required Interfaces below.
  4. For configurations that use Performance Pack (SecureXL), configure the core affinities. See Setting Affinities below.
  5. Make sure that the bond is configured correctly. Run show bonding group <bond id>

    To show more information about the bond, run this command from Expert mode: cat /proc/net/bonding/<bond name>

  6. Open SmartConsole and configure the cluster object.
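
For illustration, this is a minimal sketch of the commands in steps 1, 2, and 5 for a hypothetical bond. The bond ID (1), the mode (xor), and the slave interface names (eth2 and eth3) are example values; substitute your own:

    add bonding group 1

    set bonding group 1 mode xor

    add bonding group 1 interface eth2

    add bonding group 1 interface eth3

    show bonding group 1

In a cluster configuration, run the same commands on each cluster member.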

Setting Critical Required Interfaces

Note - The Critical Required Interfaces feature is supported for ClusterXL only.

A bond in Load Sharing mode is considered to be down when fewer than a critical minimal number of slave interfaces remain up. When not explicitly defined, the critical minimal number of slave interfaces that must remain up in a bond of n interfaces is n-1. The failure of one more slave interface (leaving only n-2 up) causes the entire bond interface to be considered down, even if the bond contains more than two slave interfaces.

If a smaller number of slave interfaces can handle the expected traffic, you can increase redundancy by explicitly defining the critical minimal number of slave interfaces. To determine an appropriate value, divide your maximal expected traffic speed by the speed of one slave interface and round up to a whole number.
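
For example (with assumed values): if the maximal expected traffic is 2.5 Gbps and each slave interface is a 1 Gbps link, 2.5 / 1 = 2.5, which rounds up to 3. With the critical minimal number set to 3, the bond is considered down only when fewer than 3 slave interfaces remain up.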

To define the critical number of slave interfaces explicitly, create and edit the following file:

$FWDIR/conf/cpha_bond_ls_config.conf

Each line of the file should be written in the following syntax:

<Name_of_Bond> <critical_minimal_number_of_slaves>

For example, if bond0 has 7 slave interfaces and bond1 has 6 slave interfaces, the file contents could be:

bond0 5

bond1 3

In this example:

bond0 is considered to be down when fewer than 5 of its slave interfaces remain up.

bond1 is considered to be down when fewer than 3 of its slave interfaces remain up.
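
One way to create the file with these contents is from Expert mode (a sketch; any text editor, such as vi, works equally well):

    printf 'bond0 5\nbond1 3\n' > $FWDIR/conf/cpha_bond_ls_config.conf

    cat $FWDIR/conf/cpha_bond_ls_config.conf

In a cluster configuration, create the file on each cluster member.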

Setting Affinities

If you are running SecureXL on a multi-core system, set the affinities manually after you define the bonds. Use the sim affinity -s command.

Note - The sim affinity commands take effect only if SecureXL is enabled and actually running. SecureXL begins running when you install a policy for the first time.
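
Before you set affinities, you can confirm that SecureXL is running and view the current interface affinities. Run these commands from Expert mode (exact output varies by version):

    fwaccel stat

    sim affinity -l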

For optimal performance, set affinities according to the following guidelines:

  1. Run sim affinity -s.
  2. Whenever possible, dedicate one processing core to each interface. See sk33520.
  3. If there are more interfaces than CPU cores, one or more CPU cores must handle two interfaces each. Use interface pairs that hold the same position in the internal and external bonds.
    1. To view interface positions in a bond, run:

      cat /proc/net/bonding/<bond name>

    2. Note the sequence of the interfaces in the output, and compare it between the two bonds (the external bond and its respective internal bond). Interfaces that appear in the same position in the two bonds are interface pairs; set each pair to be handled by one processing core (see the sample output at the end of this section).

    For example, you might have four processing cores (0-3) and six interfaces (0-5), distributed among two bonds:

    bond0    bond1
    eth0     eth3
    eth1     eth4
    eth2     eth5

    Two of the CPU cores will need to handle two interfaces each. An optimal configuration can be:

    bond0            bond1
    eth0 - core 0    eth3 - core 0
    eth1 - core 1    eth4 - core 1
    eth2 - core 2
                     eth5 - core 3
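
    For reference, the interface positions come from the order of the Slave Interface entries in the bonding driver output. This is a trimmed sketch of cat /proc/net/bonding/bond0 output (the field values are illustrative and vary by kernel version and bond mode):

      Bonding Mode: load balancing (xor)
      MII Status: up

      Slave Interface: eth0
      MII Status: up
      Speed: 1000 Mbps

      Slave Interface: eth1
      MII Status: up
      Speed: 1000 Mbps

      Slave Interface: eth2
      MII Status: up
      Speed: 1000 Mbps

    Here eth0, eth1, and eth2 hold positions 1, 2, and 3 in bond0, and are paired with the interfaces in the same positions in bond1 (eth3, eth4, and eth5).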