
Configuring Bond in High Availability Mode

This section explains how to configure High Availability on a bond interface. Run the CLI commands from the VSX Gateway (VS0) context. For a cluster configuration, run these commands on each VSX Cluster Member.

Use the active-backup value for the mode parameter to configure High Availability.

Configuring the High Availability Bond

This is a workflow of CLI commands to configure Link Aggregation in High Availability mode.

To configure the Link Aggregation in High Availability mode:

  1. Add the bonding group.
  2. Add slave interfaces to the bonding group.
  3. Make sure that the bond is configured correctly.
  4. Open SmartConsole and configure the cluster object.
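
A minimal Gaia Clish sketch of steps 1-3 above, assuming a hypothetical bonding group ID of 1 and slave interfaces eth2 and eth3 (substitute the group ID and interface names used in your environment):

add bonding group 1
set bonding group 1 mode active-backup
add bonding group 1 interface eth2
add bonding group 1 interface eth3
show bonding group 1
save config

The add bonding group command creates the bonding group, set bonding group 1 mode active-backup selects High Availability mode, the add bonding group 1 interface commands enslave the physical interfaces, and show bonding group 1 lets you verify the configuration before you save it.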

Updating the Interface Topology

When you update an existing configuration to use Link Aggregation, you must reconfigure the relevant objects to connect to the newly created bond. This includes Virtual Systems, Virtual Routers, and Virtual Switches. You can perform these actions in SmartConsole. In most cases, these definitions can be found in the object Properties window.

For large existing VSX deployments containing many Domain Management Servers and Virtual Devices, use the vsx_util change_interfaces command on the Management Server to reconfigure existing object topologies. For example, in a Multi-Domain Server deployment with 200 Domains, each with many Virtual Devices, it is faster to use vsx_util change_interfaces. This command automatically replaces the interface with the new bond on all relevant objects.

Reconfiguring the Bond

To configure the newly created bond for a cluster:

  1. Connect with SmartConsole to the Security Management Server or Main Domain Management Server used to manage the VSX Cluster.
  2. Delete the slave interfaces of the bond that you are not using:
    1. From the navigation tree, click Topology.
    2. From the navigation tree, click Physical Interfaces.
    3. Select the slave interface, and click Remove.
    4. Click OK.
    5. Do these steps again for all the slave interfaces.
  3. From Gaia Clish on each VSX Cluster Member, create the new bond interface.
  4. Connect with SmartConsole to the Security Management Server or Main Domain Management Server used to manage the VSX Cluster.
  5. From the Gateways & Servers view or Object Explorer, double-click the VSX Cluster object.
  6. From the left navigation tree, click Physical Interfaces.
  7. Click Add, and configure the bond interface.

    The Physical Interface Properties window opens.

    1. Enter the bond name.
    2. If the bond is a VLAN trunk, select VLAN Trunk.
    3. Click OK.
  8. From the left navigation tree, click Topology.
  9. Do these steps for each interface that you are adding to the bond:
    1. Double-click the interface.

      The Interface Properties window opens.

    2. From Interface, select the bond interface.
    3. Click OK.
  10. Install the VSX Policy (<Name of VSX Cluster Object>_VSX) on the VSX Cluster object.

Reconfiguring Topology with 'vsx_util change_interfaces'

Important - In a Multi-Domain Server environment, all Domain Management Servers must be unlocked for this operation to succeed. This means that you must disconnect all SmartConsole clients from all Domain Management Servers.

To reconfigure objects with vsx_util change_interfaces:

  1. Close SmartConsole windows for the Security Management Server and all Domain Management Servers that use the designated interface.
  2. Connect to the command line on the Management Server.
  3. Log in to the Expert mode.
  4. Run the vsx_util change_interfaces command and follow the on-screen instructions:
    1. Enter the IP address of the Security Management Server or Main Domain Management Server.
    2. Enter the management administrator name and password.
    3. Select the VSX Cluster object.
    4. Select Apply changes to the management database and to the VSX Gateway/Cluster members immediately.
    5. When prompted, select the interface to be replaced.
    6. When prompted, select the replacement bond interface.
    7. If you wish to replace additional interfaces, enter "y" when prompted and repeat the above steps.
    8. To complete the process, enter "n".
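
For example, on the Management Server you might start the utility from the Expert mode like this (the prompt is illustrative; the utility then asks interactively for the items listed above):

[Expert@MGMT:0]# vsx_util change_interfaces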

Configuring Load Sharing Mode

This section explains how to configure Load Sharing on a bond interface. Run the CLI commands from the VSX Gateway (VS0) context. For a cluster configuration, run these commands on each VSX Cluster Member.

Configure one of these Load Sharing modes for the bond interface: 802.3ad (LACP) or XOR.

Configuring the Load Sharing Bond

This is a workflow of CLI commands to configure Link Aggregation in Load Sharing mode.

To configure the Link Aggregation in Load Sharing mode:

  1. Add the bonding group.
  2. Add slave interfaces to the bonding group.
  3. Define the number of critical interfaces.
  4. For configurations that use Performance Pack, configure the core affinities.
  5. Make sure that the bond is configured correctly.
  6. Open SmartConsole and configure the VSX Cluster object.
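
A minimal Gaia Clish sketch of steps 1, 2, and 5 above, assuming a hypothetical bonding group ID of 2 in 802.3ad mode with slave interfaces eth4 and eth5 (substitute the group ID, interface names, and mode used in your environment):

add bonding group 2
set bonding group 2 mode 8023AD
add bonding group 2 interface eth4
add bonding group 2 interface eth5
show bonding group 2
save config

Here 8023AD selects LACP-based Load Sharing; use xor instead for XOR mode. The remaining steps (critical interfaces, core affinities, and the SmartConsole configuration) are covered in the sections that follow.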

Setting Critical Required Interfaces

Note - The Critical Required Interfaces feature is supported for ClusterXL only.

A bond in Load Sharing mode is considered down when fewer than the critical minimum number of slave interfaces remain up. When not explicitly defined, the critical minimum number of slave interfaces that must remain up in a bond of n interfaces is n-1. Failure of an additional slave interface (so that only n-2 slave interfaces remain up) causes the entire bond interface to be considered down, even if the bond contains more than two slave interfaces.

If a smaller number of slave interfaces can handle the expected traffic, you can increase redundancy by explicitly defining the critical minimum number of slave interfaces. To determine an appropriate value, divide your maximum expected traffic speed by the speed of your slave interfaces and round up to a whole number. For example, for 25 Gbps of expected traffic over 10 Gbps slave interfaces, the critical minimum is 3.

To define the critical number of slave interfaces explicitly, create and edit the following file:

$FWDIR/conf/cpha_bond_ls_config.conf

Each line of the file should be written in the following syntax:

<Name_of_Bond> <Critical_Minimal_Number_of_Slaves>

For example, if bond0 has 7 slave interfaces and bond1 has 6 slave interfaces, the file contents could be:

bond0 5

bond1 3

In this example:

bond0 is considered down when fewer than 5 of its 7 slave interfaces are up.

bond1 is considered down when fewer than 3 of its 6 slave interfaces are up.

Setting Affinities

If you are running Performance Pack on a multi-core system, set affinities manually after you define the bonds. Use the sim affinity -s command.

Note - The sim affinity commands take effect only if the Performance Pack is enabled and actually running. Performance Pack begins running when you install a Policy for the first time.

For optimal performance, set affinities according to the following guidelines:

  1. Run sim affinity -s.
  2. Whenever possible, dedicate one processing core to each interface. See sk33520.
  3. If there are more interfaces than CPU cores, one or more CPU cores must each handle two interfaces. Pair interfaces that hold the same position in the external bond and its respective internal bond.
    1. To view interface positions in a bond, run:

      cat /proc/net/bonding/<bond name>

    2. Note the sequence of the interfaces in the output, and compare it for the two bonds (the external bond and its respective internal bond). Interfaces that appear in the same position in the two bonds are interface pairs and should be set to be handled by one processing core. A trimmed sketch of this output appears after the example below.

    For example, you might have four processing cores (0-3) and six interfaces (0-5), distributed among two bonds:

    bond0    bond1
    eth0     eth3
    eth1     eth4
    eth2     eth5

    Two of the CPU cores will need to handle two interfaces each. An optimal configuration can be:

    bond0    Core      bond1    Core
    eth0     core 0    eth3     core 0
    eth1     core 1    eth4     core 1
    eth2     core 2    eth5     core 3
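
    For reference, a trimmed sketch of what the slave ordering looks like in the cat /proc/net/bonding/bond0 output (the exact fields vary by release); the order in which the Slave Interface entries appear is the position to compare between the two bonds:

    Bonding Mode: IEEE 802.3ad Dynamic link aggregation
    ...
    Slave Interface: eth0
    MII Status: up
    ...
    Slave Interface: eth1
    MII Status: up
    ...
    Slave Interface: eth2
    MII Status: up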