Multi-Queue

In This Section:

Introduction to Multiple Traffic Queues

Basic Multi-Queue Configuration

Multi-Queue Administration

Advanced Multi-Queue settings

Special Scenarios and Configurations

Troubleshooting

This section describes what Multi-Queue is, when it is needed, and how to configure and troubleshoot it.

Introduction to Multiple Traffic Queues

By default, each network interface has one traffic queue handled by one CPU. You cannot use more CPUs for acceleration than the number of interfaces handling traffic. Multi-Queue lets you configure more than one traffic queue for each network interface. For each interface, more than one CPU is used for acceleration.

Multi-Queue Requirements and Limitations

  • Multi-Queue is not supported on single core computers.
  • Network interfaces must support Multi-Queue.
  • The number of queues is limited by the number of CPUs and the type of interface driver:

Driver type   Maximum number of rx queues
igb           4
ixgbe         16

Deciding if Multi-Queue is needed

This section will help you decide if you can benefit from configuring Multi-Queue. We recommend that you do these steps before configuring Multi-Queue:

  • Make sure that SecureXL is enabled
  • Examine the CPU roles allocation
  • Examine CPU Utilization
  • Decide if more CPUs can be allocated to the SND
  • Make sure that network interfaces support Multi-Queue

Making sure that SecureXL is enabled

  1. On the Security Gateway, run: fwaccel stat
  2. Examine the Accelerator Status value:
[Expert@gw-30123d:0]# fwaccel stat
Accelerator Status : on
Accept Templates   : enabled
Drop Templates     : disabled
NAT Templates      : disabled by user
 
Accelerator Features : Accounting, NAT, Cryptography, Routing,
                       HasClock, Templates, Synchronous, IdleDetection,
                       Sequencing, TcpStateDetect, AutoExpire,
                       DelayedNotif, TcpStateDetectV2, CPLS, WireMode,
                       DropTemplates, NatTemplates, Streaming,
                       MultiFW, AntiSpoofing, DoS Defender, ViolationStats,
                       Nac, AsychronicNotif, ERDOS
Cryptography Features : Tunnel, UDPEncapsulation, MD5, SHA1, NULL,
                        3DES, DES, CAST, CAST-40, AES-128, AES-256,
                        ESP, LinkSelection, DynamicVPN, NatTraversal,
                        EncRouting, AES-XCBC, SHA256
 

SecureXL is enabled if the value of the Accelerator Status field is on.

Note - Multi-Queue is relevant only if SecureXL is enabled.

Examining the CPU roles allocation

To see the CPU roles allocation, run: fw ctl affinity -l

This command shows the CPU affinity of the interfaces, which determines the SND CPUs, and the CPU affinity of the CoreXL firewall instances. For example, if you run the command on a Security Gateway:

[Expert@gw-30123d:0]# fw ctl affinity -l
Mgmt: CPU 0
eth1-05: CPU 0
eth1-06: CPU 1
fw_0: CPU 5
fw_1: CPU 4
fw_2: CPU 3
fw_3: CPU 2
 

In this example:

  • The SND is running on CPU 0 and CPU 1
  • CoreXL firewall instances are running on CPUs 2-5

If you run the command on a VSX gateway:

[Expert@gw-30123d:0]# fw ctl affinity -l
Mgmt: CPU 0
eth1-05: CPU 0
eth1-06: CPU 1
VS_0 fwk: CPU 2 3 4 5
VS_1 fwk: CPU 2 3 4 5
 

In this example:

  • The SND is running on CPU 0-1
  • CoreXL firewall instances (as part of the fwk processes) of all the Virtual Systems are running on CPUs 2-5.

Examining CPU Utilization

  1. On the Security Gateway, run: top.
  2. Press 1 to toggle the SMP view.

    This shows the usage and idle percentage for each CPU. For example:
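    The per-CPU lines below are an illustrative sketch only (hypothetical values that match the allocation shown in the fw ctl affinity example above):

Cpu0  : 20.3%us, 10.1%sy,  0.0%ni, 29.8%id,  0.0%wa,  1.2%hi, 38.6%si,  0.0%st
Cpu1  : 21.0%us,  9.4%sy,  0.0%ni, 30.4%id,  0.0%wa,  1.0%hi, 38.2%si,  0.0%st
Cpu2  : 22.5%us,  6.8%sy,  0.0%ni, 70.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  : 21.9%us,  7.1%sy,  0.0%ni, 71.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  : 23.0%us,  6.5%sy,  0.0%ni, 70.5%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  : 22.2%us,  7.3%sy,  0.0%ni, 70.5%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st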

In this example:

  • SND CPUs (CPU0 and CPU1) are approximately 30% idle
  • CoreXL firewall instances CPUs are approximately 70% idle

Deciding if more CPUs can be allocated to the SND

If you have more network interfaces handling traffic than CPUs assigned to the SND , you can allocate more CPUs for SND. For example, if you have the following network interfaces:

  • eth1-04 – connected to an internal network
  • eth1-05 – connected to an internal network
  • eth1-06 – connected to the DMZ
  • eth1-07 – connected to the external network

And running fw ctl affinity -l shows this IRQ affinity:

[Expert@gw-30123d:0]# fw ctl affinity -l
Mgmt: CPU 0
eth1-04: CPU 1
eth1-05: CPU 0
eth1-06: CPU 1
eth1-07: CPU 0
fw_0: CPU 5
fw_1: CPU 4
fw_2: CPU 3
fw_3: CPU 2
 

You can use the Sim affinity utility to change an interface's IRQ affinity to use more CPUs for the SND. You can do this:

  • Even before the Multi-Queue feature is activated
  • If you have more network interfaces handling traffic than CPUs assigned to the SND
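For example, with the interface layout above you could switch interface affinity to manual mode and then confirm that more CPUs now serve the SND. This is only a sketch: sim affinity -s is the manual-affinity command described in Multi-Queue Administration below, and the assumption here is that it prompts you to assign each interface to CPUs.

sim affinity -s
fw ctl affinity -l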

Making sure that the network interfaces support Multi-Queue

Multi-Queue is supported only on network cards that use igb (1Gb) or ixgbe (10Gb) drivers. Before upgrading these drivers, make sure that the latest version supports Multi-Queue.

Gateway type          Expansion Card Model

Security Appliance    Multi-Queue is supported on these expansion cards for
                      4000, 12000, and 21000 appliances:
                      • CPAC-ACC-4-1C
                      • CPAC-ACC-4-1F
                      • CPAC-ACC-8-1C
                      • CPAC-ACC-2-10F
                      • CPAC-ACC-4-10F

IP appliance          The XMC 1Gb card is supported on:
                      • IP1280
                      • IP2450

Open server           Network cards that use igb (1Gb) or ixgbe (10Gb) drivers

  • To view which driver an interface is using, run: ethtool -i <interface name> (as shown in the example below).
  • When installing a new interface that uses the igb or ixgbe driver, run: cpmq reconfigure and reboot.
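For example, to confirm the driver of an interface (the interface name and the version strings below are hypothetical):

[Expert@gw-30123d:0]# ethtool -i eth1-05
driver: igb
version: 2.1.0-k2
firmware-version: 3.2-9
bus-info: 0000:0b:00.0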

Recommendation

We recommend configuring Multi-Queue when:

  • The CPU load for the SND is high (idle is less than 20%), and
  • The CPU load for the CoreXL firewall instances is low (idle is greater than 50%), and
  • You cannot assign more CPUs to the SND by changing the interface IRQ affinity

Basic Multi-Queue Configuration

The cpmq utility is used to view or change the current Multi-Queue configuration.

Configuring Multi-Queue

The cpmq set command lets you configure Multi-Queue on supported interfaces.

To configure Multi-Queue:

  • On the gateway, run: cpmq set

    This command:

    • Shows all supported interfaces that are active
    • Lets you change the Multi-Queue configuration for each interface.

    Network interfaces that are down are not in the output.

    Note -

    • Multi-Queue lets you configure a maximum of five interfaces
    • You must reboot the gateway after changing the Multi-Queue configuration

Querying the current Multi-Queue configuration

The cpmq get command shows the Multi-Queue status of supported interfaces.

To see the Multi-Queue configuration:

Run: cpmq get [-a]

The -a option shows the Multi-Queue configuration for all supported interfaces (both active and inactive). For example:

[Expert@gw-30123d:0]# cpmq get -a
 
Active igb interfaces:
eth1-05 [On]
eth1-06 [Off]
eth1-01 [Off]
eth1-03 [Off]
eth1-04 [On]
 
Non active igb interfaces:
eth1-02 [Off]
 

Status messages

Status        Meaning

On            Multi-Queue is enabled on the interface.

Off           Multi-Queue is disabled on the interface.

Pending On    Multi-Queue is currently disabled. Multi-Queue will be enabled on this interface only after you reboot the gateway.
              Note - Pending On can also indicate a bad configuration or system errors. For more information, see Troubleshooting.

Pending Off   Multi-Queue is currently enabled. Multi-Queue will be disabled on this interface only after you reboot the gateway.

In this example:

  • Two interfaces are up with Multi-Queue enabled

    (eth1-05, eth1-04)

  • Three interfaces are up with Multi-Queue disabled

    (eth1-06, eth1-01, eth1-03)

  • One interface that supports Multi-Queue is down

    (eth1-02)

Running the command without the -a option shows the active interfaces only.

Multi-Queue Administration

There are two main roles for CPUs applicable to SecureXL and CoreXL:

  • SecureXL and CoreXL dispatcher CPU (the SND - Secure Network Distributor)

    You can manually configure this using the sim affinity -s command.

  • CoreXL firewall instance CPU

    You can manually configure this using the fw ctl affinity command.

For best performance, the same CPU should not work in both roles. During installation, a default CPU role configuration is set. For example, on a twelve core computer, the two CPUs with the lowest CPU ID are set as SNDs and the ten CPUs with the highest CPU IDs are set as CoreXL firewall instances.
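On such a twelve core computer, fw ctl affinity -l would show output similar to this sketch (the interface names are hypothetical and the exact output depends on your configuration):

Mgmt: CPU 0
eth1-05: CPU 0
eth1-06: CPU 1
fw_0: CPU 11
fw_1: CPU 10
fw_2: CPU 9
fw_3: CPU 8
fw_4: CPU 7
fw_5: CPU 6
fw_6: CPU 5
fw_7: CPU 4
fw_8: CPU 3
fw_9: CPU 2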

Without Multi-Queue, the number of CPUs allocated to the SND is limited by the number of network interfaces handling the traffic. Since each interface has one traffic queue, each queue can be handled by only one CPU at a time. This means that the SND can use only one CPU at a time per network interface.

When most of the traffic is accelerated, the CPU load for SND can be very high while the CPU load for CoreXL firewall instances can be very low. This is an inefficient utilization of CPU capacity.

Multi-Queue lets you configure more than one traffic queue for each supported network interface, so that more than one SND CPU can handle the traffic of a single network interface at a time. This balances the load efficiently between SND CPUs and CoreXL firewall instances CPUs.

Advanced Multi-Queue settings

Advanced Multi-Queue settings include:

  • Controlling the number of queues
  • IRQ Affinity
  • Viewing CPU Utilization

Controlling the number of queues

Controlling the number of queues depends on the driver type:

Driver type   Queues                                                         Recommended number of rx queues

ixgbe         When configuring Multi-Queue for an ixgbe interface, an        16
              RxTx queue is created per CPU. You can control the number
              of active rx queues using rx_num. All tx queues are active.

igb           When configuring Multi-Queue for an igb interface, the         4
              number of tx and rx queues is calculated from the number
              of active rx queues.

  • By default on a Security Gateway, the number of active rx queues is calculated by:

    active rx queues = Number of CPUs – number of CoreXL firewall instances

  • By default on a VSX gateway, the number of active rx queues is calculated by:

    active rx queues = the lowest CPU ID that an fwk process is assigned to
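For example, on a Security Gateway with 6 CPUs and 4 CoreXL firewall instances, the default is 6 - 4 = 2 active rx queues. On the VSX gateway shown earlier, where the fwk processes run on CPUs 2-5, the default is also 2, because the lowest CPU ID assigned to an fwk process is 2.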

To control the number of active rx queues:

Run: cpmq set rx_num <igb/ixgbe> <number of active rx queues>

This command overrides the default value.

To view the number of active rx queues:

Run: cpmq get rx_num <igb/ixgbe>

To return to the recommended number of rx queues:

Run: cpmq set rx_num <igb/ixgbe> default

Note - On a Security Gateway, the number of active rx queues changes automatically when you change the number of CoreXL firewall instances (using cpconfig). The number of active queues does not change automatically if you configured the number of rx queues manually.
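For example, a possible sequence for igb interfaces (the value is illustrative):

cpmq set rx_num igb 2
cpmq get rx_num igb
cpmq set rx_num igb default

Reboot the gateway for a change in the number of rx queues to take effect.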

IRQ Affinity

The IRQ affinity of the queues is set automatically when the operating system boots, as shown (rx_num set to 3):

rxtx-0 -> CPU 0

rxtx-1 -> CPU 1

rxtx-2 -> CPU 2

and so on. This is also true when the rx and tx queues are assigned separate IRQs:

rx-0 -> CPU 0

tx-0 -> CPU 0

rx-1 -> CPU 1

tx-1 -> CPU 1

and so on.

  • You cannot use the sim affinity or the fw ctl affinity commands to change or query the IRQ affinity of Multi-Queue interfaces.
  • You can reset the affinity of Multi-Queue IRQs by running: cpmq set affinity
  • You can view the affinity of Multi-Queue IRQs by running: cpmq get -v

    Important - Do not change the IRQ affinity of queues manually. Changing the IRQ affinity of the queues manually can affect performance.

Viewing CPU Utilization

  1. Find the CPUs assigned to Multi-Queue IRQs by running: cpmq get -v. For example:
[Expert@gw-30123d:0]# cpmq get -v
 
Active igb interfaces:
eth1-05 [On]
eth1-06 [Off]
eth1-01 [Off]
eth1-03 [Off]
eth1-04 [On]
 
multi-queue affinity for igb interfaces:
 
eth1-05:
 
  
 
      irq     |       cpu     |       queue
-----------------------------------------------------
        178             0                TxRx-0
        186             1                TxRx-1
 
eth1-04:
 
        irq     |       cpu     |       queue
-----------------------------------------------------
        123             0                TxRx-0
        131             1                TxRx-1
 

In this example:

  • Multi-Queue is enabled on two igb interfaces (eth1-05 and eth1-04)
  • The number of active rx queues is configured to 2 (for igb, the number of queues is calculated by the number of active rx queues).
  • The IRQs for both interfaces are assigned to CPUs 0-1.
  2. Run: top
  3. Press 1 to toggle to the SMP view.

    In the above example, CPU utilization of Multi-Queue CPUs is approximately 50%, as CPU0 and CPU1 are handling the queues (as shown in step 1).

Adding more Interfaces

Due to IRQ limitations, you can configure a maximum of five interfaces with Multi-Queue.

To add more interfaces, run: cpmq set -f

Special Scenarios and Configurations

  • In Security Gateway mode: Changing the number of CoreXL firewall instances when Multi-Queue is enabled on some or all interfaces

    For best performance, the default number of active rx queues is calculated by:

    Number of active rx queues = number of CPUs – number of CoreXL firewall instances

    This configuration is set automatically when configuring Multi-Queue. When changing the number of instances, the number of active rx queues will change automatically if it was not set manually.

  • In VSX mode: changing the number of CPUs that the fwk processes are assigned to

    The default number of active rx queues is calculated by:

    Number of active rx queues = the lowest CPU ID that an fwk process is assigned to

    For example:

[Expert@gw-30123d:0]# fw ctl affinity -l
Mgmt: CPU 0
eth1-05: CPU 0
eth1-06: CPU 1
VS_0 fwk: CPU 2 3 4 5
VS_1 fwk: CPU 2 3 4 5
 

In this example:

  • The number of active rx queues is set to 2.
  • This configuration is set automatically when configuring Multi-Queue.
  • It will not automatically update when changing the affinity of the Virtual System. When changing the affinity of the Virtual System, make sure to follow the instructions in Advanced Multi-Queue settings.

The effects of changing the status of a Multi-Queue enabled interface

  • Changing the status to DOWN

    The Multi-Queue configuration is saved when you change the status of an interface to down.

    Since the number of interfaces with Multi-Queue enabled is limited to five, you may need to disable Multi-Queue on an interface after changing its status to down to enable Multi-Queue on other interfaces.

  • To disable Multi-Queue on non-active interfaces (see the sketch after this list):
    1. Activate an interface.
    2. Disable the Multi-Queue using the cpmq set command.
    3. Deactivate the interface.
  • Changing the status to UP

    You must reset the IRQ affinity for Multi-Queue interfaces if, in this order, you:

    • Enabled Multi-Queue on the interface
    • Changed the status of the interface to down
    • Rebooted the gateway
    • Changed the interface status to up.

    This problem does not occur if you are running automatic sim affinity (sim affinity -a). Automatic sim affinity runs by default, and has to be manually canceled using the sim affinity -s command.

    To set the static affinity of Multi-Queue interfaces again, run: cpmq set affinity.
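A minimal sketch of the disable procedure referenced above, assuming a hypothetical non-active interface eth1-02 and Gaia clish commands for changing the interface state:

clish -c "set interface eth1-02 state on"
cpmq set
clish -c "set interface eth1-02 state off"

In the cpmq set step, disable Multi-Queue for eth1-02 when the interface is listed.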

Adding a network interface

  • When you add a network interface card that uses the igb or ixgbe driver, the Multi-Queue configuration can change because of interface indexing. After adding such a card, configure Multi-Queue again or run: cpmq reconfigure.
  • If a reconfiguration change is required, you will be prompted to reboot the computer.

Changing the affinity of CoreXL firewall instances 

  • For best performance, we recommend that you do not assign both SND and a CoreXL firewall instance to the same CPU.
  • When changing the affinity of the CoreXL firewall instances to a CPU assigned with one of the Multi-Queue queues, we recommend that you reconfigure the number of active rx queues following this rule:

    Active rx queues = the lowest CPU number that a CoreXL firewall instance is assigned to

  • You can configure the number of active rx queues by running:

    cpmq set rx_num <igb/ixgbe> <value/default>
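For example, if you move the CoreXL firewall instances so that the lowest CPU assigned to an instance is CPU 3, you could align the igb queues with that layout (an illustrative value):

cpmq set rx_num igb 3

Reboot the gateway for the change to take effect.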

Troubleshooting

  • After reboot, the wrong interfaces are configured for Multi-Queue

    This can happen after changing the physical interfaces on the gateway. To solve this issue:

    1. Run: cpmq reconfigure
    2. Reboot.

    Or configure Multi-Queue again.

  • After configuring Multi-Queue and rebooting the gateway, some of the configured interfaces are shown as down. These interfaces were up before the gateway reboot. The cpmq get -a command shows the interface status as Pending On.

    This can happen when not enough IRQs are available on the gateway. To resolve this issue, do one of the following:

    • Disable some of the interfaces configured for Multi-Queue
    • Manually reduce the number of active rx queues (rx_num) using the cpmq set rx_num command, and reboot the gateway
  • When changing the status of interfaces, all the interface IRQs are assigned to CPU 0 or to all of the CPUs

    This can happen when an interface status is changed to UP after the automatic affinity procedure runs (the affinity procedure runs automatically during boot).

    To solve this issue, run: cpmq set affinity

    This problem does not occur if you are running automatic sim affinity (sim affinity -a). Automatic sim affinity runs by default, and has to be manually canceled using the sim affinity -s command.

  • In VSX mode, an fwk process runs on the same CPU as some of the interface queues

    This can happen when the affinity of the Virtual System was manually changed but Multi-Queue was not reconfigured accordingly.

    To solve this issue, configure the number of active rx queues manually or run: cpmq reconfigure and reboot.

  • In Security Gateway mode: after changing the number of CoreXL firewall instances, Multi-Queue is disabled on all interfaces

    When changing the number of CoreXL firewall instances, the number of active rx queues automatically changes according to this rule (if not configured manually):

    Active rx queues = Number of CPUs – number of CoreXL firewall instances

    If the number of instances is equal to the number of CPUs, or if the difference between the number of CPUs and the number of CoreXL firewall instances is 1, Multi-Queue will be disabled. To solve this issue, configure the number of active rx queues manually by running:

    cpmq set rx_num <igb/ixgbe> <value>

 