CoreXL Administration

CoreXL is a performance-enhancing technology for Security Gateways on multi-core processing platforms. CoreXL enhances Security Gateway performance by enabling the processing cores to concurrently perform multiple tasks.

CoreXL provides almost linear scalability of performance, according to the number of processing cores on a single machine. The increase in performance is achieved without requiring any changes to management or to network topology.

CoreXL joins ClusterXL Load Sharing and SecureXL as part of Check Point's fully complementary family of traffic acceleration technologies.

In a Security Gateway with CoreXL enabled, the Firewall kernel is replicated multiple times. Each replicated copy, or instance, runs on one processing core. These instances handle traffic concurrently, and each instance is a complete and independent inspection kernel. When CoreXL is enabled, all the kernel instances in the Security Gateway process traffic through the same interfaces and apply the same security policy.

Related Topics

Supported Platforms and Unsupported Features

Default Configuration

CoreXL for IPv6

Configuring IPv4 and IPv6 Firewall Instances

Performance Tuning

Configuring CoreXL

Command Line Reference

Supported Platforms and Unsupported Features

CoreXL is supported on these platforms:

  • SecurePlatform
  • Gaia
  • IPSO
  • Crossbeam platforms

Unsupported Features:

CoreXL does not support these Check Point Suite features:

  • Check Point QoS (Quality of Service)
  • Route-based VPN
  • IPv6 on IPSO
  • Overlapping NAT

To enable a non-supported feature in the Check Point Suite, disable CoreXL using cpconfig and reboot the gateway (see Configuring CoreXL).

Default Configuration

When you enable CoreXL, the number of kernel instances is based on the total number of CPU cores:

Number of Cores    Number of Kernel Instances
1                  1
2                  2
4                  3
8                  6
12                 10
More than 12       Number of cores, minus 2

The default affinity setting for all interfaces is Automatic when Performance Pack is installed (see Processing Core Allocation). Traffic from all interfaces is directed to the core running the Secure Network Distributor (SND).
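To see how interfaces, daemons, and kernel instances are distributed across the processing cores on a running gateway, you can list the affinities per core (the command is described in the Command Line Reference):

fw ctl affinity -l -r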

CoreXL for IPv6

R76 supports multiple cores for IPv6 traffic. For each firewall kernel instance that works with IPv4 traffic, there is a corresponding firewall kernel instance that also works with IPv6 traffic. Both instances run on the same core.

To check the status of CoreXL on your Security Gateway, run:
fw6 ctl multik stat

The fw6 ctl multik stat (multi-kernel statistics) command shows IPv6 information for each kernel instance. The state and processing core number of each instance are displayed, along with:

  • The number of connections currently running.
  • The peak number of concurrent connections the instance has used since its inception.
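The IPv4 and IPv6 instances can be checked side by side with the two multi-kernel statistics commands described in the Command Line Reference:

fw ctl multik stat
fw6 ctl multik stat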

Configuring IPv4 and IPv6 Firewall Instances

After IPv6 support is enabled on the gateway, you can configure the gateway's processing cores to run different combinations of IPv4 and IPv6 firewall kernel instances.

  • The number of IPv4 instances ranges from a minimum of two to a number equal to the maximum number of cores on the gateway.

  • The number of IPv6 instances ranges from a minimum of two to a number equal to the number of IPv4 instances; it cannot exceed the number of IPv4 instances.

    By default, the number of IPv6 firewall instances is set to two.

To configure the number of IPv6 firewall instances:

  1. From a command line on the gateway, run: cpconfig.

    The configuration menu shows.

  2. Enter option 8: Configure Check Point CoreXL.
Configure Check Point CoreXL...
===============================
CoreXL is currently enabled with 3 firewall instances and 2 IPv6 firewall instances.
 
(1) Change the number of firewall instances
(2) Change the number of IPv6 firewall instances
(3) Disable Check Point CoreXL
 
(4) Exit 

The Configuring Check Point CoreXL menu shows how many IPv4 and IPv6 firewall instances are running on the processing cores.

  3. Enter option 2: Change the number of IPv6 firewall instances.

    The menu shows how many cores are available on the gateway.

  4. Enter the total number of IPv6 firewall instances to run.

    You can only select a number from within the range shown.

  5. Reboot the gateway.

Example:

A gateway that has four cores and is running three IPv4 instances of the firewall kernel and two IPv6 instances of the firewall kernel can be represented like this:

Core     Firewall instances     IPv6 Firewall instances
CPU 0    -                      -
CPU 1    fw4_2                  -
CPU 2    fw4_1                  fw6_1
CPU 3    fw4_0                  fw6_0
         3 instances of IPv4    2 instances of IPv6

  • The minimum allowed number of IPv4 instances is two and the maximum is four.
  • The minimum allowed number of IPv6 instances is two and the maximum is three (the current number of IPv4 instances).

To increase the number of IPv6 instances to four, you must first increase the number of IPv4 firewall instances to the maximum of four:

How many firewall instances would you like to enable (2 to 4)[3] ? 4
 
CoreXL was enabled successfully with 4 firewall instances.
Important: This change will take effect after reboot. 

The gateway now looks like this:

Core     Firewall instances     IPv6 Firewall instances
CPU 0    fw4_3                  -
CPU 1    fw4_2                  -
CPU 2    fw4_1                  fw6_1
CPU 3    fw4_0                  fw6_0
         4 instances of IPv4    2 instances of IPv6

Increase the number of IPv6 instances to four:

How many IPv6 firewall instances would you like to enable (2 to 4)[2] ? 4
 
CoreXL was enabled successfully with 4 IPv6 firewall instances.
Important: This change will take effect after reboot. 

The gateway now looks like this:

Core     Firewall instances     IPv6 Firewall instances
CPU 0    fw4_3                  fw6_3
CPU 1    fw4_2                  fw6_2
CPU 2    fw4_1                  fw6_1
CPU 3    fw4_0                  fw6_0
         4 instances of IPv4    4 instances of IPv6

Performance Tuning

The following sections are relevant only for SecurePlatform.

Processing Core Allocation

The CoreXL software architecture includes the Secure Network Distributor (SND). The SND is responsible for:

  • Processing incoming traffic from the network interfaces
  • Securely accelerating authorized packets (if Performance Pack is running)
  • Distributing non-accelerated packets among kernel instances.

Traffic entering network interface cards (NICs) is directed to a processing core running the SND. The association of a particular interface with a processing core is called the interface's affinity with that core. This affinity causes the interface's traffic to be directed to that core and the SND to run on that core. Setting a kernel instance or a process to run on a particular core is called the instance's or process's affinity with that core.

The default affinity setting for all interfaces is Automatic. Automatic affinity means that if Performance Pack is running, the affinity for each interface is automatically reset every 60 seconds, and balanced between available cores. If Performance Pack is not running, the default affinities of all interfaces are with one available core. In both cases, any processing core running a kernel instance, or defined as the affinity for another process, is considered unavailable and will not be set as the affinity for any interface.

In some cases, which are discussed in the following sections, it may be advisable to change the distribution of kernel instances, the SND, and other processes among the processing cores. This is done by changing the affinities of different NICs (interfaces) and/or processes. However, to ensure CoreXL's efficiency, all interface traffic must be directed to cores not running kernel instances. Therefore, if you change the affinities of interfaces or other processes, you must set the number of kernel instances accordingly and ensure that the instances run on other processing cores.

Under normal circumstances, it is not recommended for the SND and an instance to share a core. However, it is necessary for the SND and an instance to share a core when using a machine with exactly two cores.

Allocating Processing Cores

In certain cases, it may be advisable to change the distribution of kernel instances, the SND, and other processes, among the processing cores. This section discusses these cases.

Before planning core allocation, make sure you have read Processing Core Allocation.

Adding Processing Cores to the Hardware

Increasing the number of processing cores on the hardware platform does not automatically increase the number of kernel instances. If the number of kernel instances is not increased, CoreXL does not utilize some of the processing cores. After upgrading the hardware, increase the number of kernel instances using cpconfig.

Reinstalling the gateway will change the number of kernel instances if you have upgraded the hardware to an increased number of processing cores, or if the number of processing cores stays the same but the number of kernel instances was previously manually changed from the default. Use cpconfig to reconfigure the number of kernel instances.

In a clustered deployment, changing the number of kernel instances (such as by reinstalling CoreXL) should be treated as a version upgrade. Follow the instructions in the R76 Installation and Upgrade Guide, in the "Upgrading ClusterXL Deployments" chapter, and perform either a Minimal Effort Upgrade (using network downtime) or a Zero Downtime Upgrade (no downtime, but active connections may be lost), substituting the instance number change for the version upgrade in the procedure. A Full Connectivity Upgrade cannot be performed when changing the number of kernel instances in a clustered environment.

Allocating an Additional Core to the SND

In some cases, the default configuration of instances and the SND will not be optimal. If the SND is slowing the traffic, and your platform contains enough cores that you can afford to reduce the number of kernel instances, you may want to allocate an additional core to the SND. This is likely to occur especially if much of the traffic is of the type accelerated by Performance Pack; in a ClusterXL Load Sharing deployment; or if IPS features are disabled. In any of these cases, the task load of the SND may be disproportionate to that of the kernel instances.

To check if the SND is slowing down the traffic:

  1. Identify the processing core to which the interfaces are directing traffic using fw ctl affinity -l -r.
  2. Under heavy traffic conditions, run the top command on the CoreXL gateway and check the values for the different cores under the 'idle' column.
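A minimal way to read per-core idle values, assuming the standard Linux top utility used on SecurePlatform: run top, press 1 to toggle the per-CPU display (one Cpu line per core), and read the id value on each line as that core's idle percentage.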

It is recommended to allocate an additional core to the SND only if all of the following conditions are met:

  • Your platform has at least eight processing cores.
  • The 'idle' value for the core currently running the SND is in the 0%-5% range.
  • The sum of the 'idle' values for the cores running kernel instances is significantly higher than 100%.

If any of the above conditions are not met, the default configuration of one processing core allocated to the SND is sufficient, and no further configuration is necessary.
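For example (the numbers here are hypothetical), on an eight-core platform running six kernel instances, if the core running the SND shows 3% idle while the six instance cores each show about 55% idle (a sum of roughly 330%), all three conditions are met and allocating a second core to the SND is likely to help. If the instance cores were themselves near 0% idle, the kernel instances, not the SND, would be the bottleneck, and adding a core to the SND would not help.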

Allocating an additional processing core to the SND requires these steps, performed in the order shown:

  1. Reduce the number of kernel instances using cpconfig.
  2. Set interface affinities to the remaining cores, as detailed below.
  3. Reboot to implement the new configuration.

Setting Interface Affinities

Check which cores are running the kernel instances. See also Allocating Processing Cores. Allocate the remaining cores to the SND by setting interface affinities to the cores. The correct method of defining interface affinities depends on whether or not Performance Pack is running, as described in the following sections.

  • When Performance Pack is Running

    If Performance Pack is running, interface affinities are handled by using Performance Pack's sim affinity command.

    The default sim affinity setting is Automatic. In Performance Pack's Automatic mode, interface affinities are automatically distributed among cores that are not running kernel instances and that are not set as the affinity for any other process.

    In most cases, you do not need to change the sim affinity setting.

  • Setting Interface Affinities when Performance Pack is not Running

    If Performance Pack is not running, interface affinities are loaded at boot from a configuration text file called fwaffinity.conf, located in $FWDIR/conf. In the text file, lines beginning with the letter i define interface affinities.

    If Performance Pack is running, interface affinities are defined by sim affinity settings, and lines beginning with i in fwaffinity.conf are ignored.

    If you are allocating only one processing core to the SND, it is best to have that core selected automatically by leaving the default interface affinity set to automatic, and having no explicit core affinities for any interfaces. To do this, make sure fwaffinity.conf contains the following line:

    i default auto

    In addition, make sure that fwaffinity.conf contains no other lines beginning with i, so that no explicit interface affinities are defined. All interface traffic will be directed to the remaining core.

    If you are allocating two processing cores to the SND, you need to explicitly set interface affinities to the two remaining cores. If you have multiple interfaces, you need to decide which interfaces to set for each of the two cores. Try to achieve a balance of expected traffic between the cores (you can later check the balance by using the top command).

To explicitly set interface affinities, when Performance Pack is not running:

  1. Set the affinity for each interface by editing fwaffinity.conf. The file should contain one line beginning with i for each interface, using this syntax:

    i <interfacename> <cpuid>

    where <interfacename> is the interface name, and <cpuid> is the number of the processing core to be set as the affinity of that interface.

    For example, if you want the traffic from eth0 and eth1 to go to core #0, and the traffic from eth2 to go to core #1, create the following lines in fwaffinity.conf:

    i eth0 0

    i eth1 0

    i eth2 1

    Alternatively, you can choose to explicitly define interface affinities for only one processing core, and define the other core as the default affinity for the remaining interfaces, by using the word default for <interfacename>.

    In the case described in the previous example, the lines in fwaffinity.conf would be:

    i eth2 1

    i default 0

  2. Run $FWDIR/scripts/fwaffinity_apply for the fwaffinity.conf settings to take effect.

The affinity of virtual interfaces can be set using their physical interface(s).
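To confirm that the new affinities are in effect after running the script, you can list the items assigned to each core with the reverse listing described in the Command Line Reference:

fw ctl affinity -l -r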

Allocating a Core for Heavy Logging

If the gateway is performing heavy logging, it may be advisable to allocate a processing core to the fwd daemon, which performs the logging. Like adding a core for the SND, this too will reduce the number of cores available for kernel instances.

To allocate a processing core to the fwd daemon, you need to do two things:

  1. Reduce the number of kernel instances using cpconfig.
  2. Set the fwd daemon affinity, as detailed below.

Setting the fwd Daemon Affinity

Check which processing cores are running the kernel instances and which cores are handling interface traffic using fw ctl affinity -l -r. Allocate the remaining core to the fwd daemon by setting the fwd daemon affinity to that core.

Note - Avoiding the processing core or cores that are running the SND is important only if these cores are explicitly defined as affinities of interfaces. If interface affinities are set to Automatic, any core that is not running a kernel instance can be used for the fwd daemon, and interface traffic will be automatically diverted to other cores.

Affinities for Check Point daemons (such as the fwd daemon), if set, are loaded at boot from the fwaffinity.conf configuration text file located in $FWDIR/conf. Edit the file by adding the following line:

n fwd <cpuid>

where <cpuid> is the number of the processing core to be set as the affinity of the fwd daemon. For example, to set core #2 as the affinity of the fwd daemon, add to the file:

n fwd 2

Reboot for the fwaffinity.conf settings to take effect.
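As an illustration only (the core numbers are hypothetical and depend on how many kernel instances you kept), a fwaffinity.conf that dedicates core #0 to interface traffic and core #1 to the fwd daemon would contain:

i default 0
n fwd 1

Remember that the i line is ignored if Performance Pack is running, as described in Setting Interface Affinities.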

Configuring CoreXL

To enable/disable CoreXL:

  1. Log in to the Security Gateway.
  2. Run cpconfig.
  3. Select Configure Check Point CoreXL.
  4. Enable or disable CoreXL.
  5. Reboot the Security Gateway.

To configure the number of instances:

  1. Run cpconfig.
  2. Select Configure Check Point CoreXL.
  3. If CoreXL is enabled, enter the number of firewall instances.

    If CoreXL is disabled, enable CoreXL and then set the number of firewall instances.

  4. Reboot the gateway.

Note - In a clustered deployment, changing the number of kernel instances should be treated as a version upgrade.
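After the reboot, you can confirm how many kernel instances are running with the multi-kernel statistics command described in the Command Line Reference:

fw ctl multik stat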

Command Line Reference

Affinity Settings

Affinity settings are controlled by the fwaffinity_apply script, which executes automatically at boot. When you change affinity settings, the changes do not take effect until you either reboot or manually execute the fwaffinity_apply script.

fwaffinity_apply executes affinity definitions according to the information in the fwaffinity.conf text file. To change affinity settings, edit the text file.

Note - If Performance Pack is running, interface affinities are only defined by Performance Pack's sim affinity command. The fwaffinity.conf interface affinity settings are ignored.

fwaffinity.conf

fwaffinity.conf is located in the $FWDIR/conf directory.

Syntax

Each line in the text file uses the same format: <type> <id> <cpuid>

Data       Values             Description
<type>     i                  interface
           n                  Check Point daemon
           k                  kernel instance
<id>       interface name     if <type> = i
           daemon name        if <type> = n
           instance number    if <type> = k
           default            interfaces that are not specified in another line
<cpuid>    <number>           number(s) of processing core(s) to be set as the affinity
           all                all processing cores are available to the interface traffic, daemon or kernel instance
           ignore             no specified affinity (useful for excluding an interface from a default setting)
           auto               Automatic mode (see also Processing Core Allocation)

Note - Interfaces that share an IRQ cannot have different cores as their affinities, including when one interface is included in the default affinity setting. Either set both interfaces to the same affinity, or use ignore for one of them. To view the IRQs of all interfaces, run: fw ctl affinity -l -v -a .
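A short illustrative file (the interface names and core numbers are hypothetical) that uses all three line types:

i eth0 0
i default 1
n fwd 2
k 0 3

This directs eth0 traffic to core #0, all other interface traffic to core #1, the fwd daemon to core #2, and kernel instance #0 to core #3.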

fwaffinity_apply

fwaffinity_apply is located in the $FWDIR/scripts directory. Use the following syntax to execute the command: $FWDIR/scripts/fwaffinity_apply <option>

where <option> is one of the following parameters:

Parameter    Description
-q           Quiet mode - print only error messages.
-t <type>    Only apply affinity for the specified type.
-f           Sets interface affinity even if automatic affinity is active.
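For example, assuming <type> takes the same letters used in fwaffinity.conf (i, n or k), the following would apply only the interface affinities and print only error messages:

$FWDIR/scripts/fwaffinity_apply -t i -q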

fw ctl affinity

The fw ctl affinity command controls affinity settings. However, fw ctl affinity settings will not persist through a restart of the Security Gateway.

To set affinities, execute fw ctl affinity -s.

To list existing affinities, execute fw ctl affinity -l.

fw ctl affinity -s

Use this command to set affinities.

fw ctl affinity -s settings are not persistent through a restart of the Security Gateway. If you want the settings to be persistent, either use sim affinity or edit the fwaffinity.conf configuration file.

To set interface affinities, you should use fw ctl affinity only if Performance Pack is not running. If Performance Pack is running, you should set affinities by using the Performance Pack sim affinity command. These settings will be persistent. If Performance Pack's sim affinity is set to Automatic mode (even if Performance Pack was subsequently disabled), you will not be able to set interface affinities by using fw ctl affinity -s.

Syntax

fw ctl affinity -s <proc_selection> <cpuid>

<proc_selection> is one of the following parameters:

Parameter            Description
-p <pid>             Sets affinity for a particular process, where <pid> is the process ID#.
-n <cpdname>         Sets affinity for a Check Point daemon, where <cpdname> is the Check Point daemon name (for example: fwd).
-k <instance>        Sets affinity for a kernel instance, where <instance> is the instance's number.
-i <interfacename>   Sets affinity for an interface, where <interfacename> is the interface name (for example: eth0).
<cpuid> should be a processing core number or a list of processing core numbers. To have no affinity to any specific processing core, <cpuid> should be: all.

Note - Setting an Interface Affinity will set the affinities of all interfaces sharing the same IRQ to the same processing core.
To view the IRQs of all interfaces, run: fw ctl affinity -l -v -a

Example

To set kernel instance #3 to run on processing core #5, run:

fw ctl affinity -s -k 3 5
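Similarly, to set the fwd daemon to run on processing core #2 (the same assignment shown earlier in fwaffinity.conf), run:

fw ctl affinity -s -n fwd 2

Settings made this way do not survive a restart of the Security Gateway.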

fw ctl affinity -l

Use this command to list existing affinities. For an explanation of kernel, daemon and interface affinities, see CoreXL Administration.

Syntax

fw ctl affinity -l [<proc_selection>] [<listtype>]

If <proc_selection> is omitted, fw ctl affinity -l lists affinities of all Check Point daemons, kernel instances and interfaces. Otherwise, <proc_selection> is one of the following parameters:

Parameter            Description
-p <pid>             Displays the affinity of a particular process, where <pid> is the process ID#.
-n <cpdname>         Displays the affinity of a Check Point daemon, where <cpdname> is the Check Point daemon name (for example: fwd).
-k <instance>        Displays the affinity of a kernel instance, where <instance> is the instance's number.
-i <interfacename>   Displays the affinity of an interface, where <interfacename> is the interface name (for example: eth0).

If <listtype> is omitted, fw ctl affinity -l lists items with specific affinities, and their affinities. Otherwise, <listtype> is one or more of the following parameters:

Parameter    Description
-a           All: includes items without specific affinities.
-r           Reverse: lists each processing core and the items that have it as their affinity.
-v           Verbose: list includes additional information.

Example

To list complete affinity information for all Check Point daemons, kernel instances and interfaces, including items without specific affinities, and with additional information, run:

fw ctl affinity -l -a -v

fw ctl multik stat

The fw ctl multik stat and fw6 ctl multik stat (multi-kernel statistics) commands show information for each kernel instance. The state and processing core number of each instance are displayed, along with:

  • The number of connections currently being handled.
  • The peak number of concurrent connections the instance has handled since its inception.
 