
Terms

3rd party Cluster

Cluster of Check Point Security Gateways that work together in a redundant configuration. These Check Point Security Gateways are installed on X-Series XOS or on IPSO OS. A VRRP Cluster on Gaia OS is also considered a 3rd party cluster. The 3rd party cluster handles the traffic, and the Check Point Security Gateways perform only State Synchronization.

Active

State of a Cluster Member that is fully operational and handles network connections that pass through the cluster. In a High Availability deployment, only one Cluster Member is Active and can handle connections.

Active Up

ClusterXL in High Availability mode that was configured as Maintain current active Cluster Member in the cluster object in SmartConsole:

Active(!)

In ClusterXL, state of the Active Cluster Member that suffers from a failure. A problem was detected, but the Cluster Member still forwards packets, because it is the only member in the cluster, or because there are no other Active members in the cluster. In any other situation, the state of the member is Down.

Active/Active Mode

See Load Sharing Mode.

Active/Standby Mode

See High Availability Mode.

ARP Forwarding

See sk111956.

Backup

A Cluster Member or Virtual System in this state does not process any traffic passing through the cluster.

Blocking Mode

Cluster operation mode, in which a Cluster Member does not forward any traffic (for example, because of a failure).

Bond

A virtual interface that contains (enslaves) two or more physical interfaces for redundancy and load sharing. The physical interfaces share one IP address and one MAC address. See Link Aggregation.

Bonding

See Link Aggregation.

Bridge Mode

A Security Gateway or Virtual System that works as a Layer 2 bridge device for easy deployment in an existing topology.

Cluster

Two or more Security Gateways that work together in a redundant configuration - High Availability.

Cluster Control Protocol (CCP)

Proprietary Check Point protocol that runs between Cluster Members on UDP port 8116, and has the following roles:

Note: CCP is located between the Check Point Firewall kernel and the network interface (therefore, only tcpdump should be used for capturing this traffic).

Cluster Correction Layer (CCL)

Proprietary Check Point mechanism that deals with asymmetric connections in a Check Point cluster.

The CCL provides connection stickiness by "correcting" the packets to the correct Cluster Member:

Cluster Interface

An interface on a Cluster Member, whose Network Type was set as Cluster in the cluster object in SmartConsole. This interface is monitored by the cluster, and a failure on this interface causes a cluster failover.

Cluster Member

A Security Gateway that is part of a cluster.

Cluster Mode

Configuration of Cluster Members to work in these redundant modes:

Cluster Topology

Set of interfaces on all members of a cluster and their settings (Network Objective, IP address/Net Mask, Topology, Anti-Spoofing, and so on).

ClusterXL

Cluster of Check Point Security Gateways that work together in a redundant configuration. The ClusterXL both handles the traffic and performs State Synchronization.

These Check Point Security Gateways are installed on Gaia OS:

Note - In ClusterXL Load Sharing mode, configuring more than 4 Cluster Members significantly decreases the cluster performance due to the amount of Delta Sync traffic.

CPHA

General term that stands for Check Point High Availability (historical note: the first release of ClusterXL supported only High Availability). It is used only in internal references (for example, in kernel debug) to designate the ClusterXL infrastructure.

Creating an Interface Bond in Load Sharing Mode

Follow the instructions in the R80.20 Gaia Administration Guide - Chapter Network Management - Section Network Interfaces - Section Bond Interfaces (Link Aggregation).

Critical Device

Also known as a Problem Notification, or pnote. A special software device on each Cluster Member, through which the critical aspects of cluster operation are monitored. When the critical monitored component on a Cluster Member fails to report its state on time, or when its state is reported as problematic, the state of that member is immediately changed to Down. The complete list of the configured critical devices (pnotes) is printed by the cphaprob -ia list command or the show cluster members pnotes all command.

Dead

State reported by a Cluster Member when it leaves the cluster (because of the cphastop command, which is a part of cpstop, or because of a reboot).

Decision Function

A special cluster algorithm applied by each Cluster Member to the incoming traffic to decide which Cluster Member should process the received packet. Each Cluster Member maintains a table of hash values generated from the connection tuple (Source and Destination IP addresses, Ports, and Protocol number).
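The idea can be sketched as a hash over the connection tuple mapped onto the number of members. This is a minimal Python illustration only, not Check Point's actual hash function (the function name and the use of SHA-256 are assumptions for the sketch):

```python
import hashlib

def select_member(src_ip, dst_ip, src_port, dst_port, protocol, num_members):
    """Map a connection tuple to a member index (illustrative only).

    Every member computes the same value independently, so they all
    agree on who owns the packet without exchanging messages.
    """
    tuple_bytes = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}/{protocol}".encode()
    digest = hashlib.sha256(tuple_bytes).digest()
    hash_value = int.from_bytes(digest[:4], "big")
    return hash_value % num_members
```

Because the hash is deterministic, all packets of one connection (same tuple) map to the same member, which is what makes the selection consistent across the cluster.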

Delta Sync

Synchronization of kernel tables between all working Cluster Members - an exchange of CCP packets that carry pieces of information about different connections and the operations that should be performed on these connections in the relevant kernel tables. The Delta Sync process is performed directly by the Check Point kernel. While Full Sync is running, Delta Sync updates are not processed, but are saved in kernel memory. After Full Sync completes, the Delta Sync packets stored during the Full Sync phase are applied in order of arrival.

Delta Sync Retransmission

Delta Sync packets can be lost or corrupted during Delta Sync operations. In such cases, the lost or corrupted Delta Sync packet must be re-sent: the receiving Cluster Member requests the sending Cluster Member to retransmit it.
Each Delta Sync packet has a sequence number.
The sending member keeps a queue of the Delta Sync packets it has sent.
Each Cluster Member keeps a queue of the packets sent by each of its peer Cluster Members.
If, for any reason, a Cluster Member did not receive a Delta Sync packet, it can ask the sending member to retransmit that packet.
The Delta Sync retransmission mechanism is somewhat similar to the TCP Window and TCP retransmission mechanisms.
When a member requests retransmission of a Delta Sync packet that no longer exists on the sending member, the member prints a console message that the sync is not complete.
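The queue-and-sequence-number scheme described above can be modeled roughly as follows. This is an illustrative Python sketch (the class name, queue size, and API are invented for the sketch), not the actual kernel mechanism:

```python
from collections import OrderedDict

class DeltaSyncSender:
    """Sketch of a bounded retransmission queue keyed by sequence number."""

    def __init__(self, queue_size=4):
        self.queue_size = queue_size
        self.next_seq = 0
        self.sent = OrderedDict()  # sequence number -> packet payload

    def send(self, payload):
        """Assign a sequence number and keep the packet for retransmission."""
        seq = self.next_seq
        self.next_seq += 1
        self.sent[seq] = payload
        # Age out the oldest packet once the queue is full.
        if len(self.sent) > self.queue_size:
            self.sent.popitem(last=False)
        return seq

    def retransmit(self, seq):
        """Return the queued packet, or None if it was already aged out.

        A None result corresponds to the 'sync is not complete' case:
        the receiver asked for a packet the sender no longer holds.
        """
        return self.sent.get(seq)
```

The bounded queue is what makes the "sync not complete" console message possible: a retransmission request can arrive after the packet has already been dropped from the sender's queue.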

Down

State of a Cluster Member during a failure when one of the Critical Devices reports its state as "problem":

A Cluster Member in this state does not process any traffic passing through the cluster.

Dying

State of a Cluster Member as assumed by its peer members when it has not reported its state for 0.7 seconds.

Failback

Also, Fallback. Recovery of a Cluster Member that suffered from a failure. The state of a recovered Cluster Member is changed from Down to either Active, or Standby (depending on Cluster Mode).

Failed Member

A Cluster Member that cannot send or accept traffic because of a hardware or software problem.

Failover

Also, Fail-over. The transfer of control over traffic (packet filtering) from a Cluster Member that suffered a failure to another Cluster Member (based on internal cluster algorithms).

Failure

A hardware or software problem that makes a Security Gateway unable to serve as a Cluster Member (for example, one of the cluster interfaces has failed, or one of the monitored daemons has crashed). A Cluster Member that suffered a failure is declared as failed, and its state is changed to Down (a physical interface is considered Down only if all configured VLANs on that physical interface are Down).

Flapping

Consecutive changes in the state of either cluster interfaces (cluster interface flapping) or Cluster Members (Cluster Member flapping). Such consecutive state changes appear in Logs & Monitor > Logs (if, in the cluster object in SmartConsole, the cluster administrator set Track changes in the status of cluster members to Log).

Flush and ACK

Also, FnA, F&A. The Cluster Member forces a Delta Sync packet for the incoming packet, waits for acknowledgments from all other Active members, and only then allows the incoming packet to pass through.

In some scenarios, some information written into the kernel tables must be synchronized promptly, or else a race condition can occur. The race condition may occur if a packet that caused a certain change in the kernel tables left Member_A toward its destination, and then the return packet tries to go through Member_B.

In general, this kind of situation is called asymmetric routing. In this scenario, the return packet may arrive at Member_B before the changes induced by the original packet were synchronized to Member_B.

An example of such a case is when a SYN packet goes through Member_A, causes multiple changes in the kernel tables, and then leaves toward a server. The SYN-ACK packet from the server arrives at Member_B, but the connection itself was not synchronized yet. In this condition, Member_B drops the packet as an Out-of-State packet (first packet is not a SYN). To prevent such conditions, it is possible to use the "Flush and Ack" (F&A) mechanism.

This mechanism sends the Delta Sync packets with all the changes accumulated so far in the Sync buffer to the other Cluster Members, holds the original packet that induced these changes, and waits for acknowledgment from all other (Active) Cluster Members that they received the information in the Delta Sync packet. When all acknowledgments have arrived, the mechanism releases the held original packet.

This ensures that by the time the return packet arrives from the server at the cluster, all the Cluster Members are aware of the connection.

F&A operates at the end of the Inbound chain and at the end of the Outbound chain (it is more common in the Outbound chain).
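The hold-until-acknowledged behavior described above can be modeled roughly as below. This is an illustrative Python sketch (the class name and structure are assumptions), not Check Point's implementation:

```python
class FlushAndAck:
    """Hold a packet until all active peers acknowledge the sync update."""

    def __init__(self, active_peers):
        self.active_peers = set(active_peers)
        self.pending_acks = set()
        self.held_packet = None

    def flush(self, packet):
        """Send the accumulated Delta Sync updates (elided here) and
        hold the original packet until every active peer acknowledges."""
        self.held_packet = packet
        self.pending_acks = set(self.active_peers)

    def on_ack(self, peer):
        """Record one peer's acknowledgment.

        Returns the held packet once the last pending ack arrives
        (the packet may now safely leave toward its destination),
        or None while acks are still outstanding.
        """
        self.pending_acks.discard(peer)
        if not self.pending_acks and self.held_packet is not None:
            packet, self.held_packet = self.held_packet, None
            return packet
        return None
```

The key property is that the original packet cannot reach the server before every Active member has the connection in its tables, which removes the Out-of-State race described above.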

Forwarding

The process of transferring incoming traffic from one Cluster Member to another Cluster Member for processing. There are two types of forwarding the incoming traffic between Cluster Members - Packet forwarding and Chain forwarding. See also Forwarding Layer and ARP Forwarding.

Forwarding Layer

The Forwarding Layer is a ClusterXL mechanism that allows a Cluster Member to pass packets to peer Cluster Members, after they have been locally inspected by the firewall. This feature allows connections to be opened from a Cluster Member to an external host.

Packets originated by Cluster Members are hidden behind the Cluster Virtual IP address. Thus, a reply from an external host is sent to the cluster, and not directly to the source Cluster Member. This can pose problems in the following situations:

If a Cluster Member decides, upon the completion of the firewall inspection process, that a packet is intended for another Cluster Member, it can use the Forwarding Layer to hand the packet over to that Cluster Member.

In High Availability mode, packets are forwarded over a Synchronization network directly to peer Cluster Members. It is important to use secured networks only, as encrypted packets are decrypted during the inspection process, and are forwarded as clear-text (unencrypted) data.

In Load Sharing mode, packets are forwarded over a regular traffic network.

Packets that are sent on the Forwarding Layer use a special source MAC address to inform the receiving Cluster Member that they have already been inspected by another Cluster Member. Thus, the receiving Cluster Member can safely hand over these packets to the local Operating System, without further inspection.

Full High Availability Mode

Also, Full HA Mode. A special Cluster Mode (supported only on Check Point appliances running Gaia OS or SecurePlatform OS), where each Cluster Member also runs as a Security Management Server. This provides redundancy both between Security Gateways (only High Availability is supported) and between Security Management Servers (only High Availability is supported). See sk101539 and sk39345.

Full Sync

Process of full synchronization of the applicable kernel tables by a Cluster Member from the working Cluster Member(s) when it tries to join the existing cluster. This process fetches a "snapshot" of the applicable kernel tables of the already Active Cluster Member(s).

Full Sync is performed during the initialization of the Check Point software (during the boot process, the first time the Cluster Member runs policy installation, during cpstart, or during cphastart). Until the Full Sync process completes successfully, this Cluster Member remains in the Down state, because until it is fully synchronized with the other Cluster Members, it cannot function as a Cluster Member.

Meanwhile, Delta Sync packets continue to arrive, and the Cluster Member that tries to join the existing cluster stores them in kernel memory until the Full Sync completes.

The whole Full Sync process is performed by the fwd daemons on TCP port 256 over the Sync network (if it fails over the Sync network, it tries the other cluster interfaces). The fwd daemons send the information in chunks, making sure each chunk is confirmed before sending the next one.
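The chunked, confirm-before-next-chunk transfer can be sketched as a simple stop-and-wait loop. This Python fragment is illustrative only (the function name, chunk size, and the send_chunk callback are assumptions, and the real transfer runs over TCP port 256 between fwd daemons):

```python
def full_sync(table_snapshot, send_chunk, chunk_size=3):
    """Send a kernel-table snapshot in chunks, waiting for a
    confirmation after each chunk before sending the next one."""
    for offset in range(0, len(table_snapshot), chunk_size):
        chunk = table_snapshot[offset:offset + chunk_size]
        confirmed = send_chunk(chunk)  # blocks until the peer confirms
        if not confirmed:
            raise RuntimeError("Full Sync aborted: chunk not confirmed")
```

The stop-and-wait design trades throughput for simplicity: the joining member applies chunks in order and never has to reassemble out-of-order pieces of the snapshot.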

Also see Delta Sync.

HA not started

Output of the cphaprob <flag> command or the show cluster <option> command on a Cluster Member. This output means that the Check Point clustering software is not started on this Security Gateway (for example, this machine is not a part of a cluster, the cphastop command was run, or some failure occurred that prevented the ClusterXL product from starting correctly).

High Availability Mode

A redundant cluster mode, where only one Cluster Member (Active member) processes all the traffic, while other Cluster Members (Standby members) are ready to be promoted to Active state if the current Active member fails.

In the High Availability mode, the Cluster Virtual IP address (that represents the cluster on that network) is associated:

HTU

Stands for "HA Time Unit". All internal time in ClusterXL is measured in HTUs (the times in cluster debug also appear in HTUs). Formula in the Check Point software: 1 HTU = 10 x fwha_timer_base_res = 10 x 10 milliseconds = 100 ms
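The formula above translates directly into a small conversion helper; a minimal sketch (the function and constant names are assumptions, the values come from the formula in the definition):

```python
FWHA_TIMER_BASE_RES_MS = 10  # fwha_timer_base_res, in milliseconds

def htu_to_ms(htu):
    """Convert HA Time Units to milliseconds:
    1 HTU = 10 x fwha_timer_base_res = 10 x 10 ms = 100 ms."""
    return htu * 10 * FWHA_TIMER_BASE_RES_MS
```

For example, a timeout of 7 HTUs in a cluster debug corresponds to 700 ms.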

Hybrid Mode

Starting in R80.20, on Security Gateways with 40 or more CPU cores, Software Blades run in the user space (as fwk processes). The Hybrid Mode refers to the state when you upgrade Cluster Members from R80.10 (or below) to R80.20 (or above). The Hybrid Mode is the state, in which the upgraded Cluster Members already run their Software Blades in the user space (as fwk processes), while other Cluster Members still run their Software Blades in the kernel space (represented by the fw_worker processes). In the Hybrid Mode, Cluster Members are able to synchronize the required information.

Init

State of a Cluster Member in the phase after boot and until the Full Sync completes. A Cluster Member in this state does not process any traffic passing through the cluster.

IP Tracking

Collecting and saving Source IP addresses and Source MAC addresses from incoming IP packets during probing. IP tracking helps Cluster Members determine whether their network connectivity is acceptable.

IP Tracking Policy

Setting that controls which IP addresses should be tracked during IP tracking:

Link Aggregation

A technology that joins multiple physical interfaces together into one virtual interface, known as a bond interface. Also known as Interface Bonding.

Load Sharing Mode

Also, Load Balancing mode. A redundant cluster mode, where all Cluster Members process all incoming traffic in parallel. See Load Sharing Multicast Mode and Load Sharing Unicast Mode.

Load Sharing Multicast Mode

Load Sharing Cluster Mode, where all Cluster Members process all traffic in parallel. Each Cluster Member is assigned an equal load of [ 100% / number_of_members ].
The Cluster Virtual IP address (that represents the cluster on that network) is associated with a Multicast MAC Address 01:00:5E:X:Y:Z (which is generated from the last 3 bytes of the cluster Virtual IP address on that network).
A ClusterXL decision algorithm (Decision Function) on all Cluster Members decides which Cluster Member should process the given packet.
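The multicast MAC derivation described above (last 3 bytes of the Virtual IP mapped into 01:00:5E:X:Y:Z) can be sketched as follows. This follows the glossary's description and is illustrative only (the function name is an assumption):

```python
def cluster_multicast_mac(virtual_ip):
    """Derive 01:00:5E:X:Y:Z from the last 3 octets of the cluster
    Virtual IP address, as described for Load Sharing Multicast mode."""
    octets = [int(part) for part in virtual_ip.split(".")]
    x, y, z = octets[1], octets[2], octets[3]
    return "01:00:5E:%02X:%02X:%02X" % (x, y, z)
```

For example, a Virtual IP of 192.168.10.5 maps to 01:00:5E:A8:0A:05.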

Load Sharing Unicast Mode

Load Sharing Cluster Mode, where one Cluster Member (called Pivot) accepts all traffic. The Pivot member then decides whether to process this traffic itself, or to forward it to the other, non-Pivot Cluster Members.
The traffic load is assigned to Cluster Members based on a hard-coded formula, per the value of the Pivot_overhead attribute (see sk34668).
The Cluster Virtual IP address (that represents the cluster on that network) is associated with:

Management Server

A Check Point Security Management Server or a Multi-Domain Server.

Master

State of a Cluster Member that processes all traffic in a cluster configured in VRRP mode.

Network Objective

Defines how the cluster configures and monitors an interface - Cluster, Sync, Cluster+Sync, Monitored Private, Non-Monitored Private. Configured in SmartConsole > cluster object > Topology pane > Network Objective.

Non-Blocking Mode

Cluster operation mode, in which a Cluster Member continues to forward all traffic.

Non-Monitored Interface

An interface on a Cluster Member, whose Network Type was set as Private in the cluster object in SmartConsole. The state of this interface appears in the output of the cphaprob -a if command.

Non-Sticky Connection

A connection is called non-sticky if the reply packet returns through a different Cluster Member than the original packet (for example, when the network administrator has configured asymmetric routing). In Load Sharing mode, all Cluster Members are Active, and in Static NAT and encrypted connections, the Source and Destination IP addresses change. Therefore, Static NAT and encrypted connections through a Load Sharing cluster may be non-sticky.

Packet Selection

Distinguishing between different kinds of packets coming from the network, and selecting, which member should handle a specific packet (Decision Function mechanism):

Pingable Host

A host (that is, an IP address) that Cluster Members can ping during the probing mechanism. Pinging hosts on an interface's subnet is one of the health checks that the ClusterXL mechanism performs. A pingable host allows the Cluster Members to determine with more precision what has failed (which interface on which member).
On a Sync network, there are usually no hosts. In such a case, if the switch supports it, an IP address should be assigned on the switch (for example, in the relevant VLAN).
The IP address of the pingable host should be assigned per this formula:
IP_of_pingable_host = IP_of_physical_interface_on_member + ~10
Assigning the pingable host an IP address that is higher than the IP addresses of the physical interfaces on the Cluster Members gives the Cluster Members some time to perform the default health checks.
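The formula can be sketched as a small helper; a minimal Python illustration (the function name, the default offset of 10, and the use of the stdlib ipaddress module are assumptions):

```python
import ipaddress

def pingable_host_ip(member_interface_ip, offset=10):
    """IP_of_pingable_host = IP_of_physical_interface_on_member + ~10."""
    ip = ipaddress.IPv4Address(member_interface_ip)
    return str(ip + offset)
```

For example, for a member interface with IP address 192.168.1.1, this suggests assigning the pingable host an address around 192.168.1.11.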

Pivot Member

A Cluster Member in the Unicast Load Sharing cluster that receives all packets. Cluster Virtual IP addresses are associated with Physical MAC Addresses of this Cluster Member. This Pivot Cluster Member distributes the traffic between other Cluster Members.

Pnote

See Critical Device.

Preconfigured Mode

Cluster Mode, where cluster membership is enabled on all members-to-be. However, no policy has yet been installed on any of the members - none of them is actually configured to be primary, secondary, and so on. The cluster cannot function if one machine fails. In this scenario, the "preconfigured mode" takes effect. The preconfigured mode also comes into effect when no policy is installed yet, right after the machines come up after boot, or when you run the cphaconf init command.

Primary Up

ClusterXL in High Availability mode that was configured as Switch to higher priority Cluster Member in the cluster object in SmartConsole:

Private Interface

An interface on a Cluster Member, whose Network Type was set as Private in the cluster object in SmartConsole. This interface is not monitored by the cluster, and a failure on this interface does not cause any change in the Cluster Member's state.

Probing

If a Cluster Member fails to receive the status of another member (does not receive CCP packets from that member) on a given segment, the Cluster Member probes that segment in an attempt to elicit a response.
The purpose of such probes is to detect the nature of possible interface failures, and to determine which module has the problem.
The outcome of this probe determines what action is taken next (change the state of an interface, or of a Cluster Member).

Problem Notification

See Critical Device.

Ready

State of a Cluster Member after initialization and before promotion to the next required state - Active / Standby / VRRP Master / VRRP Backup (depending on the Cluster Mode). A Cluster Member in this state does not process any traffic passing through the cluster. A member can be stuck in this state for several reasons - see sk42096.

Security Gateway

A computer that runs Check Point software to inspect traffic and enforce Security Policies for connected network resources.

Security Management Server

A computer that runs Check Point software to manage the objects and policies in a Check Point environment.

Selection

The packet selection mechanism is one of the central and most important components in the ClusterXL product and in the State Synchronization infrastructure for 3rd party clustering solutions. Its main purpose is to correctly decide (select) what has to be done to the incoming and outgoing traffic on the cluster machine.

SmartConsole

Check Point main GUI client used to create and manage the security policy.

SmartDashboard

A legacy Check Point GUI client used to create and manage the security policy in R77.30 and below.

Standby

State of a Cluster Member that is ready to be promoted to Active state (if the current Active Cluster Member fails). Applies only to ClusterXL High Availability Mode.

State Synchronization

Technology that synchronizes the relevant information about the current connections (stored in various kernel tables on Check Point Security Gateways) among all Cluster Members over Synchronization Network. Due to State Synchronization, the current connections are not cut off during cluster failover.

Sticky Connection

A connection is called sticky, if all packets are handled by a single Cluster Member (in High Availability mode, all packets reach the Active Cluster Member, so all connections are sticky).

Subscribers

User Space processes that are made aware of the current state of the ClusterXL state machine and other clustering configuration parameters. List of such subscribers can be obtained by running the cphaconf debug_data command (see sk31499).

Sync Interface

Also, Secured Interface, Trusted Interface. An interface on a Cluster Member, whose Network Type was set as Sync or Cluster+Sync in the cluster object in SmartConsole. This interface is monitored by the cluster, and a failure on this interface causes a cluster failover. This interface is used for State Synchronization between Cluster Members.
The use of more than one Sync Interface for redundancy is not supported, because the CPU load would increase significantly due to duplicate tasks performed by all configured Synchronization Networks. See sk92804.

Synchronization Network

Also, Sync Network, Secured Network, Trusted Network. A set of interfaces on Cluster Members that were configured as interfaces, over which State Synchronization information is passed (as Delta Sync packets). The use of more than one Synchronization Network for redundancy is not supported, because the CPU load would increase significantly due to duplicate tasks performed by all configured Synchronization Networks. See sk92804.

Traffic

The flow of data between network devices.

VLAN

Virtual Local Area Network. Open servers or appliances connected to a virtual network, which are not necessarily physically connected to the same network.

VLAN Trunk

A connection between two switches that contains multiple VLANs.

VMAC

Virtual MAC address. When this feature is enabled on Cluster Members, all Cluster Members in High Availability mode and in Load Sharing Unicast mode associate the same Virtual MAC address with the Virtual IP address. This avoids issues that occur when Gratuitous ARP packets sent by the cluster during failover are not integrated into the ARP cache tables of switches surrounding the cluster. See sk50840.