Advanced QoS Policy Management

Related Topics

  • Overview
  • Examples: Guarantees and Limits
  • Differentiated Services (DiffServ)
  • Low Latency Queuing
  • Authenticated QoS
  • Citrix MetaFrame Support
  • Load Sharing

Overview

This chapter describes the more advanced QoS policy management procedures that enable you to refine the basic QoS policies described in Basic Policy Management.

Examples: Guarantees and Limits

The QoS Action properties defined in the rules and sub-rules of a QoS Policy Rule Base interact with one another to determine bandwidth allocation.

The guidelines and examples in the sections that follow explain how to use guarantees and limits effectively.

Per Rule Guarantees

  1. The bandwidth allocated to a rule is a combination of its guaranteed bandwidth plus the bandwidth it receives because of its weight. The guaranteed bandwidth is first "extracted" from the total bandwidth and set aside so that the guarantee can be upheld. The remaining bandwidth is then distributed according to the weights specified by all the rules. This means that the amount of bandwidth effectively guaranteed to a rule is its guaranteed bandwidth plus the rule's share, by weight, of the remainder.

    Total Rule Guarantees

    | Rule Name | Source | Destination | Service | Action                              |
    |-----------|--------|-------------|---------|-------------------------------------|
    | Rule A    | Any    | Any         | ftp     | Rule Guarantee - 100KBps; Weight 10 |
    | Rule B    | Any    | Any         | http    | Weight 20                           |

    • The link capacity is 190KBps.
    • Rule A receives 130KBps: the 100KBps guarantee, plus (10/30) * (190 - 100) = 30KBps from its weight.
    • Rule B receives 60KBps, which is (20/30) * (190 - 100).
    • A worked sketch of this arithmetic appears after this list.
  2. If a guarantee is defined in a sub-rule, then a guarantee must also be defined for the rule above it, and the guarantee of the sub-rule cannot be greater than the guarantee of that rule.

    Guarantee Defined in Sub-Rule A1 but Not in Rule A, Making the Rule Base Incorrect

    | Rule    | Source   | Destination | Service | Action                              |
    |---------|----------|-------------|---------|-------------------------------------|
    | Rule A  | Any      | Any         | ftp     | Weight 10                           |
    | Rule A1 | Client-1 | Any         | ftp     | Rule Guarantee - 100KBps; Weight 10 |
    | Rule A2 | Client-2 | Any         | ftp     | Weight 10                           |
    | Rule B  | Any      | Any         | http    | Weight 30                           |

    Rules A1 and A2 are sub-rules of Rule A.
    This Rule Base is not correct because the guarantee is defined in sub-rule A1, but not in Rule A. To correct this, add a guarantee of 100KBps or more to Rule A.

  3. A rule guarantee must not be smaller than the sum of guarantees defined in its sub‑rules.

    Example of an Incorrect Rule Base

    | Rule    | Source   | Destination | Service | Action                              |
    |---------|----------|-------------|---------|-------------------------------------|
    | Rule A  | Any      | Any         | ftp     | Rule Guarantee - 100KBps; Weight 10 |
    | Rule A1 | Client-1 | Any         | ftp     | Rule Guarantee - 80KBps; Weight 10  |
    | Rule A2 | Client-2 | Any         | ftp     | Rule Guarantee - 80KBps; Weight 10  |
    | Rule A3 | Client-3 | Any         | ftp     | Weight 10                           |
    | Rule B  | Any      | Any         | http    | Weight 30                           |

    Rules A1, A2 and A3 are sub-rules of Rule A.
    This Rule Base is incorrect because the sum of the guarantees in Sub-Rules A1 and A2 is (80 + 80) = 160KBps, which is greater than the guarantee defined in Rule A (100KBps). To correct this, define a guarantee of at least 160KBps in Rule A, or reduce the guarantees defined in A1 and A2.

  4. If a rule's weight is low, some connections may receive very little bandwidth.

    If a Rule's Weight Is Low, Some Connections May Receive Very Little Bandwidth

    | Rule    | Source   | Destination | Service | Action                              |
    |---------|----------|-------------|---------|-------------------------------------|
    | Rule A  | Any      | Any         | ftp     | Rule Guarantee - 100KBps; Weight 1  |
    | Rule A1 | Client-1 | Any         | ftp     | Rule Guarantee - 100KBps; Weight 10 |
    | Rule A2 | Client-2 | Any         | ftp     | Weight 10                           |
    | Rule B  | Any      | Any         | http    | Weight 30                           |

    Rules A1 and A2 are sub-rules of Rule A.
    The link capacity is 190KBps.

    Rule A is entitled to approximately 103KBps: the 100KBps guaranteed, plus (1/31) * (190 - 100). FTP traffic classified to Sub-Rule A1 receives the guaranteed 100KBps, which is almost all the bandwidth to which Rule A is entitled. All connections classified to Sub-Rule A2 together receive only about 1.5KBps, half of the remaining 3KBps (Sub-Rule A1's weight claims the other half).

  5. The sum of the guarantees of the rules at the top level should not exceed 90% of the link capacity. A sketch that checks this constraint, together with those of items 2 and 3, follows this list.
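
The allocation arithmetic in items 1 and 4 can be made concrete with a few lines of Python. This is a minimal sketch of the documented formula (guarantee plus weighted share of the remainder), not Check Point's implementation; the rule data comes from the examples above, and the function name is illustrative.

```python
def allocate(link_kbps, rules):
    """Split link bandwidth among rules given as (guarantee, weight) pairs:
    each rule gets its guarantee, and the remainder is shared by weight."""
    remainder = link_kbps - sum(g for g, _ in rules)
    total_weight = sum(w for _, w in rules)
    return [g + remainder * w / total_weight for g, w in rules]

# Total Rule Guarantees example (item 1): 190KBps link,
# Rule A (guarantee 100KBps, weight 10), Rule B (no guarantee, weight 20).
print(allocate(190, [(100, 10), (0, 20)]))      # [130.0, 60.0]

# Low-weight example (item 4): Rule A (guarantee 100KBps, weight 1),
# Rule B (no guarantee, weight 30).
rule_a, rule_b = allocate(190, [(100, 1), (0, 30)])
print(round(rule_a, 1))                         # 102.9, i.e. ~103KBps

# Within Rule A, sub-rule A1's 100KBps guarantee is honored first;
# sub-rules A1 and A2 (equal weights) then split the remaining ~3KBps,
# leaving A2 with only ~1.5KBps.
print(round((rule_a - 100) / 2, 1))             # 1.5
```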
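
The structural constraints in items 2, 3 and 5 likewise reduce to simple checks. The Rule type below is hypothetical, invented only for this sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str
    guarantee: float = 0.0            # KBps; 0.0 means no guarantee defined
    sub_rules: list = field(default_factory=list)

def validate_guarantees(rules, link_kbps):
    """Return a list of violations of the guarantee rules in this section."""
    errors = []
    for r in rules:
        sub_total = sum(s.guarantee for s in r.sub_rules)
        # Item 2: a sub-rule guarantee requires a guarantee on its parent.
        if sub_total > 0 and r.guarantee == 0:
            errors.append(f"{r.name}: sub-rule guarantee but no rule guarantee")
        # Item 3: a rule guarantee must cover the sum of sub-rule guarantees.
        if sub_total > r.guarantee > 0:
            errors.append(f"{r.name}: sub-rule guarantees ({sub_total}KBps) "
                          f"exceed the rule guarantee ({r.guarantee}KBps)")
    # Item 5: top-level guarantees should stay within 90% of link capacity.
    if sum(r.guarantee for r in rules) > 0.9 * link_kbps:
        errors.append("rule guarantees exceed 90% of the link capacity")
    return errors

# The incorrect Rule Base of item 3: A1 + A2 = 160KBps > 100KBps.
rule_a = Rule("Rule A", guarantee=100, sub_rules=[
    Rule("A1", guarantee=80), Rule("A2", guarantee=80), Rule("A3")])
print(validate_guarantees([rule_a, Rule("Rule B")], link_kbps=1000))
```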

Per Connection Guarantees

  1. If the Accept additional connections option is checked, connections exceeding the number defined in the Number of guaranteed connections field are allowed to open. If you leave the field adjacent to Accept additional connections empty, the additional connections receive bandwidth allocated according to the defined Rule Weight.
  2. If Per connection guarantees are defined both for a rule and for its sub-rule, the Per connection guarantee of the sub-rule should not be greater than the Per connection guarantee of the rule.

    When such a Rule Base is defined, a connection classified to the sub-rule receives the Per connection guarantee defined in the sub-rule. If the sub-rule does not have a Per connection guarantee of its own, the connection receives the Per connection guarantee defined in the parent rule, as sketched below.
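
The inheritance described in item 2 amounts to a simple fallback: a connection uses the sub-rule's Per connection guarantee when one is defined, and the parent rule's otherwise. A minimal sketch, with an illustrative function name:

```python
def effective_pcg(rule_pcg, sub_rule_pcg=None):
    """Per-connection guarantee applied to a connection classified to a
    sub-rule: the sub-rule's own value if defined, else the parent
    rule's (illustrative only, not Check Point code)."""
    return sub_rule_pcg if sub_rule_pcg is not None else rule_pcg

print(effective_pcg(rule_pcg=10, sub_rule_pcg=5))   # 5: the sub-rule's value
print(effective_pcg(rule_pcg=10))                   # 10: inherited from the rule
```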

Limits

  1. If both a Rule Limit and a Per connection limit are defined for a rule, the Per connection limit must not be greater than the Rule Limit.
  2. If a limit is defined in a rule with sub-rules, and limits are defined in all the sub-rules, the rule limit should not be greater than the sum of limits defined in the sub-rules.

    Having a rule limit that is greater than the sum of limits defined in the sub-rules is never necessary, because it is not possible to allocate more bandwidth to a rule than the bandwidth determined by the sum of the limits of its sub-rules.
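
Both limit conditions are plain inequalities that can be checked before a policy is installed. A hedged sketch, with all names illustrative:

```python
def check_limits(rule_limit, per_conn_limit=None, sub_rule_limits=None):
    """Return violations of the two limit conditions above (all in KBps)."""
    errors = []
    # Condition 1: a per-connection limit may not exceed the rule limit.
    if per_conn_limit is not None and per_conn_limit > rule_limit:
        errors.append("per-connection limit exceeds the rule limit")
    # Condition 2: when all sub-rules carry limits, a rule limit above
    # their sum can never be reached and is therefore pointless.
    if sub_rule_limits and rule_limit > sum(sub_rule_limits):
        errors.append("rule limit exceeds the sum of sub-rule limits")
    return errors

# A 300KBps rule limit over two 100KBps sub-rule limits is unreachable.
print(check_limits(rule_limit=300, per_conn_limit=50,
                   sub_rule_limits=[100, 100]))
```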

Guarantee - Limit Interaction

  1. If a Rule Limit and a Guarantee per rule are defined in a rule, then the limit should not be smaller than the guarantee.
  2. If both a Limit and a Guarantee are defined in a rule, and the Limit is equal to the Guarantee, connections may receive no bandwidth, as in the following example:

Example: No Bandwidth Received

| Rule    | Source   | Destination | Service | Action                                                    |
|---------|----------|-------------|---------|-----------------------------------------------------------|
| Rule A  | Any      | Any         | ftp     | Rule Guarantee - 100KBps; Rule Limit - 100KBps; Weight 10 |
| Rule A1 | Client-1 | Any         | ftp     | Rule Guarantee - 100KBps; Weight 10                       |
| Rule A2 | Client-2 | Any         | ftp     | Weight 10                                                 |
| Rule B  | Any      | Any         | http    | Weight 30                                                 |

Rules A1 and A2 are sub-rules of Rule A.

The Guarantee in sub-rule A1 equals the Guarantee in rule A (100KBps). When there is enough traffic on A1 to use the full Guarantee, traffic on A2 does not receive any bandwidth from A, because Rule A is limited to 100KBps.

The steps that lead to this situation are as follows:

  • A rule has both a guarantee and a limit, such that the limit equals the guarantee.
  • The rule has sub-rules with Total Rule Guarantees that add up to the Total Rule Guarantee for the entire rule.
  • The rule also has sub-rule(s) with no guarantee.

In such a case, the traffic from the sub-rule(s) with no guarantee may receive no bandwidth.
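
The three conditions above can also be tested mechanically. This sketch flags the starvation pattern from the example; the function and its arguments are illustrative:

```python
def starvation_risk(rule_guarantee, rule_limit, sub_guarantees):
    """True if sub-rules without a guarantee may receive no bandwidth:
    the rule's limit equals its guarantee, the guaranteed sub-rules can
    consume that entire guarantee, and at least one sub-rule has no
    guarantee of its own (None). Illustrative only."""
    guaranteed = sum(g for g in sub_guarantees if g is not None)
    has_unguaranteed = any(g is None for g in sub_guarantees)
    return (rule_limit == rule_guarantee
            and guaranteed >= rule_guarantee
            and has_unguaranteed)

# Rule A above: guarantee 100KBps, limit 100KBps; sub-rule A1 guarantees
# the full 100KBps while sub-rule A2 has no guarantee.
print(starvation_risk(100, 100, [100, None]))   # True: A2 may be starved
```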

Differentiated Services (DiffServ)

Overview

DiffServ is an architecture for providing different types or levels of service for network traffic. Packets are marked in the TOS byte of the IP header, inside the enterprise network, as belonging to a certain Class of Service (QoS Class). These packets are then granted priority on the public network.

DiffServ markings have meaning on the public network, not inside the enterprise network. (Effective implementation of DiffServ requires that packet markings be recognized on all public network segments.)

DiffServ Markings for IPSec Packets

When DiffServ markings are used for IPSec packets, the DiffServ mark can be copied from one location to another in one of two ways:

  • :ipsec.copy_TOS_to_inner — The DiffServ mark is copied from the IPSec header to the IP header of the original packet after decapsulation/decryption.
  • :ipsec.copy_TOS_to_outer — The DiffServ mark is copied from the original packet's IP header to the IPSec header of the encrypted packet after encapsulation.

    These properties should be set, per QoS gateway, in $FWDIR/conf/objects_5_0.c.

    The default setting is:

    :ipsec.copy_TOS_to_inner (false)

    :ipsec.copy_TOS_to_outer (true)

Interaction Between DiffServ Rules and Other Rules

A DiffServ rule specifies not only a QoS Class, but also a weight, in the same way that other QoS Policy Rules do. These weights are enforced only on the interfaces on which the rules of this class are installed.

For example, suppose a DiffServ rule specifies a weight of 50 for FTP connections. That rule is installed only on the interfaces for which the QoS Class is defined. On other interfaces, the rule is not installed and FTP connections routed through those other interfaces do not receive the weight specified in the rule. To specify a weight for all FTP connections, add a rule under "Best Effort."

DiffServ rules can be installed only on interfaces for which the relevant QoS Class has been defined in the QoS tab of the Interface Properties window. See: Define the QoS Properties for the Interfaces.

"Best Effort" rules (that is, non-DiffServ rules) can be installed on all interfaces of gateways with QoS gateways installed. Only rules installed on the same interface interact with each other.

Low Latency Queuing

Overview

For most traffic on the Web (including most TCP protocols), the WFQ (Weighted Fair Queuing, see Intelligent Queuing Engine) paradigm is adequate. This means that packets reaching QoS are put in queues and forwarded according to the interface bandwidth and the priority of the matching rule. Using this standard policy, QoS avoids dropping packets as often as possible, because such drops may adversely affect TCP. Avoiding drops, however, means holding (possibly long) queues, which may lead to non-negligible delays.

For some types of traffic, such as voice and video, bounding this delay is important. Long queues are inadequate for these types of traffic because they lead to substantial delay. Fortunately, for most "delay sensitive" applications, there is no need to drop packets from queues in order to keep them short.

Instead, the fact that the streams of these applications have a known, bounded bit rate can be utilized. If QoS is configured to forward as much traffic as the stream delivers, then only a small number of packets accumulate in the queues and delay is negligible.

QoS Low Latency Queuing makes it possible to define special Classes of Service for "delay sensitive" applications like voice and video. Rules under these classes can be used together with other rules in the QoS Policy Rule Base. Low Latency classes require you to specify the maximal delay that is tolerated and a Constant Bit Rate. QoS then guarantees that traffic matching rules of this type is forwarded within the limits of the bounded delay.

Low Latency Classes

For each Low Latency class defined on an interface, a constant bit rate and maximal delay should be specified for active directions. QoS checks packets matched to Low Latency class rules to make sure they have not been delayed for longer than their maximal delay permits. If the maximal delay of a packet has been exceeded, it is dropped. Otherwise, it is transmitted at the defined constant bit rate for the Low Latency class to which it belongs.

If the Constant Bit Rate of the class is defined correctly (meaning that it is not smaller than the expected arrival rate of the matched traffic), packets are not dropped (provided that the delay exceeds some minimum, see Computing Maximal Delay). On the other hand, when the arrival rate is higher than the specified Constant Bit Rate, packets exceeding this constant rate are dropped to ensure that those transmitted are within the maximal delay limitations.

Note - The maximal delay set for a Low Latency class is an upper limit. This means that packets matching the class are always forwarded with a delay not greater, but often smaller, than specified.
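
The per-packet behavior just described can be pictured as a FIFO queue drained at the class's Constant Bit Rate, with packets dropped once their wait exceeds the maximal delay. The following is a toy model for intuition only, not the actual queuing engine:

```python
from collections import deque

def simulate_llq(packets, max_delay_s, cbr_bps):
    """Simplified Low Latency queue. packets is a list of
    (arrival_time_s, size_bits); returns (forwarded, dropped)."""
    queue = deque(sorted(packets))
    forwarded = dropped = 0
    link_free_at = 0.0
    while queue:
        arrival, size_bits = queue.popleft()
        start = max(arrival, link_free_at)           # wait for the link
        if start - arrival > max_delay_s:
            dropped += 1                             # delay bound exceeded
            continue
        forwarded += 1
        link_free_at = start + size_bits / cbr_bps   # transmit at the CBR
    return forwarded, dropped

# 20 packets of 1500 bytes arriving at exactly a 64Kbps class rate:
# the queue never builds up, so nothing is dropped.
pkts = [(i * 1500 * 8 / 64000, 1500 * 8) for i in range(20)]
print(simulate_llq(pkts, max_delay_s=0.1, cbr_bps=64000))   # (20, 0)
```

If the same packets arrive twice as fast, waits accumulate until the delay bound is reached and the excess packets are dropped, matching the behavior described above.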

Low Latency Class Priorities

In most cases, one Low Latency class is sufficient to serve all bounded delay traffic. In some cases, however, the user may need to define more than one Low Latency class. For this purpose, Low Latency classes are assigned one out of five priority levels (not including the Expedited Forwarding class, see Low Latency versus DiffServ). These priority levels are relative to other Low Latency classes.

It is advisable to define more than one Low Latency class if different types of traffic require different maximal delays.

The class with the lower maximal delay should get a higher priority than the class with the higher delay. The reason for this is that when two packets are ready to be forwarded, one for each Low Latency class, the packet from the higher priority class is forwarded first. The remaining packet (from the lower class) then encounters greater delay. This implies that the maximal delay that can be set for a Low Latency class depends on the Low Latency classes of higher priority.

Other Low Latency classes can affect the delay incurred by a class and therefore must be taken into consideration when determining the minimal delay that is feasible for the class. This is best done by initially setting the priorities for all Low Latency classes according to maximal delay, and then defining the classes according to descending priority. When you define class two, for example, class one should already be defined.

For more information on the effects of class priority on computing maximal delay, see Computing Maximal Delay.

Logging LLQ Information

SmartView Tracker enables you to log extensive information for all aspects of LLQ. For more information, see SmartView Tracker.

Computing the Correct Constant Bit Rate and Maximal Delay

Limits on Constant Bit Rate

For each direction of an interface (inbound and outbound), the sum of the constant bit rates of all the Low Latency classes cannot exceed 20% of the total designated bandwidth rate. The 20% limit is set to ensure that "Best Effort" traffic does not suffer substantial delay and jitter as a result of the existing Low Latency class(es).

Computing Constant Bit Rate

To compute the Constant Bit Rate of a Low Latency class, you should know the bit rate of a single application stream in traffic that matches the class, and the number of streams expected to be open simultaneously. The Constant Bit Rate of the class should be the bit rate of a single stream multiplied by the expected number of simultaneous streams.

If the number of streams exceeds the number you expected when you set the Constant Bit Rate, then the total incoming bit rate exceeds the Constant Bit Rate, and many drops occur. You can avoid this situation by limiting the number of concurrent streams. For more information, see Ensuring that Constant Bit Rate is Not Exceeded (Preventing Unwanted Drops).

Note - Unlike bandwidth allocated by a Guarantee, the constant bit rate allocated to a Low Latency class on an interface in a given direction is not increased in the event that more bandwidth is available.
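
The computation reads directly as a multiplication, with the 20% cap from the previous subsection applied as a sanity check. A minimal sketch (names illustrative; the cap properly applies to all Low Latency classes on the interface together, and is checked here for a single class):

```python
def class_cbr(stream_kbps, expected_streams, interface_kbps):
    """Constant Bit Rate for a Low Latency class: the rate of one
    stream times the number of simultaneously open streams."""
    cbr = stream_kbps * expected_streams
    if cbr > 0.2 * interface_kbps:   # the 20% limit on LLQ classes
        raise ValueError("Low Latency classes may use at most 20% "
                         "of the interface rate")
    return cbr

# Ten 8KBps voice streams on a 1000KBps interface: CBR = 80KBps.
print(class_cbr(stream_kbps=8, expected_streams=10, interface_kbps=1000))
```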

Computing Maximal Delay

To compute the maximal delay of a Low Latency class, you should take into account both the maximal delay that streams matching the class can tolerate in QoS and the minimal delay that QoS can guarantee this stream.

It is important not to define a maximal delay that is too small, which may lead to unwanted drops. The delay value defined for a class determines the number of packets that can be queued in the Low Latency queue before drops begin to occur. The smaller the delay, the shorter the queue. Therefore, an insufficient maximal delay may cause packets to be dropped before they have the chance to be forwarded. It is advisable to allow for at least several packets to be queued, as explained in the steps below.

If you are using Check Point SmartView Tracker, it is recommended to use the default Class Maximal Delay reported in the LLQ log. To obtain this default value, first configure the correct Constant Bit Rate for the class and give an estimate for the Class Maximal Delay. For more information, see SmartView Tracker. Alternatively, you can set the Class Maximal Delay yourself: obtain estimates for its upper and lower bounds, as described in the steps that follow, and set the delay to a value between them:

  1. Estimate the greatest delay that you can set for the class:
    1. Refer to the technical details of the streaming application and find the delay that it can tolerate. For voice applications, for example, it is commonly stated that users start to experience irregularities when the overall delay exceeds 150 ms.
    2. Find or estimate the bound on the delay that your external network (commonly the WAN) imposes. Many Internet Service Providers publish Service Level Agreements (SLAs) that guarantee certain bounds on delay.
    3. Set the maximal delay to no more than the delay that the streaming application can tolerate, minus the delay that the external network introduces. This ensures that the delay introduced by QoS, added to the delay introduced by the external network, does not exceed the delay tolerated by the streaming application.
  2. Estimate the smallest delay that you can set for the class:
    1. Find the bit rate of the streaming application in the application properties, or using Check Point SmartView Monitor (see the R76 SmartView Monitor Administration Guide).

    Note - Even if you set the Constant Bit Rate of the class to accommodate multiple simultaneous streams, conduct the following calculations with the streaming rate of a single stream.

    2. Estimate the typical packet size in the stream. You can either find it in the application properties or monitor the traffic. If you do not know the packet size, you can use the MTU of the LAN behind QoS; for Ethernet, this is 1500 bytes.
    3. Many LAN devices, including switches and NICs, introduce some burstiness into flows of constant bit rate by changing the delay between packets. For constant bit rate traffic generated in the LAN and going out to the WAN, it is therefore recommended to monitor the stream packets on the QoS gateway (on the internal interface that precedes QoS) to get an estimate of the burst size.
    4. If no burstiness is detected, the minimal delay of the class should be no smaller than the time needed to queue three packets at the stream's bit rate:

    (3 * packet size) / (bit rate of a single stream)

    This enables three packets to be held in the queue before drops can occur. (Note again that the bit rate should represent a single application, even if you set the Constant Bit Rate of the class to accommodate multiple streams.)

    5. If burstiness is detected, set the minimal delay of the class to at least the time needed to queue the burst in addition to those three packets, approximately:

    (burst size + 3 * packet size) / (bit rate of a single stream)

  3. The maximal delay that you choose for the class should be between the smallest delay (estimated in step 2) and the greatest delay (estimated in step 1). Setting it very close to either of these values is not recommended. However, if you expect the application to burst occasionally, or if you do not know whether the application generates bursts at all, set the delay close to the greatest value.
  4. When you enter the maximal delay you calculated, you may get an error box containing the message "The inbound/outbound maximal delay of class... must be greater than... milliseconds."

    This can occur if the Class of Service that you define is not of the first priority (see Low Latency Class Priorities). The delay value displayed in the error message depends on the Low Latency classes of higher priority, and on interface speed.

    Set the maximal delay to a value no smaller than the one printed in the message.
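
The two estimates combine as follows; the lower-bound formulas repeat step 2, and every name here is illustrative:

```python
def delay_bounds(app_tolerance_ms, wan_delay_ms,
                 packet_bytes, stream_bps, burst_packets=0):
    """Upper bound: what the application tolerates minus the WAN delay.
    Lower bound: time to queue three packets of a single stream, plus
    any observed burst, at the stream's own bit rate."""
    upper_ms = app_tolerance_ms - wan_delay_ms
    queued_bits = (3 + burst_packets) * packet_bytes * 8
    lower_ms = 1000 * queued_bits / stream_bps
    return lower_ms, upper_ms

# Voice example: 150 ms tolerated end to end, 60 ms on the WAN,
# 160-byte packets at 64Kbps, no burstiness detected.
lo, hi = delay_bounds(150, 60, packet_bytes=160, stream_bps=64000)
print(lo, hi)   # 60.0 90: choose a value between them, nearer 90
```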

Ensuring that Constant Bit Rate is Not Exceeded (Preventing Unwanted Drops)

As explained in Computing Constant Bit Rate, if the aggregate bit rate going through the Low Latency class exceeds the Constant Bit Rate of the class, then drops occur. This situation may occur when the number of streams actually opened exceeds the number you expected when you set the Constant Bit Rate.

To ensure that no more streams than allowed are opened through a Low Latency class, define a single rule under the class, with a per connection guarantee as its Action. In the Per Connection Guarantee field of the QoS Action Properties window, define the per connection bit rate that you expect, and in the Number of guaranteed connections field define the maximal number of connections that you allow in this class. The Accept additional non-guaranteed connections option should not be checked.

In this way, you can limit the number of connections to the number you used to compute the Constant Bit Rate of the class.

Interaction between Low Latency and Other Rule Properties

To activate a Low Latency class, you should define at least one rule under it in the QoS Policy Rule Base; you may define more than one. The traffic matching any Low Latency class rule receives the delay and Constant Bit Rate properties defined for the specified class, and is also treated according to the rule properties (weight, guarantee and limit).

You can use all types of properties in the rules under the Low Latency class, including Weight, Guarantee, Limit, Per Connection Guarantee and Per Connection Limit.

To better understand the integration of Low Latency class and rule properties, consider the class with its rules as a separate network interface that forwards packets at the Constant Bit Rate, with delay bounded by the class delay, and with the rules defining the relative priority of packets before they arrive at that interface. If a rule has a relatively low priority, then packets matching it are entitled to a small portion of the Constant Bit Rate, and are hence prone to more drops if the incoming rate is not small enough.

Note - Using sub-rules under the Low Latency class is not recommended, because they make it difficult to compute which streams suffer drops and the drop pattern. Guarantees and limits are not recommended for the same reasons (with the exception of Per Connection Guarantees, as described in Ensuring that Constant Bit Rate is Not Exceeded (Preventing Unwanted Drops)).

When to Use Low Latency Queuing

Use Low Latency Queuing in the following cases:

  • When low delay is important, and the bit rate of the incoming stream is known. This is the case for video and voice applications. In such cases, specify both the maximal delay and the Constant Bit Rate of the class.
  • When controlling delay is important, but the bit rate is not known in advance. The most common example is Telnet. This application requires fast responses, but the bit rate is not known in advance. In addition, even if the stream occasionally exceeds the Constant Bit Rate, you do not want to experience drops. It is preferable to experience a somewhat larger delay. In such cases, set the Constant Bit Rate of the class to an upper estimate of the stream rate, and set a very large maximal delay (such as 99999 ms). The large delay ensures that packets are not dropped even in the event of a burst exceeding the Constant Bit Rate. They are queued and forwarded according to the Constant Bit Rate.

Note - When the incoming stream is smaller than the Constant Bit Rate, the actual delay is much smaller than 99999 ms (in the example above), because packets are forwarded almost as soon as they arrive. The 99999 ms bound is effective only for large bursts.

Do not use a Low Latency Class when controlling delay is not of prime importance. For most TCP protocols (such as HTTP, FTP and SMTP) the other type of QoS rule is more appropriate. Use Weights, Limits and Guarantees in such cases, so the exact priority of the traffic is imposed without having to take care of bit rate and delay. QoS enforces the policy with minimal drops. Moreover, weights and guarantees dynamically fill the pipe when some types of expected traffic are not present, while Low Latency Queuing firmly bounds its traffic by the Constant Bit Rate.

Low Latency versus DiffServ

Low Latency classes differ from DiffServ classes in that they do not receive type of service (TOS) markings. Packets are not marked as Low Latency in a universal manner, so this preferential treatment can be guaranteed only on the QoS gateway through which they pass.

The exception to this rule is the Expedited Forwarding DiffServ class. Any DiffServ class defined as an Expedited Forwarding class automatically becomes a Low Latency class of highest priority. Such a class receives the conditions afforded it by its DiffServ marking both in QoS and in the rest of the network.

Note - To use the Expedited Forwarding class as DiffServ only, without delay being enforced, specify a Maximal Delay value of 99999 in the Interface Properties tab (see Low Latency Classes).

When to Use DiffServ and When to Use LLQ

If you need to limit the delay for some types of traffic, you should use Low Latency Queuing except in the following two cases, when you should mark your traffic using a DiffServ class (see When to Use Low Latency Queuing):

  • When your ISP supports DiffServ, meaning that you can receive a different level of QoS according to the DiffServ marking that you apply to the IP packets.
  • When your ISP provides you with several Classes of Service using MPLS. In this case, the DiffServ marking serves to "communicate" to your ISP the Class of Service that you expect every packet to receive.

Authenticated QoS

Check Point Authenticated QoS provides Quality of Service (QoS) for end-users in dynamic IP environments, such as remote access and DHCP environments. This enables priority users, such as corporate CEOs, to receive priority service when remotely connecting to corporate resources.

Authenticated QoS dynamically prioritizes end-users, based on information gathered during network or VPN authentication. The feature leverages Check Point UserAuthority technology to classify both inbound and outbound user connections. The UserAuthority Server (UAS) maintains a list of authenticated users; by querying the UAS, QoS retrieves this data and allocates bandwidth accordingly.

QoS supports Client Authentication, including Encrypted Client Authentication, and SecuRemote/SecureClient Authentication. User and Session Authentication are not supported.

For information about Client Authentication, see the R76 Security Gateway Technical Administration Guide.

Citrix MetaFrame Support

Overview

Citrix MetaFrame is a client/server software application that enables a client to run a published application on a Citrix server farm from the client's desktop. It provides:

  • Load balancing by automatically directing a client to the server with the lightest load in a server farm and by allowing publishing and application management from a single server in that farm.
  • A secure encryption option via the ICA (Independent Computing Architecture) protocol developed by Citrix.

One of the disadvantages of using Citrix ICA is that, left uncontrolled, printing traffic can consume all the available bandwidth, leaving mission-critical applications struggling for bandwidth. There is therefore a critical need for service differentiation, both between Citrix and other types of traffic and within Citrix (layer 7) traffic itself.

QoS, from NG with Application Intelligence (R55), solves the problem by:

  • Classifying all ICA applications running over Citrix through layer 7.
  • Differentiating between the Citrix traffic based on ICA published applications, ICA printing traffic (Priority Tagging) and NFuse.

For further information, see Managing QoS for Citrix ICA Applications.

QoS, from NG with Application Intelligence (R55), manages QoS for printing over Citrix using the following service:

  • Citrix_ICA_printing - the service for Citrix ICA printing traffic.

For further information, see Managing QoS for Citrix Printing.

Limitations

  • The Citrix TCP services are supported in Traditional mode QoS Policies only.
  • Session Sharing must be disabled.
  • The number of applications that the inspection infrastructure can detect is limited to 2048. Console errors are issued if this limit is exceeded; these errors are harmless and do not affect your system. Simply restart the machine.
  • Versions of MetaFrame prior to 1.8 are not supported because there is no packet tagging in these versions.
  • Only one Citrix TCP service can be allocated per single rule.

Load Sharing

Overview

Load Sharing is a mechanism that distributes traffic within a cluster of gateways so that the total throughput of multiple machines is increased. QoS architecture guarantees that Load Sharing will provide either:

  • Two-way Stickiness - all packets of a single connection use the same machine in both directions.
  • Conversation Stickiness - all packets of control/data connections within a conversation use the same machine in both directions.

In Load Sharing configurations, all functioning machines in the cluster are active, and handle network traffic (Active/Active operation). If there is a failure in one of the machines, its connections are redistributed amongst the remaining operational machines in the cluster.

If any individual Check Point gateway in the cluster becomes unreachable, transparent failover occurs to the other machines, thus providing High Availability. All connections are shared between the remaining gateways without interruption.

Note - Check Point High Availability is a special type of Load Sharing that automatically works with QoS Load Sharing. These modes can be switched safely; to enforce the change, however, the QoS Policy must be reinstalled.

All cluster servers share the same set of so-called "virtual" interfaces, where each virtual interface corresponds to an outgoing link.

QoS provides a fault-tolerant solution for cluster load sharing that deploys a unique, distributed WFQ bandwidth management technology. You can specify a unified QoS Policy per virtual interface of the cluster; the resulting bandwidth allocation is identical to that obtained by installing the same policy on a single server.

Note - Under low load, only a few connections are backlogged (active) for short periods of time. In such cases the ClusterXL Load Sharing function may not spread traffic evenly, but because there is no congestion, QoS enforcement is not needed.

QoS Cluster Infrastructure

This section describes the cluster infrastructure needed for QoS load sharing.

Cluster State

ClusterXL introduces a member load value: a load, expressed as a percentage, that the cluster assigns to each member. The load differs between ClusterXL multicast and unicast modes. Generally, the load for each of the N members in the cluster equals (100 / N)%. If the number of cluster members changes dynamically (due to failover or recovery), the cluster dynamically adjusts the load to the appropriate value.

Changes in Cluster State

Cluster members are informed of changes in a fellow cluster member's load. All cluster members, including the member that caused the change, recalculate their rates with respect to the new load.

In this way, on the next recalculation of rates, a failed machine's unutilized bandwidth is divided between the active cluster members. This guarantees correct operation and quick recovery of the system when faults occur.

Rates Calculation Algorithm

QoS Load Sharing uses the member load value to obtain the correct rate allocation for QoS rules.

The rates of the cluster members are calculated in the context of each virtual network interface. These calculations are used to enforce the scheduling policy of a virtual network interface by setting the local rates of the corresponding real network interface on each cluster member. Each cluster member executes this calculation whenever ClusterXL informs it of a change in the cluster state.

Basically, in a centralized policy the rate of a rule is divided equally between the matching connections. In load sharing, the set of connections is split evenly between the cluster members by the decision function. Therefore, every rule and sub-rule on a cluster member is assigned a fraction of the original rate that is proportional to that member's load in the cluster. To achieve a guaranteed rate, the limit and allotment of each centralized policy rule are recalculated in proportion to each member's load.

Finally, a member's physical interface limit is calculated as a portion of the cluster interface limit, proportional to the member's load.

Note - If for any reason the QoS daemon cannot retrieve a load value from Check Point Load Sharing, it calculates the load statically according to the (100 / N)% formula, where N is the number of members configured in the cluster topology (not necessarily the number of active members).
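
The proportional split described above amounts to scaling every centralized rate by the member's load, with the static (100 / N)% fallback when no load value is available. A sketch using the two-member, 125KBps example from the section below (names illustrative):

```python
def member_rate(centralized_kbps, member_load=None, configured_members=1):
    """A member's share of a centralized rate (rule rate, limit or
    interface limit), proportional to its cluster load; falls back
    to the static 100/N% formula when no load can be retrieved."""
    load = member_load if member_load is not None else 1.0 / configured_members
    return centralized_kbps * load

# Two-member cluster, virtual interface at 125KBps: each member
# enforces a local interface limit of 62.5KBps.
print(member_rate(125, member_load=0.5))            # 62.5
print(member_rate(125, configured_members=2))       # 62.5 (static fallback)
```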

Per-connection guarantees are processed separately. (The per-connection limit implementation remains unchanged by the load sharing mechanism.)

Per-Connection Guarantee Allocation

Each rule with a per-connection guarantee manages its own rate budget. A rule's budget is the per-connection guarantee rate multiplied by the number of per-connection guarantee connections allowed under the rule.

To decide whether a new connection receives its per-connection guarantee, the overall rate already granted to connections under the rule's per-connection guarantee is checked. If this rate is below the rule's budget, the new connection is granted its per-connection guarantee.

This budget is also divided among the cluster members in proportion to their cluster load. In a two-member cluster, for example, each member processes only half of the per-connection guarantee budget of the rule. In this way the cluster as a whole grants per-connection guarantee service according to the cluster's QoS Policy, as sketched below.
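
The budget logic reads naturally as a per-rule counter scaled by cluster load; a hedged sketch with hypothetical names:

```python
def member_pcg_budget(pcg_kbps, guaranteed_connections, member_load):
    """A member's share of a rule's per-connection guarantee budget."""
    return pcg_kbps * guaranteed_connections * member_load

def grant_pcg(already_granted_kbps, pcg_kbps, budget_kbps):
    """Grant a new connection its guarantee only while the rate already
    granted under this rule stays within the member's budget."""
    return already_granted_kbps + pcg_kbps <= budget_kbps

# Rule: 10KBps per connection, 8 guaranteed connections, 50% load.
budget = member_pcg_budget(10, 8, member_load=0.5)
print(budget)                       # 40.0KBps budget for this member
print(grant_pcg(30, 10, budget))    # True: the new connection fits
print(grant_pcg(40, 10, budget))    # False: the budget is exhausted
```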

Example of Rates Calculation

Consider a cluster consisting of two machines with one virtual interface configured to a rate of 125KBps. Because each member carries 50% of the load, each member's local scheduling policy enforces half of every centralized rate, and each member's local interface limit is set to 62.5KBps.

Conclusion

The decision function distributes traffic evenly between all cluster members, and the resulting load sharing allocates exactly the same rates to rules and connections as a centralized policy would.

 