


Policy Packages

What can I do here?

Use this window to edit a policy package.

Getting Here

Security Policies > Access Control/Threat Prevention > right-click a policy > Edit Policy


Menu > Manage policies and layers > New or Edit.

Installing a Policy Package

  1. On the Global Toolbar, click Install Policy.

    The Install Policy window opens showing the installation targets (Security Gateways).

  2. From the Select a policy menu, select a policy package.
  3. Select one or more policy types that are available in the package.
  4. Select the Install Mode:
    • Install on each selected gateway independently - Install the policy on each target gateway independently of others, so that if the installation fails on one of them, it doesn't affect the installation on the rest of the target gateways.

      Note - If you select For Gateway clusters, install on all the members; if it fails, do not install at all, the Security Management Server makes sure that it can install the policy on all cluster members before it begins the installation. If the policy cannot be installed on one of the members, policy installation fails for all of them.

    • Install on all selected gateways, if it fails do not install on gateways of the same version - Install the policy on all the target gateways. If the policy fails to install on one of the gateways, the policy is not installed on other target gateways.
  5. Click Install.

Uninstalling a Policy Package

You can uninstall a policy package through a command line interface on the gateway.

To uninstall a policy package:

  1. Open a command prompt on the Security Gateway.
  2. Run: fw unloadlocal.

Warning - This command removes all policies from the Security Gateway. The Security Gateway then does not enforce a Security Policy, and the network behind it is not protected until you install the policy again.

Creating a New Policy Package

  1. From the Menu, select Manage policies and layers.

    The Manage policies and layers window opens.

  2. Click New.

    The New Policy window opens.

  3. Enter a name for the policy package.
  4. In the General page > Policy types section, select one or more of these policy types:
    • Access Control
    • Threat Prevention
    • QoS, select Recommended or Express
    • Desktop Security

    To see the QoS and Desktop Security policy types, enable them on one or more Security Gateways:

    Go to the gateway editor > General Properties > Network Security tab:

    • For QoS, select QoS
    • For Desktop Security, select IPSec VPN and Policy Server
  5. On the Installation targets page, select the gateways the policy will be installed on:
    • All gateways
    • Specific gateways - For each gateway, click the [+] sign and select it from the list.

    To install Policy Packages correctly and eliminate errors, each Policy Package is associated with a set of appropriate installation targets.

  6. Click OK.
  7. Click Close.

    The new policy shows on the Security Policies page.

Adding a Policy Type to an Existing Policy Package

  1. From the Menu, select Manage policies and layers.

    The Manage policies and layers window opens.

  2. Select a policy package and click Edit.

    The policy package window opens.

  3. On the General > Policy types page, select the policy type to add:
    • Access Control
    • Threat Prevention
    • QoS, select Recommended or Express
    • Desktop Security
  4. Click OK.

QoS Policy Types

R80.10 includes two QoS Policy types: Recommended and Express.

These features differ between the Recommended and Express policy types:

  • Limits (whole rule)
  • Authenticated QoS (see Authenticated QoS)
  • Logging (see Overview of Logging)
  • Support for UTM-1 Edge Gateways
  • Support for hardware acceleration
  • High Availability and Load Sharing
  • Limits (Per connection)
  • LLQ (controlling packet delay in QoS) (see Low Latency Queuing)
  • Differentiated Services (DiffServ)
  • Matching by URI resources
  • Matching by DNS string
  • Matching Citrix ICA Applications
  • SecureXL support
  • CoreXL support
  • SmartLSM clusters

* You must disable SecureXL and CoreXL before you can use this feature.

To select a QoS Policy type:

  1. From the SmartConsole Menu, select Manage policies and layers.
  2. In the Manage policies and layers window, click New, or select an existing policy and click Edit.
  3. Select QoS, and then select Recommended or Express.

Weight is the percentage of the available bandwidth allocated to a rule. This is not the same as the weight in the QoS Rule Base, which is a manually assigned priority.

To calculate the percentage of the bandwidth that the connections matched to a rule receive:

    weight = Priority in SmartDashboard / Total priority of all the rules with open connections

For example, if the priority of a rule is 12, and the total priority of all the rules with open connections is 120, then all the connections open under this rule are allocated 12/120, or 10%. The weight of this rule is 10%. The rule gets 10% of the available bandwidth if the rule is active. In practice, if other rules are not using their maximum allocated bandwidth, a rule can get more than the bandwidth allocated by this formula. Unless a per connection limit or guarantee is defined for a rule, all connections under a rule receive equal weight.

Allocating bandwidth according to weights ensures full use of the line even if a specified class is not using all of its bandwidth. In such a case, the left over bandwidth is divided between the remaining classes in accordance with their relative weights. Units are configurable, see Defining QoS Global Properties.
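The weight formula and the leftover-bandwidth behavior above can be sketched as follows. This is a minimal illustration with hypothetical rule names and numbers, not a Check Point API:

```python
# Bandwidth share per rule: weight = rule priority / total priority of all
# rules that currently have open connections. Names and numbers are examples.

def bandwidth_shares(active_rule_priorities, line_kbps):
    """Map each rule with open connections to its share of the bandwidth."""
    total = sum(active_rule_priorities.values())
    return {rule: line_kbps * p / total
            for rule, p in active_rule_priorities.items()}

# The example rule has priority 12 out of a total of 120 -> 10% of the line.
shares = bandwidth_shares({"example": 12, "web": 60, "ftp": 48}, line_kbps=1000)
print(shares["example"])  # 100.0 (10% of 1000 kbps)

# If "ftp" closes all its connections, its share is redistributed by weight:
shares = bandwidth_shares({"example": 12, "web": 60}, line_kbps=1000)
print(shares["example"])  # about 166.7 kbps
```

Note how the shares always sum to the full line rate, which is the "full use of the line" property described above.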


A limit specifies the maximum bandwidth that is assigned to all the connections together. A limit defines a point after which connections below a rule are not allocated more bandwidth, even if there is surplus bandwidth available.

Limits can also be defined for the sum of all connections in a rule or for individual connections within a rule.

For more information on weights, guarantees and limits, see Action Type.

Note - Bandwidth allocation is not fixed. As connections are opened and closed, QoS continuously changes the bandwidth allocation to accommodate competing connections, in accordance with the QoS Policy.

Overview of Logging

These events are logged. The table below describes features unique to event logs.

Non-Accounting Log Events

Log Event

Data Returned


Policy Mode

Connection Reject

QoS rejects a connection when the number of guaranteed connections is exceeded and/or when you have configured the system not to accept additional connections.

The name of the matching rule on account of which the connection was rejected.

Generated as a reject log. Unified with the initial connection log.

Recommended policy only.

Running Out of Packet Buffers

One of the interface-direction's packet buffers is exhausted. A report is generated a maximum of once per 12 hours.

A string explaining the nature of the problem and the size of the relevant pool.

New log record created each time a global problem is reported.

Recommended policy only.

LLQ Packet Drop

When a packet is dropped from an LLQ connection. A report is generated a maximum of once per 5 minutes.

Logged data:

  • Number of bytes dropped due to delay expiration
  • Average packet delay
  • Jitter (maximum delay difference between two consecutive packets)

Unified with the initial connection log.

Recommended policy only.

The next table describes the features unique to accounting logs.

Explaining the Accounting Log


Log Event

Data Returned

Policy Mode

General Statistics

The total bytes transmitted through QoS for each relevant interface and direction.

Inbound and outbound bytes transmitted by QoS.

Recommended and Express policies.

Drop Policy Statistics

  • Total bytes dropped from the connection as a result of the QoS policy.
  • Count of the bytes dropped from the connection because the maximum used memory fragments for a single connection was exceeded.


Recommended policy mode only.

LLQ Statistics

Statistics about the LLQ connection.

Logged data:

  • Number of bytes dropped due to delay expiration
  • Average packet delay
  • Jitter (maximum delay difference between two consecutive packets)

Recommended policy mode only.

These conditions must be met for a connection to be logged:


Guarantees

A guarantee allocates a minimum bandwidth to the connections matched with a rule.

Guarantees can be defined for the sum of all connections within a rule, or for individual connections within a rule.

A per-connection guarantee means that each connection that matches the specified rule is guaranteed a minimum bandwidth.

Note - Although weights guarantee the bandwidth share for specified connections, only a guarantee lets you specify an absolute bandwidth value.

Low Latency Queuing

For most traffic on the Web (most TCP protocols), the WFQ (Weighted Fair Queuing, see Intelligent Queuing Engine) paradigm is sufficient. Packets reaching QoS are put in queues and forwarded according to the interface bandwidth and the priority of the matching rule.

Using this standard Policy, QoS avoids dropping packets. Dropped packets adversely affect TCP. Avoiding drops means holding (possibly) long queues, which can lead to non-negligible delays.

For some types of traffic, such as voice and video, bounding this delay is important. Long queues are inadequate for these types of traffic. Long queues can result in substantial delay. For most "delay sensitive" applications, it is not necessary to drop packets from queues to keep the queues short. The fact that the streams of these applications have a known, bounded bit rate can be utilized. If QoS is configured to forward as much traffic as the stream delivers, only a small number of packets are queued and delay is negligible.

QoS Low Latency Queuing makes it possible to define special Classes of Service for "delay sensitive" applications like voice and video. Rules below these classes can be used together with other rules in the QoS Policy Rule Base. Low Latency classes require you to specify the maximal delay that is tolerated and a Constant Bit Rate. QoS then guarantees that traffic matching rules of this type is forwarded within the limits of the bounded delay.

Low Latency Classes

For each Low Latency class defined on an interface, a constant bit rate and maximal delay must be specified for active directions. QoS checks packets matched to Low Latency class rules to make sure they have not been delayed for longer than their maximal delay permits. If the maximal delay of a packet has been exceeded, it is dropped. Otherwise, it is transmitted at the defined constant bit rate for the Low Latency class to which it belongs.

If the Constant Bit Rate of the class is not smaller than the expected arrival rate of the matched traffic, packets are not dropped. The maximal delay must also exceed some minimum. For more, see Computing Maximal Delay.

When the arrival rate is higher than the specified Constant Bit Rate, packets exceeding this constant rate are dropped. This is to make sure that transmitted packets comply with the maximal delay limitations.

Note - The maximal delay set for a Low Latency class is an upper limit. Packets matching the class are always forwarded with a delay not greater, but often smaller, than specified.
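The forwarding behavior described above can be sketched as a small simulation. This is illustrative only; the packet sizes, rates, and delays are hypothetical examples:

```python
# Sketch of the LLQ rule described above: packets are forwarded at the
# class's Constant Bit Rate, and a packet whose queuing delay would exceed
# the class's maximal delay is dropped instead of transmitted.

def llq_forward(arrival_times_ms, packet_bits, cbr_bps, max_delay_ms):
    send_time_ms = 1000.0 * packet_bits / cbr_bps  # wire time per packet
    next_free = 0.0        # time the link becomes free for the next packet
    sent, dropped = [], []
    for t in arrival_times_ms:
        start = max(t, next_free)
        if start - t > max_delay_ms:   # queued longer than the class allows
            dropped.append(t)
        else:
            sent.append(t)
            next_free = start + send_time_ms
    return sent, dropped

# 64 kbps class, 800-bit packets -> 12.5 ms per packet on the wire.
# Packets arriving faster than the CBR queue up until the delay bound is hit.
sent, dropped = llq_forward([0, 1, 2, 3, 4], packet_bits=800,
                            cbr_bps=64000, max_delay_ms=20)
print(sent, dropped)  # [0, 1] [2, 3, 4]
```

When the arrival rate stays below the Constant Bit Rate, the same function drops nothing, which matches the no-drop condition stated above.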

Low Latency Class Priorities

In most cases, one Low Latency class is sufficient for all bounded delay traffic. In some cases, it might be necessary to define more than one Low Latency class. For this reason, Low Latency classes are assigned one out of five priority levels (not including the Expedited Forwarding class, see Low Latency versus DiffServ). These priority levels are relative to other Low Latency classes.

As a best practice, define more than one Low Latency class if different types of traffic require different maximal delays.

The class with the lower maximal delay must get a higher priority than the class with the higher delay. When two packets are ready to be forwarded, one for each Low Latency class, the packet from the higher priority class is forwarded first. The remaining packet (from the lower class) then encounters greater delay. The maximal delay that can be set for a Low Latency class depends on the Low Latency classes of higher priority.

Other Low Latency classes can affect the delay incurred by a class, and must be taken into consideration when you determine the minimal delay that is possible for the class. This is best done by defining the Low Latency classes in order of priority, from the highest priority to the lowest.

When you define class two, for example, class one must already be defined.

For more on the effects of class priority on calculating maximal delay, see: Computing Maximal Delay.

Logging LLQ Information

The system logs data for all aspects of LLQ.

Calculating the Correct Constant Bit Rate and Maximal Delay

Limits on Constant Bit Rate

For the inbound or outbound interface direction, the sum of the constant bit rates of all the Low Latency classes has a limit. This sum cannot exceed 20% of the total designated bandwidth rate. This 20% limit makes sure that "Best Effort" traffic does not suffer substantial jitter because of the existing Low Latency class(es).

Calculating Constant Bit Rate

To calculate the Constant Bit Rate of a Low Latency class, you must know the bit rate of one application stream in the traffic that matches the class.

The Constant Bit Rate of the class equals the bit rate of one application stream multiplied by the expected number of streams opened at the same time.

If the number of streams is greater than the number you expected, the total incoming bit rate will exceed the Constant Bit Rate. Many drops will occur. To prevent drops, limit the number of concurrent streams. For more, see Ensuring that Constant Bit Rate is Not Exceeded (Preventing Unwanted Drops).

Note - Unlike bandwidth allocated by a Guarantee, the constant bit rate allocated to a Low Latency class on an interface in a given direction is not increased when more bandwidth is available.
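The calculation above, together with the 20% ceiling described under the Constant Bit Rate limits, can be sketched as follows. The per-stream rate, stream count, and interface rate are hypothetical examples:

```python
# Sketch: Constant Bit Rate = per-stream bit rate x expected number of
# concurrent streams, checked against the 20% ceiling on the sum of the
# CBRs of all Low Latency classes in one interface direction.

def class_cbr_kbps(stream_kbps, expected_streams):
    return stream_kbps * expected_streams

def within_llq_limit(total_llq_cbr_kbps, interface_kbps):
    # The sum of the constant bit rates of all Low Latency classes must
    # not exceed 20% of the designated bandwidth rate.
    return total_llq_cbr_kbps <= 0.20 * interface_kbps

# Ten concurrent 64 kbps voice streams on a 4000 kbps interface direction.
voice_cbr = class_cbr_kbps(stream_kbps=64, expected_streams=10)
print(voice_cbr)                                         # 640
print(within_llq_limit(voice_cbr, interface_kbps=4000))  # True (640 <= 800)
```

If an eleventh stream opens, the incoming rate exceeds the 640 kbps CBR and drops occur, which is why the number of concurrent streams should be limited as described below.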

Calculating Maximal Delay

To calculate the maximal delay of a Low Latency class, take into account the factors described in the steps below.

It is important not to define a maximal delay that is too small, which can result in unwanted drops. The delay value defined for a class determines the number of packets that can be queued in the Low Latency queue before drops occur. The smaller the delay, the shorter the queue. A maximal delay that is not sufficient can cause packets to be dropped before they are forwarded. Allow for some packets to be queued, as explained in the steps below.

Best Practice - Use the default Class Maximal Delay defined in the LLQ log. To obtain this default number:

You can also set the Class Maximal Delay by obtaining estimates for the upper and lower bounds. Set the delay to a value between the bounds.

  1. Estimate the greatest delay that you can set for the class:
    1. Refer to the technical details of the streaming application and find the delay that it can tolerate.

      For voice applications, the user generally starts to experience irregularities when the overall delay exceeds 150 ms.

    2. Find or estimate the bound on the delay that your external network (commonly the WAN) imposes. Many Internet Service Providers publish Service Level Agreements (SLAs) that guarantee some bounds on delay.
    3. The maximal delay must be set at no more than:

      (i) The delay that the streaming application can tolerate minus

      (ii) The delay that the external network introduces

    This makes sure that the delay introduced by QoS plus the delay introduced by the external network is no more than the delay tolerated by the streaming application.

  2. Estimate the smallest delay that you can set for the class:
    • Find the bit rate of the streaming application in the application properties, or use SmartView Monitor.

      Note - Even if you set the Constant Bit Rate of the class to accommodate multiple simultaneous streams, do the next calculations with the rate of a single stream.

    • Estimate the typical packet size in the stream.
      • Find it in the application properties, or monitor the traffic.
      • If you do not know the packet size, use the size of the MTU of the LAN behind QoS. For Ethernet, this number is 1500 Bytes.
    • Many LAN devices, switches and NICs, introduce some burstiness to flows of constant bit rate by changing the delay between packets. For constant bit rate traffic generated in the LAN and going out to the WAN, monitor the stream packets on the QoS Security Gateway. To get an estimate of burst size, monitor the internal interface that precedes the QoS Security Gateway.
    • If no burstiness is detected, the minimal delay of the class must be no smaller than:

      (3 x packet size) / bit rate

      This enables three packets to be held in the queue before drops can occur. The bit rate must be the bit rate of one application, even if the Constant Bit Rate of the class is for multiple streams.

    • If burstiness is detected, set the minimal delay of the class to at least:

      ((burst size + 1) x packet size) / bit rate

The maximal delay that you select for the class must be between the smallest delay (step 2) and the greatest delay (step 1). Setting the maximal delay near to one of these values is not recommended. If you expect the application to burst occasionally, or if you don't know whether the application generates bursts at all, set the maximal delay close to the value of the greatest delay.
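The two bounds from steps 1 and 2 can be sketched as follows. All numbers are hypothetical; delays are in milliseconds, packet size in bytes (converted to bits), and bit rate in bits per second:

```python
# Upper bound (step 1): the application's tolerated delay minus the delay
# the external network introduces. Lower bound (step 2): queue room for
# 3 packets (no burstiness) or burst size + 1 packets, computed with the
# bit rate of a single stream. Hypothetical numbers throughout.

def greatest_delay_ms(app_tolerance_ms, external_delay_ms):
    return app_tolerance_ms - external_delay_ms

def smallest_delay_ms(packet_bytes, stream_bps, burst_packets=0):
    packets = burst_packets + 1 if burst_packets else 3
    return 1000.0 * packets * packet_bytes * 8 / stream_bps

# Voice stream: 150 ms tolerated, 80 ms WAN delay, 160-byte packets, 64 kbps.
upper = greatest_delay_ms(app_tolerance_ms=150, external_delay_ms=80)
lower = smallest_delay_ms(packet_bytes=160, stream_bps=64000)
print(lower, upper)  # 60.0 70 -> choose a maximal delay between these,
                     # closer to 70 if the application may burst
```

A detected burst raises only the lower bound: with a burst of 5 packets in this example, the minimal delay doubles to 120 ms and exceeds the upper bound, so the class parameters would have to be reconsidered.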

This error message can show after you enter the maximal delay: "The inbound/outbound maximal delay of class... must be greater than... milliseconds." The message shows if the Class of Service that you define is not of the first priority (see Low Latency Class Priorities). The delay value displayed in the error message depends on the Low Latency classes of higher priority, and on the interface speed.

Set the maximal delay to a value no smaller than the one printed in the error message.

Ensuring that Constant Bit Rate is Not Exceeded (Preventing Unwanted Drops)

If the total bit rate going through the Low Latency class exceeds the Constant Bit Rate of the class, then drops occur. (See: Logging LLQ Information.)

This occurs when the number of streams opened exceeds the number you expected when you set the Constant Bit Rate.

To limit the number of streams opened through a Low Latency Class:

  1. Define one rule under the class, with a per connection guarantee as its Action.
  2. In the Per Connection Guarantee field of the QoS Action Properties window, define the per connection bit rate that you expect.
  3. In the Number of guaranteed connections field, define the maximal number of connections that you allow in this class.

    Do not select the Accept additional non-guaranteed connections option.

The number of connections is limited to the number you used to calculate the Constant Bit Rate of the class.

Interaction between Low Latency and Other Rule Properties

To activate a Low Latency class, define at least one rule below it in the QoS Policy Rule Base. Traffic matching a Low Latency class rule receives the delay and Constant Bit Rate properties defined for the specified class. The traffic is handled according to the rule properties (weight, guarantee and limit).

You can use all types of properties (weights, guarantees, and limits) in the rules below the Low Latency class.

Think of the Low Latency class with its rules as a separate network interface, where the Constant Bit Rate of the class is the bandwidth that its rules share.

If a rule has a relatively low priority, then packets matching it are entitled to a small part of the Constant Bit Rate. More packets will be dropped if the incoming rate is not sufficiently small.


When to Use Low Latency Queuing

Use Low Latency Queuing when controlling the delay of "delay sensitive" traffic, such as voice and video, is of primary importance.

Do not use a Low Latency class when controlling delay is not of primary importance. For most TCP protocols (such as HTTP, FTP, and SMTP), the other type of QoS rule is more applicable. Use Weights, Limits, and Guarantees. The correct priority is imposed on the traffic without having to adjust bit rate and delay.

QoS enforces the policy with minimal drops. Weights and guarantees dynamically fill the pipe when expected traffic is not present. Low Latency Queuing limits traffic according to the Constant Bit Rate.

Low Latency versus DiffServ

Low Latency classes are different from DiffServ classes in that they do not receive type of service (TOS) markings. Not all packets are marked as Low Latency. Preferential treatment is guaranteed only while the packets are passing through the QoS Security Gateway.

The exception to this rule is the Expedited Forwarding DiffServ class. A DiffServ class defined as an Expedited Forwarding class automatically becomes a Low Latency class of highest priority. Such a class receives the conditions afforded it by its DiffServ marking both in QoS and on the network.

Note - To use the Expedited Forwarding class as DiffServ only, without delay being enforced, specify a Maximal Delay value of 99999 in the Interface Properties tab (see Low Latency Classes).

When to Use DiffServ and When to Use LLQ

Do not use Low Latency Queuing to delay traffic when your ISP:

For these two cases, mark your traffic using a DiffServ class (see When to Use Low Latency Queuing):

Differentiated Services (DiffServ)

DiffServ is an architecture for giving different types or levels of service for network traffic.

Inside the enterprise network, packets are marked in the IP header TOS byte as belonging to a Class of Service (QoS Class). Outside, on the public network, these packets are granted priority according to their class.

DiffServ markings have meaning on the public network, not on the enterprise network. Good implementation of DiffServ requires that packet markings be recognized on all public network segments.

DiffServ Markings for IPSec Packets

When DiffServ markings are used for IPSec packets, the DiffServ mark can be copied between headers by setting these properties in: $FWDIR/conf/objects_5_0.c.

Interaction Between DiffServ Rules and Other Rules

Just like QoS Policy Rules, a DiffServ rule specifies not only a QoS Class, but also a weight. These weights are enforced only on the interfaces on which the rules of this class are installed.

For example, suppose a DiffServ rule specifies a weight of 50 for FTP connections. That rule is installed only on the interfaces for which the QoS Class is defined. On other interfaces, the rule is not installed. FTP connections routed through those interfaces do not get the weight specified by the rule. To specify a weight for all FTP connections, add a rule below "Best Effort."

DiffServ rules can be installed only on interfaces for which the related QoS Class has been defined. QoS class is defined on the QoS tab of the Interface Properties window. For more, see: Define the QoS Properties for the Interfaces.

"Best Effort" rules (that is, non-DiffServ rules) can be installed on all interfaces of Security Gateways that have QoS installed. Only rules installed on the same interface interact with each other.