What can I do here?
Use this window to edit a policy package.
Getting Here - Security Policies > Access Control/Threat Prevention > Right-click Policy > Edit Policy. Or: Menu > Manage policies and layers > New or Edit.
The Install Policy window opens showing the installation targets (Security Gateways).
Note - If you select For Gateway clusters install on all the members, if it fails do not install at all, the Security Management Server makes sure that it can install the policy on all cluster members before it begins the installation. If the policy cannot be installed on one of the members, policy installation fails for all of them.
You can uninstall a policy package through the command line interface on the Security Gateway.
To uninstall a policy package:
fw unloadlocal
Warning -
- The fw unloadlocal command prevents all traffic from passing through the Security Gateway, because it disables IP Forwarding in the Linux kernel.
- The fw unloadlocal command removes all policies from the Security Gateway. This means that the Security Gateway accepts all incoming connections destined to all active interfaces without any filtering or protection enabled.
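A minimal command sketch, run in Expert mode on the Security Gateway (the fw stat verification step is an assumption added here, and its output format varies by version):

fw unloadlocal    # removes the installed policy from this Security Gateway
fw stat           # shows which policy, if any, is currently loaded

Install the policy again from SmartConsole to restore enforcement.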
The New Policy window opens.
To see the QoS and Desktop Security policy types, enable them on one or more Gateways:
Go to gateway editor > General Properties > Network Security tab:
To install Policy Packages correctly and eliminate errors, each Policy Package is associated with a set of appropriate installation targets.
The new policy shows on the Security Policies page.
The Manage policies and layers window opens.
R80.20 includes two QoS Policy types:
This table shows the difference between the Recommended and Express policy types.
Features | Recommended | Express | To learn more
---|---|---|---
IPv6 Support | | |
Weights | | |
Limits (whole rule) | | |
Logging | | |
Accounting | * | |
Support for UTM-1 Edge Gateways | | |
Support for hardware acceleration | | |
High Availability and Load Sharing | | |
Guarantees | | |
Limits (Per connection) | | |
LLQ (controlling packet delay in QoS) | | |
DiffServ | | |
Sub-rules | | |
Matching by URI resources | | |
Matching by DNS string | | |
SecureXL support | | |
CoreXL support | | |
SmartLSM clusters | | |

* You must disable SecureXL and CoreXL before you can use this feature.
To select a QoS Policy type:
Weight is the percentage of the available bandwidth allocated to a rule. This is not the same as the weight in the QoS Rule Base, which is a manually assigned priority.
To calculate what percentage of the bandwidth the connections matched to a rule receive:

                       Priority in SmartDashboard
The weight = -----------------------------------------------------
             Total priority of all the rules with open connections
For example, if the priority of a rule in SmartDashboard is 12, and the total priority of all the rules with open connections is 120, then all the connections open under this rule are allocated 12/120, or 10%. The weight of this rule is 10%. The rule gets 10% of the available bandwidth if the rule is active. In practice, if other rules are not using their maximum allocated bandwidth, a rule can get more than the bandwidth allocated by this formula. Unless a per-connection limit or guarantee is defined for a rule, all connections under a rule receive equal weight.
Allocating bandwidth according to weights ensures full use of the line even if a specified class is not using all of its bandwidth. In such a case, the leftover bandwidth is divided between the remaining classes in accordance with their relative weights. Units are configurable (see Defining QoS Global Properties).
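A worked illustration, using assumed numbers (the line speed and priority values are examples, not values from this guide): on a 100 Mbps line, three rules with open connections have priorities of 12, 28 and 60 (total 100), so their connections are allocated 12%, 28% and 60% of the line, that is, 12 Mbps, 28 Mbps and 60 Mbps. If the connections below the third rule use only 40 Mbps, the 20 Mbps left over is divided between the first two rules in the proportion 12:28 (3:7), adding 6 Mbps and 14 Mbps, so they can use about 18 Mbps and 42 Mbps while the surplus lasts.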
A limit specifies the maximum bandwidth that is assigned to all the connections together. A limit defines a point after which connections below a rule are not allocated more bandwidth, even if there is surplus bandwidth available.
Limits can also be defined for the sum of all connections in a rule or for individual connections within a rule.
For more information on weights, guarantees and limits, see Action Type.
Note - Bandwidth allocation is not fixed. As connections are opened and closed, QoS continuously changes the bandwidth allocation to accommodate competing connections, in accordance with the QoS Policy.
These events are logged. The table below describes features unique to event logs.
Non-Accounting Log Events
Log Event | Data Returned | Presentation | Policy Mode
---|---|---|---
Connection Reject - QoS rejects a connection when the number of guaranteed connections is exceeded and/or when you have configured the system not to accept additional connections. | The name of the matching rule on account of which the connection was rejected. | Generated as a reject log. Unified with the initial connection log. | Recommended policy only.
Running Out of Packet Buffers - One of the interface-direction's packet buffers is exhausted. A report is generated a maximum of once per 12 hours. | A string explaining the nature of the problem and the size of the relevant pool. | New log record created each time a global problem is reported. | Recommended policy only.
LLQ Packet Drop - When a packet is dropped from an LLQ connection. A report is generated a maximum of once per 5 minutes. | Logged data: | Unified with the initial connection log. | Recommended policy only.
The next table describes the features unique to accounting logs.
Explaining the Accounting Log
Logged | Data Returned | Policy Mode
---|---|---
General Statistics - The total bytes transmitted through QoS for each relevant interface and direction. | Inbound and outbound bytes transmitted by QoS. | Recommended and Express policies.
Drop Policy Statistics | | Recommended policy mode only.
LLQ Statistics - Statistics about the LLQ connection. | Logged data: | Recommended policy mode only.
These conditions must be met for a connection to be logged:
A guarantee allocates a minimum bandwidth to the connections matched with a rule.
Guarantees can be defined for:
A total rule guarantee reserves a minimum bandwidth for all the connections below a rule. The actual bandwidth allocated to each connection depends on the number of open connections that match the rule. The total bandwidth allocated to the rule cannot be less than the guarantee. The more connections that are open, the less bandwidth each connection receives.
A per-connection guarantee means that each connection that matches the specified rule is guaranteed a minimum bandwidth.
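A short illustration with assumed numbers: with a total rule guarantee of 400 Kbps and four open connections matched to the rule, the rule as a whole never gets less than 400 Kbps, and each connection gets about 100 Kbps on average if no extra bandwidth is available. With a per-connection guarantee of 400 Kbps and the same four connections, each connection is guaranteed 400 Kbps, so the rule effectively reserves at least 1600 Kbps.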
Note: Although weights guarantee the bandwidth share for specified connections, only a guarantee lets you specify an absolute bandwidth value.
For most traffic on the Web (most TCP protocols), the WFQ (Weighted Fair Queuing, see Intelligent Queuing Engine) paradigm is sufficient. Packets reaching QoS are put in queues and forwarded according to the interface bandwidth and the priority of the matching rule.
Using this standard Policy, QoS avoids dropping packets. Dropped packets adversely affect TCP. Avoiding drops means holding (possibly) long queues, which can lead to non-negligible delays.
For some types of traffic, such as voice and video, bounding this delay is important. Long queues are inadequate for these types of traffic. Long queues can result in substantial delay. For most "delay sensitive" applications, it is not necessary to drop packets from queues to keep the queues short. The fact that the streams of these applications have a known, bounded bit rate can be utilized. If QoS is configured to forward as much traffic as the stream delivers, only a small number of packets are queued and delay is negligible.
QoS Low Latency Queuing makes it possible to define special Classes of Service for "delay sensitive" applications like voice and video. Rules below these classes can be used together with other rules in the QoS Policy Rule Base. Low Latency classes require you to specify the maximal delay that is tolerated and a Constant Bit Rate. QoS then guarantees that traffic matching rules of this type is forwarded within the limits of the bounded delay.
For each Low Latency class defined on an interface, a constant bit rate and maximal delay must be specified for active directions. QoS checks packets matched to Low Latency class rules to make sure they have not been delayed for longer than their maximal delay permits. If the maximal delay of a packet has been exceeded, it is dropped. Otherwise, it is transmitted at the defined constant bit rate for the Low Latency class to which it belongs.
If the Constant Bit Rate of the class is not smaller than the expected arrival rate of the matched traffic, packets are not dropped. The maximal delay must also exceed some minimum. For more, see Computing Maximal Delay.
When the arrival rate is higher than the specified Constant Bit Rate, packets exceeding this constant rate are dropped. This is to make sure that transmitted packets comply with the maximal delay limitations.
Note - The maximal delay set for a Low Latency class is an upper limit. Packets matching the class are always forwarded with a delay not greater, but often smaller, than specified.
In most cases, one Low Latency class is sufficient for all bounded delay traffic. In some cases, it might be necessary to define more than one Low Latency class. For this reason, Low Latency classes are assigned one out of five priority levels (not including the Expedited Forwarding class, see Low Latency versus DiffServ). These priority levels are relative to other Low Latency classes.
As a best practice, define more than one Low Latency class if different types of traffic require different maximal delays.
The class with the lower maximal delay must get a higher priority than the class with the higher delay. When two packets are ready to be forwarded, one for each Low Latency class, the packet from the higher priority class is forwarded first. The remaining packet (from the lower class) then encounters greater delay. The maximal delay that can be set for a Low Latency class depends on the Low Latency classes of higher priority.
Other Low Latency classes can affect the delay incurred by a class, and must be taken into consideration when determining the minimal delay that is possible for the class. This is best done by defining the classes in order of priority, from the highest priority downward: when you define class two, for example, class one must already be defined.
For more on the effects of class priority on calculating maximal delay, see: Computing Maximal Delay.
The system logs data for all aspects of LLQ.
For the inbound or outbound interface direction, the sum of the constant bit rates of all the Low Latency classes has a limit. This sum cannot exceed 20% of the total designated bandwidth rate. This 20% limit makes sure that "Best Effort" traffic does not suffer substantial jitter because of the existing Low Latency class(es).
To calculate the Constant Bit Rate of a Low Latency class, you must know the bit rate of one application stream in the traffic that matches the rules below the class.
The Constant Bit Rate of the class equals the bit rate of one application multiplied by the expected number of streams opened at the same time.
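A short illustration with assumed numbers (the per-stream rate is an example, not a recommendation): if one voice stream in the matched traffic consumes about 80 Kbps and you expect up to 10 concurrent streams, set the Constant Bit Rate of the class to 80 Kbps x 10 = 800 Kbps. On an interface direction with 10 Mbps of designated bandwidth, and if this is the only Low Latency class, this stays within the 20% limit (2 Mbps) on the sum of all Low Latency classes.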
If the number of streams is greater than the number you expected, the total incoming bit rate will exceed the Constant Bit Rate. Many drops will occur. To prevent drops, limit the number of concurrent streams. For more, see Ensuring that Constant Bit Rate is Not Exceeded (Preventing Unwanted Drops).
Note - Unlike bandwidth allocated by a Guarantee, the constant bit rate allocated to a Low Latency class on an interface in a given direction is not increased when more bandwidth is available.
To calculate the maximal delay of a Low Latency class, take into account the factors described below.
It is important not to define a maximal delay that is too small, which can result in unwanted drops. The delay value defined for a class determines the number of packets that can be queued in the Low Latency queue before drops occur. The smaller the delay, the shorter the queue. A maximal delay that is not sufficient can cause packets to be dropped before they are forwarded. Allow for some packets to be queued, as explained in the steps below.
Best Practice - Use the default Class Maximal Delay defined in the LLQ log. To obtain this default number:
You can also set the Class Maximal Delay by obtaining estimates for the upper and lower bounds. Set the delay to a value between the bounds.
For voice applications, the user generally starts to experience irregularities when the overall delay exceeds 150 ms.
The greatest delay that you can set is:
(i) the delay that the streaming application can tolerate, minus
(ii) the delay that the external network introduces.
This makes sure that the delay introduced by QoS plus the delay introduced by the external network is no more than the delay tolerated by the streaming application.
Note: Even if you set the Constant Bit Rate of the class to accommodate multiple simultaneous streams, do the next calculations with the rate of a single stream:
3 x packet size
---------------
    bit rate
This enables three packets to be held in the queue before drops can occur.
The bit rate must be the bit rate of one application, even if the Constant Bit Rate of the class is for multiple streams.
If you expect the application to generate bursts of packets, calculate:

(burst size + 1) x packet size
------------------------------
           bit rate
The maximal delay that you select for the class must be between the smallest delay (calculated above from the packet size and bit rate) and the greatest delay (calculated above from the delay that the application tolerates). Setting the maximal delay close to one of these values is not recommended. If you expect the application to burst occasionally, or if you don't know whether the application generates bursts at all, set the maximal delay close to the value of the greatest delay.
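To illustrate the bounds with assumed numbers: for a stream of 80 Kbps with 200-byte (1600-bit) packets, the smallest delay is 3 x 1600 / 80000 = 60 ms. If the application tolerates 150 ms and the external network introduces 50 ms, the greatest delay is 150 - 50 = 100 ms, so a maximal delay of about 80 ms is a reasonable choice for this class (closer to 100 ms if you expect bursts).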
This error message can show after you enter the maximal delay: "The inbound/outbound maximal delay of class... must be greater than... milliseconds." The message shows if the Class of Service that you define is not of the first priority (see Low Latency Class Priorities). The delay value displayed in the error message depends on the Low Latency classes of higher priority, and on the interface speed.
Set the maximal delay to a value no smaller than the one printed in the error message.
If the total bit rate going through the Low Latency class exceeds the Constant Bit Rate of the class, then drops occur. (See: Logging LLQ Information.)
This occurs when the number of streams opened exceeds the number you expected when you set the Constant Bit Rate.
To limit the number of streams opened through a Low Latency Class:
Do not select the Accept additional non-guaranteed connections option.
The number of connections is limited to the number you used to calculate the Constant Bit Rate of the class.
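For example, continuing the assumed numbers above: if the Constant Bit Rate was calculated for 10 concurrent streams, the number of guaranteed connections is 10. With Accept additional non-guaranteed connections cleared, an eleventh stream is rejected (and logged as a Connection Reject) instead of causing drops for the streams already in the class.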
To activate a Low Latency class, define at least one rule below it in the QoS Policy Rule Base. Traffic matching a Low Latency class rule receives the delay and Constant Bit Rate properties defined for the specified class. The traffic is handled according to the rule properties (weight, guarantee and limit).
You can use all types of properties in the rules below the Low Latency class:
Think of the Low Latency class with its rules as a separate network interface:
If a rule has a relatively low priority, then packets matching it are entitled to a small part of the Constant Bit Rate. More packets will be dropped if the incoming rate is not sufficiently small.
Note:
Use Low Latency Queuing when:
The large delay makes sure that packets are not dropped if a burst exceeds the Constant Bit Rate. The packets are queued and forwarded according to the Constant Bit Rate.
Note - When the incoming stream is smaller than the Constant Bit Rate, the actual delay is much smaller than 99999 ms. (As in the example above). Packets are forwarded almost as soon as they arrive. The 99999 ms bound is effective only for large bursts.
Do not use a Low Latency Class when controlling delay is not of primary importance. For most TCP protocols (such as HTTP, FTP and SMTP) the other type of QoS rule is more applicable. Use Weights, Limits and Guarantees. The correct priority is imposed on traffic without having to adjust bit rate and delay.
QoS enforces the policy with minimal drops. Weights and guarantees dynamically fill the pipe when expected traffic is not present. Low Latency Queuing limits traffic according to the Constant Bit Rate.
Low Latency classes are different from DiffServ classes in that they do not receive type of service (TOS) markings. Not all packets are marked as Low Latency. Preferential treatment is guaranteed only while the packets are passing through the QoS Security Gateway.
The exception to this rule is the Expedited Forwarding DiffServ class. A DiffServ class defined as an Expedited Forwarding class automatically becomes a Low Latency class of highest priority. Such a class receives the conditions afforded it by its DiffServ marking both in QoS and on the network.
Note: To use the Expedited Forwarding class as DiffServ only, without delay being enforced, specify a Maximal Delay value of 99999 in the Interface Properties tab (see Low Latency Classes).
Do not use Low Latency Queuing to delay traffic when your ISP:
Despite the DiffServ marking that you apply, the IP packets might get a different QoS level from the ISP.
DiffServ markings communicate to your ISP the Class of Service that you expect all packets to receive.
For these two cases, mark your traffic using a DiffServ class (see When to Use Low Latency Queuing):
DiffServ is an architecture for giving different types or levels of service for network traffic.
When on the enterprise network, packets are marked in the IP header TOS byte as belonging to some Class of Service (QoS Class). When outside on the public network, these packets are granted priority according to their class.
DiffServ markings have meaning on the public network, not on the enterprise network. Good implementation of DiffServ requires that packet markings be recognized on all public network segments.
When DiffServ markings are used for IPSec packets, the DiffServ mark can be copied between headers by setting these properties in $FWDIR/conf/objects_5_0.c:
:ipsec.copy_TOS_to_inner
The DiffServ mark is copied from the IPSec header to the IP header of the packet after decapsulation/decryption.
:ipsec.copy_TOS_to_outer
The DiffServ mark is copied from the packet's IP header to the IPSec header of the encrypted packet after encapsulation.
The default settings are:
:ipsec.copy_TOS_to_inner (false)
:ipsec.copy_TOS_to_outer (true)
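For example, to have the DiffServ mark copied in both directions, the entries would read as below (a sketch only; back up $FWDIR/conf/objects_5_0.c before you edit it, and install the policy again so the change reaches the Security Gateways):

:ipsec.copy_TOS_to_inner (true)
:ipsec.copy_TOS_to_outer (true)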
Just like QoS Policy Rules, a DiffServ rule specifies not only a QoS Class, but also a weight. These weights are enforced only on the interfaces on which the rules of this class are installed.
For example, if a DiffServ rule specifies a weight of 50 for FTP connections, that rule is installed only on the interfaces for which the QoS Class is defined. On other interfaces, the rule is not installed. FTP connections routed through the other interfaces do not get the weight specified by the rule. To specify a weight for all FTP connections, add a rule below "Best Effort."
DiffServ rules can be installed only on interfaces for which the related QoS Class has been defined. QoS class is defined on the QoS tab of the Interface Properties window. For more, see: Define the QoS Properties for the Interfaces.
"Best Effort" rules (that is, non-DiffServ rules) can be installed on all interfaces of gateways with QoS gateways installed. Only rules installed on the same interface interact with each other.
Note: