Configuring Netflow Export - CLI (netflow)

To add a collector:

add netflow collector ip VALUE port VALUE [srcaddr VALUE] [export-format VALUE]
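
For example, to send NetFlow version 9 records to a collector (the address and port here are hypothetical; 2055 is commonly used for NetFlow but is not a standard):

add netflow collector ip 192.0.2.10 port 2055 export-format 9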

To delete a collector:

delete netflow collector [for-ip VALUE [for-port VALUE]] 
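
For example, to delete the collector at the hypothetical address and port used above:

delete netflow collector for-ip 192.0.2.10 for-port 2055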

To change settings of a collector:

set netflow collector [for-ip VALUE [for-port VALUE]]
   export-format VALUE
   srcaddr VALUE
 
set netflow collector [for-ip VALUE]
   port VALUE
 
set netflow collector
   ip VALUE
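
For example, to switch the only configured collector to export format 5, or to do the same for one of several collectors (hypothetical values):

set netflow collector export-format 5
set netflow collector for-ip 192.0.2.10 for-port 2055 export-format 5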

ip VALUE
   The IPv4 address to which NetFlow packets are sent. This parameter is mandatory.

port VALUE
   The UDP port number on which the collector listens. This parameter is mandatory. There is no default or standard port number for NetFlow.

srcaddr VALUE
   Optional. The source IPv4 address of the NetFlow packets. This must be an IP address of the local host. The default, which is recommended, is an IP address of the network interface through which the NetFlow traffic is sent.

export-format VALUE
   The NetFlow protocol version to send: 5 or 9. Each version has a different packet format. The default is 9.

for-ip VALUE
for-port VALUE
   The for-ip and for-port parameters specify the collector that the command operates on. If you have only one collector configured, you do not need these parameters. If you have two or three collectors with different IP addresses, use for-ip. If you have two or three collectors with the same IP address and different UDP ports, you must use both for-ip and for-port to identify the one you want to work on.
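
For example, if two collectors share the hypothetical IP address 192.0.2.10 and listen on ports 2055 and 9996, select one of them with both parameters:

set netflow collector for-ip 192.0.2.10 for-port 9996 srcaddr 198.51.100.1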

Monitoring NetFlow Configuration

To see NetFlow configurations:

show netflow all
show netflow collector [for-ip VALUE [for-port VALUE]]
show netflow collector [for-ip VALUE [for-port VALUE]]
   export-format
   srcaddr
show netflow collector [for-ip VALUE] port
show netflow collector ip
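
For example, to show all NetFlow settings, or only the export format of one collector (hypothetical values):

show netflow all
show netflow collector for-ip 192.0.2.10 for-port 2055 export-format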

Performance Optimization

Use Performance Optimization to get the best results in performance tests on multi-core appliances and open servers. It uses the CoreXL, Performance Pack, and Multi-Queue technologies.

How is performance measured?

There are different ways of measuring performance:

  • Packet Rate – How many packets the gateway can forward per second. Usually measured with 64-byte UDP packets.
  • Throughput – How many bits the gateway can forward per second. Measured with large packets, usually 1518 bytes.
  • TCP Session Rate – How many TCP connections can be opened and closed per second.
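
For example (illustrative numbers only): a gateway forwarding 1,000,000 packets per second carries about 512 Mbit/s with 64-byte packets (1,000,000 x 64 x 8 bits), but about 12.1 Gbit/s with 1518-byte packets. This is why packet rate is measured with small packets and throughput with large ones.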

Performance Optimization Terms and Concepts

SecureXL - A Check Point patented open interface that offloads security processing to optimized hardware or software processing units. Makes it possible to get multi-gigabit Firewall and VPN performance on Security Gateways.

Performance Pack – A Check Point software product that uses SecureXL technology to increase the speed of IPv6 and IPv4 traffic. It is installed on a gateway, and gives significant performance improvements for Security Gateways.

Connection Templates - A mechanism used by SecureXL acceleration devices to improve session rates by opening connections more quickly. When a connection is opened, the Firewall offloads a template for that connection type to the acceleration device. The template increases the throughput of connections between the same IP addresses, with the same destination port, protocol, and interfaces, starting from the first packet.

CoreXL - A Check Point performance-enhancing technology for Security Gateways on multi-core (CPU) processing platforms. It enhances performance by letting the processing cores do multiple tasks at the same time. It provides almost linear scalability of performance for each processing core.

Multi-Queue

Multi-Queue improves the performance of SecureXL acceleration on multi-core Security Gateways. Traffic entering a network interface card (NIC) queue is either:

  • Accelerated by Performance Pack, or
  • Directed to a CoreXL core that processes traffic that is not accelerated, because it must be inspected by Software Blades.

By default, each network interface has one traffic queue that is handled by one CPU at a time.

Multi-Queue lets you configure more than one traffic queue for each network interface. This means more than one CPU can be used for acceleration.

Configuring Performance Optimization - WebUI

This page is shown in the WebUI for R76 and higher appliances and open servers with:

  • More than 6 cores (CPUs), and
  • Interfaces that support Multi-Queue: igb (1 Gigabit/s) and ixgbe (10 Gigabit/s) interfaces.

To configure Performance Optimization

  1. Choose one of these options:
    • Optimize for Software Blades - Best Software Blades performance. Most cores are assigned to CoreXL instances. Select if you enabled more blades than the Firewall Blade and the VPN Blade.
    • Optimize for Session Rate - Best session rate for template connections. Up to 4 cores are assigned to Performance Pack. Recommended Multi-Queue interface configuration is applied.
    • Optimize for Packet Rate and Throughput - Best small or large packet accelerated throughput. Up to 6 cores are assigned to Performance Pack. Recommended Multi-Queue Interface configuration is applied.
    • Custom - Assign cores to Performance Pack and CoreXL using the Core Split slider. This is the equivalent of the Configure Check Point CoreXL option in cpconfig and the cpmq configuration utility.
  2. Click Apply
  3. Reboot.

Core Split

Shows how the cores on the Security Gateway are used for each Performance Optimization option.

Multi-Queue

Note - You cannot configure Multi-Queue if you select Optimize for Software Blades.

  1. Select a Performance Optimization option.

    In the Multi-Queue section of the page:

    • Interfaces that support Multi-Queue are shown. Other interfaces (Management, Synchronization, and on-board interfaces) do not show.
    • Interfaces that are recommended for enabling Multi-Queue are selected. These are interfaces that are Up and have a link.
  2. To change the settings, select or clear interfaces.
  3. Click Apply
  4. Reboot.

To see the association of interfaces to cores, run the command:

  • sim affinity -l for interfaces that are not configured with Multi-Queue.
  • cpmq get -v for interfaces that are configured with Multi-Queue.

To learn about CoreXL and Multi-Queue, see the R76 Performance Optimization Guide.

Configuring Performance Optimization - CLI (cpconfig)

To configure CoreXL for performance optimization:

  1. Run cpconfig
  2. Select
    (10) Configure Check Point CoreXL

You can see the total number of CPUs (cores) and edit the number of cores that run firewall instances.

The number of cores used by Performance Pack = The number of CPUs - The number of firewall instances.
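
For example (hypothetical core counts): on a 12-core appliance with 8 firewall instances enabled, 12 - 8 = 4 cores are available for Performance Pack.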

Note - In the WebUI, this is equivalent to the Performance Optimization option Custom.

To configure Multi-Queue for performance optimization, use the cpmq configuration utility.

To see the association of interfaces to cores, run the command sim affinity -l.

To learn about CoreXL and Multi-Queue, see the R76 Performance Optimization Guide.

 