System Administration

The following tasks help you maintain your SmartEvent system. These tasks are performed from the Policy tab.

Modifications to the Event Policy do not take effect until saved on the SmartEvent server and installed to the Correlation Units.

To enable changes made to the Event Policy, proceed as follows:

  1. Select File > Save.
  2. Select Actions > Install Event Policy.

Changes made to the Event Policy can be undone as long as they have not yet been saved. To undo changes made to the policy, select File > Revert Changes.

Related Topics

Modifying the System General Settings

Managing the Event Database

SmartEvent High Availability Environment

Third-Party Device Support

Modifying the System General Settings

The following tasks help you maintain your SmartEvent system. These tasks are performed from the Policy tab. The Policy tab is hidden by default; to show it, select Policy Tab from the View menu.

Adding Network and Host Objects

Certain objects from the Management server are added during the initial sync with the SmartEvent server and updated at a set interval. However, it may be necessary or useful to add other Network or Host objects, for the following reasons:

  • If you have devices or networks not represented on the Management server that are important for the purpose of defining your internal network
  • When adding sources or destinations to exclusions or exceptions in Event Definitions
  • When selecting sources or destinations in a filter

The following screens are locked until initial sync is complete:

  • Network Objects
  • Internal Network
  • Correlation Units

To make these devices available for use in SmartEvent, proceed as follows:

For a Host object:

  1. From the Policy tab, select General Settings > Objects > Network Objects > Add > Host.
  2. Give the device a significant Name.
  3. Enter its IP Address or select Get Address.
  4. Select OK.

For a Network object:

  1. From the Policy tab, select General Settings > Objects > Network Objects > Add > Network.
  2. Give the network a significant Name.
  3. Enter the Network Address and Net Mask.
  4. Select OK.

See Defining the Internal Network for information on adding objects to the Internal Network definition.

Defining Correlation Units and Log Servers

The SmartEvent system works with correlation units that compile event information from log servers. Additional Correlation Units and their corresponding Log servers should be configured during the initial system setup.

To define Correlation Units or Log servers in SmartEvent:

  1. From the Policy tab, select General Settings > Initial Settings > Correlation Units.
  2. Select Add.
  3. Select the […] symbol and select a Correlation Unit from the pop-up window.
  4. Select OK.
  5. Select Add and select a Log server available to the Correlation Unit from the pop-up window.
  6. Select Save.
  7. From the Actions menu, select Install Event Policy.

    Note - The following screens are locked until sync is complete:

    • Network Objects
    • Internal Network
    • Correlation Units

To define Correlation Units in SmartEvent Intro:

  • In a Security Management Server environment: correlation is defined automatically.
  • In a Multi-Domain Security Management environment: perform the procedure above on the Multi-Domain Server.

Defining the Internal Network

To help SmartEvent determine whether events have originated internally or externally, the Internal Network must be defined. The direction is calculated as follows:

  1. Incoming – all the sources are outside the network and all destinations are inside
  2. Outgoing – all sources are inside the network and all destinations are outside
  3. Internal – sources and destinations are all inside the network
  4. Other – a mixture of internal and external values makes the result indeterminate
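The four direction cases can be sketched in Python (illustrative only; the internal ranges shown are hypothetical examples, and SmartEvent performs this calculation internally):

```python
import ipaddress

# Hypothetical Internal Network definition, for illustration only
INTERNAL = [ipaddress.ip_network("10.0.0.0/8"),
            ipaddress.ip_network("192.168.0.0/16")]

def is_internal(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in INTERNAL)

def direction(sources, destinations):
    src_in = [is_internal(s) for s in sources]
    dst_in = [is_internal(d) for d in destinations]
    if not any(src_in) and all(dst_in):
        return "Incoming"   # all sources outside, all destinations inside
    if all(src_in) and not any(dst_in):
        return "Outgoing"   # all sources inside, all destinations outside
    if all(src_in) and all(dst_in):
        return "Internal"   # sources and destinations all inside
    return "Other"          # mixed values: indeterminate

print(direction(["8.8.8.8"], ["10.1.2.3"]))  # Incoming
```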

To define the Internal Network:

  1. From the Policy tab, select General Settings > Initial Settings > Internal Network.
  2. Add internal objects.

Note - It is recommended to add all internal Network objects, rather than individual Host objects.

Certain network objects are copied from the Management server to the SmartEvent server during the initial sync and updated afterwards periodically.

The following screens are locked until initial sync is complete:

  • Network Objects
  • Internal Network
  • Correlation Units

Offline Log Files

SmartEvent enables an administrator to view existing logs from a previously generated log file. This feature is designed to enable an administrator to review security threats and pattern anomalies that appeared in the past. As a result, an administrator can investigate threats (for example, unauthorized scans targeting vulnerable hosts, unauthorized logins, denial of service attacks, network anomalies, and other host-based activity) that occurred before SmartEvent was installed.

Similarly, an administrator can review logs from a specific time period in the past and focus resources on threats that have been active for a period of time but may have been missed (for example, events whose definitions have since been dynamically updated can now be processed over the previous period).

Offline log processing is configured in the SmartEvent > Policy tab > General Settings > Initial Settings > Offline Jobs, while connected to the Security Management Server or Multi-Domain Server, with the following options:

  • Add enables you to configure an Offline Log File process.
    • Name acts as a label that identifies the specific Offline log file for future processing. For example, you can create a query according to the Offline Job name. This name is used in Events tab queries to search for events generated by this job.
    • Comment contains a description of the Offline Job.
    • Offline Job Parameters:

      Correlation Unit - the machine that reads and processes the Offline Logs.

      Log Server - the machine that contains the Offline Log files. SmartEvent queries this Log server to see which log files are available.

      Log File - the list of log files found on the selected Log server that are available to be processed by the Correlation Unit. In this window you select the log file from which to retrieve historical information.

  • Edit enables you to modify the parameters of an Offline Log File process.
  • Remove enables you to delete an Offline Log File process.

    Once you Start an Offline Log File process, you cannot remove it.

  • Start runs the Offline Log File process.

    The results of this process appear in the Events tab and are accessible by the By Job Name query or filter.

  • Stop ends the Offline Log Files process.
  • Stop does not delete the process; it only stops it at the point at which Stop is selected. The information collected up until the process is stopped appears in the Events tab.

In the SmartEvent Events tab, you can query events generated by offline jobs. To do this, perform the following:

  1. Select the Events Tab.
  2. Go to Predefined > By Job Name.
  3. Double-click By Job Name.

    Every job that appears in this window is an offline job except for All online jobs.

  4. Select the job you want the By Job Name to query.
  5. Click OK.

Configuring Custom Commands

To add (or edit) custom commands:

  1. Select Actions > Configure Custom Commands.
  2. To add a command, select Add…. (To edit an existing command, highlight the command and select Edit.)
  3. Enter the text to appear in the right-click context menu.
  4. Enter the command to run, and any arguments.
  5. Configure the command to run in a SmartEvent window or in a separate Windows command window.
  6. Select whether the command should appear in the context menu only when right-clicking in cells with IP address data.
  7. Select OK.

Creating an External Script

An external script can be written to receive an Event Definition via standard input. The format of the event content is a name-value set – a structured set of fields that have the form:

(name: value ;* );

where name is a string and value is either free text until a semicolon, or a nested name-value set. The script will be reported as successful if it completes within 10 minutes and its exit status is zero.

The following is a sample event as it is received by an external script:

(Name: Check Point administrator credential guessing; RuleID: {F182D6BC-A0AA-444a-9F31-C0C22ACA2114}; Uuid: <42135c9c,00000000,2e1510ac,131c07b6>; NumOfUpdates: 0; IsLast: 0; StartTime: 16Feb2005 16:45:45; EndTime: Not Completed; DetectionTime: 16Feb2005 16:45:48; LastUpdateTime: 0; TimeInterval: 600; MaxNumOfConnections: 3; TotalNumOfConnections: 3; DetectedBy: 2886735150; Origin: (IP: 1.2.3.4; repetitions: 3; countryname: United States; hostname: theHost) ; ProductName: SmartDashboard; User: XYZ; Source: (hostname: theHost; repetitions: 3; IP: 1.2.3.4; countryname: United States) ; Severity: Critical; EventNumber: EN00000184; State: 0; NumOfRejectedConnections: 0; NumOfAcceptedConnections: 0) ;
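An external script can consume this format with a small parser. The following Python sketch (not a Check Point tool) handles the basic structure, including nested sets; a real script would read the event text with sys.stdin.read() and exit with status zero on success:

```python
def parse_set(s, i=0):
    """Parse one '(name: value ;* )' set starting at s[i] == '('.
    Returns (dict, index_after_set). Nested sets become nested dicts.
    A simplified sketch of the documented format, not a full grammar."""
    i += 1  # skip '('
    fields = {}
    while True:
        while s[i] in " \t\r\n":
            i += 1
        if s[i] == ")":
            return fields, i + 1
        colon = s.index(":", i)
        name = s[i:colon].strip()
        i = colon + 1
        while s[i] == " ":
            i += 1
        if s[i] == "(":
            value, i = parse_set(s, i)   # nested name-value set
        else:
            j = i
            while s[j] not in ";)":      # free text runs until ';' or the closing ')'
                j += 1
            value, i = s[i:j].strip(), j
        fields[name] = value
        while i < len(s) and s[i] in " ;":
            i += 1

event, _ = parse_set("(Name: test; Origin: (IP: 1.2.3.4; repetitions: 3) ; State: 0) ;")
```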

To add an External Script, proceed as follows:

  1. From the Policy tab, select General Settings > Initial Settings > Automatic Reactions > Add > External Script.
  2. Give the script a name.
  3. In the field Action, enter the name of the file containing the script. The script must be placed in the directory $RTDIR/bin/ext_commands, and must have execute privileges.

Managing the Event Database

SmartEvent uses an optimization algorithm to manage disk space and other system resources. When the SmartEvent database becomes too large, the oldest events are automatically deleted to save space. In addition, events that are more than one year old are also automatically deleted.

For instructions on changing the maximum retention period and maximum database size for past events in the SmartEvent database, see sk69706.

Backup and Restore of the Database

The evs_backup utility backs up the SmartEvent configuration files and places them in a compressed tar file. In addition, it backs up data files according to the options selected. The files can be restored with the evs_backup_extractor script. Two versions of the script are provided: one for Windows with a .bat suffix, and one for Solaris, Linux, and SecurePlatform without a suffix, which must have execute permissions set.

Usage:

evs_backup [-filename file.tgz] [-EvaDb] [-EvrDb] [-Results] [-Logs] [-LogoAndScripts] [-All] [-export] 

The options are:

  • -EvaDb - Copy the SmartEvent events database
  • -EvrDb - Copy the SmartReporter consolidation database
  • -Results - Copy the SmartReporter results
  • -Logs - Copy the SmartEvent error logs
  • -LogoAndScripts - Copy the logo file and the distribution script
  • -export - Run evr_addon_export; to use a different file name, use -filename
  • -All - Select all options

SmartEvent High Availability Environment

The SmartEvent database keeps a synchronized copy of management objects locally on the SmartEvent server. This process, dbsync, allows SmartEvent to work independently of different management versions and different management servers in a High Availability environment.

Management High Availability capability exists for Security Management Servers, and in a Multi-Domain Security Management environment, dbsync supports High Availability for the Multi-Domain Servers and the Domain Management Servers.

How it works

Dbsync initially connects to the management server with which SIC is established. It retrieves all the objects and after the initial synchronization it gets updates whenever an object is saved. At this point, dbsync registers all the High Availability management machines and periodically tests the connectivity with the current management server. If connectivity is lost, it attempts to connect to the other High Availability management servers until it finds an active one and connects to it.

If two management servers are active concurrently, dbsync will remain connected to one management server and will not receive any changes made on the other management server until a synchronization operation is performed.

Log Server High Availability

In SmartDashboard, you can configure a Security Gateway so that when it fails to send its logs to one Log server, it sends them to a secondary Log server. To support this configuration, add both Log servers to a single Correlation Unit. In this way, the Correlation Unit receives an uninterrupted stream of logs from both servers and continues to correlate all Firewall logs.

Correlation Unit High Availability

Multiple Correlation Units can read logs from the same Log servers and in this way provide redundancy in case one of them fails. The events that the Correlation Units detect will be duplicated in the SmartEvent database; however, these events can be disambiguated by filtering with the Detected By field in the Event Query definition. The Detected By field specifies which Correlation Unit detected the event.
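Conceptually, the Detected By filter reduces the duplicated events to a single Correlation Unit's copy. A minimal Python sketch, with invented event records:

```python
# Hypothetical event records as produced by two redundant Correlation Units
events = [
    {"EventNumber": "EN00000184", "DetectedBy": "cu-1"},
    {"EventNumber": "EN00000184", "DetectedBy": "cu-2"},  # duplicate detection
    {"EventNumber": "EN00000185", "DetectedBy": "cu-1"},
]

def detected_by(events, unit):
    # Keep only the events detected by one Correlation Unit, as an
    # Event Query with a Detected By filter would
    return [e for e in events if e["DetectedBy"] == unit]
```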

If the SmartEvent server becomes unavailable, the Correlation Units retain the events until they can reconnect to the SmartEvent server, and then forward the events.

Third-Party Device Support

New Device Support

Adding support for a log-generating device (e.g., router, Firewall, IDS, anti-virus, OS) to SmartEvent involves one or both of the following:

  • Adding the data necessary to translate the device logs to a format that a Check Point Log server can read. This translation is called parsing, and it involves extracting the relevant log fields from the log data to create a normalized Check Point log available for further analysis.
  • Adding the device logs to Event Definitions.

SmartEvent currently supports the following log formats:

  • Check Point / OPSEC ELA
  • Microsoft Windows Events
  • Syslog messages
  • SNMP traps

Devices using Check Point, ELA, or Windows Events do not require special parsing configuration. If you are adding a device using one of these formats, skip to the section Adding New Devices to Event Definitions. For details on support for Windows logs, see the section Windows Events.

Devices using the syslog or SNMP format require parsing configuration. Continue to Planning and Considerations and the parsing section relevant for your device.

Parsing Log Files

Planning and Considerations

  1. Learn the exact structure of the logs the device generates, using the following:
    1. The vendor logging guide (if it exists), or any other documentation that specifies the different logs the device can generate and their exact structure. Documentation is important to verify that you have found all possible logs and is usually enough to start writing the parsing file.
    2. Log samples, as many as possible. It is recommended to use real logs generated from the actual devices to be used with SmartEvent. Samples are important for testing the parsing file and tuning it accordingly.
  2. Consult the Syslog Parsing guide to become familiar with the Free Text Parsing Language. The document also specifies the relevant parsing files and their location on the Log server.
  3. Decide which fields to extract from the log. While the fields you want to extract differ from one device to another, devices of the same category would usually have similar log fields. For example:

    Device Type - Typical Log Fields

    Firewall, router, and other devices that send connection-based logs: source IP address, destination IP address, source port, destination port, protocol, accept/reject indication

    IDS/IPS, application Firewall, and other devices that send attack logs: attack name/ID

  4. It may also be useful to compare existing parsing files of another similar product.
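For example, a connection-based log line can be reduced to the typical fields above with a regular expression. This Python sketch is illustrative only; the log format and field names are invented for the example:

```python
import re

# Hypothetical connection log line (format invented for illustration)
LOG = "Oct 12 14:03:11 fw1 conn: accept tcp 10.1.2.3:4413 -> 192.0.2.7:443"

PATTERN = re.compile(
    r"(?P<action>accept|reject)\s+(?P<proto>\w+)\s+"
    r"(?P<src>[\d.]+):(?P<sport>\d+)\s+->\s+"
    r"(?P<dst>[\d.]+):(?P<dport>\d+)"
)

def extract_fields(line):
    # Returns the extracted fields as a dict, or None if the line
    # does not match the expected format
    m = PATTERN.search(line)
    return m.groupdict() if m else None

print(extract_fields(LOG))
```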

Syslog Parsing

To parse a syslog file:

  1. Create a new parsing file called <device product name>.C as specified in the parsing guide, and place it in the directory $FWDIR/conf/syslog/UserDefined on the Log server.
  2. On the Log server, edit the file $FWDIR/conf/syslog/UserDefined/UserDefinedSyslogDevices.C to add a line that includes the new parsing file. For example:

    : (
    :command (
    :cmd_name (include)
    :file_name ("snortPolicy.C")
    )
    )

  3. If needed, create a new dictionary file called <device product name>_dict.ini, and place it in the directory $FWDIR/conf/syslog/UserDefined on the Log server. A dictionary translates values with the same meaning from logs from different devices into a common value. This common value is then used in the Event Definitions.
  4. If you have added a new dictionary file, edit the file $FWDIR/conf/syslog/UserDefined/UserDefinedSyslogDictionaries.C on the Log server and add a line to include the dictionary file. For example:

    :filename ("snort_dict.ini")
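Conceptually, a dictionary is just a value-translation table. In Python terms (illustrative only; the actual dictionaries are <device>_dict.ini files on the Log server, and the values here are invented):

```python
# Illustrative value translations: device-specific action strings
# mapped to a common value used in Event Definitions
ACTION_DICT = {
    "permitted": "accept",   # hypothetical device-specific values
    "denied": "reject",
    "dropped": "reject",
}

def translate(field_value, dictionary):
    # Fall back to the original value when no translation exists
    return dictionary.get(field_value.lower(), field_value)
```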

To test the parsing, send syslog samples to a Check Point Log server:

  1. Configure the Log server to accept syslogs by doing one of the following:
    • Using SmartDashboard, connect to the Security Management Server and edit the SmartEvent server network object: Go to Logs and Masters > Additional Logging Configuration and enable the property Accept Syslog messages.
    • On the Log server, run syslog -r to register the syslog daemon.
  2. After making any change in the parsing file, restart the fwd process on the Log server (either run cpstop and cpstart, or run fw kill fwd followed by fwd -n).
  3. Send syslogs from the device itself, or from a syslog generator, such as Kiwi Syslog Message Generator, available at http://www.kiwisyslog.com/software_downloads.htm#sysloggen, or Adiscon logger, available at http://www.monitorware.com/logger/.
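If no device or generator is at hand, a test syslog message can also be sent with a few lines of Python. This sketch assumes the Log server listens on the standard UDP syslog port 514:

```python
import socket

def send_syslog(message, host="127.0.0.1", port=514, pri=134):
    # PRI 134 = facility local0 (16 * 8) + severity informational (6)
    packet = "<%d>%s" % (pri, message)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(packet.encode("utf-8"), (host, port))
    sock.close()
    return packet

send_syslog("Oct 12 14:03:11 fw1 test: sample message")
```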

Troubleshooting:

If SmartView Tracker does not display the logs as expected, there may be specific problems with the parsing files:

  • If there is a syntax error in the parsing files, an error message will report a failure to read the parsing files. To see the specific error message, set the environment variable TDERROR_ALL_FTPARSER to 5 before running fwd -n.
  • If the syslogs are displayed in SmartView Tracker with 'Product syslog', this means the log was not parsed properly, and was parsed as a general syslog.
  • If the Product field contains another product (not the one you have just added) this means there is a problem with the other product parsing file. Report this to the Check Point SmartEvent team.
  • If the product is reporting correctly in the log, look for all the fields you have extracted. Some of them will be in the Information section. Some fields may only be visible when selecting More Columns.

SNMP Parsing

  1. Create a new parsing file called <device product name>.C as specified in the Syslog Parsing guide, and place it in the directory $FWDIR/conf/snmpTrap/UserDefined on the Log server. In the file, use a switch command on the snmp_trap_to_cp_log_param_id field, so that each case contains the OID for a specific log field (OID information may be extracted from the device MIB files, if available).

    To view an example, see the file $FWDIR/conf/snmpTrap/CPdefined/realSecure.C.

  2. Edit the file $FWDIR/conf/snmpTrap/UserDefined/UserDefinedSnmpDevices.C to add lines that include the new parsing file. The value of the case attribute should be the appropriate OID for the product. Note that the product OID should contain exactly seven numeric values, separated by decimal points. For example:

    : (
    :case ("1.3.6.1.4.1.2499")
    :command (
    :cmd_name (include)
    :file_name ("realSecure.C")
    )
    )

  3. To test the parsing, send SNMP trap samples to a Check Point Log server:
    1. Configure the Log server to accept SNMP traps, as follows:
      1. On the Log server, run the command snmpTrapToCPLog -r to register the SNMP trap daemon.
      2. On the Log server, run the command snmpTrapToCPLog -a [ip_addr] to add the SNMP trap sender.
    2. Restart the snmpTrapToCPLog process on the Log server after any change in the parsing file (run cpstop and cpstart, or terminate the snmpTrapToCPLog process and run snmpTrapToCPLog again from the command line).
    3. Send SNMP traps from the device itself, or from an SNMP trap generator such as NuDesign Trap Sender (available at http://www.nudesignteam.com).
  4. If SmartView Tracker does not display the logs as expected, there may be specific problems with your parsing files:
    1. If there is a syntax error in the parsing files, an error message will report a failure to read the parsing files. To see the specific error message, set the environment variable TDERROR_ALL_SNMP to 5 before running snmpTrapToCPLog.
    2. If the SNMP traps are displayed in SmartView Tracker with 'Product snmp Trap', this means the log was not parsed properly, and was parsed as a general SNMP trap.
    3. If the Product field contains another product (not the one you have just added) this means there is a problem with the other product parsing file. Report this to the Check Point SmartEvent team.
    4. If the product is reporting correctly in the log, look for all the fields you have extracted. Some of them will be in the Information section. Some fields may only be visible when selecting More Columns.

Adding New Devices to Event Definitions

After creating the appropriate parsing file for the new product, the next step is to include the product in the SmartEvent Event Policy by adding it to the Product filters of new and existing events. This involves making changes to the SmartEvent server database. Some of the changes are accomplished using SmartEvent client, while others require using a CPMI client (such as GuiDBedit or dbedit, or a specific client you can write for your own use).

Note - Manually editing the files in $FWDIR/conf is not recommended and should be done with extreme care.

Step 1: Create an object to represent the new device in one of the following ways:

  1. Using the SmartEvent client:
    1. Right click any of the Event Definitions on the Policy tab and select Properties > Filter tab.
    2. From the Product list section, select Add > Add Product.
    3. Enter the product name as it appears in the Product field of the log.
    4. Select OK.
    5. Select OK again.
    6. Select Cancel to exit the dialog.
  2. Using another CPMI client:
    1. Enter the class name: eventia_product_object and the table: eventia_products
    2. Set the name and the product_displayed_name & product_name fields, for example:
: (Snort_IDS
		:product_displayed_name ("Snort IDS")
		:product_name ("Snort IDS")
)

The resulting object is added to the file $FWDIR/conf/sem_products.C.

Step 2: Add the device to the relevant Event Definitions:

For example, if this is an IDS/IPS reporting a 'Ping of Death' attack, use the Event Definition Wizard to add a filter for the new product in the 'Ping of Death' Event Definition. You may also add existing or new fields to the product filter by selecting the property Show more fields.

  1. Note that Event Definitions cannot be modified, so adding a new filter requires doing one of the following:
    • Saving the relevant Event Definitions as User Defined Events.
    • Overriding this restriction by making a change to the file $FWDIR/conf/sem_detection_policies.C. Use an editor to open the file, search for the line abacus_detection_policy_object, and set the value :user_defined to false.
  2. Create new Event Definitions where needed if the requested event is not covered by existing Event Definitions. As in step 2, this is accomplished via the Event Definition Wizard. New Event Definitions appear in the User Defined Events section of the Event Policy tree.

    To move the Event Definition to another section of the tree, do the following:

    1. Use a CPMI client to edit the abacus_detection_policy_object in the table abacus_detection_policies.
    2. Edit the category field.
    3. To verify that the change has been made, view the object abacus_detection_policy_object in the file $FWDIR/conf/sem_detection_policies.C.
  3. Consider adding a generic event for the new product (as in the Third Party Devices - User Configured Events section of the Event Policy tree).
    1. Create a new Event Definition based on the new product using the Event Definition Wizard.
    2. Use a CPMI client to edit the abacus_detection_policy_object in the table abacus_detection_policies.
    3. Set the property :create_exception_only for this event to true.
    4. Modify the values of the following fields as desired:
      • exception_rule_static_string_1
      • exception_rule_static_string_2
      • exception_rule_static_string_3
      • static_description_string
      • exception_list_def
      • exception_columns

To test the changes in the Event Definition:

  1. Copy the modified files to the directory $FWDIR/conf on the SmartEvent Server.
  2. Run cpstop & cpstart on the SmartEvent server.
  3. Close and reopen the SmartEvent client.
  4. Assuming the Event Definitions are configured as expected, install Event Policy.
  5. Send logs as described in the testing for parsing above, and see the generated events.

Syslog Parsing

Various third-party devices use the syslog format for logging. SmartEvent can parse and process third-party syslog messages by reformatting the raw data. This parsing process extracts relevant log fields from the syslog data and creates a normalized Check Point log which is available for further analysis.

Warning - Manual modifications to out-of-the-box parsing files will not be preserved automatically during the upgrade process. Mark your modifications with comments so you can remember what changed.

The Parsing Process

The process takes place on the Log server and begins with the syslog daemon, which receives the syslogs and calls for their parsing. The parsing involves various parsing files, which contain the different parsing definitions and specifications and can be found in $FWDIR/conf/syslog/. Among these files are the device-specific parsing files, which define the actual parsing and extraction of fields according to each device's specific syslog format.

The parsing starts with the syslog_free_text_parser.C file. This file defines the different dictionaries (see Dictionary) and performs a preliminary parsing of the syslog, extracting fields common to all syslog messages, such as the PRI, the date and time, and the machine and application that generated the syslog. syslog_free_text_parser.C then uses the allDevices.C file, which includes references to two files: UserDefined/UserDefinedSyslogDevices.C, which contains the names of all device parsing files defined by the user, and CPdefined/CPdefinedSyslogDevices.C, which contains all device parsing files defined by Check Point. Parsing goes over the user device parsing files first, and tries to match the incoming syslog against the syslog format parsed in each particular file. Once a device parsing file succeeds in the preliminary parsing of the syslog (that is, it matches the syslog format and is therefore the syslog origin), the rest of the syslog is parsed in that file. If no match is found, parsing continues over the Check Point device parsing files until it finds a match.
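The matching logic above amounts to trying each device parser in order, user-defined first, until one recognizes the format. A minimal Python sketch with trivial stand-in parsers (the real mechanism is the .C parsing files described above):

```python
# Stand-in parsers for illustration: each returns a dict of extracted
# fields when the syslog matches its device format, or None otherwise.
def parse_pix(line):
    return {"product": "PIX"} if line.startswith("%PIX") else None

def parse_snort(line):
    return {"product": "Snort"} if "snort" in line else None

USER_DEFINED = [parse_snort]   # role of UserDefinedSyslogDevices.C
CP_DEFINED = [parse_pix]       # role of CPdefinedSyslogDevices.C

def parse_syslog(line):
    # User-defined device parsers are tried before the Check Point ones;
    # the first parser that matches the format handles the whole syslog.
    for parser in USER_DEFINED + CP_DEFINED:
        result = parser(line)
        if result is not None:
            return result
    return {"product": "syslog"}   # unparsed: treated as a general syslog
```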

The Free Text Parsing Language

The free text parsing language enables you to parse an input string, extract information, and define log fields that appear as part of the Check Point log on the Log server and are then used in the definition of events. Each parsing file contains a tree of commands. Each command tries to check or parse part of the input string (sometimes adding fields to the log as a result) and decides whether to continue parsing the string (according to the success or failure of its execution).

The Commands

Each command consists of the following parts:

  1. cmd_name - the name of the command.
  2. command arguments - arguments that define the behavior of the command.
  3. on_success (optional) - the next command executed if the current command execution succeeds.
  4. on_fail (optional) - the next command executed if the current command execution fails.

Sample

:command (
      :cmd_name (try)
      :try_arguments
           .
           .

      :on_success (
             :command()
      )
      :on_fail (
             :command()
      )
)

Try

The try command matches a regular expression against the input string.

Try Command Parameters

Argument - Description

parse_from - start_position: run the regular expression from the beginning of the input string. last_position: run the regular expression from the last position (of the previous successful command).

regexp - The regular expression to match.

add_field - One or more fields to add to the result (only if the regular expression matched). See Adding a Field.

Try Command Sample

:command (
     :cmd_name (try)
     :parse_from (start_position)
     :regexp ("([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)")
     :add_field (
             :type (index)
             :field_name (Src)
             :field_type (ipaddr)
             :field_index (1)
     )
)

In the above example, we try to match the regular expression ([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+) looking at the entire log (parse_from (start_position) - parse from the beginning of the log). If the regular expression is matched, we add a source field (See Adding a Field).
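The same match can be reproduced with Python's re module (illustrative only; the sample log line is invented):

```python
import re

# Invented sample log line for illustration
log = "%PIX-6-302005: Built UDP connection for faddr 194.29.40.24/4813"

m = re.search(r"([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)", log)
if m:
    # Equivalent of add_field: capture group 1 becomes the Src field
    fields = {"Src": m.group(1)}
```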

Group_try

The group_try command executes one or more commands in one of the following four modes:

  1. try_all tries all the commands in the group and ignores the return code of the commands.
  2. try_all_successively tries all the commands in the group one after the other and ignores the return code of the commands. Each command tries to execute from the last position of the previous successful command.
  3. try_until_success tries all the commands until one succeeds.
  4. try_until_fail tries all the commands until one fails.

The command group_try is commonly used when parsing a "free-text" portion of a log, which contains a number of fields we wish to extract. For example:

%PIX-6-605004: Login denied from 194.29.40.24/4813 to outside:192.168.35.15/ssh for user 'root'

When looking at this section of the log, we can use the following structure:

Group_try Command Sample 1

:command (
      :cmd_name (group_try)
      :mode (try_all_successively)
      :(
            # A "try" command for the source.
            :command ()
      )
      :(
            # A "try" command for the destination.
            :command ()
      )
      :(

            # A "try" command for the user.
            :command ()
      )
                 .
                 .
                 .
)

In this example, the try commands in the group_try block are executed one after the other: first the source, then the destination, then the user, each starting from the position where the previous successful command stopped.

If the source, destination and user appear in no particular order in the syslog, the try_all mode should be used instead of try_all_successively.
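The difference between the two modes can be mimicked with Python's re module, where try_all_successively advances a shared position after every successful match. The patterns and field names below are invented for this sketch:

```python
import re

log = "Login denied from 194.29.40.24/4813 to outside:192.168.35.15/ssh for user 'root'"

PATTERNS = [
    ("src", re.compile(r"from ([\d.]+)")),
    ("dst", re.compile(r"to \w+:([\d.]+)")),
    ("user", re.compile(r"user '(\w+)'")),
]

def try_all_successively(line):
    fields, pos = {}, 0
    for name, pattern in PATTERNS:
        m = pattern.search(line, pos)   # continue from the last successful match
        if m:
            fields[name] = m.group(1)
            pos = m.end()
    return fields

def try_all(line):
    # Order-independent: every pattern scans the whole line
    return {name: m.group(1) for name, p in PATTERNS if (m := p.search(line))}
```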

Group_try Command Sample 2

:command (
    :cmd_name (group_try)
    :mode (try_until_success)
    :(
         :command (
         .
         .
         .
           :regexp ("(\(|)(login|su)(\)|).* session (opened|closed) for user ([a-z,A-Z,0-9]*)")
         )
    )
    :(
         :command (
             .
             .
             .
           :regexp ("(\(|)su(\)|).* authentication failure; logname=([a-zA-Z0-9]*).* user=([a-zA-Z0-9]*)")
         )
    )
         .
         .
         .
)

In the above example, the regular expressions in the different commands try to match specific logs. At most one command in the group_try block will succeed, and once it is found, there is no need to check the others.
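try_until_success is first-match-wins, as in this Python analogue (patterns abbreviated from the sample above):

```python
import re

PATTERNS = [
    re.compile(r"session (opened|closed) for user ([a-zA-Z0-9]*)"),
    re.compile(r"authentication failure; logname=([a-zA-Z0-9]*)"),
]

def try_until_success(line):
    for pattern in PATTERNS:
        m = pattern.search(line)
        if m:
            return m   # stop at the first command that succeeds
    return None

m = try_until_success("su: session opened for user root")
```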

Note - When you add a new device, the first group_try command in the parsing file must use the try_until_success mode:

:cmd_name (group_try)
:mode (try_until_success)
: (
….
)

Switch

This command compares the value of a specific field against a list of predefined constant values.

Switch Command Parameters

field_name

The name of the field whose value is checked.

case

One or more case attributes, each followed by a value to compare against.

default

Executed only if no case matches. The default block is optional.

Switch Command Sample

:command (
      :cmd_name (switch)
      :field_name (msgID)
      :(
            :case (302005)
            :command ()
      )
      :(
            :case (302001)
            :case (302002)
            :command ()
      )
      :default (
            :command ()
      )
)
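In Python terms, the switch command behaves like a value-to-handler dispatch in which several case values can share one branch and default catches the rest. The handler strings below are hypothetical placeholders, not taken from any device:

```python
def switch(field_value, cases, default=None):
    # Each entry lists the case values that share one command block,
    # like the repeated :case attributes in the sample above.
    for values, handler in cases:
        if field_value in values:
            return handler(field_value)
    # Executed only if no case matched; the default is optional.
    return default(field_value) if default else None

cases = [
    ({"302005"}, lambda v: "handler for 302005"),
    ({"302001", "302002"}, lambda v: "shared handler for 30200x"),
]

result = switch("302002", cases, default=lambda v: "no matching case")
print(result)
```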

Unconditional_try

This command is an "empty" command that allows adding fields to the result without any conditions.

Unconditional_try Command Sample 1

:command (
      :cmd_name (unconditional_try)
      :add_field (
            :type (const)
            :field_name (product)
            :field_type (string)
            :field_value ("Antivirus")
      )
)

A common usage of unconditional_try is with the switch command. In the following example, each message ID is given a corresponding "message" field denoting its meaning.

Unconditional_try Command Sample 2

:command (
      :cmd_name (switch)
      :field_name (msgID)
      :(
            :case (106017)
            :command (
                  :cmd_name (unconditional_try)
                  :add_field (
                        :type (const)
                        :field_name (message)
                        :field_type (string_id)
                        :field_value ("LAND Attack")
                  )
            )
      )
      :(
            :case (106020)
            :command (
                  :cmd_name (unconditional_try)
                  :add_field (
                        :type (const)
                        :field_name (message)
                        :field_type (string_id)
                        :field_value ("Teardrop Attack")
                  )
            )
      )
      .
      .
      .
)

Include

This command enables the inclusion of a new parsing file.

file_name

The full path plus the file name of the file to be included.

Include Command Sample

:command (
     :cmd_name (include)
     :file_name ("c:\freeTextParser\device\antivirusPolicy.C")
)

add_field

Each add_field has several arguments:

  • type - The type of the add_field command. This argument has two possible values:
    • index - Part of the regular expression is extracted as the field. The field_index value denotes which part is extracted (see the field_index bullet).
    • const - Adds a constant field whose value does not depend on information extracted from the regular expression (see the field_value bullet).
  • field_name - The name of the new field. A number of common fields have corresponding columns in SmartView Tracker. The following table lists the field names that must be given for these fields to appear in their own columns in SmartView Tracker (rather than in the Information field, where other added fields appear):

Field Name to be Given     Column in SmartView Tracker
Src                        Source
Dst                        Destination
proto                      Protocol
s_port                     Source Port
product                    Product
service                    Service (when resolved, includes the port and protocol)
Action                     Action
ifname                     Interface
User                       User

Naming the above fields accordingly places them in their correct columns in SmartView Tracker, which enables them to participate in any filtering done on those columns. These fields also automatically take part in existing event definitions that use these field names.

  • field_type - The type of the field in the log. The following table lists the possible field types.

Field Type     Comment
int
uint
string
ipaddr         For IP addresses; used with the Src and Dst fields.
pri            Includes the facility and severity of a syslog.
timestmp       Includes the date and time of the syslog. Supports the format 'Oct 10 2004 15:05:00'.
time           Supports the format '15:05:00'.
string_id      For a more efficient usage of strings. Used when there is a finite number of possible values for the field.
action         Supports the following actions: drop, reject, accept, encrypt, decrypt, vpnroute, keyinst, authorize, deauthorize, authcrypt, default.
ifdir          0 - inbound, 1 - outbound.
ifname         For an interface name (used with the "ifname" field).
protocol       The field name should be "proto".
port           For the "service", "s_port" or "port" fields.

The following table lists the field names whose field type must be as listed:

Field Name     Field Type
Src            ipaddr
Dst            ipaddr
proto          protocol
s_port         port
service        port
Action         action
ifname         ifname

  • field_index or field_value - The argument used depends on the value of the type argument: if it is index, field_index appears; if it is const, field_value appears.

    field_index - Denotes which part of the regular expression is extracted, according to the grouping of the patterns. Grouping is done by enclosing an expression in brackets, so the number in field_index denotes the bracket pair whose pattern is taken into account.

Add_field Command Sample

:command (
     :cmd_name (try)
     :parse_from (last_position)
     :regexp ("Failed password for ([a-zA-Z0-9]+) from ([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+) port ([0-9]+)")
     :add_field (
        :type (index)
        :field_name (User)
        :field_type (string)
        :field_index (1)
     )
     :add_field (
        :type (index)
        :field_name (Src)
        :field_type (ipaddr)
        :field_index (2)
     )
     :add_field (
        :type (index)
        :field_name (port)
        :field_type (port)
        :field_index (3)
     )
)

The pattern for the User, [a-zA-Z0-9]+, is located in the first pair of brackets (and therefore the field_index is 1), the pattern for the Source address, [0-9]+\.[0-9]+\.[0-9]+\.[0-9]+, is located in the second pair of brackets (hence the index 2) and the pattern for the port is in the third pair of brackets.
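The same extraction can be reproduced with any regular-expression engine. In this Python sketch, field_index corresponds directly to the capture-group number; the log line is a hypothetical example in the same format:

```python
import re

log = "Failed password for admin from 10.0.0.5 port 22022"

m = re.search(r"Failed password for ([a-zA-Z0-9]+) from "
              r"([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+) port ([0-9]+)", log)

# field_index (1) -> group(1), field_index (2) -> group(2), and so on.
fields = {
    "User": m.group(1),   # field_type string
    "Src": m.group(2),    # field_type ipaddr
    "port": m.group(3),   # field_type port
}
print(fields)
```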

In each parsed regular expression, the maximum number of brackets is limited to 9. To extract more than 9 elements from the regular expression, "break" the expression into two pieces: the first regular expression contains the first 9 brackets, and the rest of the expression goes in the on_success command.

:command (
     :cmd_name (try)
     :parse_from (start_position)
     :regexp ("access-list (.*) (permitted|denied|est-allowed) ([a-zA-Z0-9_\([a-zA-Z0-9_\\.[0-9]+\.[0-9]+\.[0-9]+)\(([0-9]*)\) -> ")
     :add_field (
          :type (index)
          :field_name (listID)
          :field_type (string)
          :field_index (1)
     )
     :add_field (
          :type (index)
          :field_name (action)
          :field_type (action)
          :field_index (2)
     )
     :add_field (
          :type (index)
          :field_name (proto)
          :field_type (protocol)
          :field_index (3)
     )
     :add_field (
          :type (index)
          :field_name (ifname)
          :field_type (ifname)
          :field_index (4)
     )
     :add_field (
          :type (index)
          :field_name (Src)
          :field_type (ipaddr)
          :field_index (5)
     )
     :on_success (
          :command (
               :cmd_name (try)
               :parse_from (last_position)
               :regexp ("([a-zA-Z0-9_\\.[0-9]+\.[0-9]+\.[0-9]+)\(([0-9]*)\) hit-cnt ([0-9]+) ")
               :add_field (
                    :type (index)
                    :field_name (destination_interface)
                    :field_type (string)
                    :field_index (1)
               )
          )
     )
)
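The on_success chaining with parse_from (last_position) can be sketched as a second search that starts where the first match ended, so its capture groups count from 1 again. The access-list log line and patterns below are illustrative only:

```python
import re

log = ("access-list acl_out denied tcp inside/10.1.1.1(1234) -> "
       "outside/10.2.2.2(80) hit-cnt 5")

# First "try": parse from the start position of the log.
m1 = re.search(r"access-list (\S+) (permitted|denied|est-allowed)", log)
fields = {"listID": m1.group(1), "action": m1.group(2)}

# on_success "try": parse from the last position of the previous match;
# this pattern's single capture group is group 1 of the *second* regexp.
m2 = re.compile(r"hit-cnt ([0-9]+)").search(log, m1.end())
fields["hit_count"] = m2.group(1)
print(fields)
```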

    field_value - The constant value to be added.

:command (
      :cmd_name (try)
      :parse_from (last_position)
      :regexp ("%PIX-([0-9])-([0-9]*)")
      :add_field (
             :type (const)
             :field_name (product)
             :field_type (string_id)
             :field_value ("CISCO PIX")
      )
)

    dict_name - The name of the dictionary used to convert the value. If the value is not found in the dictionary, the original value is used as the result. See Dictionary.

Dictionary

The free text parser enables the use of dictionaries to convert values from the log. These conversions "translate" values that have the same meaning, but come from logs of different devices, into a common value that is used in the event definitions.

Each dictionary file is defined as a .ini file, in which the section name is the dictionary name and the values are the dictionary values (each file can include one or more dictionaries).

[dictionary_name]

Name1 = val1

Name2 = val2

Dictionary Sample

[cisco_action]          [3com_action]
permitted = accept      Permit    = accept
denied = reject         Deny      = reject

The reference to a dictionary in the parsing file is shown in the following sample:

Dictionary Command Sample 2

:command (
      :cmd_name (try)
      :parse_from (start_position)
      :regexp ("list (.*) (permitted|denied) (icmp)
([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+) -> ([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+).* packet")
       :add_field (
               :type (index)
               :field_name (action)
               :field_type (action)
               :field_index (2)
               :dict_name (cisco_action)
       )
)
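Because the dictionary files use .ini syntax, the lookup-with-fallback behavior of dict_name can be modeled with Python's configparser; this is a sketch of the semantics, not the product's implementation:

```python
import configparser

# The cisco_action dictionary from the sample above, in .ini syntax.
DICTIONARY = """
[cisco_action]
permitted = accept
denied = reject
"""

dictionaries = configparser.ConfigParser()
dictionaries.read_string(DICTIONARY)

def convert(dict_name, value):
    # If the value is not found in the dictionary, the real (raw)
    # value is used as the result.
    return dictionaries[dict_name].get(value, value)

print(convert("cisco_action", "permitted"))    # converted to the common value
print(convert("cisco_action", "est-allowed"))  # no entry: raw value kept
```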

Administrator Support for WinEventToCPLog

WinEventToCPLog uses Microsoft APIs to read events from Windows operating system event files. These files can be viewed using the Windows Event Viewer.

WinEventToCPLog can read all event files on the local machine, and with the right privileges it can read log files from remote machines. This is useful for setting up a central WinEventToCPLog server that forwards events from multiple Windows hosts to a Check Point Log server.

The privileges are set by invoking WinEventToCPLog -s to specify an administrator login and password. There are two ways to access the files on a remote machine: either define a local administrator on the remote machine whose name matches the name registered with WinEventToCPLog, or define the administrator registered with WinEventToCPLog as an administrator in the domain, who can then access all the machines in the domain.

 
©2013 Check Point Software Technologies Ltd. All rights reserved.