In This Section:
The following tasks help you maintain your SmartEvent system properly:
These tasks can be performed from the Policy tab.
Modifications to the Event Policy do not take effect until saved on the SmartEvent server and installed to the SmartEvent Correlation Unit.
To enable changes made to the Event Policy:
You can undo changes to the Event Policy, if they were not saved.
To undo changes: click File > Revert Changes.
You can assign a Permission Profile to an administrator for the SmartEvent database.
Configure Permission Profiles for SmartEvent in the SmartDashboard or SmartDomain Manager connected to the Security Management Server or Multi-Domain Server (SmartDashboard > Manage > Permissions Profiles > New / Edit).
You can configure these permissions:
These customized permissions are available for Events and Reports:
When an administrator logs in to SmartEvent, the user name and password are verified by the SmartEvent Server. If the administrator is not defined on the SmartEvent Server, the SmartEvent Server attempts the login with the credentials that are defined on the Security Management Server or Multi-Domain Server.
Note - If you do not want to centrally manage administrators, and you only use the local administrator defined for the SmartEvent Server:
From the SmartEvent Server command line, run: cpprod_util CPPROD_SetValue FW1 REMOTE_LOGIN 4 1 1
In Multi-Domain Security Management, each Event and Report is related to a Domain. Administrators can see events for Domains according to their permissions.
A Multi-Domain Security Management Policy administrator can be:
The following tasks help you maintain your SmartEvent system:
These tasks can be performed from the General Settings page of the Policy tab. The Policy tab is hidden by default, but can be revealed by selecting Policy Tab from the View menu.
Certain objects from the Management server are added during the initial sync with the SmartEvent server and updated at a set interval. But it is useful (or necessary) to add other Network or Host objects, for these reasons:
These screens are locked until initial sync is complete:
You can make a device available to use in SmartEvent.
To make a device that is a host object available in SmartEvent:
To make a device that is a network object available in SmartEvent:
See Defining the Internal Network for information about how to add objects to the Internal Network definition.
The SmartEvent system works with Event Correlation Units that compile event information from Log Servers. Additional Event Correlation Units and their corresponding Log Servers should be configured during the initial system setup.
To define SmartEvent Correlation Unit or Log Servers in SmartEvent:
Note - The following screens are locked until sync is complete:
To define SmartEvent Event Correlation Units in SmartEvent Intro:
To help SmartEvent conclude if events originated internally or externally, you must define the Internal Network. These are the options to calculate the traffic direction:
To define the Internal Network:
We recommend you add all internal Network objects, and not Host objects.
Some network objects are copied from the Management server to the SmartEvent Server during the initial sync and updated afterwards.
These screens are locked until initial sync is complete:
SmartEvent enables an administrator to view existing logs from a previously generated log file. This feature lets an administrator review security threats and pattern anomalies that appeared in the past. As a result, an administrator can investigate threats (for example, unauthorized scans targeting vulnerable hosts, unauthorized logins, denial of service attacks, network anomalies, and other host-based activity) that occurred before SmartEvent was installed.
In the same respect, an administrator can review logs from a specific time period in the past and focus on deploying resources on threats that have been active for a period of time but may have been missed (for example, new events which may have been dynamically updated can now be processed over the previous period).
Offline log jobs are configured in SmartEvent > Policy tab > General Settings > Initial Settings > Offline Jobs, when connected to the Security Management Server or Multi-Domain Server, with these options:

SmartEvent Correlation Unit - the machine that reads and processes the Offline Log files.

Log Server - the machine that contains the Offline Log files. SmartEvent queries this Log Server to see which log files are available.

Log File - contains a list of the available log files found on the selected Log Server that can be processed by the SmartEvent Correlation Unit. In this window, select the log file from which to retrieve historical information.
Once you Start an Offline Log File process you cannot remove it.
The results of this process appear in the Events tab and are accessible by the By Job Name query or filter.
With the SmartEvent Events tab, you can add queries for events generated by offline jobs.
To add the offline jobs:
Every job that appears in this window is an offline job except for All online jobs.
To add (or edit) custom commands:
SmartEvent uses an optimization algorithm to manage disk space and other system resources. When the Events database becomes too large, the oldest events are automatically deleted to save space. In addition, events that are more than one year old are also automatically deleted.
For instructions on how to change the maximum period and maximum database size for past events in the Events database, see sk69706.
The evs_backup utility backs up the SmartEvent configuration files and places them in a compressed tar file. In addition, it backs up data files based on the selected options. The files can be restored with the evs_backup_extractor script. Two versions of the script are included: one for Windows, with a .bat suffix, and one for Solaris, Linux, and SecurePlatform, without a suffix, which must have its executable permissions set.
Usage:
evs_backup [-filename file.tgz] [-EvaDb] [-EvrDb] [-Results] [-Logs] [-LogoAndScripts]
[-All] [-export]
Additional options are:
Option | Description
---|---
EvaDb | Copy the SmartEvent events database
EvrDb | Copy the SmartReporter consolidation database
Results | Copy the SmartReporter results
Logs | Copy the SmartEvent error logs
LogoAndScripts | Copy the logo file and the distribution script
export | Runs evr_addon_export; for a different file name, use -filename
All | Select all options
The SmartEvent database keeps a synchronized copy of management objects locally on the SmartEvent Server. This process, dbsync, allows SmartEvent to work independently of different management versions and different management servers in a High Availability environment.
Management High Availability capability exists for Security Management Servers. In a Multi-Domain Security Management environment, dbsync supports High Availability for the Multi-Domain Servers and the Domain Management Servers.
Dbsync initially connects to the management server with which SIC is established. It retrieves all the objects. After the initial synchronization it gets updates when an object is saved. Dbsync registers all the High Availability management machines and periodically tests the connectivity with the newest management server. If connectivity is lost, it attempts to connect to the other High Availability management servers until it finds an active one and connects to it.
If two management servers are active concurrently, dbsync stays connected to one management server. Dbsync does not get changes made on the other management server until a synchronization operation is done.
In SmartDashboard, you can configure a Security Gateway so that, when it fails to send its logs to one Log Server, it sends its logs to a secondary Log Server. To support this configuration, you can add more than one Log Server to a single SmartEvent Correlation Unit. In this way, the SmartEvent Correlation Unit gets an uninterrupted stream of logs from both servers and continues to correlate all logs.
Multiple correlation units can read logs from the same Log Servers. That way, the units provide redundancy if one of them fails. The events that the correlation units detect are duplicated in the SmartEvent database. But these events can be disambiguated if you filter them with the Detected By field in the Event Query definition. The Detected By field specifies which SmartEvent Correlation Unit detected the event.
If the SmartEvent Server becomes unavailable, the correlation units keep the events until it can reconnect with the SmartEvent Server and forward the events.
Adding support for a log-generating device (e.g., router, Firewall, IDS, Anti-Virus, OS) to SmartEvent involves one or both of the following:
SmartEvent currently supports the following log formats:
Devices using Check Point, ELA, or Windows Events do not require special parsing configuration. If you are adding a device using one of these formats, skip to the section Adding New Devices to Event Definitions. For details on support for Windows logs, see the section Windows Events.
Devices using the syslog or SNMP format require parsing configuration. Continue to Planning and Considerations and the parsing section relevant for your device.
Device Type | Typical Log Fields
---|---
Firewall, router, and other devices that send connection-based logs | source IP address, destination IP address, source port, destination port, protocol, accept/reject indication
IDS / IPS, application Firewall, and other devices that send attack logs | attack name/ID
To parse a syslog file:
To test the parsing, send syslog samples to a Check Point Log Server:
Troubleshooting:
If SmartView Tracker does not display the logs as expected, there may be specific problems with the parsing files:
To view an example, see the file $FWDIR/conf/snmpTrap/CPdefined/realSecure.C.
After creating the appropriate parsing file for the new product, the next step is to include the product in the SmartEvent Event Policy by adding it to the Product filters of new and existing events. This involves making changes to the SmartEvent Server database. Some of the changes are accomplished using SmartEvent client, while others require using a CPMI client (such as GuiDBedit or dbedit, or a specific client you can write for your own use).
Note - Manually editing the files in $FWDIR/conf is not recommended and should be done with extreme care.
Step 1: Create an object to represent the new device in one of the following ways:
The resulting object is added to the file $FWDIR/conf/sem_products.C.
Step 2: Add the device to the relevant Event Definitions:
For example, if this is an IDS / IPS reporting a 'Ping of Death' attack, use the Event Definition Wizard to add a filter for the new product in the 'Ping of Death' Event Definition. You may also add existing or new fields to the product filter by selecting the property Show more fields.
To move the Event Definition to another section of the tree, do the following:
To test the changes in the Event Definition:
Various third-party devices use the syslog format for logging. SmartEvent can parse and process third-party syslog messages by reformatting the raw data. This parsing process extracts relevant log fields from the syslog data and creates a normalized Check Point log which is available for further analysis.
Warning - Manual modifications to out-of-the-box parsing files will not be preserved automatically during the upgrade process. Mark your modifications with comments so you can remember what changed.
The procedure occurs on the Log Server and starts with the syslog daemon. The syslog daemon that runs on the Log Server receives the syslogs and calls for their parsing. The parsing involves many parsing files, which contain the different parsing definitions and specifications, and can be found in $FWDIR/conf/syslog/. Among these files are the device-specific parsing files, which define the actual parsing and extraction of fields according to each device's specific syslog format.
The parsing starts with the syslog_free_text_parser.C file. This file defines the different dictionaries and parses the syslog. The file extracts fields which are common to all syslog messages (such as PRI, date and time), and the machine and application that generated the syslog.
syslog_free_text_parser.C uses the allDevices.C file, which refers to two files: UserDefined/UserDefinedSyslogDevices.C and CPdefined/CPdefinedSyslogDevices.C. The first contains the names of the device parsing files that the user defines. The second contains the device parsing files that Check Point defines. allDevices.C goes over the device parsing files and tries to match the incoming syslog with the syslog format parsed in each file.
After a parsing file succeeds in the preliminary parsing of the syslog (that is, the syslog matches the format that the file expects, which identifies the syslog origin), the remainder of the syslog is parsed with that file. If a match is not found, the process continues through the Check Point device parsing files until it finds a match.
The free text parsing language enables you to parse an input string, extract information, and define log fields. These log fields appear as part of the Check Point log in the Log Server and are used in the definition of events. Each parsing file contains a tree of commands. Each command examines or parses part of the input string (sometimes it adds fields to the log as a result), and decides whether to continue to parse the string (according to the success or failure of its execution).
Each command consists of these parts:

cmd_name - the name of the command.
command arguments - arguments that define the behavior of the command.
on_success (optional) - the next command executed if the current command execution succeeds.
on_fail (optional) - the next command executed if the current command execution fails.
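As an illustrative sketch (the regular expression and branch bodies here are placeholders, not taken from a shipped parsing file), a command entry has this general shape:

```
:command (
    :cmd_name (try)
    :parse_from (start_position)
    :regexp ("^%ASA")
    :on_success (
        ….
    )
    :on_fail (
        ….
    )
)
```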
The try command matches a regular expression against the input string.
Try Command Parameters

Argument | Description
---|---
parse_from | The position from which to match: start_position matches from the beginning of the input, last_position continues from the last position matched by the previous command.
regexp | The regular expression to match.
add_field | One or more fields to add to the result (only if the regular expression is successful).
For example, a try command can match the regular expression ([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+) against the entire log (parse_from (start_position) parses from the start of the log). If the regular expression is matched, a source field is added.
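A minimal sketch of such a try command (assuming the field is named src with type ipaddr, as in the field tables later in this section):

```
:command (
    :cmd_name (try)
    :parse_from (start_position)
    :regexp ("([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)")
    :add_field (
        :type (index)
        :field_name (src)
        :field_type (ipaddr)
        :field_index (1)
    )
)
```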
The command group_try executes one or more commands in one of these modes:

try_all - tries all the commands in the group, and ignores the return codes of the commands.
try_all_successively - tries all the commands in the group, and ignores the return codes. Each command starts to execute from the last position of the previous successful command.
try_until_success - tries the commands until one succeeds.
try_until_fail - tries the commands until one fails.

The group_try command is commonly used to parse a "free-text" portion of a log that contains a number of fields to extract. For example:
%PIX-6-605004: Login denied from 194.29.40.24/4813 to outside:192.168.35.15/ssh for user 'root'
When you look at this section of the log, you can use this structure:
In such a structure, the first try command in the group_try block (the one for the source) is executed first.
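A sketch of this structure, parsing the PIX message above with try_all_successively (the regular expressions are illustrative, and the field names follow the tables later in this section):

```
:command (
    :cmd_name (group_try)
    :mode (try_all_successively)
    : (
        :command (
            :cmd_name (try)
            :parse_from (last_position)
            :regexp ("from ([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)/([0-9]+)")
            :add_field (
                :type (index)
                :field_name (src)
                :field_type (ipaddr)
                :field_index (1)
            )
            :add_field (
                :type (index)
                :field_name (s_port)
                :field_type (port)
                :field_index (2)
            )
        )
        :command (
            :cmd_name (try)
            :parse_from (last_position)
            :regexp ("to [a-zA-Z]+:([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)")
            :add_field (
                :type (index)
                :field_name (dst)
                :field_type (ipaddr)
                :field_index (1)
            )
        )
        :command (
            :cmd_name (try)
            :parse_from (last_position)
            :regexp ("for user '([a-zA-Z0-9]+)'")
            :add_field (
                :type (index)
                :field_name (user)
                :field_type (string)
                :field_index (1)
            )
        )
    )
)
```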
If the source, destination and user are not in a specified sequence in the syslog, use the try_all mode instead of try_all_successively.
In other cases, the regular expressions in the different commands match more specific log formats. At most one command in the group_try block is successful, and when it is found, it is not necessary to examine the others.
Note - When you add a new device, the first command in the parsing file must be a group_try command that uses the try_until_success mode:
:cmd_name (group_try)
:mode (try_until_success)
: (
….
)
The switch command compares the result of a specified field against a list of predefined constant values.
Switch Command Parameters

Parameter | Description
---|---
field_name | The field name whose value is checked.
case | One or more case attributes, each followed by the value with which to compare.
default | Executed only if no relevant case matches. The default value is optional.
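As a hedged sketch (the field name msgID and the case value are hypothetical, and the nested command bodies are elided), a switch command has this general shape:

```
:command (
    :cmd_name (switch)
    :field_name (msgID)
    :case (605004)
    : (
        ….
    )
    :default (
        ….
    )
)
```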
This command is an "empty" command that allows you to add fields to the result without any conditions.
A common usage of unconditional_try is together with the switch command: each message ID is attached to its corresponding message field, which denotes its meaning.
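As a hedged sketch of this pattern (the message ID and text are hypothetical), an unconditional_try nested in a switch case might look like:

```
:command (
    :cmd_name (switch)
    :field_name (msgID)
    :case (605004)
    : (
        :command (
            :cmd_name (unconditional_try)
            :add_field (
                :type (const)
                :field_name (message)
                :field_type (string_id)
                :field_value ("Login denied")
            )
        )
    )
)
```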
This command enables the inclusion of a new parsing file.
Parameter | Description
---|---
file_name | The full path plus the file name of the file to be included.
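As a sketch, assuming the parameter is named file_name, an include command that pulls in the Check Point-defined RealSecure parsing file (referenced earlier) might look like:

```
:command (
    :cmd_name (include)
    :file_name ("CPdefined/realSecure.C")
)
```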
Each add_field command has these parameters:

type - the type of the add_field command. This parameter has these possible values:
- index - the field_index value denotes which part of the regular expression is extracted (see the field_index bullet).
- const - the field value is a constant, given in the field_value parameter (see the field_value bullet).

Field_name - the name of the new field. There are some fields which have corresponding columns in SmartConsole Logs & Monitor > Logs. This table shows the names to give these fields so that they show in their Logs & Monitor > Logs column (and not in the Information field, where other added fields appear):
Field Name to be Given | Column in Logs & Monitor > Logs
---|---
src | Source
dst | Destination
proto | Protocol
s_port | Source Port
product | Product
service | Service (when resolved, includes the port and protocol)
action | Action
ifname | Interface
user | User
When you name the above fields accordingly, they are placed in their correct column in Logs & Monitor > Logs. This enables them to participate in all filtering done on these columns. These fields automatically take part in existing event definitions with these field names.
Field_type - the type of the field in the log. This table shows the possible field types.
Field Type | Comment
---|---
int | |
uint | |
string | |
ipaddr | For IP addresses used with the Src and Dst fields.
pri | Includes the facility and severity of a syslog.
timestmp | Includes the date and time of the syslog. Supports the format 'Oct 10 2004 15:05:00'.
time | Supports the format '15:05:00'.
string_id | For a more efficient usage of strings. Used when there is a finite number of possible values for this field.
action | Supports these actions: drop, reject, accept, encrypt, decrypt, vpnroute, keyinst, authorize, deauthorize, authcrypt, and default.
ifdir | 0 - inbound, 1 - outbound
ifname | For an interface name (used with the "ifname" field).
protocol | The field name should be "proto".
port | For "service", "s_port" or "port" fields.
The field type of the field names in this table must be as mentioned:

Field Name | Field Type
---|---
src | ipaddr
dst | ipaddr
proto | protocol
service | port
s_port | port
action | action
ifname | ifname
field_index or field_value - the parameter that is used depends on the value of the type parameter: if it is index, field_index is used; if it is const, field_value is used.

field_index denotes which part of the regular expression is extracted, according to the grouping of the patterns. To make this grouping, write part of the expression in brackets. The number in field_index denotes the bracket pair whose pattern is taken into account.
Add_field Command Sample
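The following is a hedged reconstruction based on the description below (the exact regular expression is illustrative):

```
:command (
    :cmd_name (try)
    :parse_from (start_position)
    :regexp ("user '([a-zA-Z0-9]+)' from ([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)/([0-9]+)")
    :add_field (
        :type (index)
        :field_name (user)
        :field_type (string)
        :field_index (1)
    )
    :add_field (
        :type (index)
        :field_name (src)
        :field_type (ipaddr)
        :field_index (2)
    )
    :add_field (
        :type (index)
        :field_name (s_port)
        :field_type (port)
        :field_index (3)
    )
)
```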
The pattern for the User, [a-zA-Z0-9]+, is located in the first pair of brackets; therefore, the field_index is one. The pattern for the Source address, [0-9]+\.[0-9]+\.[0-9]+\.[0-9]+, is located in the second pair of brackets; therefore, the index is two. The pattern for the port is in the third pair of brackets.
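Because field_index simply selects a capture-group number, the mapping can be illustrated with a short Python sketch (the log line and pattern are invented for illustration):

```python
import re

# Hypothetical log fragment, in the spirit of the examples above.
log = "user 'root' from 194.29.40.24/4813"

# Three bracket pairs: group 1 = user, group 2 = source IP, group 3 = port.
pattern = r"user '([a-zA-Z0-9]+)' from ([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)/([0-9]+)"

match = re.search(pattern, log)
# field_index (1) would extract the user, (2) the source, and (3) the port.
print(match.group(1), match.group(2), match.group(3))  # root 194.29.40.24 4813
```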
In each parsed regular expression, the maximum number of bracket pairs is nine. To extract more than nine elements from the regular expression, break the expression into two parts: the first regular expression contains the first nine bracket pairs, and the remainder of the expression goes in the on_success command.
field_value is the constant value to be added.

Dict_name is the name of the dictionary used to convert the value. If the value is not found in the dictionary, the value itself is the result. See Dictionary.
The free text parser enables us to use dictionaries to convert values from the log. These conversions translate values that have the same meaning in logs from different devices into a common value, which is used in the event definitions.
Each dictionary file is defined as an .ini file. In the .ini file, the section name is the dictionary name, and the entries are the dictionary values (each dictionary file can include one or more sections).
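As a hypothetical example, a dictionary file that maps device-specific action strings to Check Point action values might look like:

```
[action]
denied=reject
dropped=drop
permitted=accept
```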
The reference to a dictionary in the parsing file is made with the dict_name parameter of the add_field command.
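As a sketch, an add_field that passes the extracted value through a dictionary named action (a hypothetical dictionary name) might look like:

```
:add_field (
    :type (index)
    :field_name (action)
    :field_type (action)
    :field_index (1)
    :dict_name (action)
)
```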
WinEventToCPLog uses Microsoft APIs to read events from Windows operating system event files. To see these files, use the Windows Event Viewer.

WinEventToCPLog can read event files on the local machine, and can read log files from remote machines when it has the right privileges. This is useful when you make a central WinEventToCPLog server that forwards events from multiple Windows hosts to a Check Point Log Server.

To set the privileges, invoke WinEventToCPLog -s to specify an administrator login and password.
These are the ways to access the event files on a remote machine:

Specify an administrator login and password for the remote machine with WinEventToCPLog -s.
Run WinEventToCPLog as an administrator in the domain. This administrator can access all of the machines in the domain.