Deploying a Check Point Cluster in Oracle Cloud Infrastructure (OCI)

Workflow for deploying a Check Point Cluster In Oracle Cloud Infrastructure

Note - The current recommended version for Oracle Cloud is R80.40 with the latest GA Jumbo.

Overview

Oracle Cloud Infrastructure combines the elasticity and utility of the public cloud with the granular control, security, and predictability of on-premises infrastructure, and delivers high-performance, highly available, and cost-effective infrastructure services.

Check Point CloudGuard for Oracle extends advanced Threat Prevention security to protect customers' OCI environments from malware and other sophisticated threats. As an Oracle-certified solution, CloudGuard enables you to easily and seamlessly secure your workloads, data, and assets and still provide secure connectivity across your cloud and on-premises environments.

Prerequisites

You should be familiar with general Oracle Cloud Infrastructure concepts, features, and terms, including compartments, Virtual Cloud Networks (VCNs), subnets, vNICs, route tables, Dynamic Groups, and IAM policies.

Method of Operation

A traditional Check Point cluster environment uses multicast or broadcast to perform state synchronization and health checks across cluster members.

Because OCI does not support multicast or broadcast, Check Point cluster members use unicast to communicate. In addition, in a regular ClusterXL deployment in High Availability mode, cluster members use Gratuitous ARP to announce the MAC Address of the Active member associated with the Virtual IP Address (during normal operation and when a cluster failover occurs).

In OCI, the cluster members instead make API calls to OCI to move the cluster addresses. When the Active cluster member fails, the Standby cluster member becomes Active and takes ownership of the cluster resources (a CLI illustration of this address move appears after the list below). As part of this process, the new Active member:

  • Re-associates the cluster's Secondary Private and Public IP addresses with its Primary vNIC

  • Re-associates each additional pair of Secondary Public/Private IP addresses with its Primary vNIC (one pair for every published service)

  • Re-associates the Secondary Private IP with its Secondary vNIC
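
For illustration, the address move that the new Active member performs can be reproduced manually with the OCI CLI. This is only a hedged sketch with placeholder OCIDs and an example address; the cluster members issue the equivalent API calls automatically, authenticating as instance principals (see Oracle API Authentication below).

    # Move the cluster's Secondary Private IP to the new Active member's Primary vNIC.
    # --unassign-if-already-assigned first detaches the IP from the previous Active member.
    oci network vnic assign-private-ip \
        --vnic-id ocid1.vnic.oc1..<new_active_primary_vnic> \
        --ip-address 10.0.0.10 \
        --unassign-if-already-assigned \
        --auth instance_principal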

Oracle API Authentication

To make API calls to Oracle automatically, the cluster members need permission to perform the API calls in the relevant compartment. This permission is granted with Oracle Identity and Access Management (IAM).

In this guide you learn how to:

  • Create a Dynamic Group with a correct rule that defines only the cluster members as part of the Dynamic Group

  • Create a policy for the defined Dynamic Group

Solution Topology

This sample environment is used to explain the configuration steps. When you follow the configuration steps below, replace the IP addresses in the examples with the IP addresses of your environment.

CloudGuard Cluster Deployment

Follow these instructions to deploy Check Point's CloudGuard Cluster solution in Oracle. Perform the steps from the Oracle portal in the preferred compartment(s).

  1. Sign in to your OCI tenant account.

  2. Select the relevant CloudGuard listing from the Oracle Cloud Marketplace.

  3. Create a new VCN or select an existing VCN (for example, a VCN with the CIDR Block 10.0.0.0/16).

  4. Add two subnets to your VCN: one public subnet and one private subnet.

    • Frontend public subnet (for example, 10.0.0.0/24)

    • Backend private subnet (for example, 10.0.1.0/24)

    Optional: You can place each subnet in its own independent VCN. In this scenario, each VCN contains only one subnet, which spans the entire CIDR of the VCN.

    Example: A VCN defined as 10.1.0.0/24 has one subnet that also uses 10.1.0.0/24.

  5. Create a public subnet: for example, frontend (CIDR Block 10.0.0.0/24).

    Option: The Frontend subnet can also be private. In this case, the default route for the Frontend subnet must be set to use a Target Type of Service Gateway and a Destination Service of All <Region> Oracle Services in Oracle Services Network for the API calls to succeed.

  6. Create a private subnet: for example, backend (CIDR Block 10.0.1.0/24).

    Final VCN configuration with two subnets: frontend and backend.
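
    For reference, steps 3 to 6 can also be performed with the OCI CLI. This is a minimal sketch, assuming a placeholder compartment OCID and the example CIDR blocks used in this guide:

      # Create the VCN (10.0.0.0/16)
      oci network vcn create --compartment-id ocid1.compartment.oc1..<id> \
          --cidr-block 10.0.0.0/16 --display-name cp-cluster-vcn
      # Frontend (public) subnet
      oci network subnet create --compartment-id ocid1.compartment.oc1..<id> \
          --vcn-id ocid1.vcn.oc1..<id> --cidr-block 10.0.0.0/24 --display-name frontend
      # Backend (private) subnet - instances launched here do not get public IPs
      oci network subnet create --compartment-id ocid1.compartment.oc1..<id> \
          --vcn-id ocid1.vcn.oc1..<id> --cidr-block 10.0.1.0/24 --display-name backend \
          --prohibit-public-ip-on-vnic true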

  7. Configure your VCN's Security List to allow all traffic on all protocols. This lets Check Point control and monitor all traffic.
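
    A hedged CLI equivalent of this Security List change, assuming a placeholder Security List OCID (the same rules can be entered in the console):

      # Allow all protocols from and to any address; inspection is delegated to the CloudGuard cluster
      oci network security-list update \
          --security-list-id ocid1.securitylist.oc1..<id> \
          --ingress-security-rules '[{"protocol": "all", "source": "0.0.0.0/0"}]' \
          --egress-security-rules '[{"protocol": "all", "destination": "0.0.0.0/0"}]' \
          --force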

  8. Create both CloudGuard cluster members.

  9. By default, the Primary vNIC on each instance is attached to the frontend subnet.

  10. Add a Secondary vNIC to each cluster member.

    Notes:

    • Connect the Primary vNIC to the frontend subnet; connect the Secondary vNIC to the backend subnet.

    • Make sure the Skip source/destination check check box is selected on both vNICs of each cluster member.
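
    A hedged CLI sketch of this step, assuming placeholder OCIDs (the same actions are available on the instance's Attached VNICs page in the console):

      # Attach the Secondary vNIC on the backend subnet with the source/destination check disabled
      oci compute instance attach-vnic \
          --instance-id ocid1.instance.oc1..<member_id> \
          --subnet-id ocid1.subnet.oc1..<backend_subnet_id> \
          --skip-source-dest-check true
      # Disable the source/destination check on the existing Primary vNIC as well
      oci network vnic update \
          --vnic-id ocid1.vnic.oc1..<member_primary_vnic_id> \
          --skip-source-dest-check true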

  11. Select one of the cluster members (only one) and add a new Secondary Private IP to the Primary vNIC.

  12. Create a reserved Public IP and attach it to the Secondary Private IP you created in step 11. This Public IP represents the cluster IP.

  13. Create one more Secondary Private IP and attach it to the Secondary vNIC of the member you selected in step 11 (this is the secondary VIP for outbound traffic).
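
    Steps 11 to 13 can also be scripted with the OCI CLI. This is a hedged sketch with placeholder OCIDs and example addresses (10.0.0.10 for the cluster VIP, 10.0.1.10 for the outbound VIP):

      # Step 11: add the cluster's Secondary Private IP to the chosen member's Primary vNIC
      oci network vnic assign-private-ip \
          --vnic-id ocid1.vnic.oc1..<member1_primary_vnic> --ip-address 10.0.0.10
      # Step 12: reserve a Public IP and attach it to that Secondary Private IP
      oci network public-ip create --compartment-id ocid1.compartment.oc1..<id> \
          --lifetime RESERVED --display-name cluster-vip \
          --private-ip-id ocid1.privateip.oc1..<secondary_private_ip_from_step_11>
      # Step 13: add the outbound VIP to the same member's Secondary vNIC
      oci network vnic assign-private-ip \
          --vnic-id ocid1.vnic.oc1..<member1_secondary_vnic> --ip-address 10.0.1.10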

  14. Add new Route Tables to the Private subnet (the backend subnet, which is available after you add the additional Secondary vNIC) and to the Public subnet, respectively. The route rule for the backend subnet redirects traffic to the Secondary Private IP of the Secondary vNIC (traffic goes through the VIP).

  15. Add this Route Table to the Public subnet (frontend).
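
    A hedged CLI sketch of the route tables from steps 14 and 15, assuming placeholder OCIDs, the example outbound VIP 10.0.1.10 on the Secondary vNIC, and an existing Internet Gateway for the frontend subnet:

      # Backend (private) subnet: default route via the cluster's backend VIP
      # (networkEntityId is the OCID of the Secondary Private IP on the Secondary vNIC)
      oci network route-table create --compartment-id ocid1.compartment.oc1..<id> \
          --vcn-id ocid1.vcn.oc1..<id> --display-name backend-rt \
          --route-rules '[{"destination": "0.0.0.0/0", "destinationType": "CIDR_BLOCK",
                           "networkEntityId": "ocid1.privateip.oc1..<backend_vip_id>"}]'
      # Frontend (public) subnet: default route via the Internet Gateway
      oci network route-table create --compartment-id ocid1.compartment.oc1..<id> \
          --vcn-id ocid1.vcn.oc1..<id> --display-name frontend-rt \
          --route-rules '[{"destination": "0.0.0.0/0", "destinationType": "CIDR_BLOCK",
                           "networkEntityId": "ocid1.internetgateway.oc1..<igw_id>"}]'

    After creation, associate each Route Table with its subnet (for example, with oci network subnet update --route-table-id).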

  16. Create a Dynamic Group and include the two cluster members in this Dynamic Group (in this example, the group name is cp_cluster_group). To create the rules that define the Dynamic Group, use the OCI Rule Builder and create two rules, one for each member. If you do not use the OCI Rule Builder, you can manually configure a single rule that includes both members, as shown below.
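
    An illustrative matching rule of that form, assuming placeholder instance OCIDs for the two members (copy the real OCIDs from each instance's details page):

      Any {instance.id = 'ocid1.instance.oc1.<region>.<member1_id>',
           instance.id = 'ocid1.instance.oc1.<region>.<member2_id>'}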

  17. Create the policy that allows the defined Dynamic Group to use resources in the compartment to which it belongs.
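
    A hedged example of such a policy statement, assuming the Dynamic Group cp_cluster_group and a placeholder compartment name; you can narrow the verb and resource family to match your security requirements:

      Allow dynamic-group cp_cluster_group to manage virtual-network-family in compartment <compartment_name>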

  18. Connect to the two CloudGuard members with the Private Key that matches the Public Key you used when you created the instances (ssh -i privateKey admin@<cluster-member-public-ip>). Run these commands to set the password:

    > set user admin password

    - enter your password <XXXXX> when prompted

    > save config

    > exit

  19. Use a web browser to connect to each member at its public IP address. Enter the Blink wizard information to complete the configuration.

    https://<member_public_ip>

    User name: admin

    Password: XXXXX

  20. Configure the CloudGuard members and Cluster in the Management SmartConsole (see below).

Best Practice: Apply the latest GA Jumbo Hotfix Accumulator to each cluster member.

Configuring OCI Cluster in Check Point Security Management

CloudGuard Network Security Gateway

You can manage the CloudGuard Network Security Gateway in several different configurations:

  • As a standalone configuration, in which the Security Gateway acts as its own Management Server.

  • Centrally managed, where the management server is located on-premises outside the virtual network.

  • Centrally managed, where the management server is located in the same virtual network.

CloudGuard Cluster Configuration

  1. Use Check Point SmartConsole to connect to the Check Point Management Server.

  2. Create a new Check Point Cluster: in the Cluster menu, click Cluster...

  3. Select Wizard Mode.

  4. Enter the cluster object's name (for example, checkpoint-oci-cluster). In the Cluster IPv4 Address field, enter the public address (the Secondary Public IP address of the Primary vNIC) allocated to the cluster, and click Next.

    Note - To see the Cluster IP address in the OCI portal, select the Active CloudGuard member's Primary vNIC, and then select the Secondary Public IP (the Secondary Public IP of the Primary vNIC; the Primary vNIC is the first vNIC of the deployed instance).

    Sample cluster configuration:

  5. Click Add to add the cluster members.

  6. Configure the cluster members properties:

    1. In the Name field, enter the first cluster member's name (for example, member1).

    2. In the IPv4 Address field: if you manage the cluster from the same VCN, enter the member's Primary Private IP address of the Primary vNIC. Otherwise, enter the member's Primary Public IP address of the Primary vNIC.

    3. In the Activation Key field, enter the SIC (Secure Internal Communication) key you defined for the CloudGuard member during the First Time Wizard configuration.

    4. In the Confirm Activation Key field, re-enter the key and click Initialize. The Trust State field must show: "Trust established."

    5. Click OK.

      Example:

  7. Repeat steps 5-6 to add the second CloudGuard cluster member. Click Next.

    Example:

  8. In the new window, click Finish.

  9. Click Finish.

  10. Examine the cluster configuration and configure the cluster interfaces:

    1. Click the cluster object checkpoint-oci-cluster.

    2. Click Network Management.

    3. Double-click eth0.

    4. Click General.

    5. Select Network Type Cluster and enter the member's Secondary Private IP address of the Primary vNIC (this defines the first VIP).

    6. Click OK.

    7. In Network Management, double-click eth1.

    8. Click General.

    9. Select Network Type "Cluster + Sync" and enter the member's Secondary Private IP address of the Secondary vNIC (this defines the second VIP).

    10. Click OK and exit the cluster object configuration dialog.

  11. To provide Internet connectivity to the internal subnet (and to publish services), use NAT rules.

  12. Configure and install the Security policy on the cluster.

  13. Set the perform_cluster_hide_fold attribute of the relevant cluster object on the Security Management Server to 0. For details, see sk170296.
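
    One possible way to change this attribute is with the dbedit utility on the Security Management Server. This is only a hedged sketch (the attribute name is taken from this step); sk170296 remains the authoritative procedure, and all SmartConsole sessions should be closed first:

      dbedit -local
      modify network_objects checkpoint-oci-cluster perform_cluster_hide_fold 0
      update network_objects checkpoint-oci-cluster
      quit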

Adding Additional Secondary IPs to OCI Cluster

If Secondary IPs other than the primary cluster IP must be attached to the Active cluster member:

  1. Attach all desired secondary IPs to the Active cluster member in the OCI console.

  2. Push policy to the gateways.

Configure IPv6

To configure IPv6 refer to sk181535 - IPv6 support for CloudGuard Network Security in Oracle Cloud Infrastructure (OCI).

Known Limitations

  • You must configure NTP in the environment for failover to work correctly (this is an Oracle API requirement; see the example after this list).

  • To inspect East-West traffic, each backend subnet that requires inspection must reside in its own VCN and be routed to the backend vNIC through LPGs or a DRG.

  • Adding more network interfaces is currently not supported.
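
For reference, NTP can be set on each cluster member in Gaia Clish. A minimal sketch, assuming the OCI-provided NTP service at 169.254.169.254 (any NTP server reachable from the members works):

    > set ntp server primary 169.254.169.254 version 4
    > set ntp active on
    > save config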