Rolling Back a Failed Upgrade of a Maestro Orchestrator

This section describes the steps for rolling back a failed upgrade of a Maestro Orchestrator to R81.20.

Warning - If, after upgrading the Orchestrator to R81.20, you made changes to the topology of Security Groups (added or removed Security Appliances, added or removed interfaces, or changed the settings of physical ports), then do NOT use this rollback procedure on the Orchestrator.

You must contact Check Point Support for assistance.

Procedure for each Orchestrator:

Step 1

Connect to the command line on the Orchestrator (in our example, "Orchestrator 1_1").

Step 2

If your default shell is /etc/cli.sh (Gaia Clish), then go to the Expert mode:

expert
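If you are not sure which shell is your default, the configured login shell is recorded in the SHELL environment variable; a generic Linux sketch, not a Check Point-specific command (the /bin/sh fallback is illustrative):

```shell
# Show the configured login shell (falls back to /bin/sh if SHELL is unset):
#   /etc/cli.sh -> Gaia Clish is the default shell (run "expert" first)
#   /bin/bash   -> the default shell is already the Expert mode
echo "${SHELL:-/bin/sh}"
```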

Step 3

Stop the Orchestrator service:

orchd stop
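To confirm that the service is no longer running, a generic process check can help; this is an assumption on my part, not a documented Check Point verification step, and the process name "orchd" may differ on your system:

```shell
# Hypothetical check: report whether a process named "orchd" remains.
if pgrep -x orchd > /dev/null 2>&1; then
    echo "orchd is still running"
else
    echo "orchd is stopped"
fi
```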

Step 4

Go from the Expert mode to Gaia Clish:

  • If your default shell is /bin/bash (the Expert mode), then run:

    clish

  • If your default shell is /etc/cli.sh (Gaia Clish), then run:

    exit

Step 5

Restore the Gaia snapshot, which was created automatically during the upgrade:

set snapshot revert

The Orchestrator automatically reboots and starts the revert.

For more information, see the R81.20 Gaia Administration Guide > Chapter Maintenance > Section Snapshot Management.

Step 6

Wait for the reverted Orchestrator to boot.

Step 7

Configure the same date and time settings on all other Orchestrators in your environment.

For more information, see the R81.20 Gaia Administration Guide > Chapter System Management > Section Time.
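In Gaia Clish, the date and time can be set manually; a minimal sketch with example values (the values shown are illustrative - verify the exact syntax in the Gaia Administration Guide referenced above):

```
set date 2024-05-20
set time 14:30
save config
```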

Step 8

Make sure all Orchestrators in your environment can communicate with each other.

Connect to the command line on the reverted Orchestrator (in our example, "1_1").

Send pings to the other Orchestrator(s):

  • In a Single Site environment:

    ping 1_2

  • In a Dual Site environment:

    ping 1_2

    ping 2_1

    ping 2_2
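The reachability checks in this step can be scripted; a hedged sketch, where check_peers is a hypothetical helper (not a product command) and the peer names are the example names from this guide - substitute your Orchestrators' real hostnames or IP addresses:

```shell
# Hypothetical helper: ping each peer a few times and report the result.
check_peers() {
    for peer in "$@"; do
        if ping -c 2 -W 2 "$peer" > /dev/null 2>&1; then
            echo "$peer: reachable"
        else
            echo "$peer: NOT reachable"
        fi
    done
}

# Example peer names from a Dual Site environment (illustrative):
check_peers 1_2 2_1 2_2
```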

Step 9

Make sure the Security Group Members can pass traffic to each other:

  1. Connect to the command line on the Security Group.

  2. If your default shell is /etc/gclish (Gaia gClish), then go to the Expert mode:

    expert

  3. Examine the cluster state of the Security Group Members.

    On the SMO Security Group Member, run:

    cphaprob state

    The output must show that all Security Group Members are active.

  4. Send pings between Security Group Members:

    1. Connect to one of the Security Group Members (in our example, we connect to the first one - "1_1"):

      member 1_1

    2. On this Security Group Member, send pings to any other Security Group Member (in our example, the second one - "1_2" / "2_2"):

      • In a Single Site environment:

        ping 1_2

      • In a Dual Site environment:

        ping 1_2

        ping 2_2

Step 10

On each Security Group Member, make sure all links are up in the Security Group:

  1. Connect to the command line on the Security Group.

  2. Examine the state of links:

    asg_if