Troubleshooting

This section lists common issues and their solutions.

Enable debugging on each Cross AZ Cluster Member

On each Cross AZ Cluster Member, run in Expert mode:

python3 $FWDIR/scripts/aws_ha_cli.py stop

python3 $FWDIR/scripts/aws_ha_cli.py --debug reconf

Debug output is written to:

$FWDIR/log/aws_had.elg

To disable debugging, you MUST run the following command on each Cluster Member:

python3 $FWDIR/scripts/aws_ha_cli.py restart
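
For convenience, the whole debug cycle can be run as one short session in Expert mode on the Cluster Member. This is a minimal sketch that only combines the commands and the log path shown above:

# Stop the AWS HA daemon, then start a reconfiguration with debug output enabled.
python3 $FWDIR/scripts/aws_ha_cli.py stop
python3 $FWDIR/scripts/aws_ha_cli.py --debug reconf

# Follow the debug output while you reproduce the issue (Ctrl+C to stop).
tail -f $FWDIR/log/aws_had.elg

# Mandatory: disable debugging when done.
python3 $FWDIR/scripts/aws_ha_cli.py restart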

Test the environment

To test the Cross AZ Cluster environment, run in Expert mode:

python3 $FWDIR/scripts/aws_ha_test.py

This runs tests that verify:

  • A Primary DNS server is configured.

  • DNS resolution works.

  • Access is available from the Cross AZ Cluster Member to the AWS metadata service (HTTP to 169.254.169.254).

  • The instance is set up with an IAM role.

  • IAM credentials are available.

  • Access from the Cross AZ Cluster Member to the AWS web service endpoint (over TCP port 443) is available.

  • The IAM credentials allow the instance to make API calls into AWS.

  • The Cross AZ Cluster is configured with at least one internal interface.

  • For each Cross AZ Cluster Member interface, there exists a corresponding AWS ENI (Elastic Network Interface) sharing the same primary private address.

  • All Cross AZ Cluster Member interfaces have the source/destination check disabled.

  • The system clock matches the time reported by AWS.

  • The $FWDIR/conf/aws_cross_az_cluster.json file is up to date.

  • All secondary private IP addresses on the Active Cluster Member have associated Elastic IP addresses.
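
If python3 $FWDIR/scripts/aws_ha_test.py reports failures, some of the connectivity-related conditions above can be checked manually with standard tools. This is a minimal sketch, assuming the curl_cli tool is available in Expert mode and that the instance uses IMDSv1 (IMDSv2 requires a session token); ec2.us-east-1.amazonaws.com is only an example endpoint - use the endpoint for your region:

# Primary DNS server and DNS resolution.
cat /etc/resolv.conf
nslookup ec2.us-east-1.amazonaws.com

# AWS metadata service (HTTP to 169.254.169.254).
curl_cli -s http://169.254.169.254/latest/meta-data/instance-id ; echo

# IAM role and credentials exposed to the instance.
curl_cli -s http://169.254.169.254/latest/meta-data/iam/security-credentials/ ; echo

# TCP port 443 to the AWS web service endpoint.
curl_cli -s -o /dev/null -w "%{http_code}\n" https://ec2.us-east-1.amazonaws.com/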

Extract Cross AZ Cluster information

From the Cross AZ Cluster Member, run in Expert mode:

cphaconf aws_mode

Extract Cross AZ Cluster state

From the Cross AZ Cluster Member, run in Expert mode:

cphaprob stat

Example output:

Cluster Mode:   High Availability (Active Up) with IGMP Membership

Number     Unique Address  Assigned Load   State

1 (local)  10.0.1.20       100%            Active
2          10.0.1.30       0%              Standby

The output of the cphaprob stat command must show the same information on both Cluster Members (except for the "(local)" string).
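
To compare the two outputs without the "(local)" marker getting in the way, you can normalize it away on each Cluster Member and diff the results. A minimal sketch - copying the peer's file to this member (for example, with scp) is assumed:

# On each Cluster Member, in Expert mode: capture the state without the "(local)" marker.
cphaprob stat | sed 's/ (local)//' > /tmp/cphaprob_stat.txt

# After copying the peer's file to this member, compare the two outputs.
diff /tmp/cphaprob_stat.txt /tmp/cphaprob_stat_peer.txt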

Permissions required for the Cross AZ Cluster Members' IAM role

Required Permissions

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Action": [
				"ec2:AssignPrivateIpAddresses",
				"ec2:AssociateAddress",
				"ec2:CreateRoute",
				"ec2:DescribeNetworkInterfaces",
				"ec2:DescribeRouteTables",
				"ec2:ReplaceRoute"
			],
			"Resource": "*",
			"Effect": "Allow"
		}
	]
}

If the IAM role is not configured correctly, the Cross AZ Cluster Members cannot communicate with AWS to make the required networking changes when a Cluster Member fails.
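
To confirm that the role attached to the instance actually grants these permissions, you can query the metadata service for the role name and, if an AWS API client is available, issue one of the read-only calls from the required action list. This is a sketch under those assumptions - the AWS CLI is not part of the gateway image, and IMDSv2 requires a session token:

# Show the IAM role attached to this instance.
curl_cli -s http://169.254.169.254/latest/meta-data/iam/security-credentials/ ; echo

# If the AWS CLI is installed, a read-only call from the required action list
# (ec2:DescribeRouteTables) confirms that the credentials are honored.
aws ec2 describe-route-tables --region us-east-1 --output json > /dev/null && echo "DescribeRouteTables: OK"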

During failover, the AWS route tables do not change their routes from the failed Cluster Member to the new Active Cluster Member

Issues with Cross AZ Cluster behaviour

Check that the script responsible for communication with AWS is running on each Cross AZ Cluster Member.

On each Cross AZ Cluster Member, run in Expert mode:

cpwd_admin list | grep -E "PID|AWS_HAD"

Check the output for the "AWS_HAD" line.

Notes:

  • The script must appear in the output.

  • The "STAT" column must show "E" ("Executing").

  • The "#START" column must show "1" - This is how many times the Check Point WatchDog started this script.
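
If the daemon is running but the routes still do not move, you can inspect the relevant route table directly and confirm that its routes point to an ENI of the Active Cluster Member. This sketch assumes the AWS CLI is available (for example, on a management host); the route table ID and region are placeholders:

# Replace the route table ID and region with your values.
aws ec2 describe-route-tables --region us-east-1 \
    --route-table-ids rtb-0123456789abcdef0 \
    --query "RouteTables[].Routes[].[DestinationCidrBlock,NetworkInterfaceId]" \
    --output table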

Cross AZ Cluster with multiple Elastic IP addresses: Not all Elastic IP addresses move to the new Active Cluster Member after failover.

  1. Run: python3 $FWDIR/scripts/aws_ha_cli.py restart to update the Cross AZ Cluster map file.

  2. If the output of the python3 $FWDIR/scripts/aws_ha_conf.py show command is not the same on the two Cluster Members, do this on each Cluster Member:

    1. Connect with SSH to the Cluster Member.

    2. In Expert mode, run: rm -f $FWDIR/conf/aws_cross_az_cluster.json

    3. Run: python3 $FWDIR/scripts/aws_ha_cli.py restart

    These commands recreate the Cross AZ Cluster map file with identical content on both members.
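
The steps above can be run as one short sequence in Expert mode on each Cluster Member. A minimal sketch that only repeats the commands from this procedure:

# Remove the Cross AZ Cluster map file and let the restart recreate it.
rm -f $FWDIR/conf/aws_cross_az_cluster.json
python3 $FWDIR/scripts/aws_ha_cli.py restart

# Confirm that both Cluster Members now show the same configuration.
python3 $FWDIR/scripts/aws_ha_conf.py show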

Policy installation on the Cluster fails with the error: "Policy installation failed on gateway. Cluster policy installation failed (see sk125152)."

This error can happen when the Cluster is not configured exactly as described in this guide.