Onboarding Kubernetes Clusters
You can onboard a Kubernetes cluster to CloudGuard. When the process completes, you can see clusters, nodes, pods, and other resources on the CloudGuard Assets page. You can then run compliance assessments on them, as well as use the data for additional security functionality, such as Runtime Protection and Image Assurance.
The cluster can be on an on-premises host or in a cloud environment, including managed Kubernetes environments such as AKS on Azure, EKS on AWS, and GKE on Google Cloud.
As part of the onboarding process, CloudGuard agents are deployed on the cluster. The CloudGuard agents send encrypted information back to the CloudGuard server over the internet.
For information on Kubernetes versions and container requirements, see Kubernetes Containers.
Onboarding a Cluster Manually
Follow the steps below to manually onboard a Kubernetes cluster to CloudGuard:

1. In the CloudGuard portal, open Assets > Environments.
2. Click Get Started with Kubernetes or, from the top right, select ADD NEW > Kubernetes Cluster / OpenShift / Tanzu.
3. Enter a name for the cluster, as it will later appear in CloudGuard.
4. Follow the onscreen instructions to complete these steps:
   a. Define a Service Account by one of these methods:
      - Select an existing Service Account with its associated API Key
      - Enter a Service Account manually
      - Click Add Service Account to create a new account
   b. Enter a name for the Kubernetes namespace in which the agent is to be deployed, or keep the default name, checkpoint.
   c. Select which monitoring and security checks you want for your Kubernetes cluster by default. You can add any of these features later. Read more about each feature on its dedicated page:
      - Posture Management (mandatory feature) - for details, see Posture Management
      - Image Assurance - for details, see Image Assurance
      - Admission Control - for details, see Admission Control
      - Runtime Protection - for details, see Kubernetes Runtime Protection
      - Threat Intelligence - for details, see Intelligence
   d. Click Next to proceed to the next step.

5. Select the Organizational Units with which the onboarded cluster will be associated. If no Organizational Unit is selected, the root (top-level) unit is used.
6. Click Next.
7. Follow the onscreen instructions and apply the Helm commands. Alternatively, follow the Non-Helm instructions to deploy the agents; this generates a YAML file that you deploy with kubectl commands.
8. Click Next.

9. Verify the deployment status. The status is dynamically updated as the agents come online. CloudGuard informs you that:
   - Your Kubernetes cluster has been successfully created.
   - It is waiting for the agent to initiate communication.
   - You can skip the validation by clicking the Finish button.
10. Wait for the deployment to complete according to Cluster and Agent Status, or click Finish to skip the process.
After the agent is deployed, CloudGuard accesses the cluster through the agent to obtain information about the assets and synchronize with it. This takes a few minutes, depending on the time needed to download the images to the cluster and the number of assets in the cluster.
The Onboarding Summary page is updated automatically as the cluster status changes.
Cluster Status
The cluster status can have one of these values:
- Pending - CloudGuard has not received communication from the agents.
-
Initializing - CloudGuard is receiving communication from some of the agents. The progress bar shows how many agents are up and ready.
Note - During this state, if the number of running pods does not change for 10 minutes, the indicator pauses and the status changes to TIME OUT. In this case, verify the status of the agents on the cluster to make sure they do not have any issues. For example, agents can be stuck because of missing resources (memory or CPU). After you solve the issue, you can resume the validation or skip the validation process entirely.
-
Error – There are agents in the Error state. Click Finish to complete the process. You can go to the cluster page to see which agents have the Error state and browse their Kubernetes logs for issues.
When all the agents are running, the cluster status changes to SUCCESS, and the onboarding process finishes successfully.
Agent Status
On the cluster page, you can see the status of each feature's agents:
-
Pending – The agent has never communicated with CloudGuard.
Note - There is a limitation for DaemonSet agents: toleration settings are not taken into account during the cluster status calculation. Agents from excluded nodes are considered Pending, which can lead to a false Error state for the cluster.
-
Initializing – The agent has come online and initiated communication with the CloudGuard portal. The agent has a limited time period to report a successful self-test. If the agent does not report back in time, the status changes to Error due to timeout.
-
Warning – The agent finished its initialization successfully, but it is based on an old image. See Upgrade the Agent for how to fix this issue.
-
Error – Status of agents that failed their self-test, sent an error message, or suffered a loss of connectivity for at least one hour.
-
Pending cleanup – The feature is disabled, but its agent still sends data.
Onboarding a Cluster with Automation
Follow these steps to automate the onboarding process from the command line:

1. Create or update these environment variables: $API_KEY, $API_SECRET, $CLUSTER_NAME, where the API Key and Secret are generated on the CloudGuard portal (see V2 API).
2. Run this command to create a Kubernetes account on CloudGuard and save the response:
CREATION_RESPONSE=$(curl -s -X POST https://api.us1.cgn.portal.checkpoint.com/v2/kubernetes/account --header 'Content-Type: application/json' --header 'Accept: application/json' -d "{\"name\" : \"$CLUSTER_NAME\"}" --user $API_KEY:$API_SECRET)
3. Extract the Cluster ID from the response:
CLUSTER_ID=$(echo $CREATION_RESPONSE | jq -r '.id')
4. Enable the required features:
curl -X POST https://api.us1.cgn.portal.checkpoint.com/v2/kubernetes/account/$CLUSTER_ID/imageAssurance/enable --user $API_KEY:$API_SECRET
curl -X POST https://api.us1.cgn.portal.checkpoint.com/v2/kubernetes/account/$CLUSTER_ID/admissionControl/enable --user $API_KEY:$API_SECRET
curl -X POST https://api.us1.cgn.portal.checkpoint.com/v2/kubernetes/account/$CLUSTER_ID/runtimeProtection/enable --user $API_KEY:$API_SECRET
5. Run these commands on each cluster:
helm install asset-mgmt cloudguard --repo https://raw.githubusercontent.com/CheckPointSW/charts/master/repository/ \
--set-string credentials.user=$API_KEY \
--set-string credentials.secret=$API_SECRET \
--set-string clusterID=$CLUSTER_ID \
--set addons.imageScan.enabled={true|false} \
--set addons.admissionControl.enabled={true|false} \
--set addons.runtimeProtection.enabled={true|false} \
--namespace $NAMESPACE
Note - The *.enabled flags can be set to false or omitted if you do not want to enable the corresponding features.
For Non-Helm automation, run this command in step 5 instead:
kubectl run cloudguard-install --rm --image alpine/helm --tty \
--stdin --quiet --restart=Never --command \
-- helm template asset-mgmt cloudguard \
--repo https://raw.githubusercontent.com/CheckPointSW/charts/master/repository/ \
--set credentials.user=$API_KEY \
--set credentials.secret=$API_SECRET \
--set clusterID=$CLUSTER_ID \
--set addons.imageScan.enabled={true|false} \
--set addons.admissionControl.enabled={true|false} \
--set addons.runtimeProtection.enabled={true|false} \
--namespace $NAMESPACE \
--set containerRuntime=containerd > cloudguard-install.yaml
kubectl apply -f cloudguard-install.yaml
- If your cluster uses a Docker or CRI-O runtime environment, change the containerRuntime flag to --set containerRuntime=docker or --set containerRuntime=cri-o.
- If your cluster platform is OpenShift 4+ or Tanzu, add --set platform=openshift or --set platform=tanzu before the output redirection.
Example

#!/bin/bash
# Create environment variables for the API Key, API Secret, cluster name, and namespace.
export API_KEY=372e5df3-db03-432d-bf46-cb4261efb317
export API_SECRET=on3cs1ambgv4tnbs6kiwb1ws
export CLUSTER_NAME=auto-cluster
export NAMESPACE=checkpoint
export IMAGE_SCAN_ENABLED=true
export ADMISSION_CONTROL_ENABLED=true
export RUNTIME_PROTECTION_ENABLED=true
# Onboard the cluster
export CREATION_RESPONSE=$(curl -s -X POST https://api.us1.cgn.portal.checkpoint.com/v2/kubernetes/account/ --header 'Content-Type: application/json' --header 'Accept: application/json' -d "{\"name\" : \"$CLUSTER_NAME\"}" --user $API_KEY:$API_SECRET)
export CLUSTER_ID=$(echo $CREATION_RESPONSE | jq -r '.id')
# Enable the required features
curl -X POST https://api.us1.cgn.portal.checkpoint.com/v2/kubernetes/account/$CLUSTER_ID/imageAssurance/enable --user $API_KEY:$API_SECRET
curl -X POST https://api.us1.cgn.portal.checkpoint.com/v2/kubernetes/account/$CLUSTER_ID/admissionControl/enable --user $API_KEY:$API_SECRET
curl -X POST https://api.us1.cgn.portal.checkpoint.com/v2/kubernetes/account/$CLUSTER_ID/runtimeProtection/enable --user $API_KEY:$API_SECRET
# If the designated namespace already exists, comment out the following line
kubectl create namespace $NAMESPACE
# Deploy the Asset Management agent on the cluster using Helm
helm install asset-mgmt cloudguard --repo https://raw.githubusercontent.com/CheckPointSW/charts/master/repository/ --set-string credentials.user=$API_KEY --set-string credentials.secret=$API_SECRET --set-string clusterID=$CLUSTER_ID --set addons.imageScan.enabled=$IMAGE_SCAN_ENABLED --set addons.admissionControl.enabled=$ADMISSION_CONTROL_ENABLED --set addons.runtimeProtection.enabled=$RUNTIME_PROTECTION_ENABLED --namespace $NAMESPACE
# Check the status of the cluster
curl -s https://api.us1.cgn.portal.checkpoint.com/v2/kubernetes/account/$CLUSTER_ID/AccountSummary --user $API_KEY:$API_SECRET
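The jq extraction in the script above can be checked locally against a stub of the creation response. The JSON payload below is hypothetical (the real response contains more fields); only the id field is relied on:

```shell
# Hypothetical creation response; the real API returns additional fields.
CREATION_RESPONSE='{"id":"0b1a2c3d-e4f5-6789-abcd-ef0123456789","name":"auto-cluster"}'
# Same extraction as in the onboarding script
CLUSTER_ID=$(echo "$CREATION_RESPONSE" | jq -r '.id')
echo "$CLUSTER_ID"
```

If jq prints null here, the response did not contain an id field, which usually means the account-creation call failed and you should inspect $CREATION_RESPONSE for an error message.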
Upgrade the Agent
Assumptions:
- The environment variables $API_KEY, $API_SECRET, $CLUSTER_NAME, and $NAMESPACE have the same values as during onboarding.
- Image Assurance and Admission Control are enabled.
For agents installed with Helm 3, use the command below to upgrade all agents to the latest version:
helm upgrade asset-mgmt cloudguard --repo https://raw.githubusercontent.com/CheckPointSW/charts/master/repository/ \
--set-string credentials.user=$API_KEY \
--set-string credentials.secret=$API_SECRET \
--set-string clusterID=$CLUSTER_ID \
--set addons.imageScan.enabled={true|false} \
--set addons.admissionControl.enabled={true|false} \
--set addons.runtimeProtection.enabled={true|false} \
--namespace $NAMESPACE
Uninstall the Agent
During the onboarding process, CloudGuard generates the cloudguard-install.yaml file that you use to uninstall the agents. You can uninstall either with Helm or with kubectl.
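A sketch of the two uninstall options, assuming the Helm release name asset-mgmt and the onboarding namespace in $NAMESPACE, both as used in the installation commands above:

```shell
# With Helm: remove the release (release name "asset-mgmt" is assumed from the install command)
helm uninstall asset-mgmt --namespace $NAMESPACE

# With kubectl: delete the resources created from the generated manifest
kubectl delete -f cloudguard-install.yaml
```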
Note - To install agents again after you have uninstalled them, follow Step 3 - Deploy the agent on the cluster, and not the upgrade procedure.
Troubleshooting: Cluster behind a Gateway
If the traffic passes from the cluster to the Internet through a Security Gateway with HTTPS inspection, you must configure a custom CA (Certificate Authority) certificate for the agents.
1. Put the custom Base64 PEM-encoded CA certificate in a ConfigMap in the relevant namespace. For example:
kubectl -n <namespace> create configmap ca-store --from-file=custom_ca.cer=<PATH_TO_CA_CERTIFICATE_FILE>
2. Mount the file to the containers at the corresponding locations, as shown below:

Container  | Pod                                   | Location
inventory  | inventory                             | custom/custom_ca.cer
engine     | imagescan-engine                      | /etc/ssl/cert.pem
fluentbit  | imagescan-engine and imagescan-daemon | /etc/ssl/certs/ca-certificates.crt
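As an illustration of such a mount, below is a minimal sketch of the pod-spec fragment for the engine container. Only the ConfigMap name (ca-store), the file key (custom_ca.cer), and the mount path come from the steps and table above; the surrounding fields are generic Kubernetes structure, not CloudGuard-specific values:

```yaml
# Sketch: mount the ca-store ConfigMap so that the "engine" container sees the
# custom CA at /etc/ssl/cert.pem (path taken from the table above).
spec:
  template:
    spec:
      volumes:
        - name: ca-store
          configMap:
            name: ca-store        # created with the kubectl command above
      containers:
        - name: engine
          volumeMounts:
            - name: ca-store
              mountPath: /etc/ssl/cert.pem
              subPath: custom_ca.cer   # key inside the ConfigMap
```

Using subPath overlays only the single certificate file instead of shadowing the whole directory with the volume.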