Onboarding Kubernetes Clusters

You can onboard a Kubernetes cluster to CloudGuard. When the process completes, you can see clusters, nodes, pods, and other resources on the CloudGuard Assets page. You can then run compliance assessments on them and use the data for more security functionality, such as Runtime Protection and Image Assurance.

The cluster can be on an on-premises host or in a cloud environment with a managed Kubernetes service, such as AKS on Azure, EKS on AWS, or GKE on Google Cloud.

As part of the onboarding process, CloudGuard agents are deployed on the cluster. The CloudGuard agents send encrypted information back to the CloudGuard server over the Internet.

For information on Kubernetes versions and container requirements, see Kubernetes Containers.

Onboarding a Cluster Manually

Follow the steps below to manually onboard a Kubernetes cluster to CloudGuard:

Onboarding a Cluster with Automation

Automation with the CLI

For the onboarding automation, you need a CloudGuard service account with onboarding permissions: the service account must have a role with the Manage Resources Permissions or, at a minimum, the Onboarding Permissions.

Follow these steps to automate the onboarding process from the command line:

  1. With this service account, create or update these environment variables: $API_KEY, $API_SECRET, and $CLUSTER_NAME. Generate the API key and secret on the CloudGuard portal (see V2 API).
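
    For example, a minimal sketch of how these variables can be set in a Bash session (the values are placeholders, not real credentials):

    export API_KEY=<CloudGuard API key>
    export API_SECRET=<CloudGuard API secret>
    export CLUSTER_NAME=<name to give the cluster in CloudGuard>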

  2. Run this command to create a Kubernetes account on CloudGuard and save the response in a variable:

    CREATION_RESPONSE=$(curl -s -X POST https://api.us1.cgn.portal.checkpoint.com/v2/kubernetes/account --header 'Content-Type: application/json' --header 'Accept: application/json' -d "{\"name\" : \"$CLUSTER_NAME\"}" --user $API_KEY:$API_SECRET)

    Note - This and other commands below use api.us1.cgn.portal.checkpoint.com as an API endpoint for Infinity Portal users in the US region. For the full list of the API server endpoints in your region, see Which CloudGuard endpoints do I have to allow on my network?.

  3. Extract the Cluster ID from the response:

    CLUSTER_ID=$(echo $CREATION_RESPONSE | jq -r '.id')
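
    If you script these steps end to end, you can add a basic check that the account was created before you continue. A minimal sketch, assuming Bash and jq:

    if [ -z "$CLUSTER_ID" ] || [ "$CLUSTER_ID" = "null" ]; then
      echo "Cluster creation failed: $CREATION_RESPONSE" >&2
      exit 1
    fi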

  4. Enable the required features:

    curl -X POST https://api.us1.cgn.portal.checkpoint.com/v2/kubernetes/account/$CLUSTER_ID/imageAssurance/enable --user $API_KEY:$API_SECRET
    curl -X POST https://api.us1.cgn.portal.checkpoint.com/v2/kubernetes/account/$CLUSTER_ID/admissionControl/enable --user $API_KEY:$API_SECRET
    curl -X POST https://api.us1.cgn.portal.checkpoint.com/v2/kubernetes/account/$CLUSTER_ID/runtimeProtection/enable --user $API_KEY:$API_SECRET

Inactive Kubernetes Clusters

CloudGuard deletes inactive environments when a year (365 days) has passed since any of the environment's agents last communicated with CloudGuard. This applies only to environments whose agents have communicated with CloudGuard at least once in the past.

Note - Environments with agents that communicated with errors are not removed.

Installing the Agent

During the onboarding process, you install the CloudGuard agent on the cluster with Helm. For the agent installation, permissions of the preconfigured Kubernetes Agent role are sufficient (see Roles). The Helm command is shown on the third page of the onboarding wizard; see STEP 3 - Deploy the agent on the cluster.

Example:

helm install asset-mgmt cloudguard --repo https://raw.githubusercontent.com/CheckPointSW/charts/master/repository/ --set-string credentials.user=$API_KEY --set-string credentials.secret=$API_SECRET --set-string clusterID=$CLUSTER_ID --set addons.imageScan.enabled={true|false} --set addons.admissionControl.enabled={true|false} --set addons.runtimeProtection.enabled={true|false} --namespace $NAMESPACE

You can set the *.enabled flags to false or omit them if you do not need to enable the corresponding features.
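
For example, to enable only Image Assurance and keep the other add-ons disabled, the command reduces to a sketch like this (it assumes the environment variables set during onboarding):

helm install asset-mgmt cloudguard --repo https://raw.githubusercontent.com/CheckPointSW/charts/master/repository/ --set-string credentials.user=$API_KEY --set-string credentials.secret=$API_SECRET --set-string clusterID=$CLUSTER_ID --set addons.imageScan.enabled=true --namespace $NAMESPACE --create-namespace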

If you do not have Helm installed in your environment, use the command below to generate a YAML file for the agent installation with kubectl.

kubectl run cloudguard-install --rm --image alpine/helm --tty --stdin --quiet --restart=Never --command -- helm template asset-mgmt cloudguard --repo https://raw.githubusercontent.com/CheckPointSW/charts/master/repository/ --set credentials.user=$API_KEY --set credentials.secret=$API_SECRET --set clusterID=$CLUSTER_ID --set addons.imageScan.enabled={true|false} --set addons.admissionControl.enabled={true|false} --set addons.runtimeProtection.enabled={true|false} --namespace $NAMESPACE --set containerRuntime=containerd --kube-version <KUBERNETES-VERSION> > cloudguard-install.yaml

kubectl apply -f cloudguard-install.yaml

Installation with a Values File

You can use a YAML file as an alternative or in addition to the --set command line parameters during the Helm chart installation. Use the --values <file> or -f <file> flags in the Helm installation command. This is helpful when you have many changes to the default installation parameters or when it is necessary to specify complex or nested values.

See the default values file format in the CloudGuard repository, as well as the description of the configurable values.
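
For illustration, a minimal values file might look like the sketch below. The keys mirror the --set parameters shown above; for the full set of configurable values, see the default file in the CloudGuard repository.

credentials:
  user: <CloudGuard API key>
  secret: <CloudGuard API secret>
clusterID: <Cluster ID>
addons:
  imageScan:
    enabled: true
  admissionControl:
    enabled: true
  runtimeProtection:
    enabled: false

You can then pass the file instead of the individual --set flags:

helm install asset-mgmt cloudguard --repo https://raw.githubusercontent.com/CheckPointSW/charts/master/repository/ -f values.yaml --namespace $NAMESPACE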

Heterogeneous Node Pools

When a cluster contains multiple node pools with different configurations, it is sometimes necessary to configure the CloudGuard agent differently for each node pool. For example, one node pool can have small nodes (for example, four CPUs per node), while another can have much larger nodes (32 CPUs per node). In such a cluster, it is practical to adjust the configuration of the CloudGuard agent's DaemonSets for each node pool. Below are some examples of when different DaemonSet configurations in different node pools are beneficial:

  • Different resource allocation (for example, allocate more CPU for runtime daemon on nodes with more CPUs)

  • Different container runtimes (for example, nodes running Docker versus nodes running containerd)

  • Different architecture

The CloudGuard agent's Helm chart lets you set up multiple DaemonSet configurations for different node pools with the daemonConfigurationOverrides property, available under each addons.<feature>.daemon section of the YAML file. This property is an array that specifies multiple override configurations in addition to the default configuration specified under the daemon section (see the example after the list below).

For each section of overrides, Helm creates a new DaemonSet in the cluster, with the specified configuration.

In addition:

  • The overrides inherit from the default daemon configuration (the addons.<feature>.daemon object): each value set on the default configuration also applies to the override configurations, unless explicitly changed.

  • The name of each configuration override must be unique (case-insensitive). Non-unique names like “configExample” and “ConfigEXAMPLE” overwrite one another.

  • Each configuration must have a nodeSelector field defined; otherwise, the command fails.

  • Make sure that the nodeSelector fields do not overlap, so that each node matches only one configuration. A node that matches more than one configuration runs additional daemons.
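
The sketch below shows what such overrides might look like for the Runtime Protection daemon in a values file. The override name, node label, and resource values are illustrative assumptions; check the default values file in the CloudGuard repository for the exact field names and structure.

addons:
  runtimeProtection:
    daemon:
      # Default configuration, used by nodes that no override matches.
      resources:
        requests:
          cpu: 200m
      daemonConfigurationOverrides:
        # Creates an additional DaemonSet for a (hypothetical) pool of large nodes.
        - name: large-nodes
          nodeSelector:
            pool: large
          resources:
            requests:
              cpu: 1000m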

Agent Version Life Cycle

Each CloudGuard agent has a recommended version and a minimal required version; CloudGuard recommends that you run at least the recommended version. New versions of the agents are released when they accumulate significant content, including new capabilities, fixed vulnerabilities, and so on.

To verify the agent's version:

  1. Select an environment and click to open its page.

  2. In the Blades tab, expand the module's details. In the Version column, see the version number.

  3. In the Status column, see the agent's status:

    1. Warning / Agent is not up to date - The agent version is below the recommended version

    2. Error / Agent version is not supported - The agent version is below the minimal required version

      This status appears on the environment page and in the applicable API (agentSummary APIs).

When a new agent version accumulates significant content, CloudGuard raises the recommended version and recommends that you upgrade - see Upgrading the Agent. The status of agents below the recommended version changes from OK to Warning.

When an agent version has many issues, or when sufficient time passes after its status changes to Warning, CloudGuard raises the minimal required version. The agent status then changes from Warning to Error.

Upgrading the Agent

Assumptions:

  • The environment variables $API_KEY, $API_SECRET, $CLUSTER_ID, $NAMESPACE have the same values as during onboarding

  • Image Assurance and Admission Control are enabled

For agents installed with Helm 3, use the command below to upgrade all agents to the latest version:

helm upgrade --install asset-mgmt cloudguard --repo https://raw.githubusercontent.com/CheckPointSW/charts/master/repository/ --set-string credentials.user=$API_KEY --set-string credentials.secret=$API_SECRET --set-string clusterID=$CLUSTER_ID --set addons.imageScan.enabled={true|false} --set addons.admissionControl.enabled={true|false} --set addons.runtimeProtection.enabled={true|false} --set datacenter=usea1 --namespace $NAMESPACE --create-namespace
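
Optionally, you can confirm that the release and the agent pods were updated, for example:

helm list --namespace $NAMESPACE
kubectl get pods --namespace $NAMESPACE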

Downgrading the Agent

If you want to use a previous version of the agent (not recommended), you can downgrade the agent with standard Helm procedures, specifying the desired Helm chart version. Use the helm rollback or helm upgrade commands.
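
For example, a sketch with standard Helm commands: list the release history and roll back to a previous revision, or pin an explicit chart version (the revision and version values are placeholders):

helm history asset-mgmt --namespace $NAMESPACE
helm rollback asset-mgmt <REVISION> --namespace $NAMESPACE

helm upgrade asset-mgmt cloudguard --repo https://raw.githubusercontent.com/CheckPointSW/charts/master/repository/ --version <CHART-VERSION> --reuse-values --namespace $NAMESPACE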

Uninstalling the Agent

During onboarding, CloudGuard generates the cloudguard-install.yaml file that you can later use to uninstall the agents.

With Helm:

helm uninstall asset-mgmt --namespace $NAMESPACE

With kubectl:

kubectl delete -f cloudguard-install.yaml --namespace $NAMESPACE

Note - To install agents again after you have uninstalled them, follow STEP 3 - Deploy the agent on the cluster and not the upgrade procedure.