Image Assurance Troubleshooting

Verify the Agent Installation Status

Installation of the Inventory agent is a basic requirement to run Image Assurance.

Central Agent Environment Variables

LOG_LEVEL

  • Workloads: imagescan-engine, imagescan-daemon, imagescan-list
  • Default Value: info
  • Max Value: N/A
  • Comments: Possible values: debug, trace, warn, error. The same value applies to the imagescan-daemon agent.

CP_IMAGESCAN_INTERNAL_PROTO

  • Workloads: imagescan-engine, imagescan-daemon, imagescan-list
  • Default Value: HTTPS
  • Max Value: N/A
  • Comments: If you set it to HTTP, the agents use HTTP instead of HTTPS for communication in the cluster.

CP_IMAGESCAN_SCAN_TIMEOUT

  • Workloads: imagescan-engine
  • Default Value: N/A
  • Max Value: 24h
  • Comments: By default, the scan timeout is set by the CloudGuard engine. You can override this value. The timeout is in seconds.

IMAGE_TRANSFER_TIMEOUT_SECONDS

  • Workloads: imagescan-engine, imagescan-daemon
  • Default Value: N/A
  • Max Value: 24h
  • Comments: A Kubernetes image is transferred between the imagescan-daemon and imagescan-engine pods to be scanned by the imagescan-engine. This variable configures the timeout for the transfer operation. The timeout is in seconds.

Important - Define this environment variable for both workloads and use the same value for each of them.

To configure the environment variables:

  1. Edit the applicable imagescan workload and add one or more of the variables above with valid values. Use one of these methods:

    • Edit a running deployment, for example:

      kubectl edit deployment asset-mgmt-imagescan-image -n checkpoint

    • Set the environment variables with Helm commands.

Note - When you manually set more than one environment variable, increase the index of each variable accordingly (env[0], env[1], and so on).
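For example, setting two variables on the engine workload with Helm can look like this sketch (the variable names come from the table above; the values are illustrative, and the index increases for the second variable):

```shell
# Illustrative flags to append to your helm install/upgrade command.
# LOG_LEVEL and CP_IMAGESCAN_SCAN_TIMEOUT are from the table above;
# "debug" and "3600" (seconds) are example values.
--set-string addons.imageScan.engine.env[0].name=LOG_LEVEL,addons.imageScan.engine.env[0].value="debug" \
--set-string addons.imageScan.engine.env[1].name=CP_IMAGESCAN_SCAN_TIMEOUT,addons.imageScan.engine.env[1].value="3600" \
```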

Istio

Istio adds HTTPS proxies that break mutual TLS between the ImageScan engine and ImageScan daemon agents. Therefore, change the protocol they use to connect to HTTP. Do this with an environment variable passed to the ImageScan engine Deployment and the ImageScan daemon DaemonSet: CP_IMAGESCAN_INTERNAL_PROTO=HTTP. For this, append the lines below to the Helm installation or upgrade command (index 0 is used on the assumption that no other environment variables are changed):

--set-string addons.imageScan.daemon.env[0].name=CP_IMAGESCAN_INTERNAL_PROTO,addons.imageScan.daemon.env[0].value="HTTP" \

--set-string addons.imageScan.engine.env[0].name=CP_IMAGESCAN_INTERNAL_PROTO,addons.imageScan.engine.env[0].value="HTTP" \

If the list deployment exists, add this line:

--set-string addons.imageScan.list.env[0].name=CP_IMAGESCAN_INTERNAL_PROTO,addons.imageScan.list.env[0].value="HTTP" \
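Put together, a full upgrade command can look like this sketch. The release name, chart reference, and namespace below (cloudguard, checkpoint/cloudguard, checkpoint) are placeholders; substitute your own values.

```shell
# Sketch only - replace the release name, chart, and namespace
# with the values from your installation.
helm upgrade cloudguard checkpoint/cloudguard --namespace checkpoint \
  --reuse-values \
  --set-string addons.imageScan.daemon.env[0].name=CP_IMAGESCAN_INTERNAL_PROTO,addons.imageScan.daemon.env[0].value="HTTP" \
  --set-string addons.imageScan.engine.env[0].name=CP_IMAGESCAN_INTERNAL_PROTO,addons.imageScan.engine.env[0].value="HTTP"
```

The --reuse-values flag keeps your existing chart values and only overrides the listed settings; drop it if you prefer to pass the full set of values explicitly.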

Low Rate of Image Scan

By default, one image scan engine scans images sequentially. To increase the rate of image scanning, deploy more image scan engines on your cluster.

  • Add this parameter to the Helm command:

    --set addons.imageScan.engine.replicaCount=<number-of-scanners>
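For example, to run three scan engines (the release name, chart reference, and namespace are placeholders for your own values):

```shell
# Sketch only - replace the release name, chart, and namespace
# with the values from your installation.
helm upgrade cloudguard checkpoint/cloudguard --namespace checkpoint \
  --reuse-values \
  --set addons.imageScan.engine.replicaCount=3
```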

Common Errors

  1. When the engine and daemon pods cannot connect and the ImageScan engine reports a timeout error, the cause can be the cluster configuration, for example, a network policy that blocks pod-to-pod traffic on port 8443.

    handleImageListTask returned error: get image list failed from agent at 172.17.0.3 (ubuntu2004): from https://consec1-imagescan-daemon:8443/imagelist: Get "https://172.17.0.3:8443/imagelist": dial tcp 172.17.0.3:8443: i/o timeout
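    As a first check, standard kubectl commands can confirm that the daemon pods are running and show whether a network policy could be blocking the traffic. The checkpoint namespace below is an assumption; adjust it to your installation.

```shell
# Confirm the imagescan daemon pods are running and note their pod IPs
# (compare them with the IP in the error message).
kubectl get pods -n checkpoint -o wide | grep imagescan-daemon

# List network policies that could block the engine from reaching
# the daemon pods on port 8443.
kubectl get networkpolicies --all-namespaces
```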

  2. When the engine and daemon pods cannot connect and the ImageScan engine reports a certificate validation error, the error can occur one time (for each connection with the ImageScan daemon pod) after a Helm upgrade. This happens because the pods restart one after the other: some still use the previous certificates for internal communication, while others already use the new ones.

  3. Sometimes the Image Assurance Agent status shows the error message: "No images found, please set containerRuntime to be docker", even though the cluster worked before. It is possible that the containerRuntime of your cluster recently changed.

    Solution: Run the Helm upgrade command again to solve the issue.